, my AI assistant saved a memory with an importance score of 8/10. Content: “Investigating Bun.js as a potential runtime swap.”
I never actually switched to Bun. To be fair, it was a two-day curiosity that went nowhere. But this memory persisted for six months, popping up whenever I asked about my build process and quietly, confidently nudging the AI toward a Bun answer.
There was nothing wrong with the system; it was doing exactly what it was supposed to do. That was the problem.
Here’s the failure mode nobody talks about when building AI memory systems. You make it work correctly. It remembers things, retrieves things, all the good stuff. And for a while, the AI seems clever.
Then you actually start using it.
Memories pile up. Decisions get reversed. Preferences shift. The system doesn’t notice.
You casually mention something in January, and it gets saved with high importance.
Cool.
By April, the AI treats it like a current fact. And sometimes it takes a while to realize you’ve been working from stale information.
A system that remembers everything doesn’t have a memory. It has an archive. And an archive that grows without hygiene quickly becomes messier than having no memory at all.
Nick Lawson wrote a great piece here on TDS describing how he implemented exactly that. You’ll want to read it; the storage/retrieval architecture is genuinely good.
But there’s a question this kind of system leaves open: what happens to memories as they age?
When should they die?
Which memories are more reliable than others?
How many overlapping memories should be combined into one?
That’s what this article is about. Not storing, not retrieving, but what happens in between.
I’ll cover enough of the base layer to follow along even if you haven’t read Nick’s piece. But the new ground starts where his article ends.
Let’s get into it.
The Problem With “Store and Retrieve”
Most memory systems assume a two-step process. Write. Read. Done.
Sure, that’s fine if you’re building a filing cabinet. Not if you’re trying to build an assistant you can rely on for months.
What does that look like?
The memory you wrote in week one is still there in week eight, just as fresh and high-priority as the day you made it, even though the decision it records was reversed two weeks ago.
The other memory, the one that contradicts your earlier decision, was filed away casually and simply never had time to become a priority, because it hasn’t received nearly enough accesses to push itself up the queue.
And so, without hesitation, your assistant pulls up a decision you unmade. It’s not until the third attempt that you finally catch on: your assistant has been relying on obsolete information the whole time.
The problem isn’t remembering; it’s failing to let go.
The difference I wanted to build: an approach to memory that works like a brain, not like a database. Memory decays. It gets superseded.
Some memories aren’t very reliable from the start. Others expire after a certain period. The brain manages all of this automatically, without you doing anything. That was my target.
The Foundation (Brief, I Promise)
A quick bit of context first.
Rather than embedding your memories and running cosine-similarity searches, you keep them as plain text in a SQLite database, and the LLM consults a concise index of them on every request.
No embedding pipeline, no third-party API, no extra files. The LLM’s language understanding does the retrieval job. It sounds too simple, but at a personal scale it works surprisingly well.
My schema builds on top of that with lifecycle fields:
# memory_store.py
import sqlite3
import json
from datetime import datetime
from pathlib import Path
from contextlib import contextmanager

DB_PATH = Path("agent_memory.db")

@contextmanager
def _db():
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    try:
        yield conn
    finally:
        conn.close()

def init_db():
    with _db() as conn:
        conn.execute("""
            CREATE TABLE IF NOT EXISTS memories (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                content TEXT NOT NULL,
                summary TEXT,
                tags TEXT DEFAULT '[]',
                -- Lifecycle fields: this is what this article adds
                importance REAL DEFAULT 5.0,
                confidence REAL DEFAULT 1.0,
                access_count INTEGER DEFAULT 0,
                decay_score REAL DEFAULT 1.0,
                status TEXT DEFAULT 'active',
                contradicted_by INTEGER REFERENCES memories(id),
                created_at TEXT NOT NULL,
                last_accessed TEXT,
                expires_at TEXT
            )
        """)
        conn.execute("""
            CREATE TABLE IF NOT EXISTS memory_events (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                memory_id INTEGER REFERENCES memories(id),
                event_type TEXT NOT NULL,
                detail TEXT,
                occurred_at TEXT NOT NULL
            )
        """)
        conn.commit()

def store_memory(content: str, summary: str | None = None, tags: list[str] | None = None,
                 importance: float = 5.0, confidence: float = 1.0) -> int:
    with _db() as conn:
        cur = conn.execute("""
            INSERT INTO memories
                (content, summary, tags, importance, confidence, created_at)
            VALUES (?, ?, ?, ?, ?, ?)
        """, (
            content,
            summary or content[:120],
            json.dumps(tags or []),
            importance,
            confidence,
            datetime.now().isoformat()
        ))
        conn.commit()
        return cur.lastrowid

def log_event(memory_id: int, event_type: str, detail: str = ""):
    # Pulled this out of every module that needed it. I was copy-pasting
    # the same INSERT four times. Classic.
    with _db() as conn:
        conn.execute("""
            INSERT INTO memory_events (memory_id, event_type, detail, occurred_at)
            VALUES (?, ?, ?, ?)
        """, (memory_id, event_type, detail, datetime.now().isoformat()))
        conn.commit()

init_db()

The interesting columns are the ones you don’t see in a typical memory schema: confidence, decay_score, status, contradicted_by, expires_at. Each one answers a question about a memory’s health that “does it exist?” can’t.
Memory Decay
The first problem is simple enough: old memories don’t tidy themselves up.
Every memory in the database gets a decay_score from 0 to 1. It starts at 1.0 at creation and decays over time, depending on how long ago the memory was last accessed.
Memories you keep referencing stay fresh, while memories that go unconsulted for months fade toward zero.
Once they fall below the relevance threshold, they’re archived, not deleted, because fading away doesn’t mean they were wrong, just no longer useful.
# decay.py
import math
from datetime import datetime
from memory_store import _db, log_event

HALF_LIFE_DAYS = 30  # tune this: 30 works well for conversational memory,
                     # push to 90+ if you're tracking long-running projects

def _decay_score(last_accessed: str | None, created_at: str, access_count: int) -> float:
    ref = last_accessed or created_at
    days_idle = (datetime.now() - datetime.fromisoformat(ref)).days
    # Standard exponential decay: e^(-ln2 * t / half_life)
    # (In practice, the score halves every HALF_LIFE_DAYS.)
    score = math.exp(-0.693 * days_idle / HALF_LIFE_DAYS)
    # Frequently accessed memories earn a small freshness bonus.
    # Cap at 1.0: this isn't meant to inflate beyond fresh.
    return min(1.0, score + min(0.3, access_count * 0.03))

def run_decay_pass():
    """Run daily. Updates scores, archives anything below 0.1."""
    with _db() as conn:
        rows = conn.execute("""
            SELECT id, created_at, last_accessed, access_count
            FROM memories WHERE status = 'active'
        """).fetchall()
        scored = [(r["id"], _decay_score(r["last_accessed"], r["created_at"], r["access_count"]))
                  for r in rows]
        to_archive = [(mid,) for mid, score in scored if score < 0.1]
        to_update = [(score, mid) for mid, score in scored if score >= 0.1]
        if to_archive:
            conn.executemany(
                "UPDATE memories SET status='archived', decay_score=0.0 WHERE id=?",
                to_archive
            )
        if to_update:
            conn.executemany(
                "UPDATE memories SET decay_score=? WHERE id=?",
                to_update
            )
        conn.commit()
    for (mid,) in to_archive:
        log_event(mid, "archived", "decay below threshold")
    print(f"Decay pass: {len(to_update)} updated, {len(to_archive)} archived.")

HALF_LIFE_DAYS lives at module level because that’s the number you’ll most likely want to change, and defaults buried in function signatures live somewhere in limbo.
The batched executemany instead of a looped execute matters once you’ve accumulated a few hundred memories. SQLite is fast, but not “500 individual commits in a daily cron job” fast.
This is also what would have caught the Bun.js issue from the intro of this post. My forgotten memory would have faded away within two months, without me ever having to delete it.
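To get a feel for the half-life, here’s a small sketch of the same formula _decay_score uses, with the idle days passed in directly so no database is involved (the function name decay_at is mine, for illustration only):

```python
import math

HALF_LIFE_DAYS = 30  # same default as decay.py

def decay_at(days_idle: int, access_count: int = 0) -> float:
    # Mirrors _decay_score above, minus the date arithmetic.
    score = math.exp(-0.693 * days_idle / HALF_LIFE_DAYS)
    return min(1.0, score + min(0.3, access_count * 0.03))

for days in (0, 30, 60, 100):
    print(f"{days:>3} days idle -> {decay_at(days):.3f}")
# A never-accessed memory halves every 30 days and drops below the
# 0.1 archive threshold at roughly the 100-day mark. A handful of
# accesses earns enough of a freshness bonus to keep it alive longer.
```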
Contradiction Detection
This is the part nobody builds, and the one that causes the most damage when it’s missing.
Take this scenario: you tell the AI you’re using PostgreSQL. Three months later you migrate to MySQL, mentioning it briefly in conversation.
Now you have fourteen memories related to PostgreSQL with high importance, while your single memory involving MySQL has low importance.
So when you ask about your database setup six months from now, the AI confidently says “you’re using PostgreSQL,” and you spend ten minutes confused before you realise what’s happening.
I ran into this myself. I’d stopped using poetry and started using uv as my dependency manager. I mentioned it once, without triggering a high importance score, and then spent a week wondering why the assistant kept suggesting poetry commands. The old memory wasn’t wrong; it just hadn’t been superseded.
The fix: when a new memory is created, check whether it contradicts anything already stored, and actively mark the older ones as superseded.
# contradiction.py
import json
from openai import OpenAI
from memory_store import _db, log_event

client = OpenAI()

def _build_index(exclude_id: int) -> str:
    with _db() as conn:
        rows = conn.execute("""
            SELECT id, summary FROM memories
            WHERE status = 'active' AND id != ?
            ORDER BY importance DESC, created_at DESC
            LIMIT 80
        """, (exclude_id,)).fetchall()
    return "\n".join(f"[{r['id']}] {r['summary']}" for r in rows)

def check_for_contradictions(new_content: str, new_id: int) -> list[int]:
    """
    Call immediately after storing a new memory.
    Returns IDs of memories now superseded by the new one.
    """
    index = _build_index(exclude_id=new_id)
    if not index:
        return []
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[{"role": "user", "content": f"""A new memory was just stored:
"{new_content}"

Which of these existing memories does it directly contradict or supersede?
A contradiction means the new info makes the old one factually wrong or outdated.

NOT contradictions:
- "User likes Python" vs "User also uses JavaScript" (additive, not contradictory)
- "Working on study tracker" vs "Added auth to study tracker" (same project, progression)

CONTRADICTIONS:
- "Uses PostgreSQL" vs "Migrated to MySQL" (one replaces the other)
- "Deadline is March 15" vs "Deadline pushed to April 1" (superseded)

EXISTING MEMORIES:
{index}

JSON array of IDs only. [] if none."""}]
    )
    raw = resp.choices[0].message.content.strip()
    try:
        old_ids = json.loads(raw)
        if not isinstance(old_ids, list):
            return []
    except json.JSONDecodeError:
        return []
    if not old_ids:
        return []
    with _db() as conn:
        conn.executemany("""
            UPDATE memories
            SET status = 'superseded', contradicted_by = ?
            WHERE id = ? AND status = 'active'
        """, [(new_id, oid) for oid in old_ids])
        conn.commit()
    for oid in old_ids:
        log_event(oid, "superseded", f"by #{new_id}: {new_content[:100]}")
    return old_ids

The contradicted_by column deserves an extra mention. When a memory is superseded by a newer one, it isn’t simply deleted. Instead, it gets a reference to its replacement, so you can trace from the outdated memory to the current one when needed.
If you’re debugging why the AI said something weird, you can pull up the memory it used and trace its history through memory_events. It turns out “why does the AI think this?” is a question you ask more often than you’d expect.
As for the 80-memory limit in the contradiction check, it’s reasonable because you don’t need every memory available to find conflicts. The memories most likely to contradict the new one are recent and highly important anyway, which is what the ORDER BY reflects.
Confidence Scoring
Two memories can be about the same fact. In one case, the claim is explicit: “I use FastAPI, always have.” In the other, it was inferred (“they seem to prefer async frameworks”). These shouldn’t be weighted equally.
Confidence scores are what let the system distinguish between what you told it and what it inferred about you. It starts at assessment time, the moment a memory is saved, with one small LLM call:
# confidence.py
from datetime import datetime
from openai import OpenAI
from memory_store import _db, log_event

client = OpenAI()

def assess_confidence(content: str, user_msg: str, assistant_msg: str) -> float:
    """
    Synchronous LLM call in the write path. Adds ~200ms.
    Worth it for memories that'll influence responses for months.
    """
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[{"role": "user", "content": f"""Rate confidence in this memory (0.0-1.0):
MEMORY: {content}

FROM THIS EXCHANGE:
User: {user_msg}
Assistant: {assistant_msg}

Scale:
1.0 = explicit, direct statement ("I use Python", "deadline is March 15")
0.7 = clearly implied but not stated outright
0.5 = reasonable inference, could be wrong
0.3 = weak inference (user might disagree)
0.1 = speculation

Single float only."""}]
    )
    try:
        return max(0.0, min(1.0, float(resp.choices[0].message.content.strip())))
    except ValueError:
        return 0.5

def reinforce(memory_id: int, bump: float = 0.1):
    """
    Bump confidence when a later conversation confirms something the system
    already knew.
    TODO: I haven't wired up the detection that triggers this yet.
    Figuring out "this new conversation confirms memory X" is harder than it
    sounds. The function works, the caller doesn't exist. Will update when
    I have something that doesn't produce too many false positives.
    """
    with _db() as conn:
        conn.execute("""
            UPDATE memories
            SET confidence = MIN(1.0, confidence + ?),
                access_count = access_count + 1,
                last_accessed = ?
            WHERE id = ?
        """, (bump, datetime.now().isoformat(), memory_id))
        conn.commit()
    log_event(memory_id, "reinforced", f"+{bump:.2f}")

The reinforce function is only half there, and I’m being upfront about that.
The logic for detecting “this conversation confirms an existing memory” is genuinely hard to get right without producing false positives, and I’d rather ship honest, incomplete code than confident code that quietly does the wrong thing. It’s in there, it works, the trigger just doesn’t exist yet.
Confidence directly influences retrieval sorting. A memory rated at importance 8 but only 0.3 confidence ranks behind a memory with importance 6 and confidence 0.9.
That’s exactly the idea. High confidence in a less important memory beats low confidence in a strong-seeming one when the question is “what does the AI actually know?”
Compression and Elevation
Nick’s consolidation agent looks for similarities across memories. What I want is to be even more aggressive: find groups of memories that are mostly repeating themselves across conversations, and replace them with one better entry.
Not “what connects these?” but “can I replace these five with one?”
In other words, you’re not grouping memories, you’re rewriting them into a cleaner version of the truth.
After a few months of working with a personal assistant, you accumulate quite a few duplicate memories. “User prefers short function names” from January. “User mentioned keeping code readable over clever” from February. “User asked to avoid one-liners in the refactor” from March.
That’s the same preference. It should be collapsed into a single memory.
# compression.py
import json
from datetime import datetime
from openai import OpenAI
from memory_store import _db, log_event

client = OpenAI()

def run_compression_pass():
    """
    Full compression cycle: find clusters, merge each, archive originals.
    Runs weekly. Calls gpt-4o for synthesis so it isn't cheap; don't
    trigger this on every session.
    """
    with _db() as conn:
        rows = conn.execute("""
            SELECT id, summary, confidence, access_count, importance
            FROM memories
            WHERE status = 'active'
            ORDER BY importance DESC, access_count DESC
            LIMIT 100
        """).fetchall()
    if len(rows) < 5:
        return
    index = "\n".join(
        f"[{r['id']}] (conf:{r['confidence']:.1f} hits:{r['access_count']}) {r['summary']}"
        for r in rows
    )
    # gpt-4o-mini for cluster identification: just grouping, not synthesising
    cluster_resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[{"role": "user", "content": f"""Review this memory index and identify groups that
could be merged into a single, more useful memory.

Merge candidates:
- Multiple memories about the same topic from different conversations
- Incremental updates that could be expressed as one current state
- Related preferences that form a clear pattern

Do NOT merge:
- Different topics that share a tag
- Memories where each individual detail matters separately

MEMORY INDEX:
{index}

JSON array of arrays. Example: [[3,7,12],[5,9]]
Return [] if nothing qualifies."""}]
    )
    try:
        clusters = json.loads(cluster_resp.choices[0].message.content.strip())
        clusters = [c for c in clusters if isinstance(c, list) and len(c) >= 2]
    except (json.JSONDecodeError, TypeError):
        return
    if not clusters:
        return
    row_map = {r["id"]: r for r in rows}
    for cluster_ids in clusters:
        valid = [mid for mid in cluster_ids if mid in row_map]
        if len(valid) >= 2:
            _compress(valid, row_map)

def _compress(memory_ids: list[int], row_map: dict):
    """Synthesise a cluster into one elevated memory, archive the rest."""
    with _db() as conn:
        ph = ",".join("?" * len(memory_ids))
        source_rows = conn.execute(
            f"SELECT id, content, importance, access_count FROM memories WHERE id IN ({ph})",
            memory_ids
        ).fetchall()
    if not source_rows:
        return
    bullets = "\n".join(f"- {r['content']}" for r in source_rows)
    avg_importance = sum(r["importance"] for r in source_rows) / len(source_rows)
    peak_access = max(r["access_count"] for r in source_rows)
    # gpt-4o for the actual merge: this is the step that decides
    # what survives, so use the better model
    synth_resp = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        messages=[{"role": "user", "content": f"""Compress these related memories into one better memory.
Be specific. Keep all important details. Don't repeat yourself.

MEMORIES:
{bullets}

JSON: {{"content": "...", "summary": "max 120 chars", "tags": ["..."]}}"""}]
    )
    try:
        merged = json.loads(synth_resp.choices[0].message.content.strip())
    except json.JSONDecodeError:
        return  # synthesis failed, leave originals alone
    with _db() as conn:
        ph = ",".join("?" * len(memory_ids))
        conn.execute(
            f"UPDATE memories SET status='compressed' WHERE id IN ({ph})",
            memory_ids
        )
        cur = conn.execute("""
            INSERT INTO memories
                (content, summary, tags, importance, confidence,
                 access_count, decay_score, status, created_at)
            VALUES (?, ?, ?, ?, 0.85, ?, 1.0, 'active', ?)
        """, (
            merged["content"],
            merged.get("summary", merged["content"][:120]),
            json.dumps(merged.get("tags", [])),
            min(10.0, avg_importance * 1.2),
            peak_access,
            datetime.now().isoformat()
        ))
        conn.commit()
        new_id = cur.lastrowid
    for mid in memory_ids:
        log_event(mid, "compressed", f"merged into #{new_id}")
    print(f"[compression] {len(memory_ids)} memories collapsed into #{new_id}")

Cluster identification uses gpt-4o-mini, since grouping is all we’re doing at that stage. Synthesis uses gpt-4o, because that’s where actual knowledge is being created from multiple sources.
Doing both with the cheap model to save a few cents felt like the wrong trade-off for something that runs once a week and makes permanent decisions.
The merged memory gets confidence=0.85. Deliberately not 1.0, since compression is still a synthesis step and can lose nuance. But 0.85 acknowledges the strong signal of multiple converging conversations.
Expiring Memories
Some things shouldn’t last forever, by design. A deadline. A temporary blocker. “Waiting to hear back from Alice about the API spec.” That’s useful context today. In three weeks, it’s just noise.
# expiry.py
import json
from datetime import datetime
from openai import OpenAI
from memory_store import _db, log_event

client = OpenAI()

def maybe_set_expiry(content: str, memory_id: int):
    """Check at write time whether this memory has a natural end date."""
    today = datetime.now().strftime("%Y-%m-%d")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[{"role": "user", "content": f"""Does this memory have a natural expiration?
MEMORY: "{content}"
TODAY: {today}

Expires if it contains:
- A deadline or specific due date
- A temporary state ("currently blocked on...", "waiting for...")
- A one-time event ("meeting Thursday", "presenting tomorrow")
- An explicit time bound ("this sprint", "until we ship v2")

If yes: {{"expires": true, "date": "YYYY-MM-DD"}}
If no: {{"expires": false}}
JSON only."""}]
    )
    try:
        parsed = json.loads(resp.choices[0].message.content.strip())
    except json.JSONDecodeError:
        return
    if parsed.get("expires") and parsed.get("date"):
        with _db() as conn:
            conn.execute(
                "UPDATE memories SET expires_at=? WHERE id=?",
                (parsed["date"], memory_id)
            )
            conn.commit()

def purge_expired():
    """Archive anything past its expiry date. Safe to call daily."""
    now = datetime.now().isoformat()
    with _db() as conn:
        expired = [
            r["id"] for r in conn.execute("""
                SELECT id FROM memories
                WHERE expires_at IS NOT NULL
                  AND expires_at < ?
                  AND status = 'active'
            """, (now,)).fetchall()
        ]
        if expired:
            conn.executemany(
                "UPDATE memories SET status='expired' WHERE id=?",
                [(mid,) for mid in expired]
            )
            conn.commit()
    # Log events after closing the write connection.
    # log_event opens its own connection; nesting them on the same
    # SQLite file can deadlock in default journal mode.
    for mid in expired:
        log_event(mid, "expired", "past expiry date")
    if expired:
        print(f"Expired {len(expired)} memories.")

The cause field that was in an earlier version of this got cut. It was satisfying to model, but nothing ever read it. Unused columns in SQLite are still columns you have to remember exist. The date string is enough.
Wiring It Together

The maintenance passes need a scheduler. Here’s the coordinator, with the threading done properly:
# lifecycle.py
import time
import threading
from datetime import datetime, timedelta
from decay import run_decay_pass
from expiry import purge_expired
from compression import run_compression_pass

class LifecycleScheduler:
    """
    Background maintenance for the memory store.
    Decay + expiry run daily. Compression runs weekly (calls gpt-4o).

    Usage:
        scheduler = LifecycleScheduler()
        scheduler.start()      # once at startup
        scheduler.force_run()  # for testing
        scheduler.stop()       # clean shutdown
    """
    def __init__(self, decay_interval_h: int = 23, compression_interval_days: int = 6):
        self._decay_interval = timedelta(hours=decay_interval_h)
        self._compress_interval = timedelta(days=compression_interval_days)
        self._last_decay = None
        self._last_compression = None
        self._stop_event = threading.Event()
        self._thread = None

    def start(self):
        if self._thread and self._thread.is_alive():
            return
        self._stop_event.clear()
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def stop(self):
        self._stop_event.set()

    def force_run(self):
        self._run(force=True)

    def _loop(self):
        while not self._stop_event.is_set():
            self._run()
            # Sleep in short increments so stop() stays responsive.
            # threading.Event().wait() in a loop creates a new Event every
            # iteration that's never set: it looks right and blocks correctly,
            # but stop() never actually wakes it up.
            for _ in range(60):
                if self._stop_event.is_set():
                    break
                time.sleep(60)

    def _run(self, force: bool = False):
        now = datetime.now()
        print(f"[lifecycle] {now.strftime('%H:%M:%S')}")
        purge_expired()
        if force or not self._last_decay or (now - self._last_decay) >= self._decay_interval:
            run_decay_pass()
            self._last_decay = now
        if force or not self._last_compression or (now - self._last_compression) >= self._compress_interval:
            run_compression_pass()
            self._last_compression = now
        print("[lifecycle] done.")

And the write path, where contradiction detection, confidence scoring, and expiry all get triggered whenever a memory is saved:
# memory_writer.py
import json
from openai import OpenAI
from memory_store import store_memory
from confidence import assess_confidence
from contradiction import check_for_contradictions
from expiry import maybe_set_expiry

client = OpenAI()

def maybe_store(user_msg: str, assistant_msg: str) -> int | None:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[{"role": "user", "content": f"""Should this conversation turn be saved to long-term memory?
USER: {user_msg}
ASSISTANT: {assistant_msg}

Save if it contains:
- user preferences or personal context
- project decisions, trade-offs made
- bugs found, fixes applied, approaches ruled out
- explicit instructions ("always...", "never...", "I prefer...")

Don't save: greetings, one-off lookups, generic back-and-forth.

If yes: {{"save": true, "content": "...", "summary": "max 100 chars", "tags": [...], "importance": 1-10}}
If no: {{"save": false}}
JSON only."""}]
    )
    try:
        decision = json.loads(resp.choices[0].message.content.strip())
    except json.JSONDecodeError:
        return None
    if not decision.get("save"):
        return None
    confidence = assess_confidence(decision["content"], user_msg, assistant_msg)
    mid = store_memory(
        content=decision["content"],
        summary=decision.get("summary"),
        tags=decision.get("tags", []),
        importance=decision.get("importance", 5),
        confidence=confidence
    )
    superseded = check_for_contradictions(decision["content"], mid)
    if superseded:
        print(f"[memory] #{mid} superseded {superseded}")
    maybe_set_expiry(decision["content"], mid)
    return mid

What Retrieval Looks Like Now
With the lifecycle running, the memory index the LLM reads on every query carries an actual signal about each memory’s health:
# retrieval.py
import json
from datetime import datetime
from openai import OpenAI
from memory_store import _db

client = OpenAI()

def get_active_memories(limit: int = 60) -> list[dict]:
    with _db() as conn:
        rows = conn.execute("""
            SELECT id, content, summary, tags, importance,
                   confidence, decay_score, access_count, created_at
            FROM memories
            WHERE status = 'active'
              AND decay_score > 0.15
            ORDER BY (importance * confidence * decay_score) DESC
            LIMIT ?
        """, (limit,)).fetchall()
    return [dict(r) for r in rows]

def retrieve_relevant_memories(query: str, top_n: int = 6) -> list[dict]:
    memories = get_active_memories()
    if not memories:
        return []
    index = "\n".join(
        f"[{m['id']}] (conf:{m['confidence']:.1f} fresh:{m['decay_score']:.1f}) {m['summary']}"
        for m in memories
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[{"role": "user", "content": f"""Pick the most relevant memories for this message.
MEMORY INDEX (conf=confidence 0-1, fresh=recency 0-1):
{index}

MESSAGE: {query}

Prefer high-conf, high-fresh memories when relevance is otherwise equal.
JSON array of IDs, max {top_n}. Return [] if nothing matches."""}]
    )
    raw = resp.choices[0].message.content.strip()
    try:
        ids = json.loads(raw)
        if not isinstance(ids, list):
            return []
    except json.JSONDecodeError:
        return []
    mem_by_id = {m["id"]: m for m in memories}
    selected = []
    now = datetime.now().isoformat()
    with _db() as conn:
        for mid in ids:
            if mid not in mem_by_id:
                continue
            conn.execute("""
                UPDATE memories
                SET access_count = access_count + 1, last_accessed = ?
                WHERE id = ?
            """, (now, mid))
            selected.append(mem_by_id[mid])
        conn.commit()
    return selected

The sort order in get_active_memories is importance * confidence * decay_score. That composite score is where all five lifecycle ideas converge into one number. A memory that’s important but poorly supported surfaces below one that’s moderately important and consistently reinforced. One that hasn’t been touched in three months competes poorly against a recent one, regardless of its original score.
That composite is a picture of each memory’s health. And that’s exactly what we want!
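To make the trade-off concrete, here’s the sort key from get_active_memories applied by hand to the examples from this article (the composite function name and the numbers are illustrative):

```python
def composite(importance: float, confidence: float, decay_score: float) -> float:
    # The ORDER BY expression from get_active_memories.
    return importance * confidence * decay_score

# Important but only inferred, still fresh:
inferred = composite(8.0, 0.3, 1.0)   # 2.4
# Moderately important, explicitly stated, still fresh:
explicit = composite(6.0, 0.9, 1.0)   # 5.4
# Once-critical memory untouched for about three months:
stale = composite(9.0, 1.0, 0.125)    # ~1.1

assert explicit > inferred > stale
```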
Is This Overkill?
No. But I thought it was, for longer than I’d like to admit.
I kept telling myself I’d add this stuff “later, when the system got bigger.” But that’s the wrong framing. It’s not about how large the system is; it’s about how long it’s been around. Three months of everyday use is more than enough.
In my case, I found myself manually fighting decay by the second month: opening the SQLite file in DB Browser, deleting rows by hand, and updating importance scores by hand.
And that’s precisely what you should never have to do. If you’re manually cleaning the system, the system isn’t really working.
The overhead is real, but it’s small. Decay and expiry are pure SQLite, milliseconds. Contradiction detection adds one gpt-4o-mini call per write, maybe 200ms. Compression calls gpt-4o but runs once a week on a handful of clusters.
Overall, the cost for a daily personal assistant is a few extra mini calls per conversation and a weekly synthesis job that probably costs less than a cup of coffee per month.
Well, it depends on your intent. If you’re building a system you’ll use for two weeks and then repurpose, forget all of this. Store-and-retrieve is enough. But if you’re working on something you intend to get to know you, which is what’s interesting here, this is non-negotiable.
The place This Truly Leaves You
Nick Lawson showed that the embedding pipeline can be optional at a personal scale, which opened up the possibility of a simpler architecture. What this article adds is the operational layer that keeps that architecture working past the first month.
There are other possible primitives for a memory lifecycle; decay, contradiction, confidence, compression, and expiry aren’t the only options, but they’re the ones I kept wishing I had while debugging my own database.
And because each of them relies on the same SQLite structure and LLM-judgment framework Nick introduced, you’re still at zero infrastructure. One local file. You can read all of it. You can trace your entire memory lifecycle through memory_events.
You can open the database and ask: why does the agent think this? What got superseded? What decayed? What got merged into what? The system’s reasoning is transparent in a way a vector index never is.
That matters more than I expected it to. Not just for debugging. For trust. An AI assistant you can audit is one you’ll actually trust, and trust is what turns a tool into something you rely on.
And that only happens when your system knows not just how to remember, but when to forget.
Before you go!
I’m building a community for developers and data scientists where I share practical tutorials, break down complex CS concepts, and drop the occasional rant about the tech industry.
If that sounds like your kind of space, join my free newsletter.