In this tutorial, we build a universal long-term memory layer for AI agents using Mem0, OpenAI models, and ChromaDB. We design a system that can extract structured memories from natural conversations, store them semantically, retrieve them intelligently, and integrate them directly into personalized agent responses. We move beyond simple chat history and implement persistent, user-scoped memory with full CRUD control, semantic search, multi-user isolation, and custom configuration. Finally, we assemble a production-ready memory-augmented agent architecture that demonstrates how modern AI systems can reason with contextual continuity rather than operate statelessly.
!pip install mem0ai openai rich chromadb -q
import os
import getpass
from datetime import datetime
print("=" * 60)
print("🔐 MEM0 Advanced Tutorial — API Key Setup")
print("=" * 60)
OPENAI_API_KEY = getpass.getpass("Enter your OpenAI API key: ")
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
print("n✅ API key set!n")
from openai import OpenAI
from mem0 import Memory
from rich.console import Console
from rich.panel import Panel
from rich.table import Table
from rich.markdown import Markdown
from rich import print as rprint
import json
console = Console()
openai_client = OpenAI()
console.rule("[bold cyan]MODULE 1: Basic Memory Setup[/bold cyan]")
memory = Memory()
print(Panel(
    "[green]✓ Memory instance created with default config[/green]\n"
    "  • LLM: gpt-4.1-nano (OpenAI)\n"
    "  • Vector Store: ChromaDB (local)\n"
    "  • Embedder: text-embedding-3-small",
    title="Memory Config", border_style="cyan"
))
We install all required dependencies and securely configure our OpenAI API key. We initialize the Mem0 Memory instance along with the OpenAI client and Rich console utilities. This establishes the foundation of our long-term memory system, using the default configuration powered by ChromaDB and OpenAI embeddings.
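The defaults reported in the panel above can also be spelled out explicitly. Below is a sketch of an equivalent configuration dict, following the same schema as the custom-config example in Module 7; the exact default collection name and storage path are assumptions, not documented Mem0 values.

```python
# Sketch: the implicit defaults of Memory(), written as an explicit config.
# Schema mirrors the custom-config example later in this tutorial; the
# collection_name and path shown here are assumed, not Mem0-documented defaults.
default_config = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4.1-nano"},
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "vector_store": {
        "provider": "chroma",
        "config": {"collection_name": "mem0", "path": "./chroma_db"},  # assumed
    },
}
# memory = Memory.from_config(default_config)  # should match Memory() if the
#                                              # assumptions above hold
```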
console.rule("[bold cyan]MODULE 2: Adding & Retrieving Memories[/bold cyan]")
USER_ID = "alice_tutorial"
print("n📝 Adding memories for user:", USER_ID)
conversations = [
[
{"role": "user", "content": "Hi! I'm Alice. I'm a software engineer who loves Python and machine learning."},
{"role": "assistant", "content": "Nice to meet you Alice! Python and ML are great areas to be in."}
],
[
{"role": "user", "content": "I prefer dark mode in all my IDEs and I use VS Code as my main editor."},
{"role": "assistant", "content": "Good to know! VS Code with dark mode is a popular combo."}
],
[
{"role": "user", "content": "I'm currently building a RAG pipeline for my company's internal docs. It's for a fintech startup."},
{"role": "assistant", "content": "That's exciting! RAG pipelines are really valuable for enterprise use cases."}
],
[
{"role": "user", "content": "I have a dog named Max and I enjoy hiking on weekends."},
{"role": "assistant", "content": "Max sounds lovely! Hiking is a great way to recharge."}
],
]
results = []
for i, convo in enumerate(conversations):
    result = memory.add(convo, user_id=USER_ID)
    extracted = result.get("results", [])
    for mem in extracted:
        results.append(mem)
    print(f"  Conversation {i+1}: {len(extracted)} memory(ies) extracted")
print(f"\n✅ Total memories stored: {len(results)}")

We simulate realistic multi-turn conversations and store them using Mem0's automatic memory-extraction pipeline. We add structured conversational data for a specific user and let the LLM extract meaningful long-term facts. We then verify how many memories were created, confirming that semantic information is successfully persisted.
console.rule("[bold cyan]MODULE 3: Semantic Search[/bold cyan]")
queries = [
"What programming languages does the user prefer?",
"What is Alice working on professionally?",
"What are Alice's hobbies?",
"What tools and IDE does Alice use?",
]
for question in queries:
search_results = reminiscence.search(question=question, user_id=USER_ID, restrict=2)
desk = Desk(title=f"🔍 Query: {query}", show_lines=True)
desk.add_column("Memory", type="white", max_width=60)
desk.add_column("Score", type="green", justify="center")
for r in search_results.get("results", []):
rating = r.get("score", "N/A")
score_str = f"{score:.4f}" if isinstance(rating, float) else str(rating)
desk.add_row(r["memory"], score_str)
console.print(desk)
print()
console.rule("[bold cyan]MODULE 4: CRUD Operations[/bold cyan]")
all_memories = reminiscence.get_all(user_id=USER_ID)
memories_list = all_memories.get("results", [])
print(f"n📚 All memories for '{USER_ID}':")
for i, mem in enumerate(memories_list):
print(f" [{i+1}] ID: {mem['id'][:8]}... → {mem['memory']}")
if memories_list:
first_id = memories_list[0]["id"]
original_text = memories_list[0]["memory"]
print(f"n✏️ Updating memory: '{original_text}'")
reminiscence.replace(memory_id=first_id, information=original_text + " (confirmed)")
up to date = reminiscence.get(memory_id=first_id)
print(f" After update: '{updated['memory']}'")We carry out semantic search queries to retrieve related recollections utilizing pure language. We reveal how Mem0 ranks saved recollections by similarity rating and returns probably the most contextually aligned data. We additionally carry out CRUD operations by itemizing, updating, and validating saved reminiscence entries.
console.rule("[bold cyan]MODULE 5: Memory-Augmented Chat[/bold cyan]")
def chat_with_memory(user_message: str, user_id: str, session_history: listing) -> str:
related = reminiscence.search(question=user_message, user_id=user_id, restrict=5)
memory_context = "n".be a part of(
f"- {r['memory']}" for r in related.get("results", [])
) or "No relevant memories found."
system_prompt = f"""You're a extremely personalised AI assistant.
You could have entry to long-term recollections about this consumer.
RELEVANT USER MEMORIES:
{memory_context}
Use these recollections to offer context-aware, personalised responses.
Be pure — do not explicitly announce that you simply're utilizing recollections."""
messages = [{"role": "system", "content": system_prompt}]
messages.lengthen(session_history[-6:])
messages.append({"role": "user", "content": user_message})
response = openai_client.chat.completions.create(
mannequin="gpt-4.1-nano-2025-04-14",
messages=messages
)
assistant_response = response.decisions[0].message.content material
alternate = [
{"role": "user", "content": user_message},
{"role": "assistant", "content": assistant_response}
]
reminiscence.add(alternate, user_id=user_id)
session_history.append({"role": "user", "content": user_message})
session_history.append({"role": "assistant", "content": assistant_response})
return assistant_response
session = []
demo_messages = [
"Can you recommend a good IDE setup for me?",
"What kind of project am I currently building at work?",
"Suggest a weekend activity I might enjoy.",
"What's a good tech stack for my current project?",
]
print("n🤖 Starting memory-augmented conversation with Alice...n")
for msg in demo_messages:
    print(Panel(f"[bold yellow]User:[/bold yellow] {msg}", border_style="yellow"))
    response = chat_with_memory(msg, USER_ID, session)
    print(Panel(f"[bold green]Assistant:[/bold green] {response}", border_style="green"))
    print()

We build a fully memory-augmented chat loop that retrieves relevant memories before generating responses. We dynamically inject personalized context into the system prompt and store each new exchange back into long-term memory. We simulate a multi-turn session to demonstrate contextual continuity and personalization in action.
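The prompt-assembly step inside `chat_with_memory` is worth isolating: retrieved records become a bulleted context block, with a fallback string when nothing matches. A standalone sketch of that formatting logic, using the same payload shape as the search results above:

```python
def build_memory_context(search_results: dict) -> str:
    """Render search results as the bulleted context block used in the system prompt."""
    return "\n".join(
        f"- {r['memory']}" for r in search_results.get("results", [])
    ) or "No relevant memories found."

hits = {"results": [{"memory": "Alice prefers dark mode"},
                    {"memory": "Alice uses VS Code"}]}
print(build_memory_context(hits))
# → - Alice prefers dark mode
#   - Alice uses VS Code
print(build_memory_context({"results": []}))  # → No relevant memories found.
```

Because an empty generator joins to `""`, which is falsy, the `or` fallback fires exactly when no memories are retrieved.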
console.rule("[bold cyan]MODULE 6: Multi-User Memory Isolation[/bold cyan]")
USER_BOB = "bob_tutorial"
bob_conversations = [
[
{"role": "user", "content": "I'm Bob, a data scientist specializing in computer vision and PyTorch."},
{"role": "assistant", "content": "Great to meet you Bob!"}
],
[
{"role": "user", "content": "I prefer Jupyter notebooks over VS Code, and I use Vim keybindings."},
{"role": "assistant", "content": "Classic setup for data science work!"}
],
]
for convo in bob_conversations:
    memory.add(convo, user_id=USER_BOB)
print("\n🔐 Testing memory isolation between Alice and Bob:\n")
test_query = "What programming tools does this user prefer?"
alice_results = memory.search(query=test_query, user_id=USER_ID, limit=3)
bob_results = memory.search(query=test_query, user_id=USER_BOB, limit=3)
print("👩 Alice's memories:")
for r in alice_results.get("results", []):
    print(f"  • {r['memory']}")
print("\n👨 Bob's memories:")
for r in bob_results.get("results", []):
    print(f"  • {r['memory']}")

We demonstrate user-level memory isolation by introducing a second user with distinct preferences. We store separate conversational data and validate that searches remain scoped to the correct user_id. We confirm that memory namespaces are isolated, ensuring secure multi-user agent deployments.
print("n✅ Memory isolation confirmed — users cannot see each other's data.")
console.rule("[bold cyan]MODULE 7: Custom Configuration[/bold cyan]")
custom_config = {
"llm": {
"provider": "openai",
"config": {
"model": "gpt-4.1-nano-2025-04-14",
"temperature": 0.1,
"max_tokens": 2000,
}
},
"embedder": {
"provider": "openai",
"config": {
"model": "text-embedding-3-small",
}
},
"vector_store": {
"provider": "chroma",
"config": {
"collection_name": "advanced_tutorial_v2",
"path": "/tmp/chroma_advanced",
}
},
"version": "v1.1"
}
custom_memory = Memory.from_config(custom_config)
print(Panel(
    "[green]✓ Custom memory instance created[/green]\n"
    "  • LLM: gpt-4.1-nano with temperature=0.1\n"
    "  • Embedder: text-embedding-3-small\n"
    "  • Vector Store: ChromaDB at /tmp/chroma_advanced\n"
    "  • Collection: advanced_tutorial_v2",
    title="Custom Config Applied", border_style="magenta"
))
custom_memory.add(
[{"role": "user", "content": "I'm a researcher studying neural plasticity and brain-computer interfaces."}],
user_id="researcher_01"
)
result = custom_memory.search("What field does this person work in?", user_id="researcher_01", limit=2)
print("\n🔍 Custom memory search result:")
for r in result.get("results", []):
    print(f"  • {r['memory']}")
console.rule("[bold cyan]MODULE 8: Memory History[/bold cyan]")
all_alice = reminiscence.get_all(user_id=USER_ID)
alice_memories = all_alice.get("results", [])
desk = Desk(title=f"📋 Full Memory Profile: {USER_ID}", show_lines=True, width=90)
desk.add_column("#", type="dim", width=3)
desk.add_column("Memory ID", type="cyan", width=12)
desk.add_column("Memory Content", type="white")
desk.add_column("Created At", type="yellow", width=12)
for i, mem in enumerate(alice_memories):
mem_id = mem["id"][:8] + "..."
created = mem.get("created_at", "N/A")
if created and created != "N/A":
attempt:
created = datetime.fromisoformat(created.exchange("Z", "+00:00")).strftime("%m/%d %H:%M")
besides:
created = str(created)[:10]
desk.add_row(str(i+1), mem_id, mem["memory"], created)
console.print(desk)
console.rule("[bold cyan]MODULE 9: Memory Deletion[/bold cyan]")
all_mems = reminiscence.get_all(user_id=USER_ID).get("results", [])
if all_mems:
last_mem = all_mems[-1]
print(f"n🗑️ Deleting memory: '{last_mem['memory']}'")
reminiscence.delete(memory_id=last_mem["id"])
updated_count = len(reminiscence.get_all(user_id=USER_ID).get("results", []))
print(f"✅ Deleted. Remaining memories for {USER_ID}: {updated_count}")
console.rule("[bold cyan]✅ TUTORIAL COMPLETE[/bold cyan]")
abstract = """
# 🎓 Mem0 Superior Tutorial Abstract
## What You Realized:
1. **Primary Setup** — Instantiate Reminiscence with default & customized configs
2. **Add Recollections** — From conversations (auto-extracted by LLM)
3. **Semantic Search** — Retrieve related recollections by pure language question
4. **CRUD Operations** — Get, Replace, Delete particular person recollections
5. **Reminiscence-Augmented Chat** — Full pipeline: retrieve → reply → retailer
6. **Multi-Consumer Isolation** — Separate reminiscence namespaces per user_id
7. **Customized Configuration** — Customized LLM, embedder, and vector retailer
8. **Reminiscence Historical past** — View full reminiscence profiles with timestamps
9. **Cleanup** — Delete particular or all recollections
## Key Ideas:
- `reminiscence.add(messages, user_id=...)`
- `reminiscence.search(question, user_id=...)`
- `reminiscence.get_all(user_id=...)`
- `reminiscence.replace(memory_id, information)`
- `reminiscence.delete(memory_id)`
- `Reminiscence.from_config(config)`
## Subsequent Steps:
- Swap ChromaDB for Qdrant, Pinecone, or Weaviate
- Use the hosted Mem0 Platform (app.mem0.ai) for manufacturing
- Combine with LangChain, CrewAI, or LangGraph brokers
- Add `agent_id` for agent-level reminiscence scoping
"""
console.print(Markdown(abstract))We create a completely customized Mem0 configuration with specific parameters for the LLM, embedder, and vector retailer. We check the customized reminiscence occasion and discover reminiscence historical past, timestamps, and structured profiling. Lastly, we reveal deletion and cleanup operations, finishing the complete lifecycle administration of long-term agent reminiscence.
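Swapping the vector store, as suggested in the Next Steps, is a config-level change: only the `vector_store` block moves. A sketch pointed at Qdrant instead of Chroma; the provider name comes from Mem0's supported backends, but the `host`/`port` keys shown here are assumptions to illustrate the shape, so check Mem0's Qdrant docs before using them:

```python
# Same schema as custom_config in Module 7, with only the vector_store swapped.
qdrant_config = {
    "llm": {"provider": "openai", "config": {"model": "gpt-4.1-nano-2025-04-14"}},
    "embedder": {"provider": "openai", "config": {"model": "text-embedding-3-small"}},
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "advanced_tutorial_v2",
            "host": "localhost",  # assumed keys; verify against Mem0's Qdrant docs
            "port": 6333,
        },
    },
}
# memory = Memory.from_config(qdrant_config)  # requires a running Qdrant instance
```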
In conclusion, we implemented a complete memory infrastructure for AI agents using Mem0 as a universal memory abstraction layer. We demonstrated how to add, retrieve, update, delete, isolate, and customize long-term memories while integrating them into a dynamic chat loop. We showed how semantic memory retrieval transforms generic assistants into context-aware systems capable of personalization and continuity across sessions. With this foundation in place, we are now equipped to extend the architecture into multi-agent systems, enterprise-grade deployments, alternative vector databases, and advanced agent frameworks, turning memory into a core capability rather than an afterthought.