
This guide walks you through your first end-to-end memory operations using a local Postgres instance as the backend. By the end you will have added a memory, searched it, pulled prompt-ready context, updated it, and deleted it — all through the standard OMP API.
Prerequisites
  • Python 3.11 or later
  • Docker (used to run the Postgres + pgvector container in step 1)
Step 1: Start a Postgres instance

Pull and run the official pgvector image:
docker run --rm -d -p 5432:5432 -e POSTGRES_PASSWORD=postgres pgvector/pgvector:pg16
The container listens on port 5432 and creates a default postgres database. The --rm flag removes the container automatically when you stop it.
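If you want to confirm the container is accepting connections before moving on, a quick standard-library check works (this is illustrative only and not part of the openmem SDK):

```python
import socket

def postgres_reachable(host="localhost", port=5432, timeout=2.0):
    """Return True if something is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS failures.
        return False

print("postgres reachable:", postgres_reachable())
```

If this prints False, check that the container started and that nothing else is bound to port 5432.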
Step 2: Install the SDK

pip install openmem
The core package includes the Postgres adapter and all required dependencies. See Installation for extras that enable other providers.
Step 3: Set the connection URL

export PG_URL="postgresql://postgres:postgres@localhost:5432/postgres"
The script in the next step reads PG_URL from the environment and passes it as the url argument. You can also supply the connection string directly in code.
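If you are unsure the URL is well-formed, you can sanity-check it with the standard library before handing it to the SDK (again, just a convenience, not an SDK feature):

```python
import os
from urllib.parse import urlsplit

url = os.environ.get(
    "PG_URL", "postgresql://postgres:postgres@localhost:5432/postgres"
)
parts = urlsplit(url)

# A libpq-style URL carries a scheme, host, port, and database name.
print(parts.scheme)                # e.g. postgresql
print(parts.hostname, parts.port)  # e.g. localhost 5432
print(parts.path.lstrip("/"))      # e.g. postgres (the database name)
```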
Step 4: Run your first memory operations

Save the following as quickstart.py and run it with python quickstart.py:
import os
from openmem import Memory

url = os.environ.get(
    "PG_URL", "postgresql://postgres:postgres@localhost:5432/postgres"
)
mem = Memory(provider="postgres", url=url)

# Add a memory
m = mem.add(
    content="User prefers pnpm over npm",
    user_id="kek",
    scope="coding/preferences",
    tags=["tooling", "nodejs"],
)
print(f"added: {m.id}")

# Search
results = mem.search(
    query="package manager preferences",
    user_id="kek",
    scope="coding/*",
    limit=5,
)
for r in results:
    print(f"  {r.score:.3f}  {r.memory.content}")

# Get prompt-ready context
ctx = mem.context(
    query="set up a new node project",
    user_id="kek",
    token_budget=500,
)
print(f"\ncontext ({ctx.token_count} tok):\n{ctx.text}")

# Update / supersede
updated = mem.update(
    m.id, content="User prefers bun for new projects", supersedes=[m.id]
)
print(f"\nsuperseded: {updated.id} supersedes={updated.supersedes}")

# Delete
mem.delete(updated.id)
print("deleted.")
Each call goes through the same Memory facade. Swapping the provider= argument is the only change needed to point the same code at a different backend.
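That facade idea can be pictured in a few lines. The sketch below is an illustration of the design, not the SDK's actual internals; the adapter classes and registry are invented for the example:

```python
class PostgresAdapter:
    """Toy stand-in for a real backend adapter."""
    def add(self, content, **kwargs):
        return f"postgres stored: {content}"

class Mem0Adapter:
    def add(self, content, **kwargs):
        return f"mem0 stored: {content}"

ADAPTERS = {"postgres": PostgresAdapter, "mem0": Mem0Adapter}

class MemoryFacade:
    def __init__(self, provider, **kwargs):
        # The provider string is resolved to an adapter exactly once,
        # at construction time; callers never touch the adapter directly.
        self._backend = ADAPTERS[provider](**kwargs)

    def add(self, content, **kwargs):
        # Every verb delegates to the chosen backend unchanged.
        return self._backend.add(content, **kwargs)

print(MemoryFacade(provider="postgres").add("hello"))
```

Because callers only ever see the facade, changing `provider=` swaps the backend without touching the rest of the code.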
Step 5: Verify provider capabilities

Add this at the end of the script to inspect what the backend supports:
caps = mem.capabilities()
print(f"\nprovider={caps.provider} verbs={caps.verbs}")
if caps.features.graph_queries:
    print("graph queries supported")
The capabilities() call is cached per Memory instance — it only hits the backend once. Use it to degrade gracefully when a feature is not available on your chosen provider.
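One way to picture that per-instance cache, with a fake backend standing in for the real provider (a sketch, not the library's code):

```python
class FakeBackend:
    """Counts how many times capabilities are actually fetched."""
    def __init__(self):
        self.calls = 0

    def fetch_capabilities(self):
        self.calls += 1
        return {"provider": "postgres", "graph_queries": False}

class Client:
    def __init__(self, backend):
        self._backend = backend
        self._caps = None

    def capabilities(self):
        if self._caps is None:  # first call hits the backend...
            self._caps = self._backend.fetch_capabilities()
        return self._caps       # ...later calls reuse the cached result

backend = FakeBackend()
client = Client(backend)
client.capabilities()
client.capabilities()
print(backend.calls)  # 1: the backend was queried only once

# Graceful degradation: branch on the advertised feature set.
if not client.capabilities()["graph_queries"]:
    print("falling back to plain vector search")
```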

What’s next

  • Memory model — understand the fields on a memory record, how scopes work, and what a context block contains.
  • Switch providers — use the same code against Mem0, Supermemory, or Letta with one line changed.
  • LLM integration — inject ctx.text into an OpenAI or Anthropic prompt.