This guide walks you through your first end-to-end memory operations using a local Postgres instance as the backend. By the end you will have added a memory, searched it, pulled prompt-ready context, updated it, and deleted it — all through the standard OMP API.
Documentation Index
Fetch the complete documentation index at: https://docs.openmem.blog/llms.txt
Use this file to discover all available pages before exploring further.
Prerequisites
- Python 3.11 or later
- Docker (used to run the Postgres + pgvector container in step 1)
Start a Postgres instance
Pull and run the official pgvector image. The container listens on port 5432 and creates a default postgres database. The --rm flag removes the container automatically when you stop it.
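The original listing is not reproduced here; a typical invocation looks like the sketch below. The image tag, container name, and password are assumptions — pick your own values.

```shell
# Run Postgres with the pgvector extension in the background.
# --rm removes the container when it stops; -p publishes the default port.
docker run --rm -d \
  --name omp-quickstart-pg \
  -p 5432:5432 \
  -e POSTGRES_PASSWORD=postgres \
  pgvector/pgvector:pg17
```

Stop it later with `docker stop omp-quickstart-pg`; because of --rm, no cleanup is needed afterwards.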
Install the SDK
Set the connection URL
Export the connection string as PG_URL: your code can then read PG_URL from the environment and pass it as the url argument, or you can supply the string directly in code.
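Assuming the container was started with the default postgres user and database and the password postgres, the connection string would be:

```shell
# Connection string for the local container; adjust credentials to match yours.
export PG_URL="postgresql://postgres:postgres@localhost:5432/postgres"
```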
Save the following as quickstart.py and run it with python quickstart.py:
Each call goes through the same Memory facade. Swapping the provider= argument is the only change needed to point the same code at a different backend.
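The quickstart listing itself is not reproduced here. The sketch below is a plausible shape for quickstart.py, inferred from the operations this guide names (add, search, context, update, delete) and the provider=, url, and ctx.text identifiers mentioned in the text. The import path and every method name are assumptions — check the API reference before running it.

```python
# Hypothetical sketch: import path and method names are assumptions,
# not confirmed OMP API.
import os

from openmem import Memory  # assumed SDK entry point

# provider= selects the backend; url is the Postgres connection string.
mem = Memory(provider="postgres", url=os.environ["PG_URL"])

# Add a memory and keep its id for the later operations.
record = mem.add("Alice prefers dark roast coffee.")

# Search the stored memories.
for hit in mem.search("What coffee does Alice like?"):
    print(hit)

# Pull a prompt-ready context block; ctx.text is the injectable string.
ctx = mem.context("coffee preferences")
print(ctx.text)

# Update, then delete, the memory.
mem.update(record.id, "Alice now prefers light roast coffee.")
mem.delete(record.id)
```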
What’s next
- Memory model — understand the fields on a memory record, how scopes work, and what a context block contains.
- Switch providers — use the same code against Mem0, Supermemory, or Letta with one line changed.
- LLM integration — inject ctx.text into an OpenAI or Anthropic prompt.