Open Memory Protocol (OMP) gives your AI application one consistent API for storing, searching, and retrieving memory — regardless of which backend powers it. Write your code once against Memory.add(), Memory.search(), and Memory.context(), then swap between Postgres, Mem0, Supermemory, or Letta with a single line change.
Documentation Index
Fetch the complete documentation index at: https://docs.openmem.blog/llms.txt
Use this file to discover all available pages before exploring further.
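To make the "one consistent API" idea concrete, here is a minimal, self-contained sketch of the three calls named on this page. The method names come from this page, but the bodies are a naive in-memory stand-in written for illustration — not the real openmem SDK, which delegates to a pluggable backend.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Illustrative stand-in for the OMP Memory facade (not the real SDK)."""
    _items: list = field(default_factory=list)

    def add(self, text: str, **metadata) -> int:
        # Store the memory and return its id.
        self._items.append({"id": len(self._items), "text": text, **metadata})
        return self._items[-1]["id"]

    def search(self, query: str, limit: int = 5) -> list:
        # Naive keyword match; a real backend would use vector similarity.
        hits = [m for m in self._items if query.lower() in m["text"].lower()]
        return hits[:limit]

    def context(self, query: str) -> str:
        # Render ranked hits as a prompt-ready context block.
        return "\n".join(f"[{m['id']}] {m['text']}" for m in self.search(query))

mem = Memory()
mem.add("User prefers dark mode", scope="user:42")
mem.add("User's favorite language is Python", scope="user:42")
print(mem.context("language"))
```

The point of the facade is that application code only ever touches these three methods; the storage behind them can change freely.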
Quick Start
Run your first memory operation in under 5 minutes using the Postgres backend.
Installation
Install the openmem package with the extras for your chosen provider.
Core Concepts
Understand memories, scopes, providers, and context blocks.
API Reference
Full HTTP endpoint reference with request and response schemas.
How it works
OMP sits between your application and any memory backend. You call the same Python methods (or HTTP routes) regardless of which provider you choose. The SDK automatically detects whether a provider speaks OMP natively and selects the right adapter for you.
Supported providers
| Provider | Status |
|---|---|
| Postgres + pgvector | Ready |
| Mem0 | Available |
| Supermemory | Available |
| Letta | Available |
| Any native OMP server | Passthrough |
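One plausible way to implement the adapter selection described above is a registry keyed by connection-string scheme, with unknown schemes treated as native OMP servers that need no translation layer. This is a hypothetical sketch: the scheme names and adapter class names below are assumptions for illustration, not openmem's actual internals.

```python
# Hypothetical adapter registry; scheme and class names are illustrative.
ADAPTERS = {
    "postgres": "PgVectorAdapter",
    "mem0": "Mem0Adapter",
    "supermemory": "SupermemoryAdapter",
    "letta": "LettaAdapter",
}

def select_adapter(url: str) -> str:
    scheme = url.split("://", 1)[0]
    # Anything not in the registry is assumed to speak OMP natively:
    # no translation layer, requests pass straight through.
    return ADAPTERS.get(scheme, "OMPPassthrough")

print(select_adapter("postgres://localhost/memdb"))  # PgVectorAdapter
print(select_adapter("https://omp.example.com"))     # OMPPassthrough
```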
Switch providers
Learn how to change backends with zero application code changes.
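A common way to get "zero application code changes" is to read the backend from configuration rather than hard-coding it. The environment variable name below (OMP_PROVIDER_URL) and the default value are assumptions for illustration, not documented openmem configuration.

```python
import os

def provider_url() -> str:
    # Hypothetical: swapping providers becomes a deployment change,
    # not a code change. OMP_PROVIDER_URL is an assumed variable name.
    return os.environ.get("OMP_PROVIDER_URL", "postgres://localhost/memdb")

os.environ["OMP_PROVIDER_URL"] = "mem0://api.mem0.example"
print(provider_url())
```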
Async usage
Use AsyncMemory for non-blocking memory operations in async apps.
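The async variant can be sketched with standard asyncio primitives. AsyncMemory is named on this page, but the implementation below is a self-contained stand-in (an in-memory list guarded by a lock), not the real class, which would await network calls to the backend.

```python
import asyncio

class AsyncMemory:
    """Illustrative async stand-in; not the real openmem AsyncMemory."""
    def __init__(self):
        self._items = []
        self._lock = asyncio.Lock()

    async def add(self, text: str) -> int:
        async with self._lock:
            self._items.append(text)
            return len(self._items) - 1

    async def search(self, query: str) -> list:
        async with self._lock:
            return [t for t in self._items if query.lower() in t.lower()]

async def main():
    mem = AsyncMemory()
    # Writes run concurrently without blocking the event loop.
    await asyncio.gather(mem.add("likes tea"), mem.add("likes hiking"))
    print(await mem.search("likes"))

asyncio.run(main())
```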
LLM integration
Inject ranked, citation-tagged memory into any LLM prompt.
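"Citation-tagged" injection can be sketched as formatting ranked memories with stable tags the model can cite back. The tag format [mem:N] and the prompt layout below are assumptions for illustration, not the format openmem actually emits.

```python
# Hypothetical sketch: prepend ranked memories, tagged for citation,
# to an LLM prompt. The [mem:N] tag format is an assumption.
def build_prompt(question: str, memories: list[str]) -> str:
    cited = "\n".join(f"[mem:{i}] {m}" for i, m in enumerate(memories))
    return (
        "Relevant memories (cite with their [mem:N] tags):\n"
        f"{cited}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    "What does the user drink in the morning?",
    ["User drinks black coffee", "User skips breakfast"],
)
print(prompt)
```

Stable tags let you trace a model's answer back to the specific memory it relied on.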
HTTP server
Run omp-server to expose OMP over HTTP for any language or framework.
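To show the shape of an HTTP memory surface, here is a self-contained toy server using only the Python standard library. The routes (/memories, /memories/search) and payloads are assumptions for illustration — consult the API Reference above for omp-server's documented endpoints.

```python
import json
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy in-memory store backing the hypothetical routes below.
ITEMS: list[str] = []

class Handler(BaseHTTPRequestHandler):
    def _send(self, payload, status=200):
        body = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        if self.path == "/memories":
            length = int(self.headers["Content-Length"])
            data = json.loads(self.rfile.read(length))
            ITEMS.append(data["text"])
            self._send({"id": len(ITEMS) - 1}, status=201)
        else:
            self._send({"error": "not found"}, status=404)

    def do_GET(self):
        if self.path.startswith("/memories/search?q="):
            q = urllib.parse.unquote(self.path.split("?q=", 1)[1])
            self._send([t for t in ITEMS if q.lower() in t.lower()])
        else:
            self._send({"error": "not found"}, status=404)

    def log_message(self, *args):  # keep demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

req = urllib.request.Request(
    f"{base}/memories",
    data=json.dumps({"text": "user speaks French"}).encode(),
    method="POST",
)
with urllib.request.urlopen(req) as r:
    print(r.status, json.loads(r.read()))

with urllib.request.urlopen(f"{base}/memories/search?q=french") as r:
    print(json.loads(r.read()))

server.shutdown()
```

Because the protocol is plain HTTP plus JSON, any language or framework with an HTTP client can read and write memories.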