
AsyncMemory gives you the full OMP verb set — add, search, context, list, update, delete, audit, and capabilities — as coroutines. It is a drop-in async counterpart to Memory: method names, parameters, and error semantics are identical; the only difference is that every call is awaitable. Use AsyncMemory in any application that runs an event loop, including FastAPI, aiohttp, and scripts driven by asyncio.run.

Install

AsyncMemory is not included in the default openmem install. Add the [async] extra:
pip install 'openmem[async]'

Basic usage

Use async with to open a connection, run your operations, and release resources automatically when the block exits.
import asyncio
from openmem import AsyncMemory

async def main():
    async with AsyncMemory(provider="postgres",
                           url="postgresql://postgres:postgres@localhost:5432/postgres") as mem:
        rec = await mem.add(content="user prefers dark mode", user_id="u1")
        hits = await mem.search("dark mode", user_id="u1")
        print(hits[0].memory.content)

asyncio.run(main())

Context manager vs. manual lifecycle

async with AsyncMemory(provider="postgres", url="postgresql://localhost/omp") as mem:
    await mem.add(content="hello", user_id="u1")
# Resources are released automatically here
The async with form is preferred. It guarantees close() is called even when an exception occurs, and it binds the instance to the current event loop at entry time, which makes cross-loop misuse detectable early.
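For completeness, here is the manual-lifecycle form the context manager replaces. The stand-in class below mimics the assumed close() contract so the sketch is self-contained; with the real SDK you would construct AsyncMemory(provider=..., url=...) instead:

```python
import asyncio

class FakeAsyncMemory:
    # Self-contained stand-in for AsyncMemory; the real class takes
    # provider/url arguments, and close() releases pooled connections.
    def __init__(self):
        self.closed = False

    async def add(self, content, user_id):
        return {"content": content, "user_id": user_id}

    async def close(self):
        self.closed = True

async def main():
    mem = FakeAsyncMemory()
    try:
        await mem.add(content="hello", user_id="u1")
    finally:
        await mem.close()  # the async with form guarantees this call on exit
    return mem.closed

closed = asyncio.run(main())
```

The try/finally is what async with gives you for free; forgetting it leaks a pooled connection whenever an operation raises.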

Provider async implementation

Different providers use different async strategies under the hood. The behavior visible to your application is the same, but cancellation semantics vary.
Provider                         | Async implementation
---------------------------------|---------------------------
postgres                         | Native asyncpg client
passthrough (native OMP server)  | Native httpx async client
mem0 / supermemory / letta       | Thread pool wrapper

Cancellation behavior

task = asyncio.create_task(mem.search("slow query", user_id="u1"))
await asyncio.sleep(0.05)
task.cancel()
try:
    await task
except asyncio.CancelledError:
    pass  # Cancellation propagates within 50 ms; pool connection is reclaimed
For Postgres and passthrough, cancellation propagates within 50 ms and the pooled connection is reclaimed. For mem0, Supermemory, and Letta, the event loop returns to the awaiter immediately, but the underlying thread keeps running until the HTTP request completes, so the backend call may still finish and leave observable side effects (for example, a cancelled add can still persist the memory).
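The thread-pool caveat is general asyncio behavior, not something specific to openmem. A pure-stdlib sketch, with a blocking function standing in for the provider's HTTP request, shows the awaiter being released while the worker thread runs to completion:

```python
import asyncio
import threading
import time

done = threading.Event()

def blocking_http_call():
    # Stands in for the blocking HTTP request a thread-pool
    # provider (mem0 / Supermemory / Letta) performs under the hood.
    time.sleep(0.2)
    done.set()

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.run_in_executor(None, blocking_http_call)
    await asyncio.sleep(0.05)
    fut.cancel()                       # the awaiter is released immediately...
    try:
        await fut
    except asyncio.CancelledError:
        pass
    return not done.is_set()           # ...but the worker thread is still running

still_running = asyncio.run(main())
done.wait(timeout=2)                   # the "request" completes anyway
```

Cancelling the asyncio-side future never interrupts the thread; there is no safe way to kill a running thread, so the SDK lets the request finish in the background.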

FastAPI integration

AsyncMemory is a natural fit for FastAPI dependency injection. Create a new instance per request using a dependency so each request gets a properly scoped connection.
from fastapi import FastAPI, Depends
from openmem import AsyncMemory

app = FastAPI()

async def get_mem():
    async with AsyncMemory(provider="postgres", url="postgresql://localhost/omp") as mem:
        yield mem

@app.post("/remember")
async def remember(content: str, user_id: str, mem: AsyncMemory = Depends(get_mem)):
    record = await mem.add(content=content, user_id=user_id)
    return {"id": record.id}
For higher throughput, consider constructing a single AsyncMemory instance in a lifespan context and sharing it across requests — Postgres and passthrough manage an internal connection pool, so concurrent requests are safe.

Event loop binding

AsyncMemory is bound to the event loop it was first used on. Do not share a single instance across multiple threads or across multiple asyncio.run() calls. If you do, the SDK raises RuntimeError: AsyncMemory is bound to a different event loop before any backend call is made. Construct a new AsyncMemory instance for each loop.
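The guard described above can be sketched in a few lines. This is a hypothetical reconstruction of the check, not the SDK's actual internals: the instance records the running loop on first use and raises before touching the backend if a later call arrives on a different loop:

```python
import asyncio

class LoopBoundClient:
    # Hypothetical sketch of the loop-binding guard; the real
    # AsyncMemory performs an equivalent check before each call.
    def __init__(self):
        self._loop = None

    def _check_loop(self):
        loop = asyncio.get_running_loop()
        if self._loop is None:
            self._loop = loop      # bind on first use
        elif self._loop is not loop:
            raise RuntimeError(
                "AsyncMemory is bound to a different event loop")

    async def search(self, query):
        self._check_loop()
        return []                  # backend call would go here

client = LoopBoundClient()
asyncio.run(client.search("q"))    # binds the instance to the first loop

try:
    asyncio.run(client.search("q"))  # second asyncio.run() -> new loop
except RuntimeError as exc:
    message = str(exc)
```

Failing fast like this turns a subtle cross-loop bug (a connection created on a dead loop) into an immediate, descriptive error.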