

AsyncMemory

AsyncMemory is the async/await OMP client. It mirrors Memory exactly — same constructor arguments, same method signatures — but every verb is a coroutine that you await. It is designed for use in asyncio-based applications such as FastAPI, LangChain async pipelines, or any async agent framework.

Installation

The async client requires the [async] extras, which pull in asyncpg and httpx:
pip install 'openmem[async]'

Import

from openmem import AsyncMemory
Importing openmem itself does not load any async dependencies. AsyncMemory is only resolved when you import it. If asyncpg is not installed, the import raises a clear ImportError with the exact install instruction to run.
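This kind of lazy resolution is typically done with a module-level __getattr__ (PEP 562). The sketch below is illustrative only, not the actual openmem source; it builds a stand-in module, and the deliberately missing module name _missing_async_dep plays the role of an absent asyncpg:

```python
import sys
import types

# Illustrative sketch of lazy attribute resolution (PEP 562 module
# __getattr__); NOT the actual openmem implementation. The module name
# "_missing_async_dep" is a deliberately absent stand-in for asyncpg.
fake_pkg = types.ModuleType("fake_openmem")

def _lazy_getattr(name):
    if name == "AsyncMemory":
        try:
            import _missing_async_dep  # noqa: F401
        except ImportError as exc:
            raise ImportError(
                "AsyncMemory requires the async extras: "
                "pip install 'openmem[async]'"
            ) from exc
        # A real package would import and return the class here.
    raise AttributeError(name)

fake_pkg.__getattr__ = _lazy_getattr
sys.modules["fake_openmem"] = fake_pkg

try:
    from fake_openmem import AsyncMemory  # noqa: F401
except ImportError as e:
    print(e)
```

Because the dependency check only runs inside __getattr__, importing the package itself stays cheap, and the error surfaces only when AsyncMemory is actually requested.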

Constructor

The constructor is identical to Memory:
AsyncMemory(provider="postgres", **config)
All provider-specific **config arguments are the same as for Memory. One additional keyword argument is available for async-only tuning:
executor_max_workers (number)
Maximum number of worker threads in the ThreadPoolExecutor used for the mem0, supermemory, and letta providers. Defaults to the Python default, min(32, os.cpu_count() + 4). Ignored by native async providers (postgres, passthrough).
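The quoted Python default is the formula ThreadPoolExecutor itself uses (Python 3.8+) when no max_workers is given, so you can compute the effective default for your machine directly:

```python
import os

# ThreadPoolExecutor's default worker count (Python 3.8+); this is the
# value in effect when executor_max_workers is not passed.
default_workers = min(32, (os.cpu_count() or 1) + 4)
print(default_workers)
```

Raise executor_max_workers only if you expect more concurrent calls to a thread-wrapped provider than this default allows.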

The preferred way to use AsyncMemory is as an async context manager. This ensures connection pools and HTTP clients are cleanly shut down when you are done:
async with AsyncMemory(provider="postgres", url="postgresql://localhost/omp") as mem:
    rec = await mem.add(content="user prefers dark mode", user_id="u1")
    results = await mem.search("UI settings", user_id="u1")
You can also instantiate it manually and call close() explicitly:
mem = AsyncMemory(provider="postgres", url="postgresql://localhost/omp")
try:
    rec = await mem.add(content="user prefers dark mode", user_id="u1")
finally:
    await mem.close()
Prefer async with over manual close(): the context manager is exception-safe, and close() is idempotent, so calling it more than once is harmless.
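Those two guarantees can be sketched as a minimal async context manager with an idempotent close(). This is an illustrative pattern, not the actual AsyncMemory implementation:

```python
import asyncio

class PoolClient:
    """Minimal sketch of an idempotent, exception-safe async client;
    illustrative only, not the actual AsyncMemory implementation."""

    def __init__(self):
        self.closed = False
        self.close_calls = 0

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        await self.close()

    async def close(self):
        self.close_calls += 1
        if self.closed:
            return          # idempotent: repeated calls are no-ops
        self.closed = True
        # ... shut down connection pools / HTTP clients here ...

async def demo():
    async with PoolClient() as client:
        pass                # __aexit__ closes even if this body raises
    await client.close()    # safe: close() is idempotent
    return client

client = asyncio.run(demo())
print(client.closed, client.close_calls)  # True 2
```

Because __aexit__ runs even when the body raises, resources are released on both the success and failure paths.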

Methods

All methods on AsyncMemory are coroutines with signatures identical to their Memory counterparts. Every call must be awaited.
| Sync (Memory) | Async (AsyncMemory) |
| --- | --- |
| mem.add(...) | await mem.add(...) |
| mem.search(...) | await mem.search(...) |
| mem.get(id) | await mem.get(id) |
| mem.update(id, ...) | await mem.update(id, ...) |
| mem.delete(id) | await mem.delete(id) |
| mem.list(user_id, ...) | await mem.list(user_id, ...) |
| mem.context(query, ...) | await mem.context(query, ...) |
| mem.audit(user_id, ...) | await mem.audit(user_id, ...) |
| mem.capabilities() | await mem.capabilities() |
For full parameter documentation for each method, refer to the Memory class reference. The signatures and return types are identical.

Provider async implementation

Different providers use different strategies to deliver async I/O:
| Provider | Async backend | Cancellation |
| --- | --- | --- |
| postgres | asyncpg (native async) | within 50 ms |
| passthrough | httpx async (native async) | within 50 ms |
| mem0 | ThreadPoolExecutor wrap | returns to awaiter immediately |
| supermemory | ThreadPoolExecutor wrap | returns to awaiter immediately |
| letta | ThreadPoolExecutor wrap | returns to awaiter immediately |
For mem0, supermemory, and letta, the SDK wraps the synchronous adapter in a ThreadPoolExecutor. These providers do not support true async cancellation — asyncio.CancelledError returns control to the awaiter immediately, but the background thread may continue running until the network request completes.
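This cancellation behavior is a general property of loop.run_in_executor and can be reproduced with plain asyncio: cancelling the returned future hands control back to the awaiter immediately, while the worker thread runs to completion. In this self-contained sketch, blocking_request stands in for a synchronous SDK network call:

```python
import asyncio
import threading
import time

def blocking_request(finished: threading.Event):
    # Stands in for a synchronous SDK call (e.g. a network request).
    time.sleep(0.2)
    finished.set()

async def demo():
    finished = threading.Event()
    loop = asyncio.get_running_loop()
    fut = loop.run_in_executor(None, blocking_request, finished)
    await asyncio.sleep(0.05)   # the thread is now mid-"request"
    fut.cancel()                # cancels the awaitable, not the thread
    try:
        await fut
    except asyncio.CancelledError:
        pass                    # control returns to the awaiter at once
    returned_early = not finished.is_set()
    finished.wait(timeout=2)    # ...but the thread runs to completion
    return returned_early, finished.is_set()

early, thread_completed = asyncio.run(demo())
print(early, thread_completed)
```

The practical consequence: cancelling a thread-wrapped provider call does not abort the underlying network request, so avoid relying on cancellation to enforce hard deadlines with these providers.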

Cross-loop safety

AsyncMemory is bound to the event loop on the first verb call (or __aenter__). If you attempt to use the same instance on a different event loop, it raises:
RuntimeError: AsyncMemory is bound to a different event loop
Do not share an AsyncMemory instance across threads or event loops. Create one instance per event loop. This is enforced at runtime; the error is raised before any backend call is made.
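The binding rule can be sketched as follows. LoopBoundClient is a hypothetical illustration, not the actual AsyncMemory source: it records the running loop on the first verb call and raises on any other loop before touching a backend:

```python
import asyncio

class LoopBoundClient:
    """Sketch of first-use event-loop binding; illustrative only,
    not the actual AsyncMemory implementation."""

    def __init__(self):
        self._loop = None

    def _check_loop(self):
        loop = asyncio.get_running_loop()
        if self._loop is None:
            self._loop = loop      # bind on the first verb call
        elif self._loop is not loop:
            # Raised before any backend call is attempted.
            raise RuntimeError(
                "AsyncMemory is bound to a different event loop"
            )

    async def add(self, **kwargs):
        self._check_loop()
        # ... perform the backend call here ...

client = LoopBoundClient()
asyncio.run(client.add())          # binds the instance to this loop

try:
    asyncio.run(client.add())      # each asyncio.run creates a new loop
except RuntimeError as e:
    print(e)
```

Each asyncio.run call creates a fresh event loop, which is why reusing one instance across multiple asyncio.run calls (or across threads) fails fast.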

Full example

import asyncio
from openmem import AsyncMemory

async def main():
    async with AsyncMemory(provider="postgres", url="postgresql://localhost/omp") as mem:
        # Check provider capabilities
        caps = await mem.capabilities()
        print(f"Provider: {caps.provider}, OMP {caps.omp_version}")

        # Add a memory
        record = await mem.add(
            content="User prefers pnpm over npm",
            user_id="u1",
            scope="coding/preferences",
            tags=["tooling", "nodejs"],
            source={"app": "cursor", "type": "explicit"},
            confidence=0.95,
        )
        print(f"Stored: {record.id}")

        # Semantic search
        results = await mem.search(
            query="package manager preferences",
            user_id="u1",
            scope="coding/preferences",
            limit=5,
        )
        for r in results:
            print(f"{r.score:.2f}  {r.memory.content}")

        # Get prompt-ready context
        ctx = await mem.context(
            query="set up a new Node project",
            user_id="u1",
            token_budget=400,
        )
        print(ctx.text)

        # Update
        updated = await mem.update(
            record.id,
            content="User prefers bun for new projects",
            supersedes=[record.id],
        )

        # Paginated list
        page = await mem.list(user_id="u1", scope="coding/preferences", limit=20)
        for m in page.items:
            print(m.id, m.content)

        # Audit log (if supported)
        if caps.features.supports_audit:
            entries = await mem.audit(user_id="u1", limit=10)
            for e in entries:
                print(e.action, e.memory_id)

        # Delete
        await mem.delete(updated.id)

asyncio.run(main())

FastAPI integration example

from contextlib import asynccontextmanager
from fastapi import FastAPI
from openmem import AsyncMemory

mem: AsyncMemory

@asynccontextmanager
async def lifespan(app: FastAPI):
    global mem
    mem = AsyncMemory(provider="postgres", url="postgresql://localhost/omp")
    yield
    await mem.close()

app = FastAPI(lifespan=lifespan)

@app.post("/memories")
async def add_memory(content: str, user_id: str):
    record = await mem.add(content=content, user_id=user_id)
    return {"id": record.id}
In FastAPI or any framework with a single long-running event loop, create AsyncMemory once at startup using a lifespan handler and reuse it across requests.