Framework Adapters
Building · Q3 2026

First-class adapters for the agent frameworks you already use.

CrewAI, LangChain, LlamaIndex. Drop REM in as memory, keep your existing graph. One import, zero refactoring: the continuity layer slides in under the framework you already pay to run.

CrewAI
crew memory backend
Register REM as the shared memory for a Crew. Each agent gets its own namespace automatically, and Dream Engine consolidation runs between tasks.
pip install remlabs[crewai]
from crewai import Crew, Agent
from remlabs.crewai import REMMemory

memory = REMMemory(
    api_key="sk_live_…",
    namespace="research-crew",
)

crew = Crew(
    agents=[Agent(role="analyst")],
    memory=memory,
)
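To make the per-agent namespacing concrete, here is a minimal sketch of the scoping behavior described above. It is an illustration only, not the remlabs implementation; the NamespacedMemory class and its methods are hypothetical names.

```python
# Minimal sketch of per-agent namespace scoping over a simple key-value
# store. Illustrates the isolation guarantee, not the remlabs internals.
class NamespacedMemory:
    def __init__(self, crew_namespace: str):
        self.crew_namespace = crew_namespace
        self._store: dict[str, list[str]] = {}

    def _key(self, agent_role: str) -> str:
        # Each agent's memories live under "<crew>/<agent>".
        return f"{self.crew_namespace}/{agent_role}"

    def write(self, agent_role: str, memory: str) -> None:
        self._store.setdefault(self._key(agent_role), []).append(memory)

    def read(self, agent_role: str) -> list[str]:
        # An agent only ever reads its own namespace: no context bleed.
        return self._store.get(self._key(agent_role), [])


mem = NamespacedMemory("research-crew")
mem.write("analyst", "Q3 revenue grew 12%")
mem.write("writer", "Draft intro approved")
print(mem.read("analyst"))
```

The point of the sketch: isolation falls out of the key scheme, so adding an agent to the Crew never widens what an existing agent can see.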
LangChain
vector store + retriever
A drop-in VectorStore with hybrid retrieval, confidence scores, and automatic dream-driven re-ranking. Works with every LangChain chain and agent.
pip install remlabs[langchain]
from langchain.chains import RetrievalQA
from remlabs.langchain import REMVectorStore

store = REMVectorStore(
    api_key="sk_live_…",
    namespace="support-kb",
)

qa = RetrievalQA.from_chain_type(
    llm=llm,  # any LangChain-compatible LLM you already use
    retriever=store.as_retriever(search_kwargs={"k": 6}),
)
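A rough sketch of what hybrid retrieval means in practice: blend a lexical keyword score with a vector-similarity score, then rank on the mix. The scoring functions and the alpha weighting below are illustrative assumptions, not the REMVectorStore internals.

```python
# Sketch of hybrid retrieval: combine keyword overlap with cosine
# similarity and rank documents on the blended score.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def keyword_score(query: str, text: str) -> float:
    # Fraction of query terms that appear in the document.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_rank(query, query_vec, docs, alpha=0.5):
    # alpha blends the two signals; each doc is (text, embedding).
    scored = [
        (alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text), text)
        for text, vec in docs
    ]
    return [text for _, text in sorted(scored, reverse=True)]

docs = [
    ("reset your password in settings", [1.0, 0.0]),
    ("billing and invoices overview", [0.0, 1.0]),
]
print(hybrid_rank("password reset", [0.9, 0.1], docs))
```

Lexical overlap catches exact terms the embedding might blur; the vector side catches paraphrases the keywords miss, which is why the blend beats either signal alone on mixed workloads.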
LlamaIndex
custom index + retriever
Expose REM as a first-class BaseIndex. Ingest Documents, query with the standard retriever API, get back nodes annotated with REM confidence and lineage.
pip install remlabs[llama-index]
from llama_index.core import Document
from remlabs.llama_index import REMIndex

index = REMIndex.from_documents(
    [Document(text="…")],
    api_key="sk_live_…",
    namespace="docs",
)

qe = index.as_query_engine()
response = qe.query("How does consolidation work?")
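The confidence-and-lineage annotation can be pictured with a small sketch. The field names and memory ids below are illustrative assumptions, not the remlabs node schema.

```python
# Sketch of a retrieved node carrying a confidence score and a lineage
# trail of the source memories it was consolidated from.
from dataclasses import dataclass, field

@dataclass
class AnnotatedNode:
    text: str
    confidence: float  # 0.0-1.0: how well-supported the memory is
    lineage: list[str] = field(default_factory=list)  # source memory ids

node = AnnotatedNode(
    text="Consolidation merges related memories between tasks.",
    confidence=0.92,
    lineage=["mem_0412", "mem_0587"],
)
print(node.confidence, node.lineage)
```

Lineage matters downstream: when an answer looks wrong, you can trace the node back to the raw memories it was synthesized from instead of debugging a bare embedding hit.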
Why REM over default memory
Three reasons your agent graph gets smarter.

The defaults (vector store, in-memory list, basic summary buffer) are fine for demos. REM is what you want the moment a real user keeps talking to your agent past week one.

01
Consolidation, not just storage
Default memory hoards tokens and re-indexes them. REM runs nine Dream Engine strategies — synthesize, pattern, contradiction, compress, associate, validate, evolve, forecast, reflect — so your agent remembers less and knows more.
02
Multi-agent namespaces
Every agent in a Crew, every chain in a pipeline, every tenant in your SaaS gets its own scoped memory with RBAC and zero context bleed. Default framework memory doesn't ship with this at all.
03
+15.33pp on SWE-bench Lite
Cross-session error memory lifts SWE-bench Lite by +15.33pp strict (n=150, 95% CI [+9.33, +22.00], p<0.05, paired bootstrap). A default vector store will not get you there, and REM swaps in under the same retriever API your framework already uses.
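For readers who want to sanity-check a paired-bootstrap interval like the one above, the procedure looks roughly like this: resample task pairs (with and without memory) with replacement and read percentiles off the resampled deltas. The data below is synthetic for illustration; it is not the SWE-bench run.

```python
# Sketch of a paired bootstrap CI for a pass-rate delta. Each entry pairs
# one task's outcome with memory and without (1 = solved). Synthetic data.
import random

random.seed(0)
pairs = [(1, 1), (1, 0), (1, 0), (0, 0), (1, 1), (1, 0), (0, 0), (1, 1)]

def delta_pp(sample):
    # Percentage-point difference in solve rate, with-memory minus baseline.
    with_mem = sum(w for w, _ in sample) / len(sample)
    baseline = sum(b for _, b in sample) / len(sample)
    return 100 * (with_mem - baseline)

# Resample whole task pairs with replacement, keeping each pair intact,
# so per-task correlation between the two conditions is preserved.
deltas = sorted(
    delta_pp([random.choice(pairs) for _ in pairs]) for _ in range(10_000)
)
lo, hi = deltas[249], deltas[9749]  # ~2.5th and ~97.5th percentiles
print(f"point estimate {delta_pp(pairs):.2f}pp, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Resampling pairs rather than individual runs is what makes the bootstrap "paired": it keeps each task's two outcomes together, which tightens the interval when the same tasks are hard in both conditions.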
Commitment

All three adapters ship together this quarter. This page flips from Building to Shipped the same day the CrewAI, LangChain, and LlamaIndex extras land on PyPI. Track progress on the roadmap, read the commits, or open a feature request in Discord.