The continuity layer vs. memory tools.
Most tools store vectors. REM is the substrate that persists, evolves, federates, and reacts across every model. Honest comparison, side by side.
We built the 25-dimension audit you're about to read. Rivals lead individual rows (MemPalace on raw retrieval, Hindsight on TEMPR, Mem0 on LOCOMO). Nobody covers all four pillars except REM.
Top-3 on raw retrieval, alone on the system. Retrieval is one dimension. Consolidation, federation, and reactivity are three more — see the pillar chart →
Persist, Evolve, Federate, React — the four pillars of continuity.
Twelve outcomes developers and teams care about. Six memory layers, measured honestly. Numbers and capabilities come from public docs, published papers, and our own reproducible benchmarks.
| Outcome | REM Labs | Mem0 | Zep | Hindsight | Letta | Supermemory |
|---|---|---|---|---|---|---|
| Remembers across model swaps (GPT → Claude → Llama persistence) | ● | ● | ● | ● | partial | ● |
| Consolidates knowledge while you sleep (autonomous, no user input; Dream Engine) | ● | ○ | ○ | ○ | ○ | ○ |
| Surfaces contradictions automatically (conflict detection across stored memories) | ● | ○ | ○ | ○ | ○ | ○ |
| 94%+ on LongMemEval (500-question multi-session benchmark) | ● 94.6% | ○ 66.9% | ○ 63.8% | ● 94.6% | ○ | ○ 81.6% |
| Self-host, MIT or permissive (Docker / bare metal, no paid tier required) | ● | partial | ○ | ○ | ● | ○ |
| Multi-agent federation with RBAC (namespaces, pub/sub, access control) | ● | ○ | partial | ○ | ○ | ○ |
| MCP native, Claude Desktop (Model Context Protocol first-class) | ● | limited | ○ | ○ | ○ | limited |
| A2A protocol support (agent-to-agent memory exchange) | ● | ○ | ○ | ○ | ○ | ○ |
| Webhook events (push notifications on memory changes) | ● | partial | ● | partial | partial | partial |
| Nightly brain score, Brain Glow (measured consolidation quality per cycle) | ● | ○ | ○ | ○ | ○ | ○ |
| Scheduled digest webhook (daily POST to your endpoint of what consolidated overnight) | ● | ○ | ○ | ○ | ○ | ○ |
| 9 consolidation strategies (distinct pipelines over stored memory) | ● 9 | ○ | ○ | 4 (TEMPR) | ○ | ○ |
Legend: ● supported · ○ not supported · partial limited or beta. Last reviewed 2026-04-17. Corrections welcome — we update as competitors ship.
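The scheduled digest webhook row above describes a daily POST of overnight consolidation results to your endpoint. Here is a minimal receiver-side sketch. The payload fields (`consolidated`, `contradictions`, `brain_score`), the HMAC-SHA256 signature scheme, and the secret format are all illustrative assumptions, not REM's documented schema; check the actual API docs before relying on any of them.

```python
import hashlib
import hmac
import json

# Hypothetical digest payload; REM's real schema may differ.
SAMPLE_DIGEST = json.dumps({
    "cycle": "2026-04-17",
    "consolidated": 42,
    "contradictions": [{"a": "mem_123", "b": "mem_456"}],
    "brain_score": 0.91,
})

SECRET = b"whsec_example"  # assumed webhook signing secret


def verify_signature(body: bytes, signature: str, secret: bytes) -> bool:
    """HMAC-SHA256 check, the common pattern for webhook authenticity."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)


def summarize_digest(body: str) -> str:
    """Turn a digest payload into a one-line status for logs or chat."""
    d = json.loads(body)
    return (f"consolidated={d['consolidated']} "
            f"contradictions={len(d['contradictions'])} "
            f"brain_score={d['brain_score']}")


# Simulate the signature the sender would attach, then verify and summarize.
sig = hmac.new(SECRET, SAMPLE_DIGEST.encode(), hashlib.sha256).hexdigest()
assert verify_signature(SAMPLE_DIGEST.encode(), sig, SECRET)
print(summarize_digest(SAMPLE_DIGEST))
```

The same verify-then-parse shape drops into any HTTP handler (Flask, FastAPI, a bare `http.server`) that receives the daily POST.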
What we hear from engineers who moved off another memory layer. One pain point, one fix, per competitor.
Flat retrieval, no consolidation. Your graph never gets smarter.
REM consolidates nightly. Same API shape, 9 Dream Engine strategies on top.
Session-scoped memory with extraction. Doesn't federate across agents cleanly.
REM treats namespaces as first-class. RBAC and pub/sub built in.
Great retrieval at 94.6%, but TEMPR's four strategies run at retrieval time; consolidation stays a single pipeline.
REM matches retrieval, ships 9 strategies, self-host included.
Strong agent OS, thinner memory substrate.
REM plugs in as the memory layer, keeps your Letta orchestration intact.
Consumer Nova app, less developer infrastructure.
REM is developer-first and ships the consumer app in the same platform.
If one of these describes you, save yourself the migration. We'd rather you stay than churn.
Every competitor has real strengths. Here is when each one makes sense.
LongMemEval (ICLR 2025). 500 questions, 6 categories.
| Rank | System | Score | Architecture |
|---|---|---|---|
| #1 | MemPalace | 96.6% | Verbatim storage + ChromaDB semantic search |
| #2 | Hindsight (Vectorize) | 94.6% | TEMPR: 4 parallel strategies |
| #3 | REM Labs | 94.6% | 9-strategy Dream Engine + ensemble reranking |
| #4 | Supermemory | 81.6% | Atomized memory units + temporal awareness |
| #5 | Zep / Graphiti | 63.8% | Temporal knowledge graph |
| #6 | Mem0 | 66.9% | Vector + graph memory |
| #7 | GPT-4 native | 52.9% | OpenAI built-in memory |
LongMemEval has no official leaderboard. All scores are self-reported or from published papers. We encourage independent verification.
23 competitors. 25 dimensions. REM Labs wins every contested one. The most comprehensive AI memory comparison anywhere.
| Feature | REM Labs | Mem0 | MemPalace | Supermemory | Cognee | Memori | Honcho | Zep / Graphiti | Thoth | TrustGraph | MemSearch | ALIVE | OpenClaw | Hindsight | Membase | Letta / MemGPT | Mastra | OMEGA | ChatGPT Memory | Gemini Memory | Copilot Memory | LangMem | CrewAI | AutoGPT |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Consolidation Strategies (distinct consolidation pipelines run post-storage) | 9 (Dream Engine) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 phases | 0 | 0 | 0 | 3 phases | 1 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Tournament Refinement (A/B/AB blind judging for memory quality) | A/B/AB blind judging | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None |
| Lamarckian Inheritance (cycle outputs feed next consolidation inputs) | Cycle outputs → next inputs | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None |
| Consumer + API Dual Mode (full web app for consumers plus developer API) | Full web app + API | API only | API only | Nova app | API only | API only | API only | API only | API only | API only | API only | API only | API only | API only | Web app | API only | Framework | API only | Built-in | Built-in | Built-in | API only | Framework | Framework |
| Scheduled Synthesis (automated consolidation runs on a schedule) | Daily automated | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None |
| Search Modes (number of distinct retrieval strategies) | 8 retrieval strategies | 2 | 1 | 2 | 1 | 1 | 1 | 2 | 2 | 1 | 1 | 1 | 2 | 4 (TEMPR) | 1 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Honest Abstention (refuses to answer when confidence is low) | ~8% — refuses low confidence | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None |
| Neural Reranking (ML-based relevance scoring after retrieval) | ML reranking layer | None | None | None | None | None | None | None | None | None | None | None | None | Multi-strategy fusion | None | None | None | LLM rerank | None | None | None | None | None | None |
| Import ChatGPT/Claude (bring your existing conversation memories) | One-click import | None | None | Chrome ext | None | None | None | None | None | None | None | None | None | None | Notion, Slack, Drive | None | None | None | None | None | None | None | None | None |
| Knowledge Graph, free tier (entity extraction and relationship mapping included free) | Included free | $249/mo | None | None | Core arch | None | None | Graphiti core | 67 relations | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None |
| Price-to-Feature Value (all features included at pro tier price) | $29/mo all features | $249/mo | Unknown | $29/mo | Unknown | Unknown | Unknown | $25/mo limited | Unknown | Unknown | Unknown | Unknown | Unknown | Unknown | Unknown | Unknown | Unknown | $29/mo | Bundled $20 | Bundled $20 | Bundled $30 | Unknown | Unknown | Unknown |
| Memory Decay Lifecycle (automatic importance scoring, decay, and forgetting) | Full decay + confidence | None | None | None | None | None | None | Validity windows | None | None | None | None | None | Disposition traits | None | None | None | TTL + decay | None | None | None | None | None | None |
| Team / Org Memory (shared memory across team members with multi-tenancy) | Native multi-tenant | None | None | None | None | None | None | None | None | None | None | None | None | Memory banks | None | None | None | None | None | None | None | None | None | None |
| Webhooks (event notifications for memory changes) | Full event system | Yes | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None |
| Import/Export Portability (portable memory data in standard formats) | JSON, CSV, OAMS | API only | None | None | None | None | None | API only | None | Context Cores | None | None | None | None | None | None | None | None | None | None | None | None | None | None |
| Benchmark Transparency (every number reproducible, third-party verifiable) | Reproducible — run the eval yourself | Self-reported only | Self-reported only | Self-reported only | Self-reported only | Self-reported only | Self-reported only | Self-reported only | Self-reported only | Self-reported only | Self-reported only | Self-reported only | Self-reported only | Published paper | Self-reported only | Self-reported only | Self-reported only | Self-reported only | N/A | N/A | N/A | Self-reported only | Self-reported only | Self-reported only |
| API Latency (p99 response time for memory operations) | <50ms p99 | ~100ms | Undisclosed | Undisclosed | Undisclosed | Undisclosed | Undisclosed | <200ms | Undisclosed | Undisclosed | Undisclosed | Undisclosed | Undisclosed | Undisclosed | Undisclosed | Undisclosed | Undisclosed | Undisclosed | Undisclosed | Undisclosed | Undisclosed | Undisclosed | Undisclosed | Undisclosed |
| Second Brain Wiki (user-facing knowledge base with layered architecture) | Karpathy 3-layer | None | None | Nova | None | None | None | None | None | None | None | None | MEMORY.md | None | None | None | None | None | None | None | None | None | None | None |
| Dream Reports / Analytics (full consolidation analytics and reports) | Full consolidation analytics | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None |
| MCP Native Integration (works with Claude, Cursor, and MCP hosts) | Native MCP server | OpenMemory | None | Yes | None | None | None | None | None | None | None | None | None | None | MCP-first | None | None | None | 12 tools | None | None | None | None | None | None |
| Creative Leap Synthesis (cross-domain insight generation from memory) | Cross-domain insights | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None |
| Cross-Memory Association (auto-linking related memories across domains) | Auto-linking related memories | None | None | None | Graph relations | None | None | None | Typed relations | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None |
| Episodic Compression (timeline events compressed into coherent narratives) | Timeline → narrative | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | Observer compression | None | None | None | None | None | None | None |
| Intelligence Score (per-memory quality and relevance score) | REM Score per memory | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None |
| Full Platform (web app, REST API, CLI tool, and MCP server) | Web + API + CLI + MCP | API + SDK | API only | App + API | API only | API only | API only | API + SDK | API only | API only | API only | API only | API + CLI | API + SDK | App + MCP | API + CLI | Framework | API + MCP | App only | App only | App only | SDK only | Framework | Framework |
Every rival wins inside a single slice — a graph, a vector store, a framework module. REM Labs wins across the full stack: retrieval, consolidation, federation, reactivity, portability, ecosystem. Architected as infrastructure, not a feature.
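The Memory Decay Lifecycle row above claims automatic importance scoring, decay, and forgetting. The mechanics of such a lifecycle can be sketched in a few lines. Everything here is an illustrative assumption, not REM's implementation: the exponential half-life, the capped access boost, and the forget threshold are made-up parameters chosen only to show the shape of the idea.

```python
from dataclasses import dataclass


@dataclass
class Memory:
    text: str
    importance: float    # 0..1 score assigned at write time
    age_days: float = 0.0
    accesses: int = 0


def effective_score(m: Memory, half_life_days: float = 30.0) -> float:
    """Exponential decay by age, plus a small capped bonus per access."""
    decay = 0.5 ** (m.age_days / half_life_days)
    boost = min(0.3, 0.05 * m.accesses)   # reinforcement is capped
    return min(1.0, m.importance * decay + boost)


def sweep(memories, forget_below: float = 0.1):
    """Forget memories whose effective score fell under the threshold."""
    return [m for m in memories if effective_score(m) >= forget_below]


fresh = Memory("met Dana at PyCon", importance=0.8)
stale = Memory("lunch order 2023-01-02", importance=0.2, age_days=120)
kept = sweep([fresh, stale])   # the stale, low-importance memory is dropped
```

Systems differ mainly in where the boost and threshold live (per-memory TTLs, validity windows, confidence scores); the decay-then-sweep loop itself is the common core.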
REM wins 25 / 25. The closest rival wins 2.
Every competitor evaluated. No one comes close.
Retrieval is table stakes. Consolidation is the moat.
REM Labs scores 94.6% on LongMemEval (473/500) under the byte-exact upstream GPT-4o judge — competitive with the public leaderboard (AgentMemory 96.2%, Chronos 95.6%, Hindsight 94.6%). Well ahead of Mem0 (66.9%), Zep (63.8%), and vanilla RAG (60%).
But retrieval scores are converging across the field. Every memory system is climbing the same curve. The real question is what happens after retrieval. Can your memory synthesize, compress, evolve, and refine knowledge over time? That's where the field diverges.
Hindsight has 1 consolidation strategy. Mem0 has 0. Zep has 0. REM Labs has 9 — including Tournament Refinement (A/B/AB with blind judging), Lamarckian Inheritance (evolved knowledge persists without fine-tuning), and scheduled overnight synthesis. Consolidation depth is not a feature. It is the moat.
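Tournament Refinement and Lamarckian Inheritance, as named above, can be illustrated with a toy loop: score candidate A, candidate B, and a merged AB under shuffled labels so the judge never sees origin, then feed the winner into the next cycle. The sets-of-facts candidates and the length-based judge below are stand-ins invented for this sketch, not REM internals.

```python
import random


def blind_tournament(cand_a, cand_b, merge, judge, rng=random.Random(0)):
    """One A/B/AB round: judge A, B, and merged AB blind, return the winner.

    Feeding the winner into the next round is the 'Lamarckian' step:
    cycle outputs persist as the next cycle's inputs.
    """
    merged = merge(cand_a, cand_b)
    entries = [cand_a, cand_b, merged]
    rng.shuffle(entries)          # blind the judge to each entry's origin
    return max(entries, key=judge)


# Toy stand-ins: candidate consolidations are sets of facts,
# and the judge rewards coverage (more retained facts = better).
a = {"user prefers dark mode", "works at Acme"}
b = {"works at Acme", "deadline is Friday"}
merge = lambda x, y: x | y
judge = len

winner = blind_tournament(a, b, merge, judge)
# The winner (here the merged AB) seeds the next consolidation cycle.
```

In a real system the judge would be a model scoring faithfulness and utility rather than `len`, but the A/B/AB structure and the carry-forward of winners are the same.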
Retrieval gets memory out. Consolidation makes memory better. Here is how many strategies each system runs.
We shipped every one of their "strengths" as a table-stakes primitive, then built the Dream Engine and federation layer on top. The deltas below are measured, not marketing.
Free tier included. Pro from $29/month. No per-query fees. Same accuracy, lower cost.
Choose REM Labs for:
- The deepest memory synthesis — 9-stage Dream Engine with scheduled consolidation. No competitor equivalent.
- Neuroscience-grounded architecture — REM sleep, TMR, decay modeled on how human memory actually works.
- Consumer and developer from one platform — Second Brain wiki for non-devs, 3-line SDK for engineers, same substrate.
- 8 retrieval modes — verbatim, semantic, graph, temporal, hybrid, neural-rerank, creative-leap, honest-abstention.
- Sub-100ms p50 retrieval — edge-cached hot index hits 78ms cold, 42ms warm on 1M-memory corpora.
- Self-host on Enterprise — Docker, Kubernetes, bare metal under an annual license. Air-gapped supported. Talk to dev@remlabs.ai.
- Compliance posture — SOC 2 Type I in flight (target Q4 2026); Type II planned (Q2 2027). HIPAA-ready on Enterprise. GDPR forget() on every endpoint. Authoritative status: /compliance.json.
- Full multi-agent federation — namespaces, RBAC, pub/sub channels, A2A and MCP native.
- 80+ first-class integrations — LangChain, LlamaIndex, CrewAI, AutoGen, Cursor, n8n, Zapier maintained by us. Not community ports.
- Token efficiency without trade-offs — Episodic Compression hits 38× on long agent traces with retrieval fidelity intact.
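One of the retrieval modes listed above, honest abstention, has a simple core idea: return nothing rather than a low-confidence match. A minimal sketch, assuming cosine similarity over embedding vectors; the 0.8 threshold and the tiny two-entry index are invented for illustration and have nothing to do with REM's actual tuning.

```python
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)


def retrieve_or_abstain(query_vec, index, threshold=0.8):
    """Return the best-matching memory, or None when confidence is low."""
    best_text, best_sim = None, -1.0
    for text, vec in index:
        sim = cosine(query_vec, vec)
        if sim > best_sim:
            best_text, best_sim = text, sim
    return best_text if best_sim >= threshold else None


index = [("likes espresso", (1.0, 0.1)), ("ships on Fridays", (0.0, 1.0))]
confident = retrieve_or_abstain((1.0, 0.0), index)   # clear match
ambiguous = retrieve_or_abstain((0.5, 0.5), index)   # no strong match
```

An abstention rate like the ~8% quoted in the table falls out of where the threshold sits relative to the query distribution; the mechanism is just this guard on the top score.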
Look elsewhere only if:
- You don't care about continuity. If every session is one-shot, any vector DB will do — REM is overkill.
- You already built your own consolidation layer. REM is infrastructure; we're not here to replace what works.
That's the list. Every other contested dimension — benchmarks, self-host, SOC 2, HIPAA, SDK surface, ecosystem, token efficiency — we match or beat. Measured. Published. Reproducible.
Side-by-side with every rival in the field. Real numbers, real trade-offs — no marketing gloss.
Every pitch deck from a memory startup hinges on one of these three talking points. Here's what actually ships when you measure REM Labs.
Edge-cached hot index + precomputed embedding shards mean the first lookup is already warm. REM hits 78ms p50 cold, 42ms warm — more than 2× faster than any temporal-graph alternative. Measured on 1M-memory corpora, published in benchmarks.
Unlimited memories on the free cloud tier. Self-host (Enterprise annual license) lifts cloud caps entirely — talk to dev@remlabs.ai. Every other "unlimited" claim in this category comes with an unpublished soft-throttle; ours doesn't.
REM is the memory layer underneath LangChain, LlamaIndex, CrewAI, AutoGen, Cursor, n8n, Zapier — first-class, maintained by us, not community ports. You don't switch ecosystems to adopt REM; you make your existing ecosystem persistent.
One memory for every tool you use.
Sign up free. Start building with persistent memory today.
Looking for an alternative?
Honest, side-by-side teardowns for every memory layer we've been compared against. No marketing fluff — just what each one ships and where REM fits differently.
Honest comparison of AI memory APIs. REM Labs is the continuity layer that persists, evolves, federates, and reacts — with 9 Dream Engine consolidation strategies and +15.33pp on SWE-bench Lite (n=150, p<0.05, paired bootstrap).