Already running on Mem0, Zep, or MemPalace? You don't start over. @remlabs/cli migrate imports the existing store, preserves keys and timestamps, and runs your first dream cycle so consolidation kicks in from day one.
Each --from adapter preserves keys, scores, and timestamps. Pick a tab for the cURL, Python, or Node version.
```shell
# 1. Export Mem0
#    https://docs.mem0.ai/api-reference/memory/export-memories

# 2. Run the migration (private beta)
$ npx @remlabs/cli migrate \
    --from mem0 \
    ./mem0-export.json

# 3. First Dream cycle runs automatically
# ✓ 3,148 memories imported
# ✓ Tags + scores preserved
# ✓ Dream consolidate complete
```
```python
from remlabs import Rem
import json

rem = Rem(api_key="bl_live_...")

with open("mem0-export.json") as f:
    rows = json.load(f)

for r in rows:
    rem.store(
        key=r["id"],
        content=r["memory"],
        tags=r.get("categories", []),
    )

rem.dream(strategy="consolidate")
```
```js
import { rem } from '@remlabs/sdk';
import rows from './mem0-export.json' assert { type: 'json' };

for (const r of rows) {
  await rem.store({
    key: r.id,
    content: r.memory,
    tags: r.categories ?? [],
  });
}

await rem.dream({ strategy: 'consolidate' });
```
```shell
# 1. Export from Zep
$ zep memory get-sessions > zep-export.json

# 2. Migrate (private beta)
$ npx @remlabs/cli migrate \
    --from zep \
    ./zep-export.json

# 3. Sessions become REM namespaces
# ✓ 142 sessions, 8,210 messages imported
# ✓ Roles + timestamps preserved
```
```python
from remlabs import Rem
import json

rem = Rem(api_key="bl_live_...")

# Load the Zep export produced by the CLI step above
with open("zep-export.json") as f:
    sessions = json.load(f)

for session in sessions:
    rem.store_batch(
        namespace=session["id"],
        items=[{
            "key": m["uuid"],
            "content": m["content"],
            "role": m["role"],
            "ts": m["created_at"],
        } for m in session["messages"]],
    )
```
```js
import { rem } from '@remlabs/sdk';
import { readFile } from 'node:fs/promises';

// Load the Zep export produced by the CLI step above
const sessions = JSON.parse(await readFile('./zep-export.json', 'utf8'));

for (const session of sessions) {
  await rem.store_batch({
    namespace: session.id,
    items: session.messages.map(m => ({
      key: m.uuid,
      content: m.content,
      role: m.role,
      ts: m.created_at,
    })),
  });
}
```
```shell
# 1. Export your palace
$ mempalace export > mempalace.jsonl

# 2. Migrate (private beta)
$ npx @remlabs/cli migrate \
    --from mempalace \
    ./mempalace.jsonl

# 3. Verify
$ npx @remlabs/cli recall "first key"
# ✓ Palace links preserved as tags
```
```python
from remlabs import Rem
import json

rem = Rem(api_key="bl_live_...")

with open("mempalace.jsonl") as f:
    for line in f:
        row = json.loads(line)
        rem.store(
            key=row["key"],
            content=row["value"],
            tags=row.get("links", []),
        )
```
```js
import { rem } from '@remlabs/sdk';
import { readFile } from 'node:fs/promises';

const raw = await readFile('mempalace.jsonl', 'utf8');
for (const line of raw.split('\n').filter(Boolean)) {
  const row = JSON.parse(line);
  await rem.store({
    key: row.key,
    content: row.value,
    tags: row.links ?? [],
  });
}
```
If you're hand-writing the migration instead of using the CLI, here's the call-by-call mapping. The shape is consistent: store writes, recall reads, dream consolidates.
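The per-source field mapping in the recipes above can be sketched as plain transform functions. The helper names here are ours, not part of any SDK; the field names come straight from the export shapes each recipe reads:

```python
def mem0_to_rem(row):
    # Mem0 rows carry id / memory / categories; categories become tags.
    return {"key": row["id"], "content": row["memory"], "tags": row.get("categories", [])}

def zep_message_to_rem(msg):
    # Zep messages carry uuid / content / role / created_at.
    return {
        "key": msg["uuid"],
        "content": msg["content"],
        "role": msg["role"],
        "ts": msg["created_at"],
    }

def mempalace_to_rem(row):
    # MemPalace rows carry key / value / links; palace links land as tags.
    return {"key": row["key"], "content": row["value"], "tags": row.get("links", [])}
```

Each mapper returns the dict you'd pass to `store` (or into a `store_batch` items list), so the write side stays identical no matter which source you're leaving.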
We're in private beta. The @remlabs/cli ships with the SDK in Q3 2026. Until then, the API mapping above is what we exercise by hand, and the migration recipes are the contract we're building against, not fully published packages.
What's stable today: the REST endpoints (/v1/memory/store, /v1/memory/recall, /v1/dream) and the namespace model. If you want to do your own migration before Q3, those are reproducible — see the API reference.
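If you roll your own migration against that REST surface, the requests look roughly like this. Only the three paths are confirmed by the text; the base URL is a placeholder and the payload field names are assumptions inferred from the SDK snippets above, so check them against the API reference:

```python
import json

BASE = "https://api.example.com"  # placeholder; substitute your real API base URL

def store_request(key, content, tags=(), namespace=None):
    """Build a POST to /v1/memory/store (payload fields assumed from the SDK)."""
    body = {"key": key, "content": content, "tags": list(tags)}
    if namespace is not None:
        body["namespace"] = namespace
    return ("POST", f"{BASE}/v1/memory/store", json.dumps(body))

def recall_request(query):
    """Build a POST to /v1/memory/recall (query field is an assumption)."""
    return ("POST", f"{BASE}/v1/memory/recall", json.dumps({"query": query}))

def dream_request(strategy="consolidate"):
    """Build a POST to /v1/dream; 'consolidate' matches the recipes above."""
    return ("POST", f"{BASE}/v1/dream", json.dumps({"strategy": strategy}))
```

Separating request construction from sending keeps the migration replayable: you can dump the built requests to disk, inspect them, and only then fire them with whatever HTTP client you prefer.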
What's not stable yet: bulk-import perf for >100k memories, semantic dedup across sources, automatic schema detection. That's the work for the GA cut.
Email dev@remlabs.ai for early access →