The continuity layer for intelligence

Your memories don't just sit there.
They get smarter overnight.

The Dream Engine transforms raw memories into structured knowledge. 9 consolidation strategies. Tournament refinement that kills noise. Runs on cron, on events, or one API call. What every memory system is missing.

The 9 strategies. Named. Explained.

Every other memory system stops at retrieve. The Dream Engine runs nine autonomous consolidation strategies against your namespace — each with a defined purpose, output artifact, and run cadence. Names are stable API identifiers.

01 · Synthesize
Collapse many memories into a single distilled insight.
INPUT → 3 memories tagged "Dream Engine features"
OUTPUT → 1 entry: "REM Labs Dream Engine = 9 autonomous strategies running nightly, delivering +15.33pp on SWE-bench Lite (n=150, p<0.05)."
You store 30 Slack messages about Q2 OKRs across two weeks. Synthesize runs. Output: one paragraph summarizing the decisions, the owners, and the open threads — with confidence 0.91 and a lineage pointer back to all 30 source memories.
Artifact: Distilled summary memory (high confidence)
Cadence: Nightly
02 · Pattern
Detect recurring structures across your history.
INPUT → 7 notes on agent failures across 2 weeks
OUTPUT → "Pattern: 5/7 failures happen when context exceeds 150k tokens. Recommend chunking at 100k."
You log deploys, standups, and commits for six months. Pattern runs continuously. Output: "You always ship on Thursdays" — plus correlated findings like "Friday deploys precede 3x more weekend incidents." Surfaced the moment the signal crosses threshold.
Artifact: Pattern memory with frequency + support count
Cadence: Continuous
03 · Contradict
Surface conflicting memories and flag them for resolution.
INPUT → "Mem0 raised $15M" + "Mem0 raised $24M Series A"
OUTPUT → "Contradiction flagged. Resolve: which is current? Most recent timestamp wins unless overridden."
In March you stored "Redis handles our load fine." In April new p99 latency memories land. Contradict fires the instant the conflict is detectable. Output: a flagged contradiction pair with both sources, a confidence delta, and a resolution suggestion — never silently overwritten.
Artifact: Contradiction record with both source IDs
Cadence: Continuous
04 · Reflect
Second-order insights: what the agent believes vs. what is actually true.
INPUT → 20 decisions logged during Q1
OUTPUT → "Reflection: 60% of Q1 decisions optimized for speed, 30% correctness, 10% cost. Trend: shifting toward correctness in March."
You run a directed dream with strategy: "reflect". The engine compares stored beliefs against downstream outcomes. Output: "Agent believes customers want feature X; actual churn exits cite onboarding friction, not missing features." Gap surfaced explicitly.
Artifact: Reflection note — belief vs. evidence diff
Cadence: Directed
05 · Associate
Build cross-namespace connections the single-agent view misses.
INPUT → memories tagged "Claude Code", "Cursor", "Continue"
OUTPUT → graph edge: "AI coding assistants — common thread: file-system access + LSP integration."
Support logs customer X across 14 tickets. Product logs feature Y separately. Associate runs nightly across both namespaces. Output: a bidirectional link — "Customer X is the highest-signal user of feature Y, responsible for 40% of its usage" — discoverable in one recall call.
Artifact: Edge records in the knowledge graph
Cadence: Nightly
06 · Compress
Reduce the token footprint of verbose memories.
INPUT → 47 notes on one topic
OUTPUT → "1 canonical reference + 46 linked sub-entries. 92% size reduction, 0% information loss."
A 2,400-token meeting transcript arrives. Compress runs nightly with information-loss checks. Output: a 240-token dense memory at a typical 10:1 ratio, full source retained as lineage. Recall now costs an order of magnitude less to read, with zero claim loss verified by back-translation.
Artifact: Compressed memory, source preserved in lineage
Cadence: Nightly
07 · Validate
Score confidence and recency. Decay stale facts.
INPUT → stored fact "Python 3.13 released Oct 2024"
OUTPUT → "Validated against source URL. Confidence: 0.98."
"Our CFO is Alex" was stored 11 months ago. Validate runs nightly and cross-checks against recent memories, integrations, and signals. Output: confidence dropped from 0.94 to 0.42 — the memory is now ranked low in recall and flagged for refresh. Stale facts stop poisoning retrieval.
Artifact: Updated confidence + recency scores
Cadence: Nightly
08 · Evolve
Progressive refinement via tournament-style competition.
INPUT → old note "Mem0 is the leader"
OUTPUT → "Evolved (Apr 2026): Mem0 still top-of-mind but REM now leads LongMemEval. Old claim flagged as stale."
Old hypothesis: "GraphQL is better for our API." New evidence: three failed integrations this month. Evolve runs the old insight into an A/B/AB tournament with the new data. Output: a superseded version — "GraphQL for internal, REST for third-party partners" — with the loser archived, not discarded.
Artifact: New version of the belief, prior archived
Cadence: Nightly
09 · Forecast
Predictive insights built on your own historical patterns.
INPUT → weekly project decisions across the last sprint
OUTPUT → "Prediction: 3 contradictions likely in the next 7 days based on pace of change. Suggest scheduling a review session."
Forecast runs weekly against the full namespace. It reads Pattern outputs, Validate's recency scores, and your own shipping cadence. Output: "Based on the last 14 strat cycles, the team will ship Strat22 Friday — confidence 0.86, next likely slip is Sunday." Delivered to the morning brief.
Artifact: Forecast record with probability + horizon
Cadence: Weekly
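
Because the names above are stable API identifiers, any subset can be chained in one call. A minimal sketch, assuming the /v1/dream/run endpoint shown later on this page; the namespace value is illustrative:

# Sketch: chain two strategies by their stable identifiers
curl -X POST https://remlabs.ai/v1/dream/run \
  -H "Authorization: Bearer $REM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "strategies": ["contradict", "validate"], "namespace": "prod-agent" }'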

What the Dream Engine does while you sleep.

At 23:00 local time, the scheduler wakes. The first move is a namespace snapshot — an immutable point-in-time copy of every memory, edge, and confidence score. The snapshot is what the cycle reads from. Your live namespace continues to accept writes without contention. If the cycle fails, the snapshot is discarded and nothing is touched. If it succeeds, the diff lands atomically.

Next, all nine strategies fan out in parallel against the snapshot. Synthesize clusters semantic neighbors. Pattern scans for recurring structures. Contradict cross-references conflicting claims. Reflect compares beliefs against outcomes. Associate links across namespaces. Compress reduces verbose entries. Validate rescores confidence and recency. Evolve feeds superseded hypotheses into tournament rounds. Forecast projects forward from what already exists. Each strategy is a pure function against the snapshot — no cross-talk, no ordering bugs.

The raw outputs come back noisy. That is expected. A dedupe and conflict-resolve pass collapses near-identical insights, runs pairwise contradictions through the A/B/AB tournament with a blind Borda judge, and discards anything that fails the 0.6 novelty threshold. "No change" is a legal outcome. Slop is caught before it ever touches your namespace.

Surviving insights are written back with confidence scores, versioned, and linked to the source memories they were derived from. That lineage graph means you can always trace a claim to the exact entries that produced it, diff any two cycles, and roll back a bad dream without losing good ones. Knowledge is inspectable, editable, reversible.

The cycle closes by emitting a dream.completed webhook with a structured diff: what was added, merged, refined, flagged, and archived. Downstream automations pick up the signal — Slack posts the morning brief, Notion updates the decision log, the next agent in the pipeline wakes up with a smarter context. You sleep. The namespace gets sharper.

01 Snapshot namespace (immutable copy)
02 Run all 9 strategies in parallel
03 Dedupe + conflict-resolve (tournament)
04 Write back with confidence + lineage
05 Emit dream.completed webhook
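
The webhook body is the structured diff itself. A minimal sketch of what a consumer might receive; every field name here is illustrative, not the documented schema:

# Illustrative dream.completed payload (field names are assumptions)
{
  "event": "dream.completed",
  "namespace": "prod-agent",
  "cycle_id": "dc_0412",
  "diff": { "added": 5, "merged": 12, "refined": 3, "flagged": 2, "archived": 1 },
  "completed_at": "2026-04-04T06:00:00Z"
}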

Why this is the moat.

Three structural reasons this is hard to replicate — and why it compounds instead of plateauing.

Strategy count
Nobody else has 9 strategies.
Most have one — retrieve. Mem0 has 1 (store and fetch). Zep has 2 (summarize + fact-store). Letta has 0 autonomous strategies — the model has to be told what to do. The Dream Engine runs nine in parallel on a schedule, and they're composable: you can chain any subset with strategies: [...].
Temporal advantage
It runs while you sleep.
Agents wake up smarter without any prompt engineering. No orchestration code, no retry loops, no context-window juggling. Consolidation is a background process the runtime owns. By the time a human opens the console in the morning, the namespace has already contradicted itself, refined the survivors, and forecast the next move.
Tournament refinement
The best insight wins. Not the latest.
Every candidate enters an A/B/AB tournament with a blind Borda judge. Not the latest. Not the loudest. The most validated. "No change" is a legal outcome — the original wins when it is already the best answer. This is why the knowledge base gets denser over time instead of drifting toward recency bias.

Dream Engine nightly → measurable accuracy gain.

Retrieval accuracy on LongMemEval-style queries, measured against the same namespace at set intervals. Each cycle compounds on the last. This is why nightly dreaming matters.

Day    | Retrieval accuracy | Notes
Day 1  | 68%   | Baseline — raw memories, zero dream cycles
Day 7  | 78%   | After 7 Synthesize runs — redundancy collapsed, confidence calibrated
Day 30 | 94.6% | LongMemEval-comparable — full 9-strategy nightly pipeline active
Day 90 | 96.1% | Extrapolated — Evolve + Forecast compound across 90 cycles
How to read this. Day 30 is measured — 473/500 on LongMemEval under a byte-exact upstream GPT-4o judge. Day 90 is extrapolated from the nightly delta and has not yet been independently scored. Accuracy gains come from the namespace getting denser, not from a new model — the same retrieval call returns better memories.

The Dream Engine consolidation methodology.

Day 1 = stock vector search on raw memories. No Dream Engine runs yet. The retrieval stack is cosine similarity over embedded text — the same primitive every memory library ships. Measured accuracy: 68% recall on the 500 questions in the LongMemEval public set.

Every night, the Dream Engine runs 9 strategies on the full memory corpus. Synthesize consolidates semantic neighbors. Pattern extracts recurring signals. Contradict flags conflicts. Compress reduces verbose entries with zero-loss back-translation. Validate rescores confidence and recency. Evolve runs superseded hypotheses into tournament rounds. Associate links across namespaces. Reflect diffs belief against outcome. Forecast projects forward. The namespace gets denser, not larger.

After 30 nights, the same corpus scores 94.6% (473/500) under a byte-exact upstream GPT-4o judge — the same judge configuration the LongMemEval public leaderboard uses. This is the number we publish. No retrieval model was swapped, no questions were re-selected, no answers were post-edited. The only change is what lives in the memory store.

We publish the full test harness, the 500 questions, and the judge configuration at /benchmarks. Reproducible end-to-end — clone the repo, point it at your namespace, run the scorer, get back a number that matches ours to the question.
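
In outline, reproduction looks like the sketch below; the repo URL and scorer flags are placeholders, the real harness lives at /benchmarks:

# Placeholder repo URL and flags; see /benchmarks for the actual harness
git clone https://github.com/remlabs/benchmarks && cd benchmarks
export REM_API_KEY="..."
./score --namespace your-namespace --suite longmemeval-500 --judge gpt-4o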

847 memories in. 23 insights out.

Raw data goes in. Structured knowledge comes out. Here's what a single cycle produces from a real knowledge base.

Day 1 — Raw Data
847 memories stored
"Met with Acme Corp, they need enterprise SSO"
"Competitor X raised $24M Series A"
"User complained about onboarding friction"
"Q2 revenue target is $180K ARR"
+ 843 more disconnected entries...
Day 30 — After Dream Engine
23 insights discovered
Pattern: 4 of your last 6 churned customers mentioned "no SSO" in exit surveys. Acme Corp deal depends on it.
Framework: Enterprise readiness checklist derived from 12 lost deals: SSO, SOC 2, data residency, SLA.
Principle: Your highest-value customers evaluate security before features. Lead with compliance, not demo.
Forecast: At current close rate, $180K ARR requires 8 more enterprise deals. Pipeline has 5. Gap = 3.

Five depth levels. Each run advances.

Consecutive runs go deeper, not wider. The engine tracks what it already processed and advances to the next level automatically.

L0 · Extraction
Raw data becomes structured facts. Text is parsed into atomic units with metadata, entities, and timestamps.
L1 · Pattern Recognition
Facts become themes. The engine identifies what keeps coming up, what contradicts, and what's trending.
L2 · Insight Generation
Patterns become connections. Cross-domain links surface relationships spanning topics you'd never compare manually.
L3 · Framework Building
Insights become mental models. Clusters of connected insights organize into reusable decision frameworks.
L4 · Principle Crystallization
Frameworks become strategic truths. The highest-confidence conclusions distill into principles that guide all future decisions.

Set a persona or let it auto-detect.

The persona parameter tunes which patterns the engine prioritizes. Auto-detected from content if omitted.

Developers
Architecture decisions, debugging patterns, tech debt clusters, recurring bugs. The engine tracks how your codebase thinking evolves.
persona: "developer"
Founders
Competitive intel, deal patterns, strategy evolution, market signals. Connects what your customers say with what your metrics show.
persona: "founder"
Researchers
Paper synthesis, methodology contradictions, cross-study connections. Finds what the literature says vs. what your data shows.
persona: "researcher"
Students
Course connections, knowledge gaps, exam prep. Links concepts across subjects to build deeper understanding.
persona: "student"
Teams
Meeting decisions, project context, institutional knowledge. The collective memory of your entire organization.
persona: "team"
Personal
Life experiences, goals, relationships, habits. Tracks your growth over time and surfaces patterns in your life.
persona: "personal"

Five trigger modes.

Cron is the default. But every mode composes with webhooks and automations.

Real-time
On every memory write. Dedup, categorize, and connect automatically.
Threshold
After 10+ new memories accumulate. Auto-triggered.
Scheduled
Hourly, daily, or weekly. Set it and forget it.
Event-driven
On import, webhook, or integration event.
On-demand
One click in the console. Results in 30-60 seconds.
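
Of these, only the scheduled mode has a confirmed payload on this page (the cron-style subscribe call further down). A hedged sketch of what a threshold trigger might look like; the trigger and min_new_memories fields are assumptions:

# Hypothetical: "trigger" and "min_new_memories" are assumed field names
curl -X POST https://remlabs.ai/v1/dream/subscribe \
  -H "Authorization: Bearer $REM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "trigger": "threshold", "min_new_memories": 10, "webhook": "https://your-app.com/dream-results" }'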

9 strategies. Chain them or run individually.

Each strategy is a discrete operation. Pass strategies: ["synthesize", "validate", "compress"] to chain them in sequence. Each output feeds the next. Or use full_cycle to run all 9.

1 · Synthesize
Merge multiple memories into unified, higher-order knowledge. Eliminates duplication while preserving nuance.
In: 6 notes about auth bugs
Out: "Auth failures cluster around token refresh in multi-tab sessions"
2 · Pattern Extract
Surface recurring themes, statistical regularities, and frequency signals across your entire knowledge base.
In: 200 meeting notes
Out: "3 of 4 churned accounts mentioned pricing in week 2"
3 · Insight Generate
Discover non-obvious cross-domain connections. Links concepts you stored months apart in different contexts.
In: Hiring notes + bug reports
Out: "Senior hires reduced P1 bugs by 40% within 90 days"
4 · Compress
Merge overlapping memories and collapse redundancy. Your knowledge base gets smaller and denser, never bloated.
In: 847 raw memories
Out: 612 memories (28% reduction, zero information loss)
5 · Associate
Build bidirectional knowledge graph links. Every memory gets connected to its semantic neighbors.
In: Isolated API design notes
Out: 18 new cross-links to performance, auth, and DX memories
6 · Validate
Flag contradictions and verify internal consistency. Catches when your stored beliefs conflict with newer evidence.
In: "Redis handles our load fine" + latest perf data
Out: Contradiction flagged: p99 latency up 3x since January
7 · Evolve
Update beliefs with new evidence using Bayesian reasoning. Confidence scores shift as data accumulates.
In: "GraphQL is better for our API" (0.7 confidence)
Out: Confidence raised to 0.91 after 4 corroborating data points
8 · Forecast
Project trends and predict likely outcomes based on historical patterns in your knowledge.
In: 6 months of sprint velocity data
Out: "At current rate, v2.0 ships March 18 ± 12 days"
9 · Reflect
Meta-analysis: identify knowledge gaps, blind spots, and areas where your understanding is thin or stale.
In: Full knowledge base scan
Out: "No data on competitor pricing since Q3. 4 assumptions unvalidated."

Every output survives an adversarial tournament.

Three candidates compete. A blind judge picks the winner. Knowledge only changes when the replacement is measurably better. "No change" is a valid outcome.

Step 1 — Candidate A
Original knowledge
The current insight as stored. This is the baseline to beat. No change unless something measurably better exists.
Step 2 — Candidate B
Adversarial challenge
A fresh model pass that attacks the original. Reframes assumptions, introduces counter-evidence, proposes alternatives.
Step 3 — Candidate AB
Synthesis
A third pass merges the strongest elements of A and B into a unified, higher-fidelity result. Often the winner.
Live example — watch a tournament round
A · Original
"Python is slower than JavaScript."
B · Challenge
"Python has faster numeric computing via NumPy; JS V8 is faster for I/O."
AB · Synthesis
"Python excels at numeric computing, JavaScript at I/O-bound tasks."
Judge
Blind judge selects AB (confidence 0.94)
Reason: AB captures domain-specific nuance that neither A nor B provides alone.
Blind Borda judging
No model grades its own work. Judges see A, B, AB in randomized order with no labels.
Borda count scoring across multiple judges eliminates self-critique bias. "No change needed" is a valid outcome — the original wins when it is already the best answer.
Why this matters
Most memory systems store what you tell them, unchanged. The Dream Engine actively improves it.
Over 30 dream cycles, a single memory can be refined dozens of times. Naive claims get replaced with nuanced, battle-tested knowledge. Automatically.

How consolidation approaches differ.

A snapshot of how AI memory tools handle knowledge consolidation today.

Provider | Consolidation strategies | Tournament refinement
Mem0 | None — memories are static after storage | None
Zep | Temporal knowledge graphs — structure, no synthesis | None
Membase | Knowledge graph — no consolidation | None
Thoth | 4-phase dream cycle: dedup → enrichment → inference → decay (4 phases) | None
OpenClaw | 3-phase dream: light sleep → REM → deep sleep (3 phases) | None
Hindsight | Observation consolidation — basic automatic synthesis (1: consolidate) | None
REM Labs Dream Engine | 9 strategies, 5 depth levels, autonomous scheduling | A/B/AB tournament with blind Borda judging

Writes back to the memory store. No fine-tuning.

Refined knowledge is written back to the memory store directly. Next cycle inherits the improvement. No GPUs, no retraining, no deployment pipeline.

Fine-tuning approach
Train a new model version
Requires GPU clusters, training data curation, evaluation suites, and deployment pipelines. Takes days to weeks. Costs thousands. Knowledge is frozen into weights and cannot be inspected or corrected.
vs
Memory-layer approach (Dream Engine)
Evolve the memory layer directly
Refined knowledge is written back to the memory store immediately. The next cycle inherits the improvement. No retraining, no GPUs, no deployment. Knowledge is always inspectable, editable, and reversible.
Cycle N
Tournament refines "Python is slower" into "Python excels at numeric computing, JS at I/O"
Cycle N+1
Next cycle starts with the refined version. Connects it to GPU benchmarking data stored last week.
Cycle N+2
Knowledge compounds: "For ML pipelines, Python + CUDA > JS. For edge inference, JS + WASM wins."

A scheduled cycle, start to finish.

847 stored memories. Total wall time: 15 minutes. Zero human intervention. Results delivered via webhook.

11:00 PM · Cycle begins (Scan)
847 memories loaded. Embedding similarity matrix computed. Stale and duplicate candidates identified.
11:02 PM · Clustering complete (Associate)
23 semantic clusters identified. Largest cluster: "API design decisions" (47 memories). Smallest: "Office logistics" (3 memories).
11:05 PM · Redundancy eliminated (Synthesize + Compress)
12 redundant entries merged. "Added rate limiting to /users endpoint" appeared 4 times across different dates — collapsed into one with full timeline.
11:08 PM · Recurring themes surfaced (Pattern Extract)
3 themes found: (1) Auth token issues recur monthly, (2) deploys on Fridays correlate with weekend incidents, (3) customer complaints cluster around onboarding flow.
11:12 PM · Contradictions flagged (Validate)
2 contradictions detected: (1) "Redis handles our load" vs. p99 latency data showing 3x increase, (2) "We don't need SSO" vs. 4 lost enterprise deals citing SSO.
11:15 PM · Tournament refinement
5 insights enter the A/B/AB tournament. 3 syntheses win, 1 original retained (already optimal), 1 adversarial reframing wins. Blind judge confidence range: 0.82–0.96.
6:00 AM · Morning Brief ready (Complete)
18 new cross-links established. 5 refined insights. 2 contradictions flagged for review. 12 redundancies eliminated. Knowledge base is now 835 memories — smaller, denser, smarter.

LongMemEval retrieval accuracy.

We measure against the hardest public benchmark for long-term memory systems.

REM Labs
94.6%
LongMemEval accuracy (473/500)
Consolidation strategies: 9 (full pipeline)
Tournament refinement: A/B/AB + blind Borda judge
Methodology: byte-exact upstream GPT-4o judge — competitive with the public leaderboard, reproducible
Hindsight
94.6%
LongMemEval accuracy
Consolidation strategies: 1 (TEMPR consolidate)
Tournament refinement: None
Backed by: Nous Research partnership

Built-in slop detection.

Diminishing returns detection, similarity thresholds, and rate limits prevent over-processing. The engine stops when there's nothing left to improve.

>0.6 · Slop detection threshold
Every output is compared against existing insights. If similarity exceeds 60%, the entry is filtered out before storage.
20/day · Rate limit per namespace
Quality degrades with over-processing. The engine enforces a ceiling and recommends waiting when value is low.
Auto · Diminishing returns detection
If 3 consecutive runs produce minimal new insights, the engine auto-advances to the next depth level or switches strategy.

Two ways to run the Dream Engine.

Let it run automatically every night, or aim it at a specific question and have all 9 strategies work with that goal in mind.

Mode 1 · Nightly automatic
Scheduler wakes at 23:00 local. Snapshot → 9 strategies → tournament → write-back → webhook. You sleep. The namespace gets sharper. Zero prompting.
Mode 2 · Directed
Send a task (e.g. "Why is churn up?") and the engine runs all 9 strategies with that goal. Results shape around the question you actually asked.
# Directed dream — run the full 9-strategy cycle with a specific task
curl -X POST https://remlabs.ai/v1/memory/dream/start \
  -H "Authorization: Bearer $REM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"task":"Why is churn up?"}'

Three patterns: single, pipeline, cron.

Run one strategy, chain multiple, or schedule recurring cycles. All return the same result shape.

# Single strategy
curl -X POST https://remlabs.ai/v1/dream/run \
  -H "Authorization: Bearer $REM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "strategy": "synthesize" }'

# Strategy pipeline — chain in sequence, each output feeds the next
curl -X POST https://remlabs.ai/v1/dream/run \
  -H "Authorization: Bearer $REM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "strategies": ["synthesize", "validate", "compress"],
    "namespace": "support-team"
  }'

# Scheduled — runs nightly, results in your webhook
curl -X POST https://remlabs.ai/v1/dream/subscribe \
  -H "Authorization: Bearer $REM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "schedule": "0 2 * * *",
    "strategies": ["full_cycle"],
    "webhook": "https://your-app.com/dream-results"
  }'

Dream is one primitive in a pipeline.

Every REM primitive feeds into every other. Dream doesn't exist in isolation — it's the processing stage between ingestion and action.

# The full pipeline: dump → dream → recall → act

# 1. Data comes in
npx @remlabs/cli dump chatgpt ~/export.zip
npx @remlabs/cli dump slack --channel general

# 2. Dream Engine processes it
npx @remlabs/cli dream --strategies synthesize,validate,compress

# 3. Refined knowledge is now searchable
npx @remlabs/cli recall "what patterns exist in customer churn?"

# 4. Webhook fires on dream.completed → triggers automation
# → Slack post with new insights
# → Notion page updated
# → Next agent picks up refined context
dump (data in) → dream (process) → recall (query) → watch (react)
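
As a sketch of that last automation step: suppose your webhook endpoint wrote the dream.completed payload to dream_completed.json. The .diff.added field and SLACK_WEBHOOK_URL are assumptions:

# Hypothetical glue: post the nightly diff to Slack
ADDED=$(jq -r '.diff.added' dream_completed.json)   # field name is an assumption
curl -X POST "$SLACK_WEBHOOK_URL" \
  -H "Content-Type: application/json" \
  -d "{\"text\": \"Dream cycle complete: $ADDED new insights added\"}"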

Why this can't be bolted on later.

Every competitor stores and retrieves. Adding consolidation after the fact means retrofitting the entire data model. Here's why Dream Engine is a structural advantage, not a feature.

Versioned knowledge
Every memory has a version history. Dream cycles create new versions, not overwrites. You can diff any two cycles and see exactly what changed, when, and why.
Confidence scores
Every piece of knowledge carries a confidence score. Evolve strategy updates these scores as evidence accumulates. Recall can filter by confidence threshold.
Cross-namespace dreaming
Multi-agent teams share namespaces. Dream Engine can consolidate across namespaces to find patterns that no single agent sees. Collective intelligence, not siloed memory.
Observable diffs
Every dream cycle produces a structured diff: what was added, removed, merged, refined, and flagged. Machine-readable for downstream systems. Human-readable in the morning brief.
Event-driven chaining
dream.completed webhook fires with results. Use it to trigger a Slack post, update Notion, start another dream cycle on a different namespace, or feed results to another agent.
Model-agnostic
Knowledge lives in the memory layer, not model weights. Switch from GPT to Claude to Llama — your Dream Engine results carry over. No reprocessing, no migration.

Your AI catches itself disagreeing.

The moment two stored memories conflict, the Contradict strategy fires. Nothing is silently overwritten. You get a flagged pair and a resolution suggestion — every time.

Memory A
March 4, 2026
"Mem0 raised $15M from USV."
Memory B
April 3, 2026
"Mem0 raised $24M Series A led by Peak XV."
REM Alert
April 4, 2026
Contradiction detected. Both facts cite Mem0 funding. Most recent wins unless you override. Confirm current number.
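
Under the hood, the artifact is the contradiction record from strategy 03: both source IDs, a confidence delta, and a resolution suggestion. An illustrative shape only; field names and values are assumptions:

# Illustrative contradiction record (not the documented schema)
{
  "type": "contradiction",
  "source_ids": ["mem_a", "mem_b"],
  "claims": ["Mem0 raised $15M from USV.", "Mem0 raised $24M Series A led by Peak XV."],
  "confidence_delta": 0.31,
  "suggestion": "Most recent timestamp wins unless overridden."
}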
This happens nightly. You wake up to a cleaner knowledge base.

Don't want to send memories to our servers?

Self-host the full stack. Same 9 strategies, same tournament refinement, same benchmark numbers — on your own infrastructure.

01 Docker Compose. One command, local dev, full feature parity. Swap the model endpoint, keep everything else. Minimal sketch below.
02 Kubernetes Helm chart. Production-grade. Horizontal scale on the strategy workers, stateful on the namespace store.
03 Air-gapped build (Enterprise). No outbound network calls. Ship the whole runtime into a SCIF or VPC. Signed images, reproducible builds.
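
A hedged quickstart for the Docker Compose path; the repo URL is a placeholder, the self-host docs have the real compose file:

# Placeholder repo URL; see the self-host docs for the real compose file
git clone https://github.com/remlabs/selfhost && cd selfhost
docker compose up -d        # full stack, local dev; swap the model endpoint in the compose config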
Talk to us → Self-host docs →

One API call to start a cycle.

Free tier. No credit card. Import your data, run a dream, see what comes out.

Open Dream Studio · Import Data · API Docs