Full data sovereignty. Same API. Mac Mini, Docker, Kubernetes, or bare metal.
Mac Mini
Turn your Mac Mini into a permanent memory server
One command. Always-on. Your memories never leave your hardware.
Shell
curl -fsSL https://get.remlabs.ai | sh
Always-on 24/7 -- auto-starts on boot via launchd. No babysitting required.
Apple Silicon optimized -- native arm64 binary. Low power consumption, silent operation.
Access from anywhere on your network -- available at rem.local via mDNS/Bonjour.
Same API, same SDKs -- point your existing code at http://rem.local:7700 and everything works.
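Once the install completes, you can verify the server is reachable from another machine on your network. The check below uses the export endpoint (shown in the migration section later on this page); any authenticated endpoint works:
Shell
# Confirm mDNS resolution
ping -c 1 rem.local

# Confirm the API answers
curl -H "Authorization: Bearer $REM_API_KEY" \
  http://rem.local:7700/v1/memories/export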
System Requirements
Hardware specifications
REM Labs is lightweight by design. The SQLite-backed architecture means no external database is needed.
| Resource | Minimum | Recommended |
| --- | --- | --- |
| CPU | 1 core | 2+ cores |
| RAM | 2 GB | 4 GB |
| Disk | 10 GB | 50 GB SSD |

Supported OS: Linux (amd64/arm64), macOS (Apple Silicon or Intel).
Ports: 8080 (API) and 8081 (metrics / health checks). The Mac Mini installer uses port 7700 by default for local-network convenience; Docker and Kubernetes deployments default to 8080.
Docker
Container deployment
The fastest path to self-hosted REM Labs. Pull the image, set your keys, and you have a running memory API in under a minute.
Step 1 -- Pull the image
Shell
docker pull ghcr.io/remlabs/remlabs-server:latest
Step 2 -- Configure environment
Create a .env file with your settings:
.env
# Required
REM_API_KEY=your-api-key-here
REM_ENCRYPTION_KEY=your-aes-256-key
# Optional -- server config
REM_PORT=8080
REM_METRICS_PORT=8081
REM_DATA_DIR=/data
REM_LOG_LEVEL=info
# Optional -- Dream Engine (memory consolidation)
# Provide at least one LLM API key:
# REM_OPENAI_API_KEY=sk-...
# REM_ANTHROPIC_API_KEY=sk-ant-...
# REM_OLLAMA_URL=http://host.docker.internal:11434
REM_OLLAMA_URL is optional: it points the Dream Engine at a local Ollama endpoint (e.g. http://localhost:11434, or http://host.docker.internal:11434 from inside Docker).
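If you need to generate keys, openssl is sufficient. The hex encoding shown for REM_ENCRYPTION_KEY is an assumption; confirm the expected format against your server version:
Shell
# 32 random bytes, hex-encoded -- enough entropy for an AES-256 key
openssl rand -hex 32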
Persistent data: Always mount a volume to /data. This directory contains the SQLite database, full-text search indexes, and configuration. Without a mounted volume, data is lost when the container restarts.
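Step 3 -- Run the container
A minimal run command consistent with the settings above. The container name (remlabs) and volume name (remlabs-data) match the backup examples later on this page:
Shell
docker run -d --name remlabs \
  --env-file .env \
  -p 8080:8080 -p 8081:8081 \
  -v remlabs-data:/data \
  ghcr.io/remlabs/remlabs-server:latest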
Kubernetes
Kubernetes deployment
Deploy REM Labs on any Kubernetes cluster. A Helm chart is available for production use, or you can apply the manifests below directly.
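If you are not using the Helm chart, a minimal remlabs-k8s.yaml looks roughly like the sketch below. Resource names and the PVC size are illustrative; the image, ports, and /data mount match the Docker section:
remlabs-k8s.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: remlabs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: remlabs-data
  namespace: remlabs
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: remlabs-server
  namespace: remlabs
spec:
  replicas: 1          # single writer -- see the scaling note below
  selector:
    matchLabels:
      app: remlabs-server
  template:
    metadata:
      labels:
        app: remlabs-server
    spec:
      containers:
        - name: remlabs-server
          image: ghcr.io/remlabs/remlabs-server:latest
          ports:
            - containerPort: 8080   # API
            - containerPort: 8081   # metrics / health
          envFrom:
            - secretRef:
                name: remlabs-env   # created from your .env file
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: remlabs-data
---
apiVersion: v1
kind: Service
metadata:
  name: remlabs-server
  namespace: remlabs
spec:
  selector:
    app: remlabs-server
  ports:
    - name: api
      port: 8080
      targetPort: 8080

After the namespace exists, create the config secret from your .env file: kubectl -n remlabs create secret generic remlabs-env --from-env-file=.env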
Shell
# Apply manifests
kubectl apply -f remlabs-k8s.yaml
# Check pod status
kubectl -n remlabs get pods
# Tail logs
kubectl -n remlabs logs -f deploy/remlabs-server
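To smoke-test without exposing the Service externally, port-forward from your workstation (the Service name matches the sketch above):
Shell
# Forward local port 8080 to the in-cluster Service
kubectl -n remlabs port-forward svc/remlabs-server 8080:8080

# In a second terminal
curl -H "Authorization: Bearer $REM_API_KEY" \
  http://localhost:8080/v1/memories/export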
Scaling note: REM Labs uses SQLite with WAL mode, which supports concurrent reads but serializes writes. For most workloads a single replica is sufficient. If you need horizontal read scaling, run multiple read replicas with Litestream replication pointing at a shared object store (S3, GCS). Write traffic should still go to a single primary instance. The Helm chart supports this via replication.enabled=true.
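If you set up replication outside the Helm chart, Litestream itself is configured with a small YAML file. The bucket URL below is a placeholder; how the chart wires this up internally is not shown here:
litestream.yml
# Continuously replicate the primary's SQLite database to object storage
dbs:
  - path: /data/remlabs.db
    replicas:
      - url: s3://your-bucket/remlabs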
Data
Backup, restore, and migration
All data lives in a single directory. Back it up, move it, or migrate from cloud to self-hosted.
SQLite file location
Inside the data directory (REM_DATA_DIR, default /data):
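Because the server runs SQLite in WAL mode, expect -wal and -shm sidecar files next to the database; index and configuration file names may vary by version:
Shell
docker exec remlabs ls /data
# remlabs.db  remlabs.db-shm  remlabs.db-wal  (plus index and configuration files)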
Shell
# Hot backup (safe while server is running)
docker exec remlabs sqlite3 /data/remlabs.db \
  ".backup /data/backup-$(date +%Y%m%d).db"

# Copy backup out of the container
docker cp remlabs:/data/backup-$(date +%Y%m%d).db ./
# Or snapshot the entire data volume
docker run --rm -v remlabs-data:/data -v $(pwd):/backup alpine \
tar czf /backup/remlabs-backup-$(date +%Y%m%d).tar.gz -C /data .
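To run the hot backup nightly, a crontab entry on the Docker host works. Note that cron treats % as a line terminator, so it must be escaped:
Shell
# Daily at 03:00 -- % escaped as \% per crontab syntax
0 3 * * * docker exec remlabs sqlite3 /data/remlabs.db ".backup /data/backup-$(date +\%Y\%m\%d).db"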
Restore
Shell
# Stop the server
docker compose down
# Restore from backup
docker run --rm -v remlabs-data:/data -v $(pwd):/backup alpine \
  sh -c "rm -f /data/remlabs.db* && tar xzf /backup/remlabs-backup-20260415.tar.gz -C /data"

# Restart
docker compose up -d
Migrate from hosted to self-hosted
Shell
# Export all memories from REM Labs cloud
curl -H "Authorization: Bearer $REM_API_KEY" \
https://remlabs.ai/v1/memories/export > memories-export.json
# Import into your self-hosted instance
curl -X POST http://localhost:8080/v1/memories/import \
-H "Authorization: Bearer $REM_API_KEY" \
-H "Content-Type: application/json" \
-d @memories-export.json
Zero lock-in: The export includes all memories, metadata, tags, and relationship data as JSON. Your self-hosted instance rebuilds full-text and semantic indexes automatically on import.
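Before importing, it is worth sanity-checking the export. The top-level "memories" key below is an assumption; inspect your own export file to confirm its shape:
Shell
# Assumes the export wraps records in a top-level "memories" array
jq '.memories | length' memories-export.json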
Feature Parity
What you get with self-hosted
Self-hosted deployments include all core API features: memory storage, search (full-text + semantic), tagging, namespaces, relationships, and the full REST API. SDKs and CLI tools work identically -- just point them at your self-hosted URL.
Full API -- every endpoint available on cloud works the same on self-hosted
Dream Engine -- requires an LLM API key (OpenAI, Anthropic, or Ollama) for consolidation. Without one, memory storage and retrieval work normally, but background dream consolidation is disabled.
Air-gapped support -- with Ollama, the entire stack runs offline. No external network calls.
Same SDKs -- Python, Node.js, and CLI all accept a custom base URL. Set REM_BASE_URL=http://your-server:8080 and everything works.
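In practice, cutting an existing integration over is a one-line change:
Shell
export REM_BASE_URL=http://your-server:8080
# Existing SDK calls and CLI commands now target the self-hosted instance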
Comparison
Cloud vs. Self-Hosted
Both options use the same API and SDKs. Choose based on your requirements.
| Feature | Cloud | Self-Hosted |
| --- | --- | --- |
| Setup | Instant -- sign up and go | One command (Mac) or Docker pull |
| Infrastructure | Fully managed by REM Labs | Your hardware, your control |
| Updates | Automatic -- always latest | Manual updates on your schedule |
| Dream Engine | Included | Requires LLM API key |
| Data sovereignty | US-based infrastructure | Full control -- data never leaves your network |
| Air-gapped | Not available | Supported -- runs fully offline |
| Hardware | Managed cloud | Bring your own -- Mac Mini, server, VM |
| Maintenance | Zero maintenance | You manage backups and updates |
Need enterprise self-hosting?
Dedicated support, custom SLAs, air-gapped deployments, and compliance guidance for regulated industries.