Self-Host

Run REM Labs on your own infrastructure.

Full data sovereignty. Same API. Mac Mini, Docker, Kubernetes, or bare metal.

Turn your Mac Mini into a permanent memory server

One command. Always-on. Your memories never leave your hardware.

Shell
curl -fsSL https://get.remlabs.ai | sh
  • Always-on 24/7 -- auto-starts on boot via launchd. No babysitting required.
  • Apple Silicon optimized -- native arm64 binary. Low power consumption, silent operation.
  • Access from anywhere on your network -- available at rem.local via mDNS/Bonjour.
  • Same API, same SDKs -- point your existing code at http://rem.local:7700 and everything works.

Hardware specifications

REM Labs is lightweight by design. The SQLite-backed architecture means no external database is needed.

Resource | Minimum | Recommended
CPU | 1 core | 2+ cores
RAM | 2 GB | 4 GB
Disk | 10 GB | 50 GB SSD
OS | Linux (amd64/arm64), macOS (Apple Silicon or Intel)

Ports: 8080 (API) and 8081 (metrics / health checks). The Mac Mini installer uses port 7700 by default for local-network convenience; Docker and Kubernetes deployments default to 8080.

Container deployment

The fastest path to self-hosted REM Labs. Pull the image, set your keys, and you have a running memory API in under a minute.

Step 1 -- Pull the image

Shell
docker pull ghcr.io/remlabs/remlabs-server:latest

Step 2 -- Configure environment

Create a .env file with your settings:

.env
# Required
REM_API_KEY=your-api-key-here
REM_ENCRYPTION_KEY=your-aes-256-key

# Optional -- server config
REM_PORT=8080
REM_METRICS_PORT=8081
REM_DATA_DIR=/data
REM_LOG_LEVEL=info

# Optional -- Dream Engine (memory consolidation)
# Provide at least one LLM API key:
# REM_OPENAI_API_KEY=sk-...
# REM_ANTHROPIC_API_KEY=sk-ant-...
# REM_OLLAMA_URL=http://host.docker.internal:11434

Step 3 -- Run with Docker

Shell
docker run -d \
  --name remlabs \
  --env-file .env \
  -p 8080:8080 \
  -p 8081:8081 \
  -v remlabs-data:/data \
  --restart unless-stopped \
  ghcr.io/remlabs/remlabs-server:latest

Step 4 -- Verify it works

Shell
# Health check
curl http://localhost:8081/healthz

# Store a memory
curl -X POST http://localhost:8080/v1/memories \
  -H "Authorization: Bearer $REM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "content": "Self-hosted deployment verified." }'

# Search memories (quote the URL so the shell doesn't interpret "?")
curl "http://localhost:8080/v1/memories/search?q=deployment" \
  -H "Authorization: Bearer $REM_API_KEY"

Docker Compose

For production use, Docker Compose gives you declarative config and easy restarts.

docker-compose.yml
version: '3.8'

services:
  remlabs:
    image: ghcr.io/remlabs/remlabs-server:latest
    container_name: remlabs
    restart: unless-stopped
    ports:
      - '8080:8080' # API
      - '8081:8081' # Metrics + health
    volumes:
      - remlabs-data:/data
    environment:
      - REM_API_KEY=${REM_API_KEY}
      - REM_ENCRYPTION_KEY=${REM_ENCRYPTION_KEY}
      - REM_PORT=8080
      - REM_METRICS_PORT=8081
      - REM_DATA_DIR=/data
      - REM_LOG_LEVEL=info
      # Uncomment for Dream Engine:
      # - REM_OPENAI_API_KEY=${REM_OPENAI_API_KEY}
      # - REM_ANTHROPIC_API_KEY=${REM_ANTHROPIC_API_KEY}
      # - REM_OLLAMA_URL=http://host.docker.internal:11434
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:8081/healthz']
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

volumes:
  remlabs-data:
    driver: local
Shell
# Start
docker compose up -d

# View logs
docker compose logs -f remlabs

# Stop
docker compose down

Environment variables

Variable | Description | Required
REM_API_KEY | API key for authenticating requests | Yes
REM_ENCRYPTION_KEY | AES-256 key for encrypting memories at rest | Yes
REM_PORT | API server port (default: 8080) | No
REM_METRICS_PORT | Metrics and health check port (default: 8081) | No
REM_DATA_DIR | Data directory for SQLite databases and indexes (default: /data) | No
REM_LOG_LEVEL | Logging verbosity: debug, info, warn, error (default: info) | No
REM_OPENAI_API_KEY | OpenAI key for Dream Engine consolidation | No
REM_ANTHROPIC_API_KEY | Anthropic key for Dream Engine consolidation | No
REM_OLLAMA_URL | Ollama endpoint for local LLM Dream Engine (e.g. http://localhost:11434) | No
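Both required keys are opaque secrets, so any strong random generator works. A minimal sketch using OpenSSL (the hex encoding for REM_ENCRYPTION_KEY is an assumption; confirm the expected key format for your server version):

```shell
# Generate a random API key and a 256-bit encryption key.
# Hex encoding is assumed here -- check your server version's docs
# for the exact format REM_ENCRYPTION_KEY expects.
REM_API_KEY=$(openssl rand -hex 24)
REM_ENCRYPTION_KEY=$(openssl rand -hex 32)   # 32 bytes = 256 bits

echo "REM_API_KEY=$REM_API_KEY"
echo "REM_ENCRYPTION_KEY=$REM_ENCRYPTION_KEY"
```

Paste the resulting values into your .env file or secrets manager; avoid committing them to version control.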

Persistent data: Always mount a volume to /data. This directory contains the SQLite database, full-text search indexes, and configuration. Without a mounted volume, data is lost when the container restarts.

Kubernetes deployment

Deploy REM Labs on any Kubernetes cluster. A Helm chart is available for production use, or you can apply the manifests below directly.

Option A -- Helm chart

Shell
# Add the REM Labs Helm repo
helm repo add remlabs https://charts.remlabs.ai
helm repo update

# Install
helm install remlabs remlabs/remlabs-server \
  --namespace remlabs --create-namespace \
  --set apiKey=your-api-key \
  --set encryptionKey=your-aes-256-key \
  --set persistence.size=50Gi \
  --set resources.requests.memory=2Gi \
  --set resources.requests.cpu=1

Option B -- Raw manifests

If you prefer kubectl apply, here are the core resources: a Secret, PersistentVolumeClaim, Deployment, and Service.

remlabs-k8s.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: remlabs
---
apiVersion: v1
kind: Secret
metadata:
  name: remlabs-secrets
  namespace: remlabs
type: Opaque
stringData:
  REM_API_KEY: your-api-key
  REM_ENCRYPTION_KEY: your-aes-256-key
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: remlabs-data
  namespace: remlabs
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: remlabs-server
  namespace: remlabs
  labels:
    app: remlabs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: remlabs
  template:
    metadata:
      labels:
        app: remlabs
    spec:
      containers:
        - name: remlabs
          image: ghcr.io/remlabs/remlabs-server:latest
          ports:
            - containerPort: 8080
              name: api
            - containerPort: 8081
              name: metrics
          envFrom:
            - secretRef:
                name: remlabs-secrets
          env:
            - name: REM_PORT
              value: "8080"
            - name: REM_METRICS_PORT
              value: "8081"
            - name: REM_DATA_DIR
              value: /data
            - name: REM_LOG_LEVEL
              value: info
          volumeMounts:
            - name: data
              mountPath: /data
          resources:
            requests:
              memory: 2Gi
              cpu: "1"
            limits:
              memory: 4Gi
              cpu: "2"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8081
            initialDelaySeconds: 10
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8081
            initialDelaySeconds: 5
            periodSeconds: 10
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: remlabs-data
---
apiVersion: v1
kind: Service
metadata:
  name: remlabs
  namespace: remlabs
spec:
  selector:
    app: remlabs
  ports:
    - name: api
      port: 8080
      targetPort: 8080
    - name: metrics
      port: 8081
      targetPort: 8081
  type: ClusterIP
Shell
# Apply manifests
kubectl apply -f remlabs-k8s.yaml

# Check pod status
kubectl -n remlabs get pods

# Tail logs
kubectl -n remlabs logs -f deploy/remlabs-server

Scaling note: REM Labs uses SQLite with WAL mode, which supports concurrent reads but serializes writes. For most workloads a single replica is sufficient. If you need horizontal read scaling, run multiple read replicas with Litestream replication pointing at a shared object store (S3, GCS). Write traffic should still go to a single primary instance. The Helm chart supports this via replication.enabled=true.
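Under that model, a read-scaling setup via the Helm chart might look like the values sketch below. Only `replication.enabled` is documented above; the bucket and replica-count keys are illustrative names, so check your chart version's values schema before applying:

```yaml
# values-replication.yaml -- illustrative sketch, not a verified schema.
# Only replication.enabled is documented; the remaining keys are assumptions.
replication:
  enabled: true
  # Shared object store Litestream replicates to/from (assumed key):
  bucket: s3://my-remlabs-replication
readReplicas:
  count: 2   # read-only replicas restored from the object store (assumed key)
```

Writes would still flow to the single primary; the replicas serve read traffic from Litestream-restored copies.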

Backup, restore, and migration

All data lives in a single directory. Back it up, move it, or migrate from cloud to self-hosted.

SQLite file location

Inside the data directory (REM_DATA_DIR, default /data):

  • remlabs.db -- primary SQLite database (memories, metadata, FTS5 indexes)
  • remlabs.db-wal -- write-ahead log (auto-managed)
  • remlabs.db-shm -- shared memory file (auto-managed)

Backup

Shell
# Hot backup (safe while server is running)
docker exec remlabs sqlite3 /data/remlabs.db \
  ".backup /data/backup-$(date +%Y%m%d).db"

# Copy backup out of the container
docker cp remlabs:/data/backup-$(date +%Y%m%d).db ./

# Or snapshot the entire data volume
docker run --rm -v remlabs-data:/data -v $(pwd):/backup alpine \
  tar czf /backup/remlabs-backup-$(date +%Y%m%d).tar.gz -C /data .
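Before trusting a backup, it's worth confirming the copied file is a healthy SQLite database. A minimal sketch using the sqlite3 CLI (the path is a placeholder; point it at the backup you copied out):

```shell
# Path to the backup file you copied out of the container (placeholder).
BACKUP=${BACKUP:-/tmp/remlabs-backup-check.db}

# For demonstration only: create an empty database if the file is missing,
# so the check below has something to run against.
[ -f "$BACKUP" ] || sqlite3 "$BACKUP" "VACUUM;"

# Prints "ok" if the file passes SQLite's built-in consistency checks.
sqlite3 "$BACKUP" "PRAGMA integrity_check;"
```

Any output other than "ok" means the backup is corrupt and should not be used for a restore.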

Restore

Shell
# Stop the server
docker compose down

# Restore from backup
docker run --rm -v remlabs-data:/data -v $(pwd):/backup alpine \
  sh -c "rm -f /data/remlabs.db* && tar xzf /backup/remlabs-backup-20260415.tar.gz -C /data"

# Restart
docker compose up -d

Migrate from hosted to self-hosted

Shell
# Export all memories from REM Labs cloud
curl -H "Authorization: Bearer $REM_API_KEY" \
  https://remlabs.ai/v1/memories/export > memories-export.json

# Import into your self-hosted instance
curl -X POST http://localhost:8080/v1/memories/import \
  -H "Authorization: Bearer $REM_API_KEY" \
  -H "Content-Type: application/json" \
  -d @memories-export.json

Zero lock-in: The export includes all memories, metadata, tags, and relationship data as JSON. Your self-hosted instance rebuilds full-text and semantic indexes automatically on import.

What you get with self-hosted

Self-hosted deployments include all core API features: memory storage, search (full-text + semantic), tagging, namespaces, relationships, and the full REST API. SDKs and CLI tools work identically -- just point them at your self-hosted URL.

  • Full API -- every endpoint available on cloud works the same on self-hosted
  • Dream Engine -- requires an LLM API key (OpenAI, Anthropic, or Ollama) for consolidation. Without one, memory storage and retrieval work normally, but background dream consolidation is disabled.
  • Air-gapped support -- with Ollama, the entire stack runs offline. No external network calls.
  • Same SDKs -- Python, Node.js, and CLI all accept a custom base URL. Set REM_BASE_URL=http://your-server:8080 and everything works.
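For example, redirecting existing tooling is just an environment variable (shown with the Mac Mini default; substitute http://your-server:8080 for Docker or Kubernetes deployments):

```shell
# Point REM Labs SDKs and the CLI at the self-hosted server.
# rem.local:7700 is the Mac Mini installer default; Docker and
# Kubernetes deployments listen on port 8080 instead.
export REM_BASE_URL="http://rem.local:7700"

echo "REM Labs clients will now target $REM_BASE_URL"
```

No code changes are needed; existing scripts pick up the new base URL from the environment.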

Cloud vs. Self-Hosted

Both options use the same API and SDKs. Choose based on your requirements.

Feature | Cloud | Self-Hosted
Setup | Instant -- sign up and go | One command (Mac) or Docker pull
Infrastructure | Fully managed by REM Labs | Your hardware, your control
Updates | Automatic -- always latest | Manual updates on your schedule
Dream Engine | Included | Requires LLM API key
Data sovereignty | US-based infrastructure | Full control -- data never leaves your network
Air-gapped | Not available | Supported -- runs fully offline
Hardware | Managed cloud | Bring your own -- Mac Mini, server, VM
Maintenance | Zero maintenance | You manage backups and updates

Need enterprise self-hosting?

Dedicated support, custom SLAs, air-gapped deployments, and compliance guidance for regulated industries.

Contact dev@remlabs.ai