How to Self-Host Engram Cloud on Dokploy (Persistent Memory for AI Agents)

Fredy Acuna / May 4, 2026 / 12 min read

Self-Host Your Own Persistent Memory for AI Agents

Engram is a persistent memory system for AI agents that connect over MCP (Claude Code, Cursor, Codex, etc.). Locally it stores everything in SQLite, and optionally you can replicate your memory to a cloud server to access it from any machine or share it across projects.

In this guide we'll deploy Engram Cloud on your own VPS using Dokploy, with real authentication (no insecure mode), automatic HTTPS, and a full web dashboard. Everything here comes from doing it in production and solving EVERY error that came up along the way.


What You'll Learn

  • Deploy Engram Cloud using the official GHCR image (no building required)
  • Configure authentication with bearer token + JWT secret
  • Isolate postgres inside the compose (no host exposure)
  • Wire your domain with automatic HTTPS via Traefik
  • Access the web dashboard to browse your memory
  • Enroll projects from your local client
  • Solve the 5 common errors (yes, you'll hit them — I hit them all)

Prerequisites

Before you start, make sure you have:

  • A working Dokploy instance (check How to Install Coolify if you need a similar setup)
  • A domain or subdomain ready to point (e.g. engram.yourdomain.com)
  • SSH access to the VPS (recommended for troubleshooting)
  • Your Dokploy instance connected to GitHub (we'll use repo-based deploy)

Understanding Engram

Engram is an agent-agnostic Go binary. It runs in several modes:

  • Local CLI/TUI: stores memory in ~/.engram/ (SQLite)
  • MCP server: exposes tools (mem_save, mem_search, etc.) to your AI agent
  • Cloud server: replicates local memory to a remote postgres, with web dashboard

Key principle: the local SQLite is always the source of truth. Cloud acts as a replicated index, NOT primary storage. If your cloud goes down, you keep working locally without losing anything.

The Project Model

Each project in Engram is a fully isolated namespace. If you work on 10 projects, each has its own memory, observations, and sessions. They share nothing.

The project name resolves from the MCP server's cwd, not from what the LLM passes. If you open Claude Code in ~/projects/foo, that's project foo. Open it in ~/projects/bar, it's bar. Separate memory.


Step 1: Create the Deploy Repo

Instead of pasting docker-compose.yml directly into Dokploy, we'll create a private GitHub repo with the configuration. Dokploy will clone it and deploy from there.

Create a new directory:

mkdir engram-deploy && cd engram-deploy

Create the docker-compose.yml:

services:
  postgres:
    image: postgres:16-alpine
    container_name: engram-cloud-postgres
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 3s
      retries: 10
    volumes:
      - engram-cloud-pg:/var/lib/postgresql/data

  cloud:
    image: ghcr.io/gentleman-programming/engram:${ENGRAM_VERSION}
    container_name: engram-cloud
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      ENGRAM_DATABASE_URL: postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}?sslmode=disable
      ENGRAM_JWT_SECRET: ${ENGRAM_JWT_SECRET}
      ENGRAM_CLOUD_TOKEN: ${ENGRAM_CLOUD_TOKEN}
      ENGRAM_CLOUD_INSECURE_NO_AUTH: '0'
      ENGRAM_CLOUD_ALLOWED_PROJECTS: ${ENGRAM_CLOUD_ALLOWED_PROJECTS}
      ENGRAM_CLOUD_HOST: 0.0.0.0
      ENGRAM_PORT: '18080'
    expose:
      - '18080'
    command: ['cloud', 'serve']

volumes:
  engram-cloud-pg:

And a .gitignore so you don't commit secrets:

.env
.env.local
*.local

Compose decisions explained:

  • Pre-published official image: ghcr.io/gentleman-programming/engram already ships with cloud serve as the default CMD. No building — Dokploy does docker pull and starts in seconds.
  • Version pinned via ENGRAM_VERSION: never use latest in production. When you bump it manually, you know exactly what's changing.
  • Postgres NOT exposed to the host: we removed the ports: mapping from the upstream compose. The DB is only reachable from the compose's internal network. Safer.
  • expose: 18080 (no host port): Traefik (bundled with Dokploy) grabs the service from the internal network and adds HTTPS. We don't expose ports directly to the public VPS.
  • ENGRAM_CLOUD_INSECURE_NO_AUTH: '0': the upstream compose ships '1' for local dev. Make sure it's '0' in production.

Push to a private GitHub repo:

git init -b main
git add -A
git commit -m "feat: initial engram cloud deploy compose"
gh repo create <your-username>/engram-deploy --private --source=. --push

Step 2: Generate the Secrets

Engram Cloud needs three distinct secrets. Each plays a different role:

  • POSTGRES_PASSWORD: Postgres user password (server only)
  • ENGRAM_JWT_SECRET: HMAC key for signing internal tokens (server only)
  • ENGRAM_CLOUD_TOKEN: shared bearer token the client sends in headers (server AND client)

Important: each one needs a different value. Reusing the same secret for two different roles is a terrible practice.

Generation

# POSTGRES_PASSWORD — MUST be URL-safe (it goes inside a postgres:// URL)
openssl rand -hex 32

# ENGRAM_JWT_SECRET — only lives in env vars, base64 is fine
openssl rand -base64 48

# ENGRAM_CLOUD_TOKEN — only travels in headers, base64 is fine
openssl rand -base64 48

Why hex for postgres?

The password gets injected into ENGRAM_DATABASE_URL. If it has URL-reserved characters (:, /, @, ?, #, +), it breaks the parser and you'll see cryptic invalid port errors that have NOTHING to do with the actual port. Hex ([0-9a-f]) is fully URL-safe and avoids the trap entirely.


Step 3: Create the Service in Dokploy

  1. Go to your Dokploy dashboard → Create Service → Docker Compose
  2. Name the service (e.g. engram-cloud)
  3. Source: Git
  4. Paste your private repo URL
  5. Branch: main
  6. Compose path: docker-compose.yml

Step 4: Configure Environment Variables

In the Environment tab of your service, paste this and replace the <...> placeholders with the values you just generated:

ENGRAM_VERSION=v1.15.7
POSTGRES_USER=engram
POSTGRES_DB=engram_cloud
POSTGRES_PASSWORD=<output of openssl rand -hex 32>
ENGRAM_JWT_SECRET=<output of openssl rand -base64 48>
ENGRAM_CLOUD_TOKEN=<output of openssl rand -base64 48>
ENGRAM_CLOUD_ALLOWED_PROJECTS=personal

About ENGRAM_CLOUD_ALLOWED_PROJECTS: this is a server-side whitelist of which projects the cloud can accept. Start with personal or the project name where you'll use Engram first. To add more later, append them comma-separated and redeploy:

ENGRAM_CLOUD_ALLOWED_PROJECTS=personal,blog,work,experiments

Important: redeploying in Dokploy with a published image is NOT a build + long downtime. It's just docker pull (already cached) and a container restart with the new env vars. 2 to 5 seconds of downtime, during which your client keeps working with the local SQLite without losing anything. Sync resumes automatically when it comes back.


Step 5: Configure the Domain

In the Domains tab of your service:

  1. Add Domain
  2. Service Name: cloud
  3. Host: engram.yourdomain.com
  4. Container Port: 18080
  5. Path: /
  6. HTTPS: enabled (Traefik + Let's Encrypt automatic)
  7. Save

Before deploying: make sure the DNS for engram.yourdomain.com already points to your VPS IP. If Let's Encrypt can't validate the domain, the deploy will succeed but the certificate won't be issued, leaving you without TLS.


Step 6: Deploy and Verify

Click Deploy. Within a minute you should see:

  • Container engram-cloud-postgres: Up (healthy)
  • Container engram-cloud: Up

Check the engram-cloud logs. If it starts cleanly you'll see something like:

cloud serve listening on 0.0.0.0:18080

Open https://engram.yourdomain.com/dashboard/login in your browser. Paste your ENGRAM_CLOUD_TOKEN. Login. Done — you're in the dashboard.


Step 7: Configure the Local Client

Now let's point your local Engram client at the server.

Verify the binary version

engram version

You need at least v1.15.x (older versions don't have cloud commands). If it says engram dev or engram vdev, that's a development build without cloud features. Reinstall with a specific tag:

go install github.com/Gentleman-Programming/engram/cmd/engram@v1.15.7

The key detail: the @v1.15.7 is MANDATORY. Without a tag, Go builds from main as a versionless dev build. With the tag, you get the binary with the proper version.

Verify the version is right now:

engram version
# should say: engram 1.15.7

engram --help | grep cloud
# should list the 'cloud' subcommand

Set the token in your shell

Add this to your ~/.bashrc or ~/.zshrc (whichever you actually use — check with echo $SHELL):

export ENGRAM_CLOUD_TOKEN='<the same token you set in Dokploy>'

Reload your shell (source ~/.bashrc or open a new terminal).

Point the client at the server

engram cloud config --server https://engram.yourdomain.com
engram cloud status

You should see:

Cloud status: configured (target=cloud)
Server: https://engram.yourdomain.com
Auth status: ready (token provided via runtime cloud config)
Sync readiness: ready for explicit --project sync (project must be enrolled)

If it says Auth status: token not configured, your shell isn't reading the env var. Check echo "${ENGRAM_CLOUD_TOKEN:0:6}..." (it should print the first 6 chars).


Step 8: Enroll Your First Project

cd ~/path/to/the-project-you-want-to-sync
engram cloud enroll personal       # only the first time
engram sync --cloud --project personal

Reload the dashboard at https://engram.yourdomain.com/dashboard. The 0 / 0 / 0 counters now show real numbers: the project, you as a contributor, and the total chunks synced.


The Web Dashboard

Engram Cloud ships a complete dashboard. With just your regular token (no admin token required), you can browse:

  • /dashboard: landing page
  • /dashboard/stats: general metrics
  • /dashboard/activity: recent activity
  • /dashboard/projects: your project list
  • /dashboard/projects/{name}: project detail (observations, sessions, prompts)
  • /dashboard/browser/observations: browse ALL your observations
  • /dashboard/browser/sessions: browse sessions
  • /dashboard/browser/prompts: prompt history
  • /dashboard/contributors: who contributed what (useful for teams)

For admin features (pause/resume sync per project, audit log), generate another token with openssl rand -base64 48 and add ENGRAM_CLOUD_ADMIN=<token> as a Dokploy env var. Then access /dashboard/admin/*.


Troubleshooting (The 5 Real Errors)

These are the errors you'll hit in roughly this order. I had them all.

1. cloud auth token is required: set ENGRAM_CLOUD_TOKEN

Cause: missing ENGRAM_CLOUD_TOKEN in the server env vars. The upstream compose ships ENGRAM_CLOUD_INSECURE_NO_AUTH=1 (local dev mode without auth). When moving to production, you need to add the token.

Fix: add ENGRAM_CLOUD_TOKEN to Dokploy Environment and redeploy.

2. cannot parse ... invalid port ":XXXXX" after host

Cause: your POSTGRES_PASSWORD contains URL-reserved characters (:, /, +, @). The postgres URL parser gets confused and thinks part of the password is a port.

Fix: regenerate the password with openssl rand -hex 32 (alphabet [0-9a-f], fully URL-safe).

3. password authentication failed for user "engram" (after changing the password)

Cause: you changed POSTGRES_PASSWORD in Dokploy but the postgres volume was already initialized with the old password. The official postgres image only applies POSTGRES_PASSWORD the FIRST time it creates the data dir. Changing it later does NOT update the existing user.

Fix: delete the postgres volume and redeploy. Since the cloud never managed to store any of your data, you lose nothing.

4. volume is in use when running docker volume rm

Cause: you hit "Stop" in Dokploy, but "Stop" only halts containers — it doesn't remove them. Stopped containers still hold references to their volumes.

Fix:

# Find the stopped container
docker ps -a | grep engram

# Remove it (not just stop)
docker rm <container-id>

# Now you can
docker volume rm <real-volume-name>

Note: the real volume name has a Dokploy prefix. Run docker volume ls | grep engram to find the exact name, something like <projectid>_engram-cloud-pg.

If it's easier: delete the entire app in Dokploy and recreate it. Wipes everything in one shot.

5. engram dev (client without cloud commands)

Cause: you installed the binary with go install ...@latest or no tag at all. That builds from main as a development build, which doesn't include the cloud commands if your Go cache picked a commit before the feature landed.

Fix:

go install github.com/Gentleman-Programming/engram/cmd/engram@v1.15.7
goenv rehash   # only if you use goenv
engram version  # must say 1.15.7, NOT dev

Bonus: the wrong shell

If you set ENGRAM_CLOUD_TOKEN in ~/.zshrc but engram cloud status keeps saying token not configured, check your shell:

echo $SHELL

If it says /bin/bash, .zshrc is never loaded. You need to put the export in ~/.bashrc instead.


Security Considerations

For serious deployments:

  1. Token rotation: if you suspect ENGRAM_CLOUD_TOKEN has leaked, regenerate it, update it on both sides (Dokploy environment and your shell), and redeploy. Anyone with that token has read/write access to ALL whitelisted projects.
  2. Postgres volume backups: if it actually matters, set up backups for the engram-cloud-pg volume (Dokploy integrates with S3-compatible storage).
  3. HTTPS is mandatory: never run on plain HTTP in production. The bearer token travels with every request; without TLS, anyone on the path can sniff it.
  4. ENGRAM_CLOUD_ALLOWED_PROJECTS as defense in depth: even if the token leaks, only the projects in the whitelist can be synced. Keep the list tight and specific.
  5. Don't commit secrets: the .gitignore with .env is mandatory. Secrets live only in Dokploy → Environment.

Conclusion

You now have your own persistent memory infrastructure for AI agents running on your VPS:

  • Official published image (no custom builds)
  • Automatic HTTPS via Traefik
  • Bearer token + JWT auth
  • Postgres isolated on the internal network
  • Full web dashboard
  • Compatible with Claude Code, Cursor, OpenCode, Gemini CLI, Codex, and any MCP client

And the most important part: your memory belongs to you. It doesn't depend on an external cloud service. If Engram disappeared tomorrow, you'd still have everything locally in SQLite plus a postgres replica on your VPS.


Related Resources

  • Engram on GitHub
  • Engram Cloud official docs
  • Dokploy documentation
  • How to Self-Host Gemma on Dokploy
  • Free Self-Hosted Obsidian Sync with Live Sync and Traefik
