📋 Pre-flight Checks
- Requires `status:approved` before a PR can be opened
🔍 Problem Description
Engram already has the right primitive for multi-machine usage: `engram sync` exports/imports compressed chunks that are safe to move between machines.
That works, but it is still manual:
- Save memory on machine A
- Run `engram sync`
- Commit/push or otherwise transfer the sync artifacts
- Pull on machine B
- Run `engram sync --import`
For users who switch between two or more computers during the day, this friction is high enough that memory becomes stale in practice.
Important constraints:
- Syncing the live SQLite file through OneDrive/Dropbox/etc. is unsafe
- The current chunk-based sync model is correct, but the UX is incomplete for real multi-PC workflows
💡 Proposed Solution
Add an opt-in automatic sync layer on top of the existing chunk-based sync model.
Design goals:
- Keep SQLite local as the source of truth
- Keep the existing chunk export/import format
- Add background/scheduled auto-sync
- Make the transport pluggable so Git can be supported first, with Cloud as a future option
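A pluggable transport could be modeled as a small interface that the sync orchestrator calls, with Git as the first concrete implementation. A minimal Python sketch; every name here (`SyncTransport`, `GitTransport`) is hypothetical and not Engram's actual API:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class SyncTransport(Protocol):
    """Moves exported chunk artifacts to/from a remote location."""

    def pull(self) -> None:
        """Fetch remote chunk artifacts into the local sync directory."""
        ...

    def push(self) -> None:
        """Publish locally exported chunks to the remote."""
        ...


class GitTransport:
    """Git-backed transport: the sync directory is a working clone."""

    def __init__(self, repo_path: str) -> None:
        self.repo_path = repo_path

    def pull(self) -> None:
        # Placeholder: a real version would run `git pull --rebase`
        # in self.repo_path.
        pass

    def push(self) -> None:
        # Placeholder: a real version would run git add/commit/push.
        pass
```

A future cloud transport would only need to implement the same two methods, leaving the orchestration and chunk export/import logic untouched.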
Proposed UX:
Configure a sync profile once:

```
engram sync init git --repo ~/engram-sync
```

Then start Engram normally:

```
engram mcp
```
Behavior:
- on startup: pull/import remote changes if available
- after local writes: debounce, export chunks, commit, push
- periodically: pull/import in the background
- on shutdown: final sync attempt
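The write-triggered behavior above amounts to a debounce: each local write restarts a countdown, and the sync runs only once writes have gone quiet. A rough sketch of that logic, assuming hypothetical names (`DebouncedSync`, `sync_fn`) rather than anything in Engram's codebase:

```python
import threading


class DebouncedSync:
    """Runs sync_fn once writes have been quiet for `delay` seconds."""

    def __init__(self, sync_fn, delay: float = 5.0):
        self.sync_fn = sync_fn
        self.delay = delay
        self._timer = None
        self._lock = threading.Lock()

    def notify_write(self) -> None:
        # Called after every local memory write; restarts the countdown.
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self.delay, self._run)
            self._timer.start()

    def _run(self) -> None:
        try:
            self.sync_fn()  # export chunks, commit, push
        except Exception:
            pass  # a failed sync must never block or crash writes

    def flush(self) -> None:
        # Final sync attempt on shutdown.
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
        self._run()
```

The periodic background pull would run on a separate timer alongside this; the key property is that write paths only ever call `notify_write`, which returns immediately.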
Suggested commands:

```
engram sync init git --repo <path-or-url>
engram sync enable
engram sync disable
engram sync status
engram sync run-once
engram sync doctor
```
Implementation outline:
- add a sync config file under `ENGRAM_DATA_DIR`
- add sync orchestration on top of existing export/import logic
- add Git transport hooks for pull, add/commit, and push
- never block writes if sync fails
- never sync the live SQLite database file directly
📦 Affected Area
Sync (multi-instance)
🔄 Alternatives Considered
- Sync the SQLite file with Dropbox/OneDrive: rejected because it is unsafe with concurrent machine usage and risks corruption/conflicts.
- Require PostgreSQL/shared DB for multi-machine use: rejected for this issue because it is too heavy for solo users with multiple machines and overlaps with separate shared-memory/team proposals.
- Keep manual sync only: rejected because, although it technically works, the UX friction is high enough that many users will not keep memory in sync across devices.
📎 Additional Context
Related:
Why this matters:
Engram is already very close to solving the same-memory-on-every-machine problem.
The missing piece is not a new storage engine. It is a safe, opt-in automation layer built on top of the sync model that already exists.