A brain-inspired memory system for AI agents.
Transparent. Debuggable. No black boxes.
Quick Start • How It Works • Architecture • CLI • Configuration • Contributing
Most AI memory systems are just vector search with extra steps. You can't read the memories, you can't debug why something was recalled, and you can't fix it when it breaks.
MemoryClaw takes a different approach. Every memory is a plain markdown file. Every retrieval is explainable: you can see exactly which keywords matched and why. Vector search exists only as a fallback for the ~20% of queries where keywords aren't enough.
"What did I discuss with Mahesh last week?"
→ keyword extraction: [mahesh, discuss, week]
→ alias expansion: [mahesh, mahes, discuss, talk, conversation, week]
→ 3 episodes matched (tags: mahesh, meeting)
→ 1 semantic fact found (Mahesh: works at Uber)
→ injected into working memory (412 tokens)
No embeddings computed. No API calls. Pure file I/O. Fully transparent.
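The trace above can be sketched in a few lines of TypeScript. The stop-word and alias tables here are illustrative stand-ins, not MemoryClaw's actual lists:

```typescript
// Illustrative stop words and aliases -- not MemoryClaw's real tables.
const STOP_WORDS = new Set(["what", "did", "with", "last", "the", "a", "about"]);

const ALIASES: Record<string, string[]> = {
  mahesh: ["mahes"],
  discuss: ["talk", "conversation"],
};

// Lowercase, strip punctuation, drop stop words and single characters.
function extractKeywords(query: string): string[] {
  return query
    .toLowerCase()
    .replace(/[^a-z0-9\s]/g, " ")
    .split(/\s+/)
    .filter((w) => w.length > 1 && !STOP_WORDS.has(w));
}

// Add known synonyms/spelling variants for each extracted keyword.
function expandAliases(keywords: string[]): string[] {
  const expanded = new Set(keywords);
  for (const kw of keywords) {
    for (const alias of ALIASES[kw] ?? []) expanded.add(alias);
  }
  return [...expanded];
}

const keywords = extractKeywords("What did I discuss with Mahesh last week?");
const expanded = expandAliases(keywords);
// keywords: ["discuss", "mahesh", "week"]
// expanded adds: "talk", "conversation", "mahes"
```

The expanded set is then matched against episode tags and summaries; no model call happens anywhere in this path.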
| | Traditional RAG | MemoryClaw |
|---|---|---|
| Storage | Opaque vector embeddings | Plain markdown files you can read and edit |
| Retrieval | Black-box similarity search | Keyword + tag matching with explainable scoring |
| Debuggability | "Why was this recalled?" Good luck. | Every match shows exactly which keywords hit |
| Cost | Embedding API calls on every message | Zero API calls in default mode |
| Privacy | Often requires cloud APIs | 100% local by default (Ollama) |
| Fallback | N/A | Vector search kicks in only when keywords fail |
MemoryClaw implements a four-tier memory hierarchy inspired by human cognition:
┌───────────────────────────────────────────────┐
│                WORKING MEMORY                 │
│        Active context for current task        │
│               (~400-800 tokens)               │
├───────────────────────────────────────────────┤
│               EPISODIC MEMORY                 │
│   Compressed records of past interactions     │
│          Markdown + YAML frontmatter          │
├───────────────────────────────────────────────┤
│               SEMANTIC MEMORY                 │
│       Persistent facts & relationships        │
│  contacts.md, projects.md, preferences.md     │
├───────────────────────────────────────────────┤
│              PROCEDURAL MEMORY                │
│   Reusable skills compiled from experience    │
│         Auto-generated, user-approved         │
└───────────────────────────────────────────────┘
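The four tiers map naturally onto simple TypeScript shapes. The field names below are assumptions based on the episode frontmatter shown later in this README, not the project's actual types:

```typescript
// Illustrative shapes for the four memory tiers -- assumed, not src/ types.
interface Episode {
  timestamp: string;                      // ISO 8601
  tags: string[];
  summary: string;
  participants: string[];
  confidence: "low" | "medium" | "high";
  body: string;                           // markdown details
}

interface SemanticFact {
  entity: string;                         // e.g. "Mahesh"
  key: string;                            // e.g. "employer"
  value: string;                          // e.g. "Uber"
  source: string;                         // originating log file
}

interface Skill {
  name: string;
  status: "draft" | "approved";           // skills need user approval
  template: string;
}

interface WorkingMemory {
  episodes: Episode[];
  facts: SemanticFact[];
  tokenBudget: number;                    // ~400-800 tokens
}
```

Because every tier is markdown-backed, each of these shapes round-trips to a file a human can read and edit.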
User query
    │
    ├── Extract keywords ──→ Expand aliases/synonyms
    │                                  │
    │                        ┌─────────┴─────────┐
    │                        │  Episode Search   │
    │                        │  (tag + summary   │
    │                        │   scoring)        │
    │                        └─────────┬─────────┘
    │                                  │
    │                        Results < threshold?
    │                          │               │
    │                         Yes              No
    │                          │               │
    │                  Vector Fallback    Use keyword
    │                     (optional)        results
    │                          │               │
    │                          └───────┬───────┘
    │                                  │
    │                            Blend results
    │                                  │
    │                        Semantic fact lookup
    │                                  │
    │                     Inject into working memory
    │                                  │
    └──────────────────────── LLM Call (~500 tokens)
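The threshold decision in the middle of the flow can be sketched like this. Function and parameter names are illustrative; the real orchestration lives in `src/retrieve.ts`:

```typescript
interface ScoredEpisode {
  file: string;
  score: number;
}

// Keyword results come in first; vector search is only invoked when they
// fall below the configured threshold (retrieval.minPrimaryResults).
function retrieve(
  keywordResults: ScoredEpisode[],
  vectorSearch: (() => ScoredEpisode[]) | null,
  minPrimaryResults = 2,
  maxResults = 5,
): ScoredEpisode[] {
  // Enough keyword hits (or no fallback configured): zero embedding calls.
  if (keywordResults.length >= minPrimaryResults || !vectorSearch) {
    return keywordResults.slice(0, maxResults);
  }
  // Below threshold: blend, keeping keyword hits first ("primary_first").
  const seen = new Set(keywordResults.map((r) => r.file));
  const blended = [...keywordResults];
  for (const r of vectorSearch()) {
    if (!seen.has(r.file)) blended.push(r);
  }
  return blended.slice(0, maxResults);
}
```

Note that the expensive path (the `vectorSearch` closure) is never even constructed as a result set unless the cheap path falls short.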
git clone https://github.com/imjohnzakkam/memoryclaw.git
cd memoryclaw
bun install

# Search your memories
bun run src/cli.ts retrieve "meeting with John about project deadline"
# Audit recent memories
bun run src/cli.ts audit
# Process raw logs into episodes
bun run src/cli.ts consolidate
# Build the search index
bun run src/cli.ts index

bun test
# 102 tests across 14 files

MemoryClaw hooks into OpenClaw's lifecycle: logging interactions, retrieving context before LLM calls, and running consolidation as a background service.
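A rough sketch of what that lifecycle integration might look like. This is hypothetical; the actual OpenClaw plugin API in `src/plugin.ts` may differ:

```typescript
type Message = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical plugin surface -- illustrative, not the real OpenClaw API.
interface MemoryPlugin {
  onInteraction(messages: Message[]): void;     // append to raw logs
  onBeforeLLM(messages: Message[]): Message[];  // inject working memory
}

function createMemoryPlugin(
  retrieveContext: (query: string) => string,
): MemoryPlugin {
  const rawLog: Message[][] = [];
  return {
    onInteraction(messages) {
      // Raw logs are kept and later consolidated into episodes + facts.
      rawLog.push(messages);
    },
    onBeforeLLM(messages) {
      const lastUser = [...messages].reverse().find((m) => m.role === "user");
      if (!lastUser) return messages;
      const context = retrieveContext(lastUser.content);
      // Prepend retrieved memories as a system message, within token budget.
      return context
        ? [{ role: "system", content: `Relevant memories:\n${context}` }, ...messages]
        : messages;
    },
  };
}
```

The key property: retrieval happens in `onBeforeLLM`, so the model only ever sees a small, pre-selected slice of memory rather than the whole store.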
src/
├── config.ts            # YAML config loader with defaults
├── keywords.ts          # Keyword extraction + stop word removal
├── aliases.ts           # Synonym/alias expansion
├── episodic.ts          # Episode parsing + scored keyword search
├── semantic.ts          # Semantic fact entity lookup
├── vector.ts            # Vector fallback (Ollama / OpenAI-compatible)
├── retrieve.ts          # Full retrieval pipeline orchestrator
├── logger.ts            # Raw interaction logging
├── llm.ts               # Chat completion abstraction
├── consolidate.ts       # Logs → episodes + facts (LLM-powered)
├── semantic-writer.ts   # Semantic updates with dedup + conflict detection
├── memories.ts          # User commands: audit, search, delete
├── patterns.ts          # Repeated action pattern detection
├── skill-compiler.ts    # Skill template generation + approval
├── working-memory.ts    # Working memory manager + onBeforeLLM hook
├── indexer.ts           # SQLite FTS5 inverted index
├── cli.ts               # CLI entry point
├── plugin.ts            # OpenClaw plugin integration
└── index.ts             # Public API
~/.openclaw/memoryclaw/
├── episodes/            # Compressed interaction summaries
│   └── 2025-04-08_14-32-10_meeting-with-john.md
├── semantic/            # Persistent facts
│   ├── contacts.md
│   ├── projects.md
│   └── _pending.md      # Low-confidence facts awaiting review
├── skills/              # Auto-generated skill templates
├── logs/                # Raw interaction logs
│   └── processed/
├── index/               # SQLite FTS5 search index
└── config.yaml
Every memory is a plain markdown file with structured YAML frontmatter:
---
timestamp: 2025-04-08T14:32:10Z
tags: [email, projectX, deadline, john]
summary: "Discussed project deadline with John. Confirmed April 10th."
participants: [user, assistant]
confidence: high
---
**Details:**
- Recipient: John <john@example.com>
- Subject: Project X deadline confirmation
- Agreed on April 10th hard deadline

memoryclaw <command> [args]
Retrieval
retrieve <query> Search episodes + semantic facts
search <query> Keyword search across memories
Memory Management
audit [limit] Show recent episodes and pending facts
delete <filename> Delete an episode
delete-fact <file> <key> Remove a fact from a semantic file
Consolidation
consolidate Process raw logs into episodes + facts
Skills
patterns [threshold] Detect repeated action patterns
compile [threshold] Generate draft skills from patterns
skills List all skills (draft + approved)
approve-skill <file> Approve a draft skill
reject-skill <file> Reject a draft skill
Indexing
index Build/rebuild SQLite FTS5 index
index-search <query> Search using the full-text index
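The `index` and `index-search` commands are backed by a SQLite FTS5 inverted index. The core idea — terms mapped to posting lists, intersected at query time — can be sketched in plain TypeScript without SQLite (the real implementation lives in `src/indexer.ts` and uses `bun:sqlite`):

```typescript
// Build term -> set-of-files posting lists from document text.
function buildIndex(docs: Record<string, string>): Map<string, Set<string>> {
  const postings = new Map<string, Set<string>>();
  for (const [file, text] of Object.entries(docs)) {
    for (const term of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      if (!postings.has(term)) postings.set(term, new Set());
      postings.get(term)!.add(file);
    }
  }
  return postings;
}

// AND-match: return files containing every query term.
function search(postings: Map<string, Set<string>>, query: string): string[] {
  const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
  let hits: Set<string> | null = null;
  for (const term of terms) {
    const files = postings.get(term) ?? new Set<string>();
    hits = hits === null
      ? new Set(files)
      : new Set([...hits].filter((f) => files.has(f)));
  }
  return [...(hits ?? new Set<string>())];
}
```

FTS5 adds tokenization options, prefix queries, and BM25 ranking on top of this, but the lookup shape is the same: no embeddings, just term intersection.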
Create ~/.openclaw/memoryclaw/config.yaml:
memoryclaw:
  disableDefaultMemory: true
  path: ~/.openclaw/memoryclaw

  retrieval:
    primary: keyword
    minPrimaryResults: 2        # Fallback triggers below this
    fallback: none              # "none" or "vector"
    maxResults: 5
    blendStrategy: primary_first

  llm:
    provider: ollama            # "ollama" or "openai-compatible"
    baseUrl: http://localhost:11434
    model: llama3:8b
    embeddingModel: nomic-embed
    apiKey: ""                  # Only needed for openai-compatible

  consolidation:
    interval: 60                # Minutes between consolidation runs
    skillThreshold: 7           # Min pattern occurrences for skill suggestion
    factValidation: true        # Schema-validate extracted facts
    pendingReview: true         # Low-confidence facts → _pending.md

  semantic:
    files: [contacts.md, projects.md]

MemoryClaw supports any OpenAI-compatible API. Use Ollama for fully local operation, or point it at any hosted endpoint:
# Local (default)
llm:
  provider: ollama
  baseUrl: http://localhost:11434
  model: llama3:8b

# Hosted / OpenAI-compatible
llm:
  provider: openai-compatible
  baseUrl: https://api.openai.com/v1
  model: gpt-4
  apiKey: sk-...

- Keyword-first retrieval. Transparent, explainable, zero-cost. Not another RAG wrapper.
- 80/20 rule. Keywords handle ~80% of queries. Vector fallback catches the rest.
- Files you can read. Every memory is a markdown file. `grep` your memories. Edit them in vim. Version them with git.
- Privacy by default. Ollama for local LLM. No cloud dependencies. Your memories stay on your machine.
- Honest about trade-offs. Keyword matching struggles with paraphrasing; that's why vector fallback exists. Auto-generated skills require approval, because auto-executing buggy code is worse than no automation.
MemoryClaw is designed to fail gracefully and transparently:
- Confidence scoring → Every episode carries a `confidence` field (low/medium/high)
- Fact staging → Low-confidence facts go to `_pending.md`, not canonical files
- Conflict detection → Contradictory facts are flagged, never silently overwritten
- Source tracing → Every fact links back to the originating log file
- Raw log preservation → Original logs kept for re-processing if summarization improves
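The staging and conflict rules above can be sketched as a single routing function. Names here are illustrative; the real logic lives in `src/semantic-writer.ts`:

```typescript
interface Fact {
  key: string;                            // e.g. "mahesh.employer"
  value: string;
  confidence: "low" | "medium" | "high";
  source: string;                         // originating log file
}

interface WriteResult {
  target: "canonical" | "pending";        // pending -> _pending.md
  conflict: boolean;
}

// Decide where an extracted fact goes: canonical file, or staged for review.
function routeFact(incoming: Fact, existing: Map<string, Fact>): WriteResult {
  const current = existing.get(incoming.key);
  // Contradictory value for a known key: flag it, never overwrite silently.
  if (current && current.value !== incoming.value) {
    return { target: "pending", conflict: true };
  }
  // Low-confidence facts are staged in _pending.md for user review.
  if (incoming.confidence === "low") {
    return { target: "pending", conflict: false };
  }
  existing.set(incoming.key, incoming);
  return { target: "canonical", conflict: false };
}
```

Because the `source` field is preserved, a flagged conflict can always be traced back to the raw log that produced each competing value.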
| Component | Technology |
|---|---|
| Runtime | Bun |
| Language | TypeScript (strict mode, ES modules) |
| Storage | Markdown + YAML frontmatter |
| Search Index | SQLite FTS5 (via bun:sqlite) |
| LLM | Configurable: Ollama / OpenAI-compatible |
| Frontmatter | gray-matter |
| Testing | Vitest (102 tests, 14 files) |
| Platform | OpenClaw plugin |
Contributions are welcome! See CONTRIBUTING.md for guidelines.
# Clone and install
git clone https://github.com/imjohnzakkam/memoryclaw.git
cd memoryclaw
bun install
# Run tests
bun test
# Run a specific test file
bun test tests/keywords.test.ts

- [x] Core retrieval pipeline (keyword + tag scoring)
- [x] Synonym/alias expansion
- [x] Vector fallback (Ollama + OpenAI-compatible)
- [x] Raw interaction logging
- [x] Consolidation daemon (logs → episodes + facts)
- [x] Semantic memory with dedup + conflict detection
- [x] Pattern detection + skill compilation
- [x] Working memory injection (`onBeforeLLM` hook)
- [x] SQLite FTS5 indexing
- [x] OpenClaw plugin integration
- [x] CLI for all operations
- [ ] Web UI for memory browsing
- [ ] Multi-agent memory sharing
- [ ] Cross-device sync
- [ ] Advanced pattern mining (PrefixSpan)
MIT Β© John Zakkam
Built with frustration at black-box AI memory systems.
