Debot is a lightweight and secure personal AI assistant inspired by Clawdbot and Nanobot.

- Secure by Design: a Rust core agent implementation and minimal dependencies reduce the attack surface and vulnerabilities.
- Extremely Token-Saving: a built-in intelligent router analyzes prompt complexity and automatically selects the cheapest suitable model – roughly a 71% cost reduction vs. always using a top-tier model.
- Ultra-Lightweight: about 10.8k lines of Rust + Python code (excluding tests) – still far smaller than typical monolithic agents.
- Research-Ready: clean, readable code that's easy to understand, modify, and extend for research.
- Lightning Fast: a minimal footprint means faster startup, lower resource usage, and quicker iterations.
- Easy to Use: one-click deploy and you're ready to go.

When building the Rust Python extension in CI or in containers on newer Python versions (for example Python 3.14), set the following environment variable so PyO3 uses stable-ABI forward compatibility:

```shell
export PYO3_USE_ABI3_FORWARD_COMPATIBILITY=1
```

If you need a particular Python executable for maturin builds, set `PYO3_PYTHON` to the interpreter path.
| Category | What Debot Can Do |
|---|---|
| Writing & Communication | AI text humanization, content summarization, natural and human-like output |
| Software Engineering | Test-driven development, systematic debugging, code review, git worktree management |
| Planning & Design | Brainstorming, implementation planning, subagent-driven parallel execution |
| Research & Analysis | Web search, real-time market analysis, URL and video summarization |
| Task Management | Daily routines, scheduled tasks (cron), workflow automation |
| Knowledge & Memory | Long-term memory, semantic search, personal knowledge base |
### Install from source (latest features, recommended for development)

> [!NOTE]
> Requires Python ≥ 3.11 and a Rust toolchain (for the native extension). On Linux you also need patchelf (`pip install patchelf`).
```shell
git clone https://github.com/BotMesh/debot.git
cd debot
python3 -m venv .venv
source .venv/bin/activate
pip install .
```

### Install with uv (stable, fast)

```shell
uv tool install debot
```

### Install from PyPI (stable)

```shell
pip install debot
```

> [!TIP]
> Set your API key in `~/.debot/config.json`.
> Get API keys: OpenRouter (LLM) · Brave Search (optional, for web search)
> You can also change the model to `minimax/minimax-m2` for lower cost.
1. Initialize

```shell
debot onboard
```

2. Configure (`~/.debot/config.json`)

```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    },
    "anthropic": {
      "apiKey": "sk-ant-xxx"
    },
    "groq": {
      "apiKey": "gsk_xxx"
    }
  },
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5"
    }
  },
  "webSearch": {
    "apiKey": "BSA-xxx"
  }
}
```

> [!TIP]
> Adding multiple provider keys enables cross-provider fallback. If one provider's credits run out, Debot automatically routes to another.
3. Chat

```shell
debot agent -m "What is 2+2?"
```

That's it! You have a working AI assistant in 2 minutes.
Run Debot with your own local models using vLLM or any OpenAI-compatible server.
1. Start your vLLM server

```shell
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```

2. Configure (`~/.debot/config.json`)

```json
{
  "providers": {
    "vllm": {
      "apiKey": "dummy",
      "apiBase": "http://localhost:8000/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "meta-llama/Llama-3.1-8B-Instruct"
    }
  }
}
```

3. Chat

```shell
debot agent -m "Hello from my local LLM!"
```

Debot automatically compacts long conversations to keep context windows efficient. When a conversation exceeds ~90% of the model's context window, older messages are summarized into a single "compaction" entry.
Features:

- ✅ Automatic – triggered silently when the context limit is approached
- ✅ Manual – use the `/compact` command in Telegram or the CLI
- ✅ Configurable – tune per model or globally
- ✅ Tracked – view compaction stats in session metadata
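The trigger logic can be sketched as follows; this is an illustrative model, not Debot's actual implementation, and the naive token estimator and message format are assumptions:

```python
def estimate_tokens(text: str) -> int:
    # Naive heuristic: roughly 4 characters per token (assumption, not Debot's estimator)
    return max(1, len(text) // 4)

def maybe_compact(messages, context_window, trigger_ratio=0.9, keep_last=50):
    """Summarize old messages into one entry when usage nears the context limit."""
    used = sum(estimate_tokens(m["content"]) for m in messages)
    if used < trigger_ratio * context_window or len(messages) <= keep_last:
        return messages  # under the threshold: nothing to do
    old, recent = messages[:-keep_last], messages[-keep_last:]
    summary = {"role": "system",
               "content": "[compacted] " + " | ".join(m["content"][:40] for m in old)}
    return [summary] + recent

msgs = [{"role": "user", "content": "x" * 400} for _ in range(100)]
compacted = maybe_compact(msgs, context_window=8000, keep_last=50)
print(len(compacted))  # 51: one compaction entry + the last 50 messages
```

In a real agent, the summary entry would be produced by an LLM summarization call rather than by truncating message text.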
Usage:

```shell
# Manual compaction via CLI
debot sessions compact telegram:12345 --keep-last 50

# View/configure compaction settings
debot config compaction --show
debot config compaction --keep-last 30 --trigger-ratio 0.85

# Per-model settings
debot config compaction-model "anthropic/claude-opus-4-5" --keep-last 40
```

Telegram:

```
/compact              # Use default keep-last=50
/compact 30           # Keep last 30 messages
/compact 30 --verbose # Show detailed results
```
Debot includes a built-in intelligent router (powered by Rust) that automatically selects an LLM based on task complexity. This saves cost by sending simple prompts to cheaper models and reserving powerful models for harder tasks.
How it works:

- Analyzes incoming prompts across multiple dimensions (reasoning difficulty, code complexity, multi-step reasoning, token count, creativity, technical depth, etc.).
- Scores each dimension via heuristics and keyword patterns.
- Maps the overall score to a tier: `SIMPLE` → `MEDIUM` → `COMPLEX` → `REASONING`.
- Selects a model for that tier (configurable in `rust/src/router/config.rs`).
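A toy sketch of this keyword-and-length scoring; the real heuristics live in the Rust router, and the keywords and thresholds below are invented purely for illustration:

```python
def route_tier(prompt: str) -> str:
    """Map a prompt to a model tier via simple keyword/length heuristics."""
    p = prompt.lower()
    score = 0
    # Hypothetical keyword weights: heavier for reasoning-style language
    score += 2 * sum(k in p for k in ("prove", "derive", "step by step", "optimal"))
    score += 1 * sum(k in p for k in ("implement", "debug", "refactor", "design"))
    score += len(prompt) // 200  # longer prompts tend to be harder
    if score >= 4:
        return "REASONING"
    if score >= 2:
        return "COMPLEX"
    if score >= 1:
        return "MEDIUM"
    return "SIMPLE"

print(route_tier("What is 2+2?"))                      # SIMPLE
print(route_tier("implement a distributed cache"))     # MEDIUM
print(route_tier("design and implement a scheduler"))  # COMPLEX
```

The point of the design is that scoring is cheap (no LLM call), so routing adds essentially no latency before the model request.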
Example tier mapping (the actual mapping is defined in the Rust config):

| Tier | Example Model |
|---|---|
| `SIMPLE` | `openai/gpt-3.5-turbo` |
| `MEDIUM` | `openai/gpt-4o-mini` |
| `COMPLEX` | `anthropic/claude-opus-4-5` |
| `REASONING` | `openai/o3` |
The router runs automatically – no configuration is needed unless you want a custom tier mapping.
Automatic Fallback & Escalation:

When a model fails, Debot automatically retries with alternatives:

- Pre-check: estimates the token count and compares it against the model's context window. If the prompt is too large, it escalates to a larger-context model.
- Billing fallback (`insufficient_credits`): tries same-tier alternatives (ordered by cost) before escalating to the next tier.
- Context / other errors: escalates to a more capable tier.
- Cross-provider routing: if OpenRouter credits run out, Debot can route to providers where you've configured direct API keys.

Configure multiple provider keys in `~/.debot/config.json` to enable cross-provider fallback – see Configuration. If OpenRouter is configured, it is used as the primary API base by default, so `insufficient_credits` often indicates an OpenRouter balance issue.
Router CLI tools:

```shell
# Test how the router scores any prompt
debot router test "implement a distributed cache with consistent hashing"

# View accumulated routing metrics (in long-running sessions)
debot router metrics
```

Router benchmarking (token cost savings):

We provide a lightweight benchmark that estimates token-cost savings by comparing router-selected models against fixed baselines. It uses open datasets from `benchmarks/` and a naive or tiktoken-based token estimator.

```shell
make benchmark-router
```

To change baselines or increase coverage:

```shell
python benchmarks/router_savings.py --max-samples 200 --configs-per-dataset 5 \
  --baseline-models anthropic/claude-opus-4-5,openai/o3,openai/gpt-4o-mini
```

Interpreting results:

- If the baseline is a very cheap model (e.g. `openai/gpt-4o-mini`), the router's cost can be higher by design.
- For meaningful savings, compare against strong baselines like `anthropic/claude-opus-4-5` or `openai/o3`.
Latest benchmark snapshot (2026-02-12, `--max-samples 50`):

- prompts: 350
- tokens (estimated): 9,453
- router cost: $0.014898
- baseline `anthropic/claude-opus-4-5`: $0.236325 → savings $0.221427 (93.70%)
- baseline `openai/o3`: $0.075624 → savings $0.060726 (80.30%)
- baseline `openai/gpt-4o-mini`: $0.005672 → savings -$0.009226 (-162.67%)
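The savings percentages follow from `(baseline - router) / baseline`. A quick check with the snapshot's numbers (the last line differs from the reported -162.67% by about 0.01 points, presumably from rounding in the reported costs):

```python
router_cost = 0.014898

baselines = {
    "anthropic/claude-opus-4-5": 0.236325,
    "openai/o3": 0.075624,
    "openai/gpt-4o-mini": 0.005672,
}

for model, cost in baselines.items():
    savings = cost - router_cost
    pct = 100 * savings / cost
    print(f"{model}: savings ${savings:.6f} ({pct:.2f}%)")
```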
Notes:

- GAIA is gated on Hugging Face and will be skipped unless you provide `HF_TOKEN`.
Debot stores persistent memory under your workspace at `memory/` (by default your workspace is `~/.debot/workspace`). The memory system supports:

- `MEMORY.md` – long-term notes you want the agent to remember.
- `YYYY-MM-DD.md` – daily notes.
- `.index.json` – a simple local semantic index (auto-generated).
How it works

- The Rust extension (or the Python fallback) exposes `MemoryStore.build_index()` and `MemoryStore.search(query, max_results, min_score)` to build a local vector index and search it.
- If `OPENAI_API_KEY` or `OPENROUTER_API_KEY` is set, Debot attempts to use the remote embeddings API, falling back to a deterministic local embedding when it is not available.
Quick enable & usage

- Build and install the Rust extension (on Python ≥ 3.14 you may need to set `PYO3_USE_ABI3_FORWARD_COMPATIBILITY=1`):

```shell
python3 -m venv .venv
source .venv/bin/activate
pip install .  # builds the Rust extension automatically via maturin
```

> [!TIP]
> On Python ≥ 3.14 you may need `export PYO3_USE_ABI3_FORWARD_COMPATIBILITY=1` before the install.
> On Linux, install patchelf first: `pip install patchelf`.
- Optionally provide an embeddings key (recommended for better results):

```shell
export OPENAI_API_KEY="sk-..."
# or
export OPENROUTER_API_KEY="or-..."
```

- Build the index and search (Python example):
```python
from pathlib import Path

from debot.agent.memory import MemoryStore, search_memory

# Workspace root (default location)
ws = Path.home() / ".debot" / "workspace"

# Build the index explicitly (if you've updated memory files)
store = MemoryStore(ws)
store.build_index()

# Search
results = search_memory(ws, "when did I last deploy?", max_results=5)
for r in results:
    print(r["score"], r["path"])
    print(r["snippet"][:200])
    print("---")
```

Notes

- If the `.index.json` file is missing, `search_memory()` will attempt to call `build_index()` automatically.
- The local deterministic embedding is SHA256-based and works offline, but yields lower-quality semantic matches than remote embeddings.
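One common way such a deterministic offline embedding works is hashing tokens into buckets of a fixed-size vector. The sketch below illustrates the idea only; it is not Debot's actual embedding code:

```python
import hashlib
import math

def local_embed(text: str, dim: int = 64) -> list[float]:
    """Deterministic bag-of-words embedding: SHA256-hash each token to a bucket."""
    vec = [0.0] * dim
    for token in text.lower().split():
        h = hashlib.sha256(token.encode("utf-8")).digest()
        bucket = int.from_bytes(h[:4], "big") % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]  # unit-normalize so dot product = cosine

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

q = local_embed("deploy the gateway")
print(cosine(q, local_embed("deploy the gateway today")) >
      cosine(q, local_embed("buy groceries")))  # True: token overlap wins
```

Because the hash is deterministic, the same text always embeds to the same vector, which is what makes the index reproducible offline; the trade-off is that only exact token overlap scores, with no real semantic similarity.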
> [!TIP]
> The `apiKey` can be any non-empty string for local servers that don't require authentication.
Talk to your Debot through Telegram or WhatsApp – anytime, anywhere.
| Channel | Setup |
|---|---|
| Telegram | Easy (just a token) |
| WhatsApp | Medium (scan QR) |
Telegram (Recommended)

1. Create a bot

- Open Telegram and search for `@BotFather`
- Send `/newbot` and follow the prompts
- Copy the token
2. Configure

```json
{
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}
```

Get your user ID from `@userinfobot` on Telegram.

3. Run

```shell
debot gateway
```

WhatsApp

Requires Node.js ≥ 18.
1. Link device

```shell
debot channels login
# Scan QR with WhatsApp → Settings → Linked Devices
```

2. Configure

```json
{
  "channels": {
    "whatsapp": {
      "enabled": true,
      "allowFrom": ["+1234567890"]
    }
  }
}
```

3. Run (two terminals)

```shell
# Terminal 1
debot channels login

# Terminal 2
debot gateway
```

Debot comes with 21 built-in skills covering the full development and writing lifecycle:
Development Workflow

| Skill | Description |
|---|---|
| brainstorming | Turn ideas into fully formed designs and specs through collaborative dialogue |
| writing-plans | Create comprehensive implementation plans with bite-sized tasks |
| executing-plans | Execute plans with review checkpoints between batches |
| subagent-driven-development | Dispatch independent subagents per task with two-stage review |
| dispatching-parallel-agents | Run 2+ independent tasks in parallel across agents |
| finishing-a-development-branch | Guide branch completion – merge, PR, or cleanup |
Code Quality & Review

| Skill | Description |
|---|---|
| test-driven-development | Write tests first, watch them fail, implement minimal code to pass |
| systematic-debugging | Four-phase root-cause investigation before attempting fixes |
| verification-before-completion | Run verification commands and confirm output before claiming done |
| requesting-code-review | Dispatch a code-reviewer subagent to catch issues early |
| receiving-code-review | Evaluate review feedback with rigor before implementing |
Writing & Communication

| Skill | Description |
|---|---|
| humanizer | Remove AI writing patterns to produce natural, human-like text |
| summarize | Summarize URLs, files, and YouTube videos |
Tools & Infrastructure

| Skill | Description |
|---|---|
| github | Interact with GitHub using the `gh` CLI – PRs, issues, CI runs, and queries |
| weather | Get weather info using the wttr.in and Open-Meteo APIs |
| tmux | Remote-control tmux sessions for terminal automation |
| using-git-worktrees | Create isolated git worktrees for feature work |
Skill Management

| Skill | Description |
|---|---|
| skill-creator | Create and package new custom skills |
| writing-skills | TDD-driven skill development and editing |
| find-skills | Discover available skills in the workspace and system |
Usage:

```shell
# List available skills
debot skills list

# List system (built-in) and workspace skills as JSON
debot skills list --json

# Install a system skill to your workspace
debot skills install github
debot skills install weather

# Filter skills by name
debot skills list --query github
```

Create a custom skill:

Each skill is a directory with a `SKILL.md` file containing YAML frontmatter and instructions:
```markdown
---
name: my-skill
description: "A custom skill that does X"
metadata: {"debot": {"emoji": "✨", "requires": {"bins": ["tool"]}}}
---

# My Custom Skill

Instructions for the agent on how to use this skill...
```

Place your skill in `~/.debot/workspace/skills/<skill-name>/SKILL.md` and it will automatically be available to your agent.
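For illustration, splitting such a file into frontmatter and body takes only a few lines. This sketch assumes well-formed `---` delimiters and is not Debot's actual skill loader:

```python
def split_skill(text: str) -> tuple[str, str]:
    """Split SKILL.md contents into (frontmatter, body)."""
    assert text.startswith("---\n"), "missing frontmatter"
    # Everything up to the closing '---' line is YAML frontmatter
    frontmatter, _, body = text[4:].partition("\n---\n")
    return frontmatter.strip(), body.strip()

skill = """---
name: my-skill
description: "A custom skill that does X"
---
# My Custom Skill

Instructions for the agent...
"""

fm, body = split_skill(skill)
print(fm.splitlines()[0])    # name: my-skill
print(body.splitlines()[0])  # # My Custom Skill
```

A real loader would additionally parse the frontmatter as YAML (e.g. with PyYAML) to read `name`, `description`, and `metadata`.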
Config file: `~/.debot/config.json`
> [!NOTE]
> Groq provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.
| Provider | Purpose | Get API Key |
|---|---|---|
| `openrouter` | LLM (recommended, access to all models) | openrouter.ai |
| `anthropic` | LLM (Claude direct) | console.anthropic.com |
| `openai` | LLM (GPT direct) | platform.openai.com |
| `groq` | LLM + voice transcription (Whisper) | console.groq.com |
| `gemini` | LLM (Gemini direct) | aistudio.google.com |
Full config example

```json
{
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5"
    }
  },
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    },
    "anthropic": {
      "apiKey": "sk-ant-xxx"
    },
    "openai": {
      "apiKey": "sk-xxx"
    },
    "groq": {
      "apiKey": "gsk_xxx"
    },
    "gemini": {
      "apiKey": "AIza-xxx"
    }
  },
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "123456:ABC...",
      "allowFrom": ["123456789"]
    },
    "whatsapp": {
      "enabled": false
    }
  },
  "tools": {
    "web": {
      "search": {
        "apiKey": "BSA..."
      }
    }
  }
}
```

| Command | Description |
|---|---|
| `debot onboard` | Initialize config & workspace |
| `debot agent -m "..."` | Chat with the agent |
| `debot agent` | Interactive chat mode |
| `debot gateway` | Start the gateway |
| `debot status` | Show status |
| `debot channels login` | Link WhatsApp (scan QR) |
| `debot channels status` | Show channel status |
| `debot sessions compact <key>` | Manually compact a session |
| `debot config compaction` | View/configure compaction settings |
| `debot config compaction-model <model>` | Set per-model compaction settings |
Scheduled Tasks (Cron)

```shell
# Add a job
debot cron add --name "daily" --message "Good morning!" --cron "0 9 * * *"
debot cron add --name "hourly" --message "Check status" --every 3600

# List jobs
debot cron list

# Remove a job
debot cron remove <job_id>
```

> [!TIP]
> The `-v ~/.debot:/root/.debot` flag mounts your local config directory into the container, so your config and workspace persist across container restarts.
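For reference, the five-field cron spec used in the examples (`0 9 * * *` fires at 09:00 daily) can be matched with a toy sketch like the one below; it supports only `*`, numbers, and comma lists (no ranges or steps), and is not Debot's scheduler:

```python
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    """Match one cron field: '*', a number, or a comma-separated list."""
    if field == "*":
        return True
    return value in {int(part) for part in field.split(",")}

def cron_matches(spec: str, dt: datetime) -> bool:
    """Match a 5-field spec: minute hour day-of-month month day-of-week."""
    minute, hour, dom, month, dow = spec.split()
    return (field_matches(minute, dt.minute)
            and field_matches(hour, dt.hour)
            and field_matches(dom, dt.day)
            and field_matches(month, dt.month)
            and field_matches(dow, (dt.weekday() + 1) % 7))  # cron: 0 = Sunday

print(cron_matches("0 9 * * *", datetime(2026, 2, 12, 9, 0)))   # True
print(cron_matches("0 9 * * *", datetime(2026, 2, 12, 9, 30)))  # False
```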
Build and run debot in a container:

```shell
# Build the image
docker build -t debot .

# Initialize config (first time only)
docker run -v ~/.debot:/root/.debot --rm debot onboard

# Edit config on host to add API keys
vim ~/.debot/config.json

# Run gateway (connects to Telegram/WhatsApp)
docker run -v ~/.debot:/root/.debot -p 18790:18790 debot gateway

# Or run a single command
docker run -v ~/.debot:/root/.debot --rm debot agent -m "Hello!"
docker run -v ~/.debot:/root/.debot --rm debot status
```

Pre-built images are automatically published to GitHub Container Registry:
```shell
# Pull the latest image
docker pull ghcr.io/botmesh/debot:latest

# Run with the pulled image
docker run -v ~/.debot:/root/.debot -p 18790:18790 ghcr.io/botmesh/debot:latest gateway

# Pull a specific version
docker pull ghcr.io/botmesh/debot:v1.0.0
```

Available tags:

- `latest` – latest main branch
- `main` – main branch
- `v1.0.0` – release versions
- `main-<short-sha>` – specific commits

For more info, see the Container Publishing Guide.
A Makefile is provided for common development tasks:

```shell
make install   # Install debot (builds Rust extension via maturin)
make build     # Build the Rust extension only (release mode)
make test      # Build + install + run pytest
make lint      # Run ruff linter
```

First-time setup:

```shell
git clone https://github.com/botmesh/debot.git
cd debot
python3 -m venv .venv
source .venv/bin/activate
pip install patchelf  # Linux only
make install
```

Running tests:

```shell
make test
```

This builds the Rust extension, installs the wheel, installs dev dependencies, and runs the full test suite.
PRs welcome! The codebase is intentionally small and readable.
Roadmap – pick an item and open a PR!

- Voice Transcription – support for Groq Whisper (Issue #13)
- Multi-modal – see and hear (images, voice, video)
- Intelligent Model Router – automatically selects the best LLM based on task complexity
- Long-term memory – never forget important context
- Better reasoning – multi-step planning and reflection
- More integrations – Discord, Slack, email, calendar
- Self-improvement – learn from feedback and mistakes
Debot is for educational, research, and technical exchange purposes only.

