
debot

Debot: The Lightweight and Secure OpenClaw


🐈 Debot is a lightweight and secure personal AI assistant inspired by Clawdbot and Nanobot.

Key Features of Debot:

πŸ›‘οΈ Secure by Design: Rust for core agent implementation, and minimal dependencies reduce attack surface and vulnerabilities.

πŸ’° Extremely Token-Saving: Built-in intelligent router analyzes prompt complexity and automatically selects the cheapest suitable model β€” ~71% cost reduction vs. always using a top-tier model.

πŸͺΆ Ultra-Lightweight: ~10.8k lines of Rust + Python code (excluding tests) β€” far smaller than typical monolithic agents.

πŸ”¬ Research-Ready: Clean, readable code that's easy to understand, modify, and extend for research.

⚑️ Lightning Fast: Minimal footprint means faster startup, lower resource usage, and quicker iterations.

πŸ’Ž Easy-to-Use: One-click deployment and you're ready to go.

CI / Docker notes for building the Rust extension

When building the Rust Python extension inside CI or containers on newer Python versions (for example Python 3.14), set the following environment variable so PyO3 uses stable-ABI forward compatibility:

export PYO3_USE_ABI3_FORWARD_COMPATIBILITY=1

If you need a particular Python interpreter for maturin builds, set PYO3_PYTHON to its path.

πŸ—οΈ Architecture

debot architecture

Core Capabilities

Category What Debot Can Do
✍️ Writing & Communication AI text humanization, content summarization, natural and human-like output
πŸ’» Software Engineering Test-driven development, systematic debugging, code review, git worktree management
🧠 Planning & Design Brainstorming, implementation planning, subagent-driven parallel execution
πŸ” Research & Analysis Web search, real-time market analysis, URL and video summarization
πŸ“… Task Management Daily routines, scheduled tasks (cron), workflow automation
πŸ“š Knowledge & Memory Long-term memory, semantic search, personal knowledge base

πŸ“¦ Install

Install from source (latest features, recommended for development)

Note

Requires Python β‰₯ 3.11 and a Rust toolchain (for the native extension). On Linux you also need patchelf (pip install patchelf).

git clone https://github.com/BotMesh/debot.git
cd debot
python3 -m venv .venv
source .venv/bin/activate
pip install .

Install with uv (stable, fast)

uv tool install debot

Install from PyPI (stable)

pip install debot

πŸš€ Quick Start

Tip

Set your API key in ~/.debot/config.json. Get API keys from OpenRouter (LLM) and Brave Search (optional, for web search). You can also change the model to minimax/minimax-m2 for lower cost.

1. Initialize

debot onboard

2. Configure (~/.debot/config.json)

{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    },
    "anthropic": {
      "apiKey": "sk-ant-xxx"
    },
    "groq": {
      "apiKey": "gsk_xxx"
    }
  },
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5"
    }
  },
  "webSearch": {
    "apiKey": "BSA-xxx"
  }
}

Tip

Adding multiple provider keys enables cross-provider fallback. If one provider's credits run out, Debot automatically routes to another.

3. Chat

debot agent -m "What is 2+2?"

That's it! You have a working AI assistant in 2 minutes.

πŸ–₯️ Local Models (vLLM)

Run Debot with your own local models using vLLM or any OpenAI-compatible server.

1. Start your vLLM server

vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000

2. Configure (~/.debot/config.json)

{
  "providers": {
    "vllm": {
      "apiKey": "dummy",
      "apiBase": "http://localhost:8000/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "meta-llama/Llama-3.1-8B-Instruct"
    }
  }
}

3. Chat

debot agent -m "Hello from my local LLM!"

πŸ’Ύ Session Compaction

Debot automatically compacts long conversations to keep context windows efficient. When a conversation exceeds ~90% of the model's context window, old messages are summarized into a single "compaction" entry.

Features:

  • βœ… Automatic β€” Triggered silently when context limit approached
  • βœ… Manual β€” Use /compact command in Telegram or CLI
  • βœ… Configurable β€” Tune per-model or globally
  • βœ… Tracked β€” View compaction stats in session metadata

Usage:

# Manual compaction via CLI
debot sessions compact telegram:12345 --keep-last 50

# View/configure compaction settings
debot config compaction --show
debot config compaction --keep-last 30 --trigger-ratio 0.85

# Per-model settings
debot config compaction-model "anthropic/claude-opus-4-5" --keep-last 40

Telegram:

/compact              # Use default keep-last=50
/compact 30           # Keep last 30 messages
/compact 30 --verbose # Show detailed results

πŸš€ Intelligent Model Router

Debot includes a built-in intelligent router (powered by Rust) that automatically selects an LLM based on task complexity. This saves cost by sending simple prompts to cheaper models and reserving powerful models for harder tasks.

How it works:

  • Analyzes incoming prompts across multiple dimensions (reasoning difficulty, code complexity, multi‑step reasoning, token count, creativity, technical depth, etc.).
  • Scores each dimension via heuristics and keyword patterns.
  • Maps the overall score to a tier: SIMPLE β†’ MEDIUM β†’ COMPLEX β†’ REASONING.
  • Selects a model for that tier (configurable in rust/src/router/config.rs).

Example tier mapping (actual mapping is defined in Rust config):

Tier Example Model
SIMPLE openai/gpt-3.5-turbo
MEDIUM openai/gpt-4o-mini
COMPLEX anthropic/claude-opus-4-5
REASONING openai/o3

The router runs automatically β€” no configuration needed unless you want custom tier mapping.
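As a rough illustration of the score-to-tier step (the real heuristics and thresholds live in rust/src/router/config.rs; the keywords and cutoffs below are invented for this sketch):

```python
def score_prompt(prompt: str) -> float:
    """Toy complexity score: keyword hits plus a capped length signal."""
    keywords = ("implement", "prove", "design", "debug", "distributed")
    hits = sum(k in prompt.lower() for k in keywords)
    return hits + min(len(prompt.split()) / 50, 2.0)

def select_tier(score: float) -> str:
    """Map an overall score onto the SIMPLE β†’ REASONING ladder."""
    if score < 1.0:
        return "SIMPLE"
    if score < 2.0:
        return "MEDIUM"
    if score < 3.0:
        return "COMPLEX"
    return "REASONING"

tier = select_tier(score_prompt("What is 2+2?"))  # a trivial prompt lands in SIMPLE
```

With these toy thresholds, "What is 2+2?" routes to SIMPLE while a prompt like the consistent-hashing example from the CLI section lands in COMPLEX.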

Automatic Fallback & Escalation:

When a model fails, Debot automatically retries with alternatives:

  1. Pre‑check: Estimates token count and compares against model context. If too large, it escalates to a larger‑context model.
  2. Billing fallback (insufficient_credits): Tries same‑tier alternatives (ordered by cost) before escalating to the next tier.
  3. Context / other errors: Escalates to a more capable tier.
  4. Cross‑provider routing: If OpenRouter credits run out, Debot can route to providers where you’ve configured direct API keys.

Configure multiple provider keys in ~/.debot/config.json to enable cross‑provider fallback β€” see Configuration. If OpenRouter is configured, it is used as the primary API base by default, so insufficient_credits often indicates OpenRouter balance issues.

Router CLI tools:

# Test how the router scores any prompt
debot router test "implement a distributed cache with consistent hashing"

# View accumulated routing metrics (in long-running sessions)
debot router metrics

Router benchmarking (token cost savings):

We provide a lightweight benchmark that estimates token-cost savings by comparing router-selected models against fixed baselines. It uses open datasets from benchmarks/ and a naive or tiktoken-based token estimator.

make benchmark-router

To change baselines or increase coverage:

python benchmarks/router_savings.py --max-samples 200 --configs-per-dataset 5 \
  --baseline-models anthropic/claude-opus-4-5,openai/o3,openai/gpt-4o-mini

Interpreting results:

  • If the baseline is a very cheap model (e.g. openai/gpt-4o-mini), router cost can be higher by design.
  • For meaningful savings, compare against strong baselines like anthropic/claude-opus-4-5 or openai/o3.

Latest benchmark snapshot (2026-02-12, --max-samples 50):

  • prompts: 350
  • tokens (estimated): 9,453
  • router cost: $0.014898
  • baseline anthropic/claude-opus-4-5: $0.236325 β†’ savings $0.221427 (93.70%)
  • baseline openai/o3: $0.075624 β†’ savings $0.060726 (80.30%)
  • baseline openai/gpt-4o-mini: $0.005672 β†’ savings -$0.009226 (-162.67%)

Notes:

  • GAIA is gated on Hugging Face and will be skipped unless you provide HF_TOKEN.

🧠 Long-term memory

Debot stores persistent memory under your workspace at memory/ (by default your workspace is ~/.debot/workspace). The memory system supports:

  • MEMORY.md β€” long-term notes you want the agent to remember.
  • YYYY-MM-DD.md β€” daily notes.
  • .index.json β€” a simple local semantic index (auto-generated).

How it works

  • The Rust extension (or the Python fallback) exposes MemoryStore.build_index() and MemoryStore.search(query, max_results, min_score) to build a local vector index and search it.
  • If OPENAI_API_KEY or OPENROUTER_API_KEY is set, Debot will attempt to use the remote embeddings API and fall back to a deterministic local embedding when not available.

Quick enable & usage

  1. Build and install the Rust extension (on Python β‰₯ 3.14 you may need to set PYO3_USE_ABI3_FORWARD_COMPATIBILITY=1):
python3 -m venv .venv
source .venv/bin/activate
pip install .          # builds the Rust extension automatically via maturin

Tip

On Python β‰₯ 3.14 you may need export PYO3_USE_ABI3_FORWARD_COMPATIBILITY=1 before the install. On Linux, install patchelf first: pip install patchelf.

  2. Optionally provide an embeddings key (recommended for better results):
export OPENAI_API_KEY="sk-..."
# or
export OPENROUTER_API_KEY="or-..."
  3. Build index and search (Python example):
from pathlib import Path
from debot.agent.memory import search_memory, MemoryStore

ws = Path.home() / ".debot" / "workspace"  # default workspace location
store = MemoryStore(ws)
store.build_index()  # rebuild explicitly if you've updated memory files

# Search
results = search_memory(ws, "when did I last deploy?", max_results=5)
for r in results:
    print(r["score"], r["path"])
    print(r["snippet"][:200])
    print("---")

Notes

  • If the .index.json file is missing, search_memory() will attempt to call build_index() automatically.
  • The local deterministic embedding is SHA256-based and works offline but yields lower-quality semantic matches than remote embeddings.

Tip

For OpenAI-compatible local servers (such as vLLM) that don't require authentication, the apiKey can be any non-empty string.

πŸ’¬ Chat Apps

Talk to your Debot through Telegram or WhatsApp β€” anytime, anywhere.

Channel Setup
Telegram Easy (just a token)
WhatsApp Medium (scan QR)

Telegram (Recommended)

1. Create a bot

  • Open Telegram, search @BotFather
  • Send /newbot, follow prompts
  • Copy the token

2. Configure

{
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}

Get your user ID from @userinfobot on Telegram.
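The allowFrom gate can be illustrated as a simple allowlist check (a hypothetical helper, not Debot's actual code β€” shown only to clarify what the config field does):

```python
def is_allowed(sender_id, allow_from: list) -> bool:
    """Only senders whose ID appears in allowFrom may reach the agent;
    an empty list admits no one in this sketch."""
    return str(sender_id) in allow_from

allow_from = ["123456789"]  # as in the config above
```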

3. Run

debot gateway

WhatsApp

Requires Node.js β‰₯18.

1. Link device

debot channels login
# Scan QR with WhatsApp β†’ Settings β†’ Linked Devices

2. Configure

{
  "channels": {
    "whatsapp": {
      "enabled": true,
      "allowFrom": ["+1234567890"]
    }
  }
}

3. Run (two terminals)

# Terminal 1
debot channels login

# Terminal 2
debot gateway

🎯 Built-in Skills

debot comes with 20 built-in skills covering the full development and writing lifecycle:

Development Workflow

Skill Description
brainstorming 🧠 Turn ideas into fully formed designs and specs through collaborative dialogue
writing-plans πŸ“ Create comprehensive implementation plans with bite-sized tasks
executing-plans ▢️ Execute plans with review checkpoints between batches
subagent-driven-development πŸ€– Dispatch independent subagents per task with two-stage review
dispatching-parallel-agents πŸ”€ Run 2+ independent tasks in parallel across agents
finishing-a-development-branch 🏁 Guide branch completion β€” merge, PR, or cleanup

Code Quality & Review

Skill Description
test-driven-development πŸ§ͺ Write tests first, watch them fail, implement minimal code to pass
systematic-debugging πŸ” Four-phase root cause investigation before attempting fixes
verification-before-completion βœ… Run verification commands and confirm output before claiming done
requesting-code-review πŸ“€ Dispatch code-reviewer subagent to catch issues early
receiving-code-review πŸ“₯ Evaluate review feedback with rigor before implementing

Writing & Communication

Skill Description
humanizer ✍️ Remove AI writing patterns to produce natural, human-like text
summarize πŸ“„ Summarize URLs, files, and YouTube videos

Tools & Infrastructure

Skill Description
github πŸ™ Interact with GitHub using the gh CLI β€” PRs, issues, CI runs, and queries
weather β›… Get weather info using wttr.in and Open-Meteo APIs
tmux πŸ–₯️ Remote-control tmux sessions for terminal automation
using-git-worktrees 🌳 Create isolated git worktrees for feature work

Skill Management

Skill Description
skill-creator πŸ”§ Create and package new custom skills
writing-skills πŸ“– TDD-driven skill development and editing
find-skills πŸ”Ž Discover available skills in workspace and system

Usage:

# List available skills
debot skills list

# List system (built-in) and workspace skills as JSON
debot skills list --json

# Install a system skill to your workspace
debot skills install github
debot skills install weather

# Filter skills by name
debot skills list --query github

Create a custom skill:

Each skill is a directory with a SKILL.md file containing YAML frontmatter and instructions:

---
name: my-skill
description: "A custom skill that does X"
metadata: {"debot": {"emoji": "✨", "requires": {"bins": ["tool"]}}}
---

# My Custom Skill

Instructions for the agent on how to use this skill...

Place your skill in ~/.debot/workspace/skills/<skill-name>/SKILL.md and it will be automatically available to your agent.
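A minimal sketch of how such a file can be split into frontmatter and body (illustrative only; Debot's actual skill loader may parse YAML differently):

```python
import json

def parse_skill(text: str) -> tuple:
    """Split a SKILL.md string into a frontmatter dict and the instruction body."""
    _, frontmatter, body = text.split("---\n", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        value = value.strip()
        if value.startswith("{"):  # inline JSON, as in the metadata field
            meta[key.strip()] = json.loads(value)
        else:
            meta[key.strip()] = value.strip('"')
    return meta, body.strip()

skill = """---
name: my-skill
description: "A custom skill that does X"
metadata: {"debot": {"emoji": "✨"}}
---

# My Custom Skill

Instructions...
"""
meta, body = parse_skill(skill)
```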

βš™οΈ Configuration

Config file: ~/.debot/config.json

Providers

Note

Groq provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.

Provider Purpose Get API Key
openrouter LLM (recommended, access to all models) openrouter.ai
anthropic LLM (Claude direct) console.anthropic.com
openai LLM (GPT direct) platform.openai.com
groq LLM + Voice transcription (Whisper) console.groq.com
gemini LLM (Gemini direct) aistudio.google.com
Full config example

{
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5"
    }
  },
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    },
    "anthropic": {
      "apiKey": "sk-ant-xxx"
    },
    "openai": {
      "apiKey": "sk-xxx"
    },
    "groq": {
      "apiKey": "gsk_xxx"
    },
    "gemini": {
      "apiKey": "AIza-xxx"
    }
  },
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "123456:ABC...",
      "allowFrom": ["123456789"]
    },
    "whatsapp": {
      "enabled": false
    }
  },
  "tools": {
    "web": {
      "search": {
        "apiKey": "BSA..."
      }
    }
  }
}

CLI Reference

Command Description
debot onboard Initialize config & workspace
debot agent -m "..." Chat with the agent
debot agent Interactive chat mode
debot gateway Start the gateway
debot status Show status
debot channels login Link WhatsApp (scan QR)
debot channels status Show channel status
debot sessions compact <key> Manually compact a session
debot config compaction View/configure compaction settings
debot config compaction-model <model> Set per-model compaction settings

Scheduled Tasks (Cron)

# Add a job
debot cron add --name "daily" --message "Good morning!" --cron "0 9 * * *"
debot cron add --name "hourly" --message "Check status" --every 3600

# List jobs
debot cron list

# Remove a job
debot cron remove <job_id>
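For reference, the five fields in "0 9 * * *" are minute, hour, day-of-month, month, and day-of-week β€” i.e. every day at 09:00. A toy matcher for the first two fields (illustrative only; this is not Debot's scheduler):

```python
def matches(expr: str, minute: int, hour: int) -> bool:
    """Check the minute and hour fields of a 5-field cron expression.
    Day-of-month, month, and day-of-week are ignored in this sketch."""
    m, h, *_rest = expr.split()
    def ok(field: str, value: int) -> bool:
        return field == "*" or int(field) == value
    return ok(m, minute) and ok(h, hour)
```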

🐳 Docker

Tip

The -v ~/.debot:/root/.debot flag mounts your local config directory into the container, so your config and workspace persist across container restarts.

Build & Run Locally

Build and run debot in a container:

# Build the image
docker build -t debot .

# Initialize config (first time only)
docker run -v ~/.debot:/root/.debot --rm debot onboard

# Edit config on host to add API keys
vim ~/.debot/config.json

# Run gateway (connects to Telegram/WhatsApp)
docker run -v ~/.debot:/root/.debot -p 18790:18790 debot gateway

# Or run a single command
docker run -v ~/.debot:/root/.debot --rm debot agent -m "Hello!"
docker run -v ~/.debot:/root/.debot --rm debot status

πŸ“¦ Pull from GitHub Container Registry

Pre-built images are automatically published to GitHub Container Registry:

# Pull latest image
docker pull ghcr.io/botmesh/debot:latest

# Run with pulled image
docker run -v ~/.debot:/root/.debot -p 18790:18790 ghcr.io/botmesh/debot:latest gateway

# Pull specific version
docker pull ghcr.io/botmesh/debot:v1.0.0

Available Tags:

  • latest β€” Latest main branch
  • main β€” Main branch
  • v1.0.0 β€” Release versions
  • main-<short-sha> β€” Specific commits

For more info, see the Container Publishing Guide.

πŸ› οΈ Development

A Makefile is provided for common development tasks:

make install       # Install debot (builds Rust extension via maturin)
make build         # Build the Rust extension only (release mode)
make test          # Build + install + run pytest
make lint          # Run ruff linter

First-time setup:

git clone https://github.com/botmesh/debot.git
cd debot
python3 -m venv .venv
source .venv/bin/activate
pip install patchelf     # Linux only
make install

Running tests:

make test

This builds the Rust extension, installs the wheel, installs dev dependencies, and runs the full test suite.

🀝 Contribute & Roadmap

PRs welcome! The codebase is intentionally small and readable. πŸ€—

Roadmap β€” Pick an item and open a PR!

  • Voice Transcription β€” Support for Groq Whisper (Issue #13)
  • Multi-modal β€” See and hear (images, voice, video)
  • Intelligent Model Router β€” Automatically selects the best LLM model based on task complexity
  • Long-term memory β€” Never forget important context
  • Better reasoning β€” Multi-step planning and reflection
  • More integrations β€” Discord, Slack, email, calendar
  • Self-improvement β€” Learn from feedback and mistakes

Debot is for educational, research, and technical exchange purposes only.
