Merged
30 changes: 30 additions & 0 deletions Makefile
@@ -0,0 +1,30 @@
PYTHON ?= python3
PIP ?= $(PYTHON) -m pip
RUFF ?= ruff
PYTEST ?= pytest
MATURIN ?= maturin
PYO3_USE_ABI3_FORWARD_COMPATIBILITY ?= 1

.PHONY: lint lint-check format format-rust test install build deps-build

lint:
$(RUFF) format debot/
cargo fmt --manifest-path rust/Cargo.toml

deps-build:
$(PIP) install maturin

build: deps-build
PYO3_USE_ABI3_FORWARD_COMPATIBILITY=$(PYO3_USE_ABI3_FORWARD_COMPATIBILITY) \
$(MATURIN) build --release -m rust/Cargo.toml

install:
PYO3_USE_ABI3_FORWARD_COMPATIBILITY=$(PYO3_USE_ABI3_FORWARD_COMPATIBILITY) \
$(PIP) install .

test: build
@WHEEL=$$(ls -1t rust/target/wheels/*.whl | head -n 1); \
PYO3_USE_ABI3_FORWARD_COMPATIBILITY=$(PYO3_USE_ABI3_FORWARD_COMPATIBILITY) \
$(PIP) install $$WHEEL
$(PIP) install ".[dev]"
$(PYTEST) tests/ -v --tb=short
82 changes: 59 additions & 23 deletions README.md
@@ -42,29 +42,6 @@ If you need to specify a particular Python executable for maturin builds, set `P
<img src="debot_arch.png" alt="debot architecture" width="800">
</p>

## ✨ Features

<table align="center">
<tr align="center">
<th><p align="center">📈 24/7 Real-Time Market Analysis</p></th>
<th><p align="center">🚀 Full-Stack Software Engineer</p></th>
<th><p align="center">📅 Smart Daily Routine Manager</p></th>
<th><p align="center">📚 Personal Knowledge Assistant</p></th>
</tr>
<tr>
<td align="center"><p align="center"><img src="case/search.gif" width="180" height="400"></p></td>
<td align="center"><p align="center"><img src="case/code.gif" width="180" height="400"></p></td>
<td align="center"><p align="center"><img src="case/scedule.gif" width="180" height="400"></p></td>
<td align="center"><p align="center"><img src="case/memory.gif" width="180" height="400"></p></td>
</tr>
<tr>
<td align="center">Discovery • Insights • Trends</td>
<td align="center">Develop • Deploy • Scale</td>
<td align="center">Schedule • Automate • Organize</td>
<td align="center">Learn • Memory • Reasoning</td>
</tr>
</table>

### Core Capabilities

| Category | What Debot Can Do |
@@ -123,6 +100,12 @@ debot onboard
"providers": {
"openrouter": {
"apiKey": "sk-or-v1-xxx"
},
"anthropic": {
"apiKey": "sk-ant-xxx"
},
"groq": {
"apiKey": "gsk_xxx"
}
},
"agents": {
@@ -136,6 +119,9 @@ debot onboard
}
```

> [!TIP]
> Adding multiple provider keys enables **cross-provider fallback**. If one provider's credits run out, Debot automatically routes to another.


**3. Chat**

@@ -231,6 +217,17 @@ Debot includes a **built-in intelligent router** (powered by Rust) that automati

The router runs automatically — no configuration needed. You can customize the tier-to-model mapping by editing the Rust router config (see `rust/src/router/config.rs`).
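As a mental model, the tier-to-model mapping is a small lookup table. A minimal Python sketch of the idea — the tier names and model ids below are illustrative placeholders, not the actual contents of `rust/src/router/config.rs`:

```python
# Illustrative tier-to-model table; the real mapping lives in
# rust/src/router/config.rs and uses different tier/model names.
TIER_MODELS = {
    "simple": "provider-a/small-model",    # greetings, short lookups
    "standard": "provider-b/mid-model",    # everyday coding tasks
    "complex": "provider-c/large-model",   # architecture, formal proofs
}

def route(tier: str) -> str:
    """Resolve a classified prompt tier to a model id, defaulting to standard."""
    return TIER_MODELS.get(tier, TIER_MODELS["standard"])
```

Customizing the router then amounts to editing that table in the Rust config.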

**Automatic Fallback & Escalation:**

When a model fails, Debot doesn't just give up — it automatically retries with alternative models:

1. **Pre-check**: Before calling the API, estimates token count and compares against the model's context window. If the prompt is too large, skips straight to a bigger model.
2. **Billing fallback (402 / insufficient credits)**: Tries same-tier alternatives from cheaper providers first (e.g. Groq free tier → DeepSeek → OpenAI), then escalates to the next tier.
3. **Context window exceeded**: Escalates to the next tier with a larger context window.
4. **Cross-provider routing**: If your OpenRouter credits run out, Debot automatically routes to providers where you have direct API keys (Anthropic, Groq, OpenAI, etc.).

> Configure multiple provider keys in `~/.debot/config.json` to enable cross-provider fallback — see [Configuration](#%EF%B8%8F-configuration).
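The fallback order above can be sketched in Python. This is a simplified illustration of the control flow, not the actual `debot/agent/loop.py` implementation; the tier ladder, model ids, and the `chat` callable are placeholder assumptions:

```python
# Sketch of same-tier fallback followed by tier escalation.
# Model ids and the ladder are illustrative, not Debot's real routing tables.
FAIL_REASONS = {"insufficient_credits", "context_length_exceeded"}

# Each tier maps to (cheapest-first alternatives, next tier to escalate to).
LADDER = {
    "fast": (["free-tier/model", "cheap/model"], "balanced"),
    "balanced": (["mid/model"], "power"),
    "power": (["big/model"], None),
}

def chat_with_fallback(chat, prompt, tier="fast"):
    """Try same-tier alternatives cheapest-first, then escalate one tier."""
    while tier is not None:
        alternatives, next_tier = LADDER[tier]
        for model in alternatives:
            reply, finish_reason = chat(model, prompt)
            if finish_reason not in FAIL_REASONS:
                return reply
        tier = next_tier  # same-tier options exhausted: escalate
    raise RuntimeError("all fallback models failed")
```

The pre-check in step 1 would simply pick a larger starting `tier` before the first call, skipping the doomed attempts entirely.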

**Cost savings benchmark:**

We ran 33 representative prompts (greetings, code tasks, architecture design, formal proofs) through the router and simulated a typical daily workload of 70 queries (see `experiments/router_cost_savings.py`):
@@ -506,8 +503,17 @@ Config file: `~/.debot/config.json`
"openrouter": {
"apiKey": "sk-or-v1-xxx"
},
"anthropic": {
"apiKey": "sk-ant-xxx"
},
"openai": {
"apiKey": "sk-xxx"
},
"groq": {
"apiKey": "gsk_xxx"
},
"gemini": {
"apiKey": "AIza-xxx"
}
},
"channels": {
@@ -615,6 +621,36 @@ docker pull ghcr.io/BotMesh/debot:v1.0.0
For more info, see [Container Publishing Guide](./.github/CONTAINER_PUBLISHING.md)


## 🛠️ Development

A `Makefile` is provided for common development tasks:

```bash
make install # Install debot (builds Rust extension via maturin)
make build # Build the Rust extension only (release mode)
make test # Build + install + run pytest
make lint # Format code (ruff format + cargo fmt)
```

**First-time setup:**

```bash
git clone https://github.com/BotMesh/debot.git
cd debot
python3 -m venv .venv
source .venv/bin/activate
pip install patchelf # Linux only
make install
```

**Running tests:**

```bash
make test
```

This builds the Rust extension, installs the wheel, installs dev dependencies, and runs the full test suite.

## 🤝 Contribute & Roadmap

PRs welcome! The codebase is intentionally small and readable. 🤗
Binary file removed case/code.gif
Binary file not shown.
Binary file removed case/memory.gif
Binary file not shown.
Binary file removed case/scedule.gif
Binary file not shown.
Binary file removed case/search.gif
Binary file not shown.
4 changes: 1 addition & 3 deletions debot/agent/_context_py.py
@@ -180,9 +180,7 @@ def add_tool_result(
Returns:
Updated message list.
"""
messages.append(
{"role": "tool", "tool_call_id": tool_call_id, "name": tool_name, "content": result}
)
messages.append({"role": "tool", "tool_call_id": tool_call_id, "name": tool_name, "content": result})
return messages

def add_assistant_message(
4 changes: 1 addition & 3 deletions debot/agent/_memory_py.py
@@ -178,9 +178,7 @@ def search(self, query: str, max_results: int = 5, min_score: float = 0.0) -> Li
scored.sort(key=lambda x: x[0], reverse=True)
results = []
for score, e in scored[:max_results]:
results.append(
{"path": e.get("path", ""), "snippet": e.get("text", ""), "score": score}
)
results.append({"path": e.get("path", ""), "snippet": e.get("text", ""), "score": score})
return results


8 changes: 2 additions & 6 deletions debot/agent/_skills_py.py
@@ -41,19 +41,15 @@ def list_skills(self, filter_unavailable: bool = True) -> list[dict[str, str]]:
if skill_dir.is_dir():
skill_file = skill_dir / "SKILL.md"
if skill_file.exists():
skills.append(
{"name": skill_dir.name, "path": str(skill_file), "source": "workspace"}
)
skills.append({"name": skill_dir.name, "path": str(skill_file), "source": "workspace"})

# Built-in skills
if self.builtin_skills and self.builtin_skills.exists():
for skill_dir in self.builtin_skills.iterdir():
if skill_dir.is_dir():
skill_file = skill_dir / "SKILL.md"
if skill_file.exists() and not any(s["name"] == skill_dir.name for s in skills):
skills.append(
{"name": skill_dir.name, "path": str(skill_file), "source": "builtin"}
)
skills.append({"name": skill_dir.name, "path": str(skill_file), "source": "builtin"})

# Filter by requirements
if filter_unavailable:
72 changes: 31 additions & 41 deletions debot/agent/loop.py
@@ -193,19 +193,13 @@ async def _process_message(self, msg: InboundMessage) -> OutboundMessage | None:

if compaction_enabled:
# Naive token estimate: 1 token ~= chars_per_token characters
estimated_tokens = sum(len(str(m.get("content", ""))) for m in messages) // max(
1, chars_per_token
)
estimated_tokens = sum(len(str(m.get("content", ""))) for m in messages) // max(1, chars_per_token)
if estimated_tokens >= int(max_tokens * compaction_trigger_ratio):
if not compaction_silent:
logger.info(
f"Context near limit ({estimated_tokens}/{max_tokens} tokens). Running compaction."
)
logger.info(f"Context near limit ({estimated_tokens}/{max_tokens} tokens). Running compaction.")
# Compact the session using configured keep_last
try:
compacted = self.sessions.compact_session(
msg.session_key, keep_last=compaction_keep_last
)
compacted = self.sessions.compact_session(msg.session_key, keep_last=compaction_keep_last)
if compacted > 0:
# Rebuild messages from the compacted history
session = self.sessions.get_or_create(msg.session_key)
@@ -215,9 +209,7 @@ async def _process_message(self, msg: InboundMessage) -> OutboundMessage | None:
media=msg.media if msg.media else None,
)
if not compaction_silent:
logger.info(
f"Auto-compaction completed: {compacted} messages compacted."
)
logger.info(f"Auto-compaction completed: {compacted} messages compacted.")
except Exception as e:
logger.warning(f"Auto-compaction failed: {e}")

@@ -304,16 +296,20 @@ async def _process_message(self, msg: InboundMessage) -> OutboundMessage | None:
tried.add(alt["model"])
logger.warning(
"Billing fallback: {} failed [{}] → trying same-tier {} (${:.2f}/M)",
chosen_model, response.finish_reason,
alt["model"], alt["cost"],
chosen_model,
response.finish_reason,
alt["model"],
alt["cost"],
)
try:
_debot_rust.record_escalation()
except Exception:
pass
chosen_model = alt["model"]
response = await self.provider.chat(
messages=messages, tools=self.tools.get_definitions(), model=chosen_model
messages=messages,
tools=self.tools.get_definitions(),
model=chosen_model,
)
if response.finish_reason not in _fail_reasons:
rerouted = True
Expand All @@ -332,7 +328,8 @@ async def _process_message(self, msg: InboundMessage) -> OutboundMessage | None:
tried.add(fb["model"])
logger.warning(
"Billing fallback: same-tier exhausted, escalating → {} ({})",
fb["model"], fb["tier"],
fb["model"],
fb["tier"],
)
try:
_debot_rust.record_escalation()
@@ -342,7 +339,9 @@ async def _process_message(self, msg: InboundMessage) -> OutboundMessage | None:
current_tier = fb["tier"]
esc_tier = fb["tier"]
response = await self.provider.chat(
messages=messages, tools=self.tools.get_definitions(), model=chosen_model
messages=messages,
tools=self.tools.get_definitions(),
model=chosen_model,
)
if response.finish_reason not in _fail_reasons:
break
@@ -355,8 +354,11 @@ async def _process_message(self, msg: InboundMessage) -> OutboundMessage | None:
fb = json.loads(fb_json)
logger.warning(
"Escalating: {} ({}) failed [{}] → {} ({})",
chosen_model, current_tier, response.finish_reason,
fb["model"], fb["tier"],
chosen_model,
current_tier,
response.finish_reason,
fb["model"],
fb["tier"],
)
try:
_debot_rust.record_escalation()
Expand All @@ -365,7 +367,9 @@ async def _process_message(self, msg: InboundMessage) -> OutboundMessage | None:
chosen_model = fb["model"]
current_tier = fb["tier"]
response = await self.provider.chat(
messages=messages, tools=self.tools.get_definitions(), model=chosen_model
messages=messages,
tools=self.tools.get_definitions(),
model=chosen_model,
)
if response.finish_reason not in _fail_reasons:
break
@@ -397,18 +401,14 @@ async def _process_message(self, msg: InboundMessage) -> OutboundMessage | None:
}
for tc in response.tool_calls
]
messages = self.context.add_assistant_message(
messages, response.content, tool_call_dicts
)
messages = self.context.add_assistant_message(messages, response.content, tool_call_dicts)

# Execute tools
for tool_call in response.tool_calls:
args_str = json.dumps(tool_call.arguments)
logger.debug(f"Executing tool: {tool_call.name} with arguments: {args_str}")
result = await self.tools.execute(tool_call.name, tool_call.arguments)
messages = self.context.add_tool_result(
messages, tool_call.id, tool_call.name, result
)
messages = self.context.add_tool_result(messages, tool_call.id, tool_call.name, result)
else:
# No tool calls, we're done
final_content = response.content
@@ -457,9 +457,7 @@ async def _process_system_message(self, msg: InboundMessage) -> OutboundMessage
spawn_tool.set_context(origin_channel, origin_chat_id)

# Build messages with the announce content
messages = self.context.build_messages(
history=session.get_history(), current_message=msg.content
)
messages = self.context.build_messages(history=session.get_history(), current_message=msg.content)

# Agent loop (limited for announce handling)
iteration = 0
@@ -468,9 +466,7 @@ async def _process_system_message(self, msg: InboundMessage) -> OutboundMessage
while iteration < self.max_iterations:
iteration += 1

response = await self.provider.chat(
messages=messages, tools=self.tools.get_definitions(), model=self.model
)
response = await self.provider.chat(messages=messages, tools=self.tools.get_definitions(), model=self.model)

if response.has_tool_calls:
tool_call_dicts = [
@@ -481,17 +477,13 @@ async def _process_system_message(self, msg: InboundMessage) -> OutboundMessage
}
for tc in response.tool_calls
]
messages = self.context.add_assistant_message(
messages, response.content, tool_call_dicts
)
messages = self.context.add_assistant_message(messages, response.content, tool_call_dicts)

for tool_call in response.tool_calls:
args_str = json.dumps(tool_call.arguments)
logger.debug(f"Executing tool: {tool_call.name} with arguments: {args_str}")
result = await self.tools.execute(tool_call.name, tool_call.arguments)
messages = self.context.add_tool_result(
messages, tool_call.id, tool_call.name, result
)
messages = self.context.add_tool_result(messages, tool_call.id, tool_call.name, result)
else:
final_content = response.content
break
@@ -504,9 +496,7 @@ async def _process_system_message(self, msg: InboundMessage) -> OutboundMessage
session.add_message("assistant", final_content)
self.sessions.save(session)

return OutboundMessage(
channel=origin_channel, chat_id=origin_chat_id, content=final_content
)
return OutboundMessage(channel=origin_channel, chat_id=origin_chat_id, content=final_content)

async def process_direct(self, content: str, session_key: str = "cli:direct") -> str:
"""
4 changes: 1 addition & 3 deletions debot/agent/subagent.py
@@ -201,9 +201,7 @@ async def _announce_result(
)

await self.bus.publish_inbound(msg)
logger.debug(
f"Subagent [{task_id}] announced result to {origin['channel']}:{origin['chat_id']}"
)
logger.debug(f"Subagent [{task_id}] announced result to {origin['channel']}:{origin['chat_id']}")

def _build_subagent_prompt(self, task: str) -> str:
"""Build a focused system prompt for the subagent."""