feat(mcp): add external MCP server support on main #2013

Merged
theonlyhennygod merged 4 commits into main from issue-1380-mcp-main
Feb 27, 2026

Conversation

@theonlyhennygod (Collaborator) commented Feb 27, 2026

Summary

  • add native MCP client support for stdio / http / sse transports
  • wire MCP registry into channel startup so discovered MCP tools are available at runtime
  • add [mcp] config schema + mcpServers compatibility alias with transport validation
  • add daemon startup port-conflict guard and MCP transport/client/protocol tests
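
For reviewers unfamiliar with the new `[mcp]` table, a minimal config sketch is below. The table name and the `mcpServers` alias come from this PR's description; the individual field names (`name`, `transport`, `command`, `url`) are plausible guesses based on the review comments, not verified against the merged schema.

```toml
# Hypothetical [mcp] section: field names inferred from McpConfig /
# McpServerConfig / McpTransport as discussed in this PR, not verified.
[mcp]
enabled = true

[[mcp.servers]]
name = "local-files"
transport = "stdio"               # stdio servers are spawned as a subprocess
command = "mcp-file-server"

[[mcp.servers]]
name = "remote-search"
transport = "http"                # http/sse transports require a url
url = "http://localhost:9000/rpc"
```

Per the PR summary, validation enforces unique server names, timeout bounds, and transport-specific URL requirements.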

Validation

  • cargo check
  • cargo test mcp_ -- --nocapture

Closes #1380

Summary by CodeRabbit

Release Notes

  • New Features

    • Added Model Context Protocol (MCP) support enabling connections to external MCP servers
    • Support for multiple MCP transport types: Stdio, HTTP, and SSE
    • Dynamic MCP tool discovery and registration into the tool registry
    • Added pre-flight port availability validation during daemon startup to detect conflicts and provide actionable guidance
  • Tests

    • Added comprehensive tests for MCP client functionality, transport handling, and tool registration

@theonlyhennygod theonlyhennygod self-assigned this Feb 27, 2026

coderabbitai bot commented Feb 27, 2026

Warning

Rate limit exceeded

@theonlyhennygod has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 8 minutes and 57 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 992ecd9 and 6186b34.

📒 Files selected for processing (3)
  • src/onboard/wizard.rs
  • src/tools/mcp_client.rs
  • src/tools/mcp_transport.rs

Note

.coderabbit.yaml has unrecognized properties

CodeRabbit is using all valid settings from your configuration. Unrecognized properties (listed below) have been ignored and may indicate typos or deprecated fields that can be removed.

⚠️ Parsing warnings (1)
Validation error: Unrecognized key(s) in object: 'tools', 'path_filters', 'review_instructions'
βš™οΈ Configuration instructions
  • Please see the configuration documentation for more information.
  • You can also validate your configuration using the online YAML validator.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
πŸ“ Walkthrough

Walkthrough

Adds comprehensive Model Context Protocol (MCP) server support with stdio, HTTP, and SSE transport backends. Introduces configuration structures, JSON-RPC protocol handling, transport abstraction, and tool registration. Integrates MCP tools into the existing tool registry with non-fatal error handling, adds daemon port binding pre-flight checks, and extends the onboarding wizard.

Changes

Cohort / File(s) / Summary

  • Configuration System (src/config/schema.rs, src/config/mod.rs): Adds McpConfig, McpServerConfig, and McpTransport types with comprehensive validation (unique server names, timeout bounds, transport-specific URL requirements). Integrates MCP config into the Config struct with deserialization support and defaults.
  • MCP Protocol & Transport (src/tools/mcp_protocol.rs, src/tools/mcp_transport.rs): Implements JSON-RPC 2.0 client-side protocol definitions (JsonRpcRequest, JsonRpcResponse, JsonRpcError, McpToolDef). Adds the McpTransportConn trait with implementations for Stdio, Http, and SSE backends via the factory function create_transport.
  • MCP Client & Tool Wrapper (src/tools/mcp_client.rs, src/tools/mcp_tool.rs): Introduces McpServer for per-server connections with handshake, tool discovery, and invocation. McpRegistry aggregates multiple servers with prefixed tool naming. McpToolWrapper adapts MCP tools to the existing Tool trait interface.
  • Module Exports (src/tools/mod.rs): Exposes four new MCP submodules and publicly exports McpRegistry and McpToolWrapper.
  • Tool Registry Integration (src/channels/mod.rs): Refactors the tool registry to accumulate tools before freezing. Conditionally initializes the MCP registry, registers discovered tools via McpToolWrapper, and appends them to the tool set with non-fatal error handling.
  • Daemon & Onboarding (src/daemon/mod.rs, src/onboard/wizard.rs): Adds pre-flight port binding checks with heuristic daemon detection. Initializes McpConfig::default() in wizard config construction.
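
Since the mcp_protocol module is described as client-side JSON-RPC 2.0, the wire format it models can be illustrated with the MCP `tools/list` exchange. The framing below follows the public JSON-RPC 2.0 and MCP specifications; the exact field handling in this PR's JsonRpcRequest/JsonRpcResponse may differ. A `tools/list` request:

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
```

and a successful response carrying one discovered tool definition (the shape McpToolDef presumably mirrors):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "read_file",
        "description": "Read a file from disk",
        "inputSchema": {"type": "object"}
      }
    ]
  }
}
```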

Sequence Diagram

sequenceDiagram
    participant App as Application Startup
    participant Cfg as Config Loader
    participant Registry as Tool Registry
    participant MCP as MCP Registry
    participant Server as MCP Server
    
    App->>Cfg: Load configuration
    Cfg->>Cfg: Validate MCP config (names, URLs, timeouts)
    Cfg-->>App: Return Config with mcp settings
    
    App->>Registry: Initialize (mutable collection)
    Registry->>Registry: Accumulate non-MCP tools
    
    opt MCP Enabled
        Registry->>MCP: Create McpRegistry
        MCP->>Server: Connect to configured servers
        Server-->>MCP: Return handshake + tool list
        MCP->>MCP: Build flat tool index (prefixed names)
        MCP-->>Registry: Registry ready with tools
        Registry->>Registry: Wrap discovered MCP tools with McpToolWrapper
        Registry->>Registry: Append wrapped MCP tools
    end
    
    Registry->>Registry: Freeze tool set in Arc
    Registry-->>App: Ready for runtime use

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested labels

size: XL, risk: high, config, tool, core, onboard: wizard

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (2 warnings)

  • Description check: ⚠️ Warning. The PR description is severely incomplete; it omits nearly all required template sections, including Label Snapshot, Change Metadata, Linked Issue details, Validation Evidence, Security Impact, Privacy checks, Compatibility, i18n, Human Verification, Side Effects, Rollback Plan, and Risks/Mitigations. Resolution: complete the description template with all required sections, particularly Label Snapshot (risk/size/scope), Change Metadata, the full Linked Issue block, Validation Evidence (test output), a Security Impact statement, and a Rollback Plan, with specific test results and risk analysis.
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 57.63%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (3)
  • Title check: ✅ Passed. The title 'feat(mcp): add external MCP server support on main' clearly summarizes the primary change: adding Model Context Protocol (MCP) server support, which aligns with the changeset's core objective.
  • Linked Issues check: ✅ Passed. The PR implements the core requirements from issue #1380: adds the MCP configuration schema (McpConfig, McpServerConfig, McpTransport), supports stdio/http/sse transports with validation, integrates the MCP registry into the runtime, and maintains backward compatibility with no breaking changes.
  • Out of Scope Changes check: ✅ Passed. All code changes are directly scoped to the linked issue #1380 objectives: MCP server/client/transport support, config schema additions, registry integration, and port-conflict guards. No extraneous or unrelated modifications detected.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch issue-1380-mcp-main

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai bot left a comment


Actionable comments posted: 10

🧹 Nitpick comments (2)
src/daemon/mod.rs (1)

345-386: Consider adding tests for port availability checks.

The new helper functions lack test coverage. While the logic is straightforward, adding at least a basic test for check_port_available would validate the port-in-use detection:

🧪 Example test
#[tokio::test]
async fn check_port_available_detects_bound_port() {
    // Bind a port first
    let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
    let port = listener.local_addr().unwrap().port();
    
    // Should detect it's in use
    let result = check_port_available("127.0.0.1", port).await;
    assert!(result.is_err());
    
    // After dropping, should be available
    drop(listener);
    let result = check_port_available("127.0.0.1", port).await;
    assert!(result.is_ok());
}

Based on learnings: "For high-risk code changes, include at least one boundary/failure-mode validation in addition to standard checks."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/daemon/mod.rs` around lines 345 - 386, Add an async unit test for
check_port_available that binds a temporary listener (e.g.,
tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap()), captures its port
via listener.local_addr().unwrap().port(), calls
check_port_available("127.0.0.1", port).await and assert!(result.is_err()), then
drop(listener) and assert!(check_port_available(...).await.is_ok()); place the
test as #[tokio::test] in the same module or tests module and import any needed
items (check_port_available, tokio) so the boundary/failure-mode (port-in-use ->
error, then available -> ok) is validated; optionally add a simple negative test
for is_zeroclaw_daemon_running using an unused port to assert it returns false.
src/channels/mod.rs (1)

4658-4694: Extract MCP tool bootstrap into a helper to keep start_channels readable.

This block is cohesive and can be moved to a dedicated async helper to reduce start_channels complexity and keep startup orchestration focused.

♻️ Proposed refactor
+async fn register_mcp_tools_if_enabled(config: &Config, built_tools: &mut Vec<Box<dyn Tool>>) {
+    if !config.mcp.enabled || config.mcp.servers.is_empty() {
+        return;
+    }
+
+    tracing::info!(
+        "Initializing MCP client — {} server(s) configured",
+        config.mcp.servers.len()
+    );
+
+    match crate::tools::McpRegistry::connect_all(&config.mcp.servers).await {
+        Ok(registry) => {
+            let registry = Arc::new(registry);
+            let mut registered = 0usize;
+            for name in registry.tool_names() {
+                if let Some(def) = registry.get_tool_def(&name).await {
+                    built_tools.push(Box::new(crate::tools::McpToolWrapper::new(
+                        name,
+                        def,
+                        Arc::clone(&registry),
+                    )));
+                    registered += 1;
+                }
+            }
+            tracing::info!(
+                "MCP: {} tool(s) registered from {} server(s)",
+                registered,
+                registry.server_count()
+            );
+        }
+        Err(e) => tracing::error!("MCP registry failed to initialize: {e:#}"),
+    }
+}
...
-    // Wire MCP tools into the registry before freezing — non-fatal.
-    if config.mcp.enabled && !config.mcp.servers.is_empty() {
-        ...
-    }
+    register_mcp_tools_if_enabled(&config, &mut built_tools).await;

As per coding guidelines: "Keep each module focused on one concern... keep error paths obvious and localized."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/channels/mod.rs` around lines 4658 - 4694, Extract the MCP bootstrap
block in start_channels into a new async helper (e.g., init_mcp_tools or
similar) that takes the MCP config and returns the built tools (Vec<Box<...>> or
Arc<Vec<Box<...>>>) so start_channels only calls that helper and assigns its
result to tools_registry; move the logic that calls
crate::tools::McpRegistry::connect_all, iterates registry.tool_names(), calls
registry.get_tool_def(...).await and creates
crate::tools::McpToolWrapper::new(...) into the helper, preserve the
tracing::info!/tracing::error! messages and the non-fatal error handling, and
ensure you return/convert the built_tools to the same type used when
constructing tools_registry so callers of start_channels remain unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/channels/mod.rs`:
- Around line 4664-4665: The code awaits
crate::tools::McpRegistry::connect_all(&config.mcp.servers).await without any
timeout, so wrap that call in tokio::time::timeout (e.g., 30–60s) and handle an
elapsed error as non-fatal: log the timeout with context and skip/continue
without panicking; alternatively implement a per-server timeout inside the
McpRegistry connect loop (around each transport creation) so a slow
StdioTransport::new / HttpTransport::new / SseTransport::new is treated like
other connection errors and logged/skipped rather than blocking channel startup.

In `@src/config/schema.rs`:
- Around line 267-269: The mcpServers alias on McpConfig.servers (type
Vec<McpServerConfig>) only supports TOML array entries and can silently fail for
legacy keyed-entry shapes; add unit tests that deserialize both the array form
(e.g., [[mcp.servers]] / [[mcpServers]]) and any previously-supported
keyed-entry form to prove behavior, document the alias and default
(enabled=false) plus any incompatibility/migration steps in the config docs, and
implement a compatibility adapter for McpConfig (or a custom deserializer for
McpConfig.servers) that detects and transforms legacy keyed-entry maps into the
Vec<McpServerConfig> shape before validation so old keyed configs continue to
work; update tests to assert MCP remains enabled when expected.

In `@src/daemon/mod.rs`:
- Around line 361-386: The is_zeroclaw_daemon_running function builds a URL that
will be malformed for IPv6 hosts and its comment inaccurately mentions
"runtime.components"; update the URL construction to wrap IPv6 addresses in
brackets (reuse the same IPv6-bracketing logic used in check_port_available)
when formatting the address for the health URL (in is_zeroclaw_daemon_running),
and correct the comment to reflect that the code checks for the top-level
"runtime" key (or extend the JSON check to look for "runtime.components" if
intended); ensure the function still uses reqwest client timeout and JSON
parsing as before.

In `@src/tools/mcp_client.rs`:
- Around line 95-97: The notification currently uses
JsonRpcRequest::notification assigned to notif and calls
transport.send_and_recv(&notif).await which can block forever for a
notification; change this to a non-blocking send (use
transport.send(&notif).await if available) or wrap the send_and_recv call with a
bounded timeout (e.g. tokio::time::timeout) and ignore timeout/errors for
best-effort behavior so startup cannot hang; update the code path around notif
and transport.send_and_recv to use the chosen non-blocking approach and ensure
errors/timeouts are discarded.
- Around line 222-224: The code silently overwrites duplicate keys when
inserting into tool_index using the prefixed key built from config.name and
tool.name; update the logic around prefixed (constructed from config.name and
tool.name) and tool_index.insert to detect existing entries for the same key
(e.g., via tool_index.contains_key or entry API) and handle duplicates
explicitly: either return/propagate an error or record multiple (server_idx,
tool.name) targets (e.g., change the map value to a Vec and push), and include a
clear log message referencing prefixed, server_idx, config.name, and tool.name
so duplicates are not silently shadowed.

In `@src/tools/mcp_tool.rs`:
- Around line 54-56: The execute method currently forwards args directly to
self.registry.call_tool; validate and sanitize args against this tool's
input_schema before dispatching by parsing/validating the incoming
serde_json::Value (e.g., attempt to deserialize into the expected input shape or
run input_schema.validate) and return an appropriate anyhow::Error (or a
structured ToolResult error) if validation fails; only call
self.registry.call_tool(&self.prefixed_name, args) when validation succeeds, and
ensure no panics occur in execute so runtime/remote execution never receives
malformed input.

In `@src/tools/mcp_transport.rs`:
- Around line 209-233: The SSE branch currently silently falls back to plain
HTTP by POSTing and reading an immediate body; instead detect when SSE/event
stream behavior is required (reference the event_source field and the SSE
handling block that builds the POST on self.client with self.base_url and
self.headers) and return an explicit error (e.g., bail! or Err with a contextual
message) stating "SSE transport not implemented" or "server uses SSE event
stream; SSE client not supported" rather than attempting HTTP; update the SSE
handling code path that parses JsonRpcResponse from resp.text() to short-circuit
with that error so callers get a clear failure instead of silent fallback.
- Around line 147-150: The HttpTransport request-building code is sending
JSON-RPC without Content-Type and should mirror SseTransport: change the post
body construction in the HttpTransport send path (the code creating req via
self.client.post(&self.url).body(body)) to use reqwest's .json(&payload) or
otherwise add header("Content-Type", "application/json") before sending; locate
the method on the HttpTransport impl that constructs req (the block iterating
for (key, value) in &self.headers) and replace the manual serialized body with
.json(&payload) or append the explicit header to ensure strict MCP servers
accept the request.
- Around line 80-88: The recv_raw method currently reads lines via
stdout_lines.next_line() which can buffer arbitrarily large input; change the
implementation to use tokio_util::codec::FramedRead with
tokio_util::codec::LinesCodec::new_with_max_length(MAX_LINE_BYTES) (or an
appropriate constant) to enforce a max line length and avoid memory exhaustion,
replace uses of stdout_lines.next_line() with the framed read stream and handle
the Result<Option<String>, _> accordingly in recv_raw; also enable the "codec"
feature for tokio-util in Cargo.toml so LinesCodec is available.
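
The duplicate-key prompt for mcp_client.rs above can be made concrete with the standard HashMap entry API. This is a detached sketch: `register_tool`, the `__` prefix separator, and the map's value type are illustrative stand-ins for the PR's actual tool_index logic, not code from the diff.

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

/// Sketch: insert a prefixed tool name, refusing to silently shadow an
/// earlier registration. `server`/`tool` stand in for config.name/tool.name.
fn register_tool(
    index: &mut HashMap<String, (usize, String)>,
    server: &str,
    server_idx: usize,
    tool: &str,
) -> Result<(), String> {
    let prefixed = format!("{server}__{tool}");
    match index.entry(prefixed.clone()) {
        Entry::Vacant(slot) => {
            slot.insert((server_idx, tool.to_string()));
            Ok(())
        }
        // An occupied entry means an earlier registration already claimed
        // this name; surface it instead of overwriting.
        Entry::Occupied(_) => Err(format!(
            "duplicate MCP tool '{prefixed}' from server '{server}' (index {server_idx})"
        )),
    }
}
```

Returning an error is the strict option; the prompt also allows recording multiple targets in a Vec if shadowing should be tolerated instead.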

---

Nitpick comments:
In `@src/channels/mod.rs`:
- Around line 4658-4694: Extract the MCP bootstrap block in start_channels into
a new async helper (e.g., init_mcp_tools or similar) that takes the MCP config
and returns the built tools (Vec<Box<...>> or Arc<Vec<Box<...>>>) so
start_channels only calls that helper and assigns its result to tools_registry;
move the logic that calls crate::tools::McpRegistry::connect_all, iterates
registry.tool_names(), calls registry.get_tool_def(...).await and creates
crate::tools::McpToolWrapper::new(...) into the helper, preserve the
tracing::info!/tracing::error! messages and the non-fatal error handling, and
ensure you return/convert the built_tools to the same type used when
constructing tools_registry so callers of start_channels remain unchanged.

In `@src/daemon/mod.rs`:
- Around line 345-386: Add an async unit test for check_port_available that
binds a temporary listener (e.g.,
tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap()), captures its port
via listener.local_addr().unwrap().port(), calls
check_port_available("127.0.0.1", port).await and assert!(result.is_err()), then
drop(listener) and assert!(check_port_available(...).await.is_ok()); place the
test as #[tokio::test] in the same module or tests module and import any needed
items (check_port_available, tokio) so the boundary/failure-mode (port-in-use ->
error, then available -> ok) is validated; optionally add a simple negative test
for is_zeroclaw_daemon_running using an unused port to assert it returns false.
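
The boundary test suggested above can be prototyped against a synchronous, std-only sketch of the port check. The real helper is async under tokio and its signature may differ; the IPv6 bracketing mirrors the fix requested for is_zeroclaw_daemon_running in the inline comments.

```rust
use std::net::TcpListener;

/// Synchronous sketch of the daemon's pre-flight port check (the merged
/// version is async). Returns Err when the port is already bound.
fn check_port_available(host: &str, port: u16) -> Result<(), String> {
    // IPv6 literals must be bracketed to form a valid socket address.
    let addr = if host.contains(':') {
        format!("[{host}]:{port}")
    } else {
        format!("{host}:{port}")
    };
    match TcpListener::bind(addr.as_str()) {
        // The probe listener is dropped immediately, releasing the port.
        Ok(_listener) => Ok(()),
        Err(e) => Err(format!("port {port} unavailable on {host}: {e}")),
    }
}
```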

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4196fd3 and 992ecd9.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (10)
  • src/channels/mod.rs
  • src/config/mod.rs
  • src/config/schema.rs
  • src/daemon/mod.rs
  • src/onboard/wizard.rs
  • src/tools/mcp_client.rs
  • src/tools/mcp_protocol.rs
  • src/tools/mcp_tool.rs
  • src/tools/mcp_transport.rs
  • src/tools/mod.rs

Comment thread src/channels/mod.rs
Comment on lines +4664 to +4665
match crate::tools::McpRegistry::connect_all(&config.mcp.servers).await {
Ok(registry) => {


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# 1) Confirm startup callsite in channel bootstrap
echo "=== MCP startup callsite in channels/mod.rs ==="
rg -n -C4 'McpRegistry::connect_all|mcp\.enabled|mcp\.servers' src/channels/mod.rs | head -50

# 2) Inspect MCP connect path for explicit timeout guards
echo -e "\n=== MCP connect logic and timeout patterns ==="
rg -n -C6 'pub async fn connect_all|McpServer::connect|tokio::time::timeout|timeout\(|connect_timeout|read_timeout|request_timeout' src/tools/mcp_client.rs

# 3) Check HTTP/SSE client construction
echo -e "\n=== HTTP client and SSE client setup ==="
rg -n -C4 'reqwest::Client::builder|Client::new|sse|eventsource|http' src/tools/mcp_client.rs | head -80

Repository: zeroclaw-labs/zeroclaw

Length of output: 5127


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Get McpServer::connect implementation
echo "=== McpServer::connect implementation ==="
rg -n -A20 'impl McpServer|pub async fn connect' src/tools/mcp_client.rs | grep -A20 'pub async fn connect'

# Check transport creation for timeout bounds
echo -e "\n=== create_transport and transport initialization ==="
rg -n -A25 'fn create_transport|StdioTransport::new|HttpTransport::new|SseTransport::new' src/tools/mcp_client.rs

# Check if there's a timeout constant used for connection
echo -e "\n=== Timeout constants and default durations ==="
rg -n 'const.*TIMEOUT|Duration::from|const RECV_TIMEOUT|const.*CONNECT' src/tools/mcp_client.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 2091


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Find create_transport function
echo "=== create_transport function ==="
rg -n -A30 'fn create_transport' src/tools/mcp_client.rs

# Search for HTTP and SSE transport implementations
echo -e "\n=== Transport struct implementations ==="
rg -n -B2 -A15 'struct.*Transport|impl.*Transport' src/tools/mcp_client.rs | head -100

# Check for reqwest client or any http setup
echo -e "\n=== HTTP/reqwest setup ==="
fd -e rs src/tools | xargs rg -l 'reqwest' 

# Check Cargo.toml for timeout-related dependencies
echo -e "\n=== Dependencies with timeout support ==="
grep -i 'timeout\|reqwest\|tokio' Cargo.toml | head -20

Repository: zeroclaw-labs/zeroclaw

Length of output: 99


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Search for create_transport anywhere in the file
echo "=== Searching for create_transport ==="
rg -n 'create_transport' src/tools/mcp_client.rs

# Get file size to understand what we're dealing with
echo -e "\n=== File info ==="
wc -l src/tools/mcp_client.rs

# List all function definitions in the file
echo -e "\n=== All pub fn and async fn definitions ==="
rg -n 'pub (async )?fn|^fn ' src/tools/mcp_client.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 1031


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Find and inspect the mcp_transport module
echo "=== mcp_transport module location ==="
fd -e rs mcp_transport src/

# Check the create_transport function in mcp_transport
echo -e "\n=== create_transport implementation ==="
rg -n -A40 'pub fn create_transport' src/tools/mcp_transport.rs

# Check for timeout constants and HTTP client setup
echo -e "\n=== Timeout constants and HTTP client setup ==="
rg -n 'const.*TIMEOUT|Duration::from|timeout|reqwest::Client|http::Client' src/tools/mcp_transport.rs | head -40

Repository: zeroclaw-labs/zeroclaw

Length of output: 2077


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check StdioTransport implementation for timeout bounds
echo "=== StdioTransport::new implementation ==="
rg -n -B5 -A25 'impl StdioTransport|StdioTransport::new' src/tools/mcp_transport.rs

# Check HttpTransport::new for full context
echo -e "\n=== HttpTransport::new implementation ==="
rg -n -B2 -A30 'impl HttpTransport|fn new' src/tools/mcp_transport.rs | head -80

# Check if there are connection timeouts or any handshake timeouts
echo -e "\n=== Full McpServer::connect to see all timeout handling ==="
sed -n '49,120p' src/tools/mcp_client.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 7671


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Verify the overall startup timeout context
echo "=== Startup context around connect_all call ==="
sed -n '4655,4700p' src/channels/mod.rs

# Check if there's any timeout wrapping the entire MCP init block
echo -e "\n=== Full MCP startup block context ==="
sed -n '4658,4695p' src/channels/mod.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 3448


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check if tokio::time::timeout is used anywhere in the connect_all context
echo "=== Check for timeout patterns in connect_all or parent ==="
rg -n -B5 -A30 'pub async fn connect_all' src/tools/mcp_client.rs

# Double-check mcp_transport for any timeout on transport creation
echo -e "\n=== Check transport traits for timeout handling ==="
rg -n 'trait.*Transport|async fn send_and_recv' src/tools/mcp_transport.rs

# Verify no timeout wrapper at startup invocation
echo -e "\n=== Verify startup does not wrap connect_all in timeout ==="
sed -n '4658,4695p' src/channels/mod.rs | grep -i timeout || echo "No timeout wrapper found at startup call"

Repository: zeroclaw-labs/zeroclaw

Length of output: 2232


Add timeout guards around MCP server connection to prevent blocking channel startup.

Lines 4664-4665 await McpRegistry::connect_all without a timeout wrapper. The MCP client applies timeouts to protocol operations (initialize, tools/list) after the transport is created, but transport creation itself (StdioTransport::new, HttpTransport::new, SseTransport::new) has no timeout. If a stdio-based MCP server is slow to spawn or stuck, or if a remote HTTP/SSE endpoint hangs during connection, the entire channel initialization will block indefinitely.

Wrap connect_all with tokio::time::timeout at the startup level (e.g., 30–60 seconds), or add a per-server timeout inside the loop at line 215. Either approach should treat a timeout as non-fatal (log and skip that server), consistent with the existing error handling.
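
A minimal sketch of the first option, assuming the connect_all call site quoted in this thread and an illustrative 45-second budget (neither verified against the merged code):

```rust
// Sketch only: bound the whole connect_all call and treat an elapsed
// timer as non-fatal, matching the existing Err(e) branch at startup.
match tokio::time::timeout(
    std::time::Duration::from_secs(45),
    crate::tools::McpRegistry::connect_all(&config.mcp.servers),
)
.await
{
    Ok(Ok(registry)) => { /* register discovered tools as before */ }
    Ok(Err(e)) => tracing::error!("MCP registry failed to initialize: {e:#}"),
    Err(_elapsed) => {
        tracing::error!("MCP init timed out after 45s; continuing without MCP tools");
    }
}
```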

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/channels/mod.rs` around lines 4664 - 4665, The code awaits
crate::tools::McpRegistry::connect_all(&config.mcp.servers).await without any
timeout, so wrap that call in tokio::time::timeout (e.g., 30–60s) and handle an
elapsed error as non-fatal: log the timeout with context and skip/continue
without panicking; alternatively implement a per-server timeout inside the
McpRegistry connect loop (around each transport creation) so a slow
StdioTransport::new / HttpTransport::new / SseTransport::new is treated like
other connection errors and logged/skipped rather than blocking channel startup.

Comment thread src/config/schema.rs
Comment on lines +267 to +269
/// External MCP server connections (`[mcp]`).
#[serde(default, alias = "mcpServers")]
pub mcp: McpConfig,


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== MCP schema and aliases ==="
rg -n 'pub mcp: McpConfig|alias = "mcpServers"|pub struct McpConfig|pub servers: Vec<McpServerConfig>' src/config/schema.rs

echo
echo "=== Custom deserialization / adapters for legacy mcpServers shape ==="
rg -n 'impl\s+.*Deserialize\s+for\s+(Config|McpConfig|McpServerConfig)|deserialize_with|serde\(untagged\)|HashMap<\s*String,\s*McpServerConfig' src/config/schema.rs

echo
echo "=== Local tests mentioning mcpServers compatibility ==="
rg -n 'mcpServers|\\[mcp\\]|\\[\\[mcp\\.servers\\]\\]|camoufox' src/config/schema.rs

echo
echo "=== Check surrounding lines of pub mcp field for complete context ==="
rg -B 5 -A 5 'pub mcp: McpConfig' src/config/schema.rs | head -20

Repository: zeroclaw-labs/zeroclaw

Length of output: 361


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Search for legacy mcpServers format examples ==="
rg -i 'mcpServers|mcp.*servers|legacy.*mcp' --type md --type toml --type rs -A 3 -B 1

echo
echo "=== Check git history for mcpServers references (if any) ==="
git log --all --oneline --grep="mcpServers\|mcp" 2>/dev/null | head -10 || echo "(git history unavailable)"

echo
echo "=== Look for CHANGELOG or MIGRATION docs ==="
fd -t f -i 'CHANGELOG\|MIGRATION\|UPGRADE' . | head -5

echo
echo "=== Check PR/issue context for mcpServers format intent ==="
rg -i 'entry.style|keyed.*server|mcpServers.*format|backward.*compat' --type md -B 2 -A 2

Repository: zeroclaw-labs/zeroclaw

Length of output: 162


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Search for mcpServers usage anywhere in repo ==="
rg -i 'mcpServers' -B 2 -A 2 | head -50

echo
echo "=== Look for config examples/templates ==="
fd -i 'example.*config\|config.*example\|\.toml\|\.yaml' -t f | xargs rg -l 'mcp' 2>/dev/null | head -5

echo
echo "=== Check McpConfig struct definition and defaults ==="
rg -A 20 'pub struct McpConfig' src/config/schema.rs

echo
echo "=== Check how McpServerConfig is structured ==="
rg -A 15 'pub struct McpServerConfig' src/config/schema.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 1923


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Search for Config/McpConfig test deserialization ==="
rg -A 10 'McpConfig|Config.*mcp' src/config/schema.rs | grep -A 10 'test\|fn.*default\|#\[test\]' | head -40

echo
echo "=== Look for mcp config in test files ==="
fd -t f '*.rs' | xargs rg -l 'McpConfig\|mcp.*test' 2>/dev/null | head -5

echo
echo "=== Check tools/mcp files for usage patterns ==="
rg -i 'config.*mcp|mcpconfig' src/tools/mcp_client.rs src/tools/mcp_tool.rs -B 2 -A 2 2>/dev/null | head -30

Repository: zeroclaw-labs/zeroclaw

Length of output: 893


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Find test files mentioning config/mcp ==="
fd -t f -g '**test*.rs' | head -10

echo
echo "=== Search all test files for McpConfig fixtures ==="
rg 'McpConfig|mcp.*servers' --glob '*test*.rs' -B 2 -A 5 | head -40

echo
echo "=== Look for TOML config examples in docs/examples ==="
fd -t f -g '**example*' -o -g '**doc*' 2>/dev/null | head -10

echo
echo "=== Check if there are any config TOML files in repo ==="
fd -t f -e toml | head -10

echo
echo "=== Look at validate_mcp_config implementation to understand expected shape ==="
rg -A 25 'fn validate_mcp_config' src/config/schema.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 269


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Full validate_mcp_config implementation ==="
rg -A 40 'fn validate_mcp_config' src/config/schema.rs

echo
echo "=== Check for inline tests in schema.rs ==="
rg -n '#\[cfg\(test\)\]|#\[test\]' src/config/schema.rs

echo
echo "=== Look for Config test initializers with mcp field ==="
rg -B 5 -A 15 'Config\s*\{.*mcp:' src/config/schema.rs | head -60

echo
echo "=== Check documentation comments around McpConfig ==="
rg -B 10 'pub struct McpConfig' src/config/schema.rs | grep '///'

Repository: zeroclaw-labs/zeroclaw

Length of output: 5146


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Search test blocks for mcpServers or mcp usage ==="
rg '#\[test\]' -A 30 src/config/schema.rs | rg -i 'mcpservers|mcp.*config|toml.*mcp' -B 3 -A 3

echo
echo "=== Search for toml! macro with mcp/mcpServers ==="
rg 'toml!' -A 10 src/config/schema.rs | rg -i 'mcp|mcpservers' -B 3 -A 3

echo
echo "=== Look at test function names to find MCP-related test ==="
rg '#\[test\]' -A 2 src/config/schema.rs | grep 'fn.*mcp' -i

echo
echo "=== Check actual deserialization of mcpServers in any test ==="
rg '\\[mcp\\]|\\[mcpServers\\]|\\"mcpServers\\"' src/config/schema.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 120


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Search entire repo for any mcpServers format documentation ==="
rg -i 'mcpservers' . 2>/dev/null | grep -v '^Binary'

echo
echo "=== Check if there are any README or docs about config format ==="
fd -t f -e md | xargs rg -l 'config\|mcp' 2>/dev/null | head -5

echo
echo "=== Look at the full Config struct definition with mcp field ==="
rg -B 30 'pub mcp: McpConfig' src/config/schema.rs | head -50

Repository: zeroclaw-labs/zeroclaw

Length of output: 335


mcpServers alias is undocumented and likely structurally incompatible with keyed-entry configs.

The mcpServers alias on McpConfig.servers (line 578) maps to Vec<McpServerConfig>, which expects TOML array format ([[mcp.servers]] or [[mcpServers]]). If legacy configs used keyed entries ([mcpServers.server_name]), deserialization will silently fail and fall back to defaults, leaving MCP disabled (enabled=false).

Per coding guidelines, config schema changes must document defaults, compatibility impact, and migration path. This change provides none: no test fixtures demonstrate the alias works, no migration guide exists, and no adapters handle legacy shape transformation if needed.

Add:

  • Test fixtures showing mcpServers deserialization succeeds with the expected format
  • Documentation of the alias shape and any breaking changes from the prior format
  • A migration path if a legacy keyed-entry format was previously supported

Also applies to: 268

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config/schema.rs` around lines 267 - 269, The mcpServers alias on
McpConfig.servers (type Vec<McpServerConfig>) only supports TOML array entries
and can silently fail for legacy keyed-entry shapes; add unit tests that
deserialize both the array form (e.g., [[mcp.servers]] / [[mcpServers]]) and any
previously-supported keyed-entry form to prove behavior, document the alias and
default (enabled=false) plus any incompatibility/migration steps in the config
docs, and implement a compatibility adapter for McpConfig (or a custom
deserializer for McpConfig.servers) that detects and transforms legacy
keyed-entry maps into the Vec<McpServerConfig> shape before validation so old
keyed configs continue to work; update tests to assert MCP remains enabled when
expected.
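For reference, the array-of-tables shape that a Vec<McpServerConfig> field deserializes, per the review's reading above, would look roughly like the following. Field names other than the table headers are illustrative, not confirmed against the actual McpServerConfig definition:

```toml
# Canonical form: array of tables (the serde alias accepts the same
# array shape under the legacy mcpServers key)
[[mcp.servers]]
name = "example"          # illustrative field names
transport = "stdio"
command = "mcp-example-server"

# A keyed-entry table, by contrast, deserializes as a map rather than a
# Vec, and would need a custom adapter to be accepted:
# [mcpServers.example]
# transport = "stdio"
```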

Comment thread src/daemon/mod.rs
Comment on lines +345 to +359
/// Check if a port is available for binding
async fn check_port_available(host: &str, port: u16) -> Result<()> {
    let addr: std::net::SocketAddr = format!("{host}:{port}").parse()?;
    match tokio::net::TcpListener::bind(addr).await {
        Ok(listener) => {
            // Successfully bound - close it and return Ok
            drop(listener);
            Ok(())
        }
        Err(e) if e.kind() == std::io::ErrorKind::AddrInUse => {
            bail!("Port {} is already in use", port)
        }
        Err(e) => bail!("Failed to check port {}: {}", port, e),
    }
}
Contributor

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Check for IPv6 references and how addresses are used
rg -n "ipv6|::1|\[::\]" --type=toml --type=md --type=rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 92


🏁 Script executed:

# Check how check_port_available is called and where host comes from
rg -n "check_port_available" src/ -B 2 -A 2

Repository: zeroclaw-labs/zeroclaw

Length of output: 851


🏁 Script executed:

# Look for other SocketAddr construction patterns in the daemon
rg -n "SocketAddr|TcpListener::bind" src/daemon/ -B 1 -A 1

Repository: zeroclaw-labs/zeroclaw

Length of output: 374


🏁 Script executed:

# Check the function signature and module context for check_port_available
sed -n '340,365p' src/daemon/mod.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 1002


🏁 Script executed:

# Find Config struct and how host field is populated
rg -n "struct Config" src/ -A 10

Repository: zeroclaw-labs/zeroclaw

Length of output: 2563


🏁 Script executed:

# Check CLI argument parsing or config loading
rg -n "host.*String|String.*host" src/ -B 2 -A 2

Repository: zeroclaw-labs/zeroclaw

Length of output: 9113


🏁 Script executed:

# Check for documentation on supported formats
rg -n "localhost|127.0.0.1|hostname|address" src/daemon/mod.rs -i

Repository: zeroclaw-labs/zeroclaw

Length of output: 48


🏁 Script executed:

# Look at the full run function to understand host usage
sed -n '11,30p' src/daemon/mod.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 1043


🏁 Script executed:

# Test Rust string parsing behavior with IPv6
python3 << 'PYTHON_EOF'
# Demonstrate the issue: bare IPv6 address parsing
test_cases = [
    "127.0.0.1:8080",      # IPv4 - works
    "[::1]:8080",          # IPv6 with brackets - works
    "::1:8080",            # IPv6 without brackets - ambiguous (would this parse?)
    "localhost:8080"       # hostname - works
]

print("IPv6 address parsing issue:")
print("The string '::1:8080' is ambiguous because ':' separates IPv6 segments.")
print("Rust's SocketAddr::from_str expects either:")
print("  - IPv4:port format (e.g., '127.0.0.1:8080')")
print("  - [IPv6]:port format (e.g., '[::1]:8080')")
print("  - hostname:port (e.g., 'localhost:8080')")
print()
print("A bare IPv6 like '::1:8080' will fail to parse correctly.")
PYTHON_EOF

Repository: zeroclaw-labs/zeroclaw

Length of output: 401


The proposed IPv6 fix has a logic error that would break IPv4 addresses.

The string parsing issue with bare IPv6 addresses is real (e.g., ::1:8080 is ambiguous without brackets), but the proposed fix was flawed: a bare host.contains(':') check would also wrap host values that already include a port, such as 127.0.0.1:8080, in brackets and break them.

Since the default host is 127.0.0.1 and IPv6 is not documented or currently used, this is a defensive edge case. If IPv6 support is needed, use proper detection:

 async fn check_port_available(host: &str, port: u16) -> Result<()> {
-    let addr: std::net::SocketAddr = format!("{host}:{port}").parse()?;
+    let addr: std::net::SocketAddr = if !host.starts_with('[') && host.contains(':') {
+        // Bare IPv6 address - add brackets
+        format!("[{host}]:{port}").parse()?
+    } else {
+        format!("{host}:{port}").parse()?
+    };

Alternatively, parse host as an IpAddr first and reconstruct, or document that only hostnames and IPv4 are supported.
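As a sketch of that alternative (std-only, not tied to this codebase; the helper name is hypothetical): parse the host as an IpAddr first and let SocketAddr handle the bracketing, falling back to string parsing only for non-IP hosts.

```rust
use std::net::{IpAddr, SocketAddr};

// Hypothetical helper, not part of the PR: build the bind address without
// hand-formatting "host:port", so bare IPv6 hosts need no special casing.
fn socket_addr_for(host: &str, port: u16) -> Result<SocketAddr, std::net::AddrParseError> {
    match host.parse::<IpAddr>() {
        // Bare IP (v4 or v6): Display for SocketAddr adds IPv6 brackets itself.
        Ok(ip) => Ok(SocketAddr::new(ip, port)),
        // Not a bare IP (e.g. a hostname): fall back to string parsing,
        // which a real implementation would replace with DNS resolution.
        Err(_) => format!("{host}:{port}").parse(),
    }
}

fn main() {
    println!("{}", socket_addr_for("::1", 8080).unwrap()); // prints [::1]:8080
}
```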

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
/// Check if a port is available for binding
async fn check_port_available(host: &str, port: u16) -> Result<()> {
    let addr: std::net::SocketAddr = format!("{host}:{port}").parse()?;
    match tokio::net::TcpListener::bind(addr).await {
        Ok(listener) => {
            // Successfully bound - close it and return Ok
            drop(listener);
            Ok(())
        }
        Err(e) if e.kind() == std::io::ErrorKind::AddrInUse => {
            bail!("Port {} is already in use", port)
        }
        Err(e) => bail!("Failed to check port {}: {}", port, e),
    }
}
/// Check if a port is available for binding
async fn check_port_available(host: &str, port: u16) -> Result<()> {
    let addr: std::net::SocketAddr = if !host.starts_with('[') && host.contains(':') {
        // Bare IPv6 address - add brackets
        format!("[{host}]:{port}").parse()?
    } else {
        format!("{host}:{port}").parse()?
    };
    match tokio::net::TcpListener::bind(addr).await {
        Ok(listener) => {
            // Successfully bound - close it and return Ok
            drop(listener);
            Ok(())
        }
        Err(e) if e.kind() == std::io::ErrorKind::AddrInUse => {
            bail!("Port {} is already in use", port)
        }
        Err(e) => bail!("Failed to check port {}: {}", port, e),
    }
}

Comment thread src/daemon/mod.rs
Comment on lines +361 to +386
/// Check if a running daemon on this port is our zeroclaw daemon
async fn is_zeroclaw_daemon_running(host: &str, port: u16) -> bool {
    let url = format!("http://{}:{}/health", host, port);
    match reqwest::Client::builder()
        .timeout(Duration::from_secs(2))
        .build()
    {
        Ok(client) => match client.get(&url).send().await {
            Ok(resp) => {
                if resp.status().is_success() {
                    // Check if response looks like our health endpoint
                    if let Ok(json) = resp.json::<serde_json::Value>().await {
                        // Our health endpoint has "status" and "runtime.components"
                        json.get("status").is_some() && json.get("runtime").is_some()
                    } else {
                        false
                    }
                } else {
                    false
                }
            }
            Err(_) => false,
        },
        Err(_) => false,
    }
}
Contributor

⚠️ Potential issue | 🟡 Minor

IPv6 URL formatting issue and minor comment inconsistency.

Similar to check_port_available, the URL http://{host}:{port}/health will be malformed for IPv6 addresses. IPv6 hosts require brackets in URLs (e.g., http://[::1]:8080/health).

Also, the comment on line 373 mentions "runtime.components" but the code only checks for the top-level "runtime" key.

🛠️ Proposed fix
 async fn is_zeroclaw_daemon_running(host: &str, port: u16) -> bool {
-    let url = format!("http://{}:{}/health", host, port);
+    let url = if host.contains(':') {
+        format!("http://[{}]:{}/health", host, port)
+    } else {
+        format!("http://{}:{}/health", host, port)
+    };
     match reqwest::Client::builder()
         .timeout(Duration::from_secs(2))
         .build()
     {
         Ok(client) => match client.get(&url).send().await {
             Ok(resp) => {
                 if resp.status().is_success() {
-                    // Check if response looks like our health endpoint
-                    if let Ok(json) = resp.json::<serde_json::Value>().await {
-                        // Our health endpoint has "status" and "runtime.components"
+                    if let Ok(json) = resp.json::<serde_json::Value>().await {
+                        // Our health endpoint has "status" and "runtime" keys
                         json.get("status").is_some() && json.get("runtime").is_some()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
/// Check if a running daemon on this port is our zeroclaw daemon
async fn is_zeroclaw_daemon_running(host: &str, port: u16) -> bool {
    let url = format!("http://{}:{}/health", host, port);
    match reqwest::Client::builder()
        .timeout(Duration::from_secs(2))
        .build()
    {
        Ok(client) => match client.get(&url).send().await {
            Ok(resp) => {
                if resp.status().is_success() {
                    // Check if response looks like our health endpoint
                    if let Ok(json) = resp.json::<serde_json::Value>().await {
                        // Our health endpoint has "status" and "runtime.components"
                        json.get("status").is_some() && json.get("runtime").is_some()
                    } else {
                        false
                    }
                } else {
                    false
                }
            }
            Err(_) => false,
        },
        Err(_) => false,
    }
}
/// Check if a running daemon on this port is our zeroclaw daemon
async fn is_zeroclaw_daemon_running(host: &str, port: u16) -> bool {
    let url = if host.contains(':') {
        format!("http://[{}]:{}/health", host, port)
    } else {
        format!("http://{}:{}/health", host, port)
    };
    match reqwest::Client::builder()
        .timeout(Duration::from_secs(2))
        .build()
    {
        Ok(client) => match client.get(&url).send().await {
            Ok(resp) => {
                if resp.status().is_success() {
                    if let Ok(json) = resp.json::<serde_json::Value>().await {
                        // Our health endpoint has "status" and "runtime" keys
                        json.get("status").is_some() && json.get("runtime").is_some()
                    } else {
                        false
                    }
                } else {
                    false
                }
            }
            Err(_) => false,
        },
        Err(_) => false,
    }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/daemon/mod.rs` around lines 361 - 386, The is_zeroclaw_daemon_running
function builds a URL that will be malformed for IPv6 hosts and its comment
inaccurately mentions "runtime.components"; update the URL construction to wrap
IPv6 addresses in brackets (reuse the same IPv6-bracketing logic used in
check_port_available) when formatting the address for the health URL (in
is_zeroclaw_daemon_running), and correct the comment to reflect that the code
checks for the top-level "runtime" key (or extend the JSON check to look for
"runtime.components" if intended); ensure the function still uses reqwest client
timeout and JSON parsing as before.
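A minimal std-only sketch of the bracketing (the helper name is hypothetical; the PR does not define one): classify the host via IpAddr so hostnames and IPv4 literals stay untouched and only genuine IPv6 literals get brackets.

```rust
use std::net::IpAddr;

// Hypothetical helper: format the health-check URL, bracketing only real
// IPv6 literals as URL syntax requires (e.g. http://[::1]:8080/health).
fn health_url(host: &str, port: u16) -> String {
    match host.parse::<IpAddr>() {
        Ok(IpAddr::V6(_)) => format!("http://[{host}]:{port}/health"),
        // IPv4 literals and hostnames are safe to splice in directly.
        _ => format!("http://{host}:{port}/health"),
    }
}

fn main() {
    println!("{}", health_url("::1", 8080)); // prints http://[::1]:8080/health
}
```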

Comment thread src/tools/mcp_client.rs
Comment on lines +95 to +97
let notif = JsonRpcRequest::notification("notifications/initialized", json!({}));
// Best effort - ignore errors for notifications
let _ = transport.send_and_recv(&notif).await;
Contributor

⚠️ Potential issue | 🔴 Critical

Notification send can block startup indefinitely

Line 97 uses send_and_recv for a JSON-RPC notification and does not wrap it with timeout. If the server correctly sends no response, startup can hang forever here.

Suggested fix
 let notif = JsonRpcRequest::notification("notifications/initialized", json!({}));
-// Best effort - ignore errors for notifications
-let _ = transport.send_and_recv(&notif).await;
+// Best effort: never block startup on notification path
+let _ = timeout(
+    Duration::from_secs(2),
+    transport.send_and_recv(&notif),
+).await;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
let notif = JsonRpcRequest::notification("notifications/initialized", json!({}));
// Best effort - ignore errors for notifications
let _ = transport.send_and_recv(&notif).await;
use tokio::time::{timeout, Duration};
let notif = JsonRpcRequest::notification("notifications/initialized", json!({}));
// Best effort: never block startup on notification path
let _ = timeout(
    Duration::from_secs(2),
    transport.send_and_recv(&notif),
).await;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tools/mcp_client.rs` around lines 95 - 97, The notification currently
uses JsonRpcRequest::notification assigned to notif and calls
transport.send_and_recv(&notif).await which can block forever for a
notification; change this to a non-blocking send (use
transport.send(&notif).await if available) or wrap the send_and_recv call with a
bounded timeout (e.g. tokio::time::timeout) and ignore timeout/errors for
best-effort behavior so startup cannot hang; update the code path around notif
and transport.send_and_recv to use the chosen non-blocking approach and ensure
errors/timeouts are discarded.
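The same "never wait unbounded on a best-effort call" pattern, sketched with std threads so it stands alone (in the async code above, tokio::time::timeout plays the role recv_timeout plays here):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run `op` on a worker thread and wait at most `limit` for its result.
// `None` means the deadline passed and the caller simply moves on.
fn bounded_wait<T: Send + 'static>(
    limit: Duration,
    op: impl FnOnce() -> T + Send + 'static,
) -> Option<T> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // The receiver may be gone if we timed out; ignore the send error.
        let _ = tx.send(op());
    });
    rx.recv_timeout(limit).ok()
}

fn main() {
    // A peer that never answers must not stall startup past the deadline.
    let late = bounded_wait(Duration::from_millis(50), || {
        thread::sleep(Duration::from_secs(5));
        "reply"
    });
    println!("{late:?}"); // prints None
}
```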

Comment thread src/tools/mcp_client.rs
Comment on lines +222 to +224
let prefixed = format!("{}__{}", config.name, tool.name);
tool_index.insert(prefixed, (server_idx, tool.name.clone()));
}
Contributor

⚠️ Potential issue | 🟠 Major

Duplicate MCP tool names are silently overwritten

tool_index.insert(prefixed, ...) will overwrite an existing entry with the same prefixed key. Duplicate server_name + tool_name combinations become silently shadowed and calls can route to the wrong server/tool pair.

Suggested fix
 let prefixed = format!("{}__{}", config.name, tool.name);
-tool_index.insert(prefixed, (server_idx, tool.name.clone()));
+if tool_index.contains_key(&prefixed) {
+    tracing::warn!(
+        "Duplicate MCP tool registration `{}` from server `{}`; keeping first registration",
+        prefixed,
+        config.name
+    );
+} else {
+    tool_index.insert(prefixed, (server_idx, tool.name.clone()));
+}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
let prefixed = format!("{}__{}", config.name, tool.name);
tool_index.insert(prefixed, (server_idx, tool.name.clone()));
}
let prefixed = format!("{}__{}", config.name, tool.name);
if tool_index.contains_key(&prefixed) {
    tracing::warn!(
        "Duplicate MCP tool registration `{}` from server `{}`; keeping first registration",
        prefixed,
        config.name
    );
} else {
    tool_index.insert(prefixed, (server_idx, tool.name.clone()));
}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tools/mcp_client.rs` around lines 222 - 224, The code silently overwrites
duplicate keys when inserting into tool_index using the prefixed key built from
config.name and tool.name; update the logic around prefixed (constructed from
config.name and tool.name) and tool_index.insert to detect existing entries for
the same key (e.g., via tool_index.contains_key or entry API) and handle
duplicates explicitly: either return/propagate an error or record multiple
(server_idx, tool.name) targets (e.g., change the map value to a Vec and push),
and include a clear log message referencing prefixed, server_idx, config.name,
and tool.name so duplicates are not silently shadowed.
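The collision guard can also be phrased with the entry API, which makes "first registration wins" explicit. A sketch only; the map type mirrors the tool_index described above, not the actual code:

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

// First registration wins; returns false when a duplicate is refused so
// the caller can log the collision instead of silently shadowing a tool.
fn register_tool(
    index: &mut HashMap<String, (usize, String)>,
    server: &str,
    server_idx: usize,
    tool: &str,
) -> bool {
    match index.entry(format!("{server}__{tool}")) {
        Entry::Occupied(_) => false,
        Entry::Vacant(slot) => {
            slot.insert((server_idx, tool.to_string()));
            true
        }
    }
}

fn main() {
    let mut index = HashMap::new();
    assert!(register_tool(&mut index, "fs", 0, "read"));
    assert!(!register_tool(&mut index, "fs", 1, "read")); // duplicate refused
    println!("{:?}", index["fs__read"]); // prints (0, "read")
}
```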

Comment thread src/tools/mcp_tool.rs
Comment on lines +54 to +56
async fn execute(&self, args: serde_json::Value) -> anyhow::Result<ToolResult> {
    match self.registry.call_tool(&self.prefixed_name, args).await {
        Ok(output) => Ok(ToolResult {
Contributor

⚠️ Potential issue | 🟠 Major

Validate MCP tool arguments before dispatch

execute forwards args directly to the server without validating against input_schema. This bypasses the tool-layer contract and pushes malformed input to runtime/remote execution.

Suggested fix (minimum guard)
 async fn execute(&self, args: serde_json::Value) -> anyhow::Result<ToolResult> {
+    if !args.is_object() {
+        return Ok(ToolResult {
+            success: false,
+            output: String::new(),
+            error: Some("invalid arguments: expected JSON object".to_string()),
+        });
+    }
+
+    // TODO: validate against self.input_schema before dispatch.
     match self.registry.call_tool(&self.prefixed_name, args).await {
         Ok(output) => Ok(ToolResult {
             success: true,
             output,
             error: None,
As per coding guidelines: `src/tools/**/*.rs`: Implement `Tool` in `src/tools/` with strict parameter schema; validate and sanitize all inputs; return structured `ToolResult` and avoid panics in runtime path.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
async fn execute(&self, args: serde_json::Value) -> anyhow::Result<ToolResult> {
    match self.registry.call_tool(&self.prefixed_name, args).await {
        Ok(output) => Ok(ToolResult {
async fn execute(&self, args: serde_json::Value) -> anyhow::Result<ToolResult> {
    if !args.is_object() {
        return Ok(ToolResult {
            success: false,
            output: String::new(),
            error: Some("invalid arguments: expected JSON object".to_string()),
        });
    }
    // TODO: validate against self.input_schema before dispatch.
    match self.registry.call_tool(&self.prefixed_name, args).await {
        Ok(output) => Ok(ToolResult {
            success: true,
            output,
            error: None,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tools/mcp_tool.rs` around lines 54 - 56, The execute method currently
forwards args directly to self.registry.call_tool; validate and sanitize args
against this tool's input_schema before dispatching by parsing/validating the
incoming serde_json::Value (e.g., attempt to deserialize into the expected input
shape or run input_schema.validate) and return an appropriate anyhow::Error (or
a structured ToolResult error) if validation fails; only call
self.registry.call_tool(&self.prefixed_name, args) when validation succeeds, and
ensure no panics occur in execute so runtime/remote execution never receives
malformed input.

Comment on lines +80 to +88
async fn recv_raw(&mut self) -> Result<String> {
    let line = self
        .stdout_lines
        .next_line()
        .await?
        .ok_or_else(|| anyhow!("MCP server closed stdout"))?;
    if line.len() > MAX_LINE_BYTES {
        bail!("MCP response too large: {} bytes", line.len());
    }
Contributor

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's find and examine the mcp_transport.rs file
find . -name "mcp_transport.rs" -type f

Repository: zeroclaw-labs/zeroclaw

Length of output: 94


🏁 Script executed:

# Check the Cargo.toml to understand dependencies
find . -name "Cargo.toml" -type f | head -5

Repository: zeroclaw-labs/zeroclaw

Length of output: 223


🏁 Script executed:

# Examine the mcp_transport.rs file, focusing on lines 80-88 and surrounding context
cat -n ./src/tools/mcp_transport.rs | sed -n '60,110p'

Repository: zeroclaw-labs/zeroclaw

Length of output: 2029


🏁 Script executed:

# Check the main Cargo.toml for dependencies
cat ./Cargo.toml | grep -A 50 "\[dependencies\]"

Repository: zeroclaw-labs/zeroclaw

Length of output: 2074


🏁 Script executed:

# Look at the StdioTransport struct definition and initialization
cat -n ./src/tools/mcp_transport.rs | sed -n '1,70p'

Repository: zeroclaw-labs/zeroclaw

Length of output: 2922


🏁 Script executed:

# Check what MAX_LINE_BYTES is defined as
rg "MAX_LINE_BYTES" ./src/tools/mcp_transport.rs -B 2 -A 2

Repository: zeroclaw-labs/zeroclaw

Length of output: 417


🏁 Script executed:

# Let me check the exact implementation around lines 80-90 more carefully
cat -n ./src/tools/mcp_transport.rs | sed -n '75,95p'

Repository: zeroclaw-labs/zeroclaw

Length of output: 894


🏁 Script executed:

# Check if there's an extension trait being used for Lines
rg "impl.*Lines" ./src/tools/ -A 5

Repository: zeroclaw-labs/zeroclaw

Length of output: 48


🏁 Script executed:

# Let's also check the tokio documentation for Lines and next_line behavior
web_search "tokio::io::Lines next_line method tokio 1.42"

Repository: zeroclaw-labs/zeroclaw

Length of output: 116


🌐 Web query:

tokio::io::Lines next_line method buffer limit tokio 1.42

💡 Result:

In Tokio 1.42, tokio::io::Lines::next_line() does not enforce a maximum line length / buffer limit. Internally it keeps a String plus a Vec<u8> and keeps appending data until it finds \n (via read_line_internal → read_until_internal with delimiter b'\n'), so a line can grow until you run out of memory if no newline arrives. [1] [2]

If you need a hard cap for untrusted input, use a codec with an explicit limit, e.g. tokio_util::codec::LinesCodec::new_with_max_length(max_len), which is designed to reject/handle overlong lines. [3]

Sources: [1] https://docs.rs/tokio/1.42.0/src/tokio/io/util/lines.rs.html, [2] https://docs.rs/crate/tokio/1.42.0/source/src/io/util/read_line.rs, [3] https://docs.rs/tokio-util/latest/tokio_util/codec/struct.LinesCodec.html


🏁 Script executed:

# Check if tokio-util has LinesCodec with max_length feature
rg "LinesCodec" ./Cargo.toml -A 5

Repository: zeroclaw-labs/zeroclaw

Length of output: 48


🏁 Script executed:

# Let's check what version of tokio-util is available and its features
cat ./Cargo.toml | grep -A 5 "tokio-util"

Repository: zeroclaw-labs/zeroclaw

Length of output: 386


🌐 Web query:

tokio_util codec FramedRead LinesCodec new_with_max_length

💡 Result:

tokio_util::codec::LinesCodec::new_with_max_length(max_length: usize) constructs a LinesCodec that splits input on \n, but caps the maximum accepted line length. (docs.rs)

  • LinesCodec::new() has no upper bound (can buffer unbounded data until \n). (docs.rs)
  • With new_with_max_length, decode() returns a LinesCodecError if a line exceeds the limit; after that, it discards data until the next newline, yielding None while discarding, then resumes normal decoding. (docs.rs)
  • tokio_util::codec (including FramedRead/LinesCodec) is behind the crate feature codec. (docs.rs)

Typical use with FramedRead

use tokio_stream::StreamExt;
use tokio_util::codec::{FramedRead, LinesCodec};

let input = b"ok\nthis line is way too long...\n";
let codec = LinesCodec::new_with_max_length(8);

let mut reader = FramedRead::new(&input[..], codec);

while let Some(item) = reader.next().await {
    match item {
        Ok(line) => println!("line: {line}"),
        Err(e) => {
            eprintln!("codec error: {e}");
            // after this, the overlong line is discarded up to '\n'
        }
    }
}

FramedRead adapts an AsyncRead into a Stream of decoded frames using the codec you provide. (docs.rs)

Citations:


🏁 Script executed:

# Let's verify the exact behavior by checking if there are any existing tests or examples
# showing how the transport is being used
rg "recv_raw\|StdioTransport" ./src/tools/ -B 2 -A 5

Repository: zeroclaw-labs/zeroclaw

Length of output: 48


🏁 Script executed:

# Check if there's any pattern of using FramedRead elsewhere in the codebase
rg "FramedRead" ./src/ -B 2 -A 2

Repository: zeroclaw-labs/zeroclaw

Length of output: 48


🏁 Script executed:

# Let's check the full file to see if there are HTTP and SSE transport implementations
# mentioned in the original review's scratchpad
wc -l ./src/tools/mcp_transport.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 98


🏁 Script executed:

# Check lines around 147-150 for HTTP transport
cat -n ./src/tools/mcp_transport.rs | sed -n '140,160p'

Repository: zeroclaw-labs/zeroclaw

Length of output: 830


🏁 Script executed:

# Check lines around 209-233 for SSE transport
cat -n ./src/tools/mcp_transport.rs | sed -n '200,240p'

Repository: zeroclaw-labs/zeroclaw

Length of output: 1812


🏁 Script executed:

# Let's see the full HTTP transport struct and implementation
cat -n ./src/tools/mcp_transport.rs | sed -n '112,175p'

Repository: zeroclaw-labs/zeroclaw

Length of output: 2398


🏁 Script executed:

# Check if there are any uses or tests of these transports
rg "McpTransportConn\|StdioTransport\|HttpTransport\|SseTransport" ./src/ --type rs | grep -v "^./src/tools/mcp_transport.rs"

Repository: zeroclaw-labs/zeroclaw

Length of output: 92


🏁 Script executed:

# Let's check the reqwest documentation behavior for Content-Type with body()
web_search "reqwest post body Content-Type application/json default behavior"

Repository: zeroclaw-labs/zeroclaw

Length of output: 116


🏁 Script executed:

# Fix the rg search for McpTransportConn usages
rg "McpTransportConn|StdioTransport|HttpTransport|SseTransport" ./src --include="*.rs" | grep -v "^./src/tools/mcp_transport.rs"

Repository: zeroclaw-labs/zeroclaw

Length of output: 507


🌐 Web query:

reqwest post body Content-Type header default application/json behavior

💡 Result:

In Rust reqwest, the default Content-Type depends on which builder method you use:

  • .body(...): only sets the request body; it does not automatically set a Content-Type header. (docs.rs)
  • .json(&value): serializes to JSON and sets Content-Type: application/json only if the header isn’t already present (entry(CONTENT_TYPE).or_insert_with(...)). It does not add a charset parameter. (docs.rs)
  • .form(&value): sets Content-Type: application/x-www-form-urlencoded (if not already present). (docs.rs)
  • .multipart(form): sets Content-Type: multipart/form-data; boundary=... (and Content-Length when computable). (docs.rs)

So: there’s no global default of application/json for POST bodiesβ€”use .json(...) (or set the header yourself when using .body(...)).

Citations:


🏁 Script executed:

# Search for McpTransportConn usage more carefully
rg "McpTransportConn" ./src

Repository: zeroclaw-labs/zeroclaw

Length of output: 632


🏁 Script executed:

# Check if tokio-util codec feature is used anywhere
rg "tokio_util" ./src -A 2 -B 2

Repository: zeroclaw-labs/zeroclaw

Length of output: 935


🏁 Script executed:

# Check the tokio_util dependency more carefully
cat ./Cargo.toml | grep -A 3 "tokio-util"

Repository: zeroclaw-labs/zeroclaw

Length of output: 249


🏁 Script executed:

# Verify tokio-util codec feature is available and compatible
web_search "tokio-util 0.7 codec feature FramedRead LinesCodec"

Repository: zeroclaw-labs/zeroclaw

Length of output: 116


🏁 Script executed:

# Check if there are any other references to Lines in the codebase that might conflict
rg "Lines<" ./src --include="*.rs"

Repository: zeroclaw-labs/zeroclaw

Length of output: 507


🌐 Web query:

tokio-util 0.7 codec feature FramedRead LinesCodec max_length

💡 Result:

FramedRead, LinesCodec, and LinesCodec::new_with_max_length are all behind tokio-util's codec feature.

Cargo.toml

[dependencies]
tokio-util = { version = "0.7", features = ["codec"] }
tokio-stream = "0.1" # if you want StreamExt::next()

("Available on crate feature codec only." for both the codec module and FramedRead/LinesCodec.) [1][2][3]

Use LinesCodec with a max line length

use tokio_util::codec::{FramedRead, LinesCodec};

let max = 8 * 1024; // bytes/chars in the buffered line (see docs)
let codec = LinesCodec::new_with_max_length(max);

let reader = /* some AsyncRead */;
let framed = FramedRead::new(reader, codec);
  • LinesCodec::new() has no upper bound (effectively usize::MAX) and is a potential DoS/memory-risk with untrusted input. [2]
  • LinesCodec::new_with_max_length(max_length) enforces a limit; if a line exceeds it, decode returns a LinesCodecError and then discards until the next newline. [2]
  • You can check the configured limit with LinesCodec::max_length(). [2]

Sources: tokio-util codec module docs + LinesCodec + FramedRead API docs. [1][2][3]


🏁 Script executed:

# Check for Lines usage in source
rg "Lines<" ./src

Repository: zeroclaw-labs/zeroclaw

Length of output: 170


🏁 Script executed:

# Let's verify the exact diff would work by checking FramedRead trait bounds
# and looking at what next() returns on StreamExt
rg "async_trait|impl.*Stream" ./src/tools/mcp_transport.rs -A 3

Repository: zeroclaw-labs/zeroclaw

Length of output: 966


Unbounded line buffering in stdio receive path allows memory exhaustion.

At line 83, next_line() has no maximum length enforcement; it will buffer an arbitrarily large line in memory until a newline is found or the connection closes, enabling a misbehaving MCP server to exhaust memory.

The suggested fix using tokio_util::codec::FramedRead with LinesCodec::new_with_max_length() is the correct approach. However, the codec feature must be enabled on the tokio-util dependency first.

Required changes

Update Cargo.toml:

-tokio-util = { version = "0.7", default-features = false }
+tokio-util = { version = "0.7", default-features = false, features = ["codec"] }

Then update src/tools/mcp_transport.rs:

-use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};
+use tokio::io::AsyncWriteExt;
+use tokio_util::codec::{FramedRead, LinesCodec};
+use tokio_stream::StreamExt;

 pub struct StdioTransport {
     _child: Child,
     stdin: tokio::process::ChildStdin,
-    stdout_lines: tokio::io::Lines<BufReader<tokio::process::ChildStdout>>,
+    stdout_lines: FramedRead<tokio::process::ChildStdout, LinesCodec>,
 }

 ...
-        let stdout_lines = BufReader::new(stdout).lines();
+        let stdout_lines = FramedRead::new(stdout, LinesCodec::new_with_max_length(MAX_LINE_BYTES));
 ...
-        let line = self
-            .stdout_lines
-            .next_line()
-            .await?
-            .ok_or_else(|| anyhow!("MCP server closed stdout"))?;
-        if line.len() > MAX_LINE_BYTES {
-            bail!("MCP response too large: {} bytes", line.len());
-        }
+        let line = self
+            .stdout_lines
+            .next()
+            .await
+            .ok_or_else(|| anyhow!("MCP server closed stdout"))?
+            .context("failed reading MCP response line")?;
         Ok(line)
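The bounded-read idea behind `LinesCodec::new_with_max_length()` can be illustrated with a std-only helper. `read_line_bounded` is a hypothetical name for this sketch; the actual fix above uses tokio-util's `FramedRead`, but the error-on-oversize behavior is the same: stop buffering as soon as the cap is crossed instead of holding an unbounded line in memory.

```rust
use std::io::{self, BufRead, ErrorKind};

// Std-only sketch of a length-capped line read: reject a line the moment
// the buffered bytes exceed `max`, rather than after the newline arrives.
fn read_line_bounded<R: BufRead>(reader: &mut R, max: usize) -> io::Result<Option<String>> {
    let mut buf = Vec::new();
    loop {
        let (consumed, found_newline) = {
            let chunk = reader.fill_buf()?;
            if chunk.is_empty() {
                // EOF: hand back any partial line, or None if nothing was read.
                return Ok(if buf.is_empty() {
                    None
                } else {
                    Some(String::from_utf8_lossy(&buf).into_owned())
                });
            }
            match chunk.iter().position(|&b| b == b'\n') {
                Some(pos) => {
                    buf.extend_from_slice(&chunk[..pos]);
                    (pos + 1, true)
                }
                None => {
                    buf.extend_from_slice(chunk);
                    (chunk.len(), false)
                }
            }
        };
        reader.consume(consumed);
        if buf.len() > max {
            return Err(io::Error::new(ErrorKind::InvalidData, "line too long"));
        }
        if found_newline {
            return Ok(Some(String::from_utf8_lossy(&buf).into_owned()));
        }
    }
}

fn main() {
    let mut ok = io::Cursor::new(b"{\"jsonrpc\":\"2.0\"}\n".to_vec());
    assert_eq!(
        read_line_bounded(&mut ok, 64).unwrap().as_deref(),
        Some("{\"jsonrpc\":\"2.0\"}")
    );
    // A 1 KiB line with no newline trips the cap instead of buffering forever.
    let mut huge = io::Cursor::new(vec![b'x'; 1024]);
    assert!(read_line_bounded(&mut huge, 64).is_err());
    println!("ok");
}
```

Note the contrast with the original code, which checked `line.len()` only after `next_line()` had already buffered the whole line.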
πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tools/mcp_transport.rs` around lines 80 - 88, The recv_raw method
currently reads lines via stdout_lines.next_line() which can buffer arbitrarily
large input; change the implementation to use tokio_util::codec::FramedRead with
tokio_util::codec::LinesCodec::new_with_max_length(MAX_LINE_BYTES) (or an
appropriate constant) to enforce a max line length and avoid memory exhaustion,
replace uses of stdout_lines.next_line() with the framed read stream and handle
the Result<Option<String>, _> accordingly in recv_raw; also enable the "codec"
feature for tokio-util in Cargo.toml so LinesCodec is available.

Comment on lines +147 to +150
let mut req = self.client.post(&self.url).body(body);
for (key, value) in &self.headers {
req = req.header(key, value);
}
Contributor

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -name "mcp_transport.rs" -type f

Repository: zeroclaw-labs/zeroclaw

Length of output: 94


🏁 Script executed:

head -200 src/tools/mcp_transport.rs | tail -100

Repository: zeroclaw-labs/zeroclaw

Length of output: 2975


🏁 Script executed:

cat -n src/tools/mcp_transport.rs | sed -n '140,160p'

Repository: zeroclaw-labs/zeroclaw

Length of output: 830


🏁 Script executed:

cat -n src/tools/mcp_transport.rs | sed -n '180,220p'

Repository: zeroclaw-labs/zeroclaw

Length of output: 1697


🏁 Script executed:

rg -n "Content-Type|application/json" src/tools/mcp_transport.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 126


🏁 Script executed:

find . -name "*test*" -o -name "*tests.rs" | xargs grep -l "HttpTransport\|mcp_transport" 2>/dev/null

Repository: zeroclaw-labs/zeroclaw

Length of output: 50


🏁 Script executed:

grep -n "\.json\|\.body" src/tools/mcp_transport.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 159


🏁 Script executed:

web_search
MCP Model Context Protocol HTTP binding Content-Type requirements

Repository: zeroclaw-labs/zeroclaw

Length of output: 158


🌐 Web query:

MCP Model Context Protocol HTTP binding specification Content-Type JSON requirements

πŸ’‘ Result:

In the MCP Streamable HTTP transport (spec revision 2025-11-25):

  • Payload format/encoding: MCP messages are JSON-RPC, and JSON-RPC messages MUST be UTF-8 encoded. [1]
  • POST requests (client β†’ server):
    • Client MUST use HTTP POST. [1]
    • Client MUST send an Accept header that includes both application/json and text/event-stream. [1]
    • The POST body MUST be a single JSON-RPC message (request/notification/response). [1]
    • The spec section above does not explicitly mandate a Content-Type request header value; however, MCP examples commonly use Content-Type: application/json for the POST. [2]
  • POST responses (server β†’ client): If the POST contained a JSON-RPC request, the server MUST respond with either:
    • Content-Type: text/event-stream (SSE stream), or
    • Content-Type: application/json (a single JSON object response). [1]

Sources: MCP β€œTransports” (Streamable HTTP) [1]; MCP transport examples showing Content-Type: application/json on POST [2].

References:
[1] (modelcontextprotocol.io)
[2] (modelcontextprotocol.io)
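Those header requirements can be captured in a small helper. `mcp_post_headers` is an illustrative name for this sketch, not code from the PR, which builds headers through reqwest.

```rust
// Headers for a client -> server POST under the MCP Streamable HTTP
// transport, per the spec excerpt above: Accept MUST list both
// application/json and text/event-stream; Content-Type: application/json
// follows the spec's examples (not an explicit MUST in the cited section).
fn mcp_post_headers() -> Vec<(&'static str, &'static str)> {
    vec![
        ("Accept", "application/json, text/event-stream"),
        ("Content-Type", "application/json"),
    ]
}

fn main() {
    let headers = mcp_post_headers();
    let accept = headers.iter().find(|(k, _)| *k == "Accept").unwrap().1;
    assert!(accept.contains("application/json"));
    assert!(accept.contains("text/event-stream"));
    println!("ok");
}
```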


Add explicit Content-Type header for JSON POST requests to MCP server.

HttpTransport at line 147 sends JSON-RPC requests without Content-Type: application/json, which may break strict MCP HTTP servers. SseTransport in the same file correctly sets this header. Use reqwest's .json() method instead of manual serialization:

Suggested fix
-        let body = serde_json::to_string(request)?;
-
-        let mut req = self.client.post(&self.url).body(body);
+        let mut req = self
+            .client
+            .post(&self.url)
+            .json(request);
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
let mut req = self.client.post(&self.url).body(body);
for (key, value) in &self.headers {
req = req.header(key, value);
}
let mut req = self
.client
.post(&self.url)
.json(request);
for (key, value) in &self.headers {
req = req.header(key, value);
}
πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tools/mcp_transport.rs` around lines 147 - 150, The HttpTransport
request-building code is sending JSON-RPC without Content-Type and should mirror
SseTransport: change the post body construction in the HttpTransport send path
(the code creating req via self.client.post(&self.url).body(body)) to use
reqwest's .json(&payload) or otherwise add header("Content-Type",
"application/json") before sending; locate the method on the HttpTransport impl
that constructs req (the block iterating for (key, value) in &self.headers) and
replace the manual serialized body with .json(&payload) or append the explicit
header to ensure strict MCP servers accept the request.

Comment on lines +209 to +233
// For SSE, we POST the request and the response comes via SSE stream.
// Simplified implementation: treat as HTTP for now, proper SSE would
// maintain a persistent event stream.
let body = serde_json::to_string(request)?;
let url = format!("{}/message", self.base_url.trim_end_matches('/'));

let mut req = self
.client
.post(&url)
.body(body)
.header("Content-Type", "application/json");
for (key, value) in &self.headers {
req = req.header(key, value);
}

let resp = req.send().await.context("SSE POST to MCP server failed")?;

if !resp.status().is_success() {
bail!("MCP server returned HTTP {}", resp.status());
}

// For now, parse response directly. Full SSE would read from event stream.
let resp_text = resp.text().await.context("failed to read SSE response")?;
let mcp_resp: JsonRpcResponse = serde_json::from_str(&resp_text)
.with_context(|| format!("invalid JSON-RPC response: {}", resp_text))?;
Contributor

⚠️ Potential issue | πŸ”΄ Critical

🧩 Analysis chain

🏁 Script executed:

# First, locate and read the relevant file
find . -name "mcp_transport.rs" -type f

Repository: zeroclaw-labs/zeroclaw

Length of output: 94


🏁 Script executed:

# Read the file to see the context around lines 209-233
wc -l src/tools/mcp_transport.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 96


🏁 Script executed:

# Read the full implementation to understand the context
sed -n '1,50p' src/tools/mcp_transport.rs
sed -n '180,260p' src/tools/mcp_transport.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 4471


🏁 Script executed:

# Check for SseTransport struct definition and any SSE-related comments/todos
rg -A 10 -B 5 "struct SseTransport" src/tools/mcp_transport.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 540


🏁 Script executed:

# Search for SSE-related comments throughout the file
rg -i "sse|event.stream" src/tools/mcp_transport.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 1218


SSE transport lacks event-stream handling and falls back to HTTP silently.

Lines 209-233 treat SSE as HTTP: the implementation POSTs a request and reads the immediate response body, never consuming an SSE event stream. The event_source field (marked #[allow(dead_code)]) and the explicit comments ("treat as HTTP for now", "Full SSE would read from event stream") confirm this is incomplete. Against servers that deliver replies only over the SSE event stream, this path fails silently instead of reporting the unsupported mode.

Per coding guidelines for src/tools/**/*.rs: Prefer explicit bail!/errors for unsupported or unsafe states; never silently broaden permissions/capabilities.

Replace with:

 impl McpTransportConn for SseTransport {
     async fn send_and_recv(&mut self, request: &JsonRpcRequest) -> Result<JsonRpcResponse> {
-        // For SSE, we POST the request and the response comes via SSE stream.
-        // Simplified implementation: treat as HTTP for now, proper SSE would
-        // maintain a persistent event stream.
-        let body = serde_json::to_string(request)?;
-        let url = format!("{}/message", self.base_url.trim_end_matches('/'));
-
-        let mut req = self
-            .client
-            .post(&url)
-            .body(body)
-            .header("Content-Type", "application/json");
-        for (key, value) in &self.headers {
-            req = req.header(key, value);
-        }
-
-        let resp = req.send().await.context("SSE POST to MCP server failed")?;
-
-        if !resp.status().is_success() {
-            bail!("MCP server returned HTTP {}", resp.status());
-        }
-
-        // For now, parse response directly. Full SSE would read from event stream.
-        let resp_text = resp.text().await.context("failed to read SSE response")?;
-        let mcp_resp: JsonRpcResponse = serde_json::from_str(&resp_text)
-            .with_context(|| format!("invalid JSON-RPC response: {}", resp_text))?;
-
-        Ok(mcp_resp)
+        let _ = request;
+        bail!("SSE transport event-stream response handling is not implemented; use stdio or http")
     }
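For reference, the event-stream parsing a complete SSE transport would eventually need can be sketched as a pure function over the raw stream text. `sse_data_payloads` is hypothetical, handles only the `data:` field of the SSE wire format (events separated by blank lines, multi-line data joined with newlines), and is not part of the PR; the suggestion above is to bail until real handling exists.

```rust
// Minimal sketch of SSE event-stream parsing: collect `data:` lines per
// event and dispatch on the blank line that terminates each event.
// Other fields (event:, id:, retry:, comment lines) are ignored here.
fn sse_data_payloads(stream: &str) -> Vec<String> {
    let mut events = Vec::new();
    let mut current: Vec<&str> = Vec::new();
    for line in stream.lines() {
        if line.is_empty() {
            // Blank line terminates the event; dispatch accumulated data.
            if !current.is_empty() {
                events.push(current.join("\n"));
                current.clear();
            }
        } else if let Some(rest) = line.strip_prefix("data:") {
            // A single optional leading space after the colon is stripped.
            current.push(rest.strip_prefix(' ').unwrap_or(rest));
        }
    }
    if !current.is_empty() {
        events.push(current.join("\n"));
    }
    events
}

fn main() {
    let stream =
        "event: message\ndata: {\"jsonrpc\":\"2.0\",\"id\":1}\n\ndata: part1\ndata: part2\n\n";
    let payloads = sse_data_payloads(stream);
    assert_eq!(payloads.len(), 2);
    assert_eq!(payloads[0], "{\"jsonrpc\":\"2.0\",\"id\":1}");
    assert_eq!(payloads[1], "part1\npart2");
    println!("ok");
}
```

A real transport would feed chunks from the persistent GET stream into a parser like this and route each JSON-RPC payload back to the pending request.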
πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tools/mcp_transport.rs` around lines 209 - 233, The SSE branch currently
silently falls back to plain HTTP by POSTing and reading an immediate body;
instead detect when SSE/event stream behavior is required (reference the
event_source field and the SSE handling block that builds the POST on
self.client with self.base_url and self.headers) and return an explicit error
(e.g., bail! or Err with a contextual message) stating "SSE transport not
implemented" or "server uses SSE event stream; SSE client not supported" rather
than attempting HTTP; update the SSE handling code path that parses
JsonRpcResponse from resp.text() to short-circuit with that error so callers get
a clear failure instead of silent fallback.

@theonlyhennygod theonlyhennygod merged commit 4fa8206 into main Feb 27, 2026
3 of 7 checks passed
@theonlyhennygod theonlyhennygod deleted the issue-1380-mcp-main branch February 27, 2026 03:14
@github-actions

Thanks for contributing to ZeroClaw.

For faster review, please ensure:

  • PR template sections are fully completed
  • cargo fmt --all -- --check, cargo clippy --all-targets -- -D warnings, and cargo test are included
  • If automation/agents were used heavily, add brief workflow notes
  • Scope is focused (prefer one concern per PR)

See CONTRIBUTING.md and docs/pr-workflow.md for full collaboration rules.

@github-actions

PR intake checks failed (blocking)

Fast safe checks found blocking safety issues:

  • Missing Linear issue key reference (RMN-<id>, CDV-<id>, or COM-<id>) in PR title/body.
  • Missing required PR template sections: ## Validation Evidence (required), ## Security Impact (required), ## Privacy and Data Hygiene (required), ## Rollback Plan (required)
  • Incomplete required PR template fields: summary problem, summary why it matters, summary what changed, validation commands, security risk/mitigation, privacy status, rollback plan

Action items:

  1. Complete required PR template sections/fields.
  2. Link this PR to exactly one active Linear issue key (RMN-xxx/CDV-xxx/COM-xxx).
  3. Remove tabs, trailing whitespace, and merge conflict markers from added lines.
  4. Re-run local checks before pushing:
    • ./scripts/ci/rust_quality_gate.sh
    • ./scripts/ci/rust_strict_delta_gate.sh
    • ./scripts/ci/docs_quality_gate.sh

Detected Linear keys: none

Run logs: https://github.com/zeroclaw-labs/zeroclaw/actions/runs/22471191383

Detected blocking line issues (sample):

  • none

Detected advisory line issues (sample):

  • none

@github-actions github-actions bot added labels dependencies (dependency manifest/lock/policy changed), channel (src/channels/** changed), config (src/config/** changed), daemon (src/daemon/** changed), onboard (src/onboard/** changed), tool (src/tools/** changed), size: XL (>1000 non-doc changed lines), risk: high (security/runtime/gateway/tools/workflows), distinguished contributor (50+ merged PRs), daemon: core (daemon core files changed), config: core (config core files changed), onboard: wizard (onboard/wizard changed), channel: core (channel core files changed), provider: ollama (provider/ollama changed); and removed labels channel, config, daemon, onboard on Feb 27, 2026
dgnsrekt added a commit to dgnsrekt/zeroclaw that referenced this pull request Mar 1, 2026
Merges upstream main (4e70abf) into sammy to incorporate the official
MCP (Model Context Protocol) client implementation from PR zeroclaw-labs#2013.

Key additions:
- src/tools/mcp_client.rs β€” stdio/HTTP/SSE MCP client
- src/tools/mcp_protocol.rs β€” JSON-RPC protocol layer
- src/tools/mcp_transport.rs β€” transport abstraction
- src/tools/mcp_tool.rs β€” Tool trait integration
- src/tools/xlsx_read.rs β€” Excel reader tool
- Various agent loop, skills, and config improvements

sammy-specific changes (docker-compose.yml, .gitignore) had no conflicts.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@gh-xj gh-xj mentioned this pull request Mar 2, 2026
5 tasks
@richin13
Contributor

Is this planned to be added to 'master'?


Labels

channel: core Auto module: channel core files changed. config: core Auto module: config core files changed. daemon: core Auto module: daemon core files changed. dependencies Auto scope: dependency manifest/lock/policy changed. distinguished contributor Contributor with 50+ merged PRs. onboard: wizard Auto module: onboard/wizard changed. provider: ollama Auto module: provider/ollama changed. risk: high Auto risk: security/runtime/gateway/tools/workflows. size: XL Auto size: >1000 non-doc changed lines. tool Auto scope: src/tools/** changed.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Feature]: Externel MCP Server Support

2 participants