
feat(mcp): add native MCP client (Model Context Protocol) #1843

Closed

dwillitzer wants to merge 0 commits into zeroclaw-labs:main from dwillitzer:feat/mcp-native-client

Conversation


@dwillitzer dwillitzer commented Feb 25, 2026

Summary

Adds native MCP (Model Context Protocol) client for external tool integration via stdio, HTTP, and SSE transports.

Problem: ZeroClaw cannot use tools from external MCP servers (e.g., Claude Desktop, IDE extensions).

Why it matters: MCP is an emerging standard for tool interoperability. Native support enables ecosystem integration.

What changed:

  • src/tools/mcp_client.rs: Connection management, tool discovery
  • src/tools/mcp_protocol.rs: JSON-RPC 2.0 message types
  • src/tools/mcp_transport.rs: Stdio, HTTP, SSE implementations
  • src/tools/mcp_tool.rs: Wraps MCP tools as native Tool trait
  • Config schema: McpConfig, McpServerConfig types
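For context, a hedged sketch of what the resulting config surface might look like; field names are inferred from this PR's description and review threads (name, transport, command, url, env, headers, tool_timeout_secs) and may not match the final schema exactly:

```toml
# Hypothetical example - exact keys may differ from the merged schema.
[mcp]
enabled = true   # set to false to disable the feature (see Rollback Plan)

# One entry per external MCP server (McpServerConfig).
[[mcp.servers]]
name = "files"                       # tools are exposed as "files__<tool>"
transport = "stdio"                  # McpTransport: stdio | http | sse
command = "my-mcp-server"            # executable spawned for stdio transport
env = { LOG_LEVEL = "info" }         # extra environment for the child process
tool_timeout_secs = 180              # default 180s, hard-capped at 600s

[[mcp.servers]]
name = "remote"
transport = "http"
url = "https://example.com/mcp"
headers = { Authorization = "Bearer example-token" }
```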

Linked Issue

Linear: CDV-XXX (to be assigned by maintainer)

Security Impact (required): MCP Protocol

New Attack Surface

  • Stdio transport: Spawns arbitrary executables
  • HTTP/SSE transport: Connects to remote servers
  • Timeout: 30s init, 600s max tool call
  • Auth: Configurable headers per server
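The init/call timeouts above imply a clamp when resolving per-server values; a minimal sketch (the constant values come from this PR, while the helper function is hypothetical and also enforces a 1-second floor so a configured 0 cannot disable calls):

```rust
const DEFAULT_TOOL_TIMEOUT_SECS: u64 = 180;
const MAX_TOOL_TIMEOUT_SECS: u64 = 600;

/// Resolve the effective tool-call timeout for one server (hypothetical helper).
/// Unset -> default; any configured value is clamped to [1, 600] seconds.
fn effective_tool_timeout(configured: Option<u64>) -> u64 {
    configured
        .unwrap_or(DEFAULT_TOOL_TIMEOUT_SECS)
        .clamp(1, MAX_TOOL_TIMEOUT_SECS)
}

fn main() {
    assert_eq!(effective_tool_timeout(None), 180);
    assert_eq!(effective_tool_timeout(Some(0)), 1); // floor prevents an instant timeout
    assert_eq!(effective_tool_timeout(Some(900)), 600); // hard cap
}
```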

Threat Model

| Vector | Risk | Mitigation | Status |
| --- | --- | --- | --- |
| Allowlist bypass | HIGH | MCP allowlist integration | PLANNED (Phase 2) |
| Arbitrary commands (stdio) | HIGH | Integrate with allowed_commands | PLANNED (Phase 2) |
| Remote server trust | MEDIUM | TLS pinning, URL allowlist | PLANNED (Phase 2) |
| Credential exposure | MEDIUM | Secret redaction in logs | IMPLEMENTED |
| DoS (hung server) | LOW | 30s init timeout, 600s max | IMPLEMENTED |

Security Integration Roadmap

  • Phase 1 (this PR): Basic client, transports, timeouts
  • Phase 2 (follow-up): Allowlist, TLS pinning, policy integration
  • Phase 3 (future): Sandbox integration, audit logging

Privacy and Data Hygiene (required)

Personal data involved: No
Redaction needed: N/A
Test fixtures: Neutral/project-scoped
Status: COMPLIANT

Rollback Plan (required)

Fast rollback: git revert <commit-sha>
Feature flags: [mcp] enabled = false in config
Failure symptoms: MCP connection failures, tool discovery timeout
Recovery time: < 1 minute

Validation Evidence (required)

Commands run:

  • cargo fmt --all -- --check ✓
  • cargo clippy --all-targets -- -D warnings ✓
  • cargo test --locked ✓

Dependency note: Patch-level downgrades (chrono, rustls, shellexpand) from Cargo.lock resolution; no functional impact.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

@github-actions

Thanks for contributing to ZeroClaw.

For faster review, please ensure:

  • PR template sections are fully completed
  • cargo fmt --all -- --check, cargo clippy --all-targets -- -D warnings, and cargo test are included
  • If automation/agents were used heavily, add brief workflow notes
  • Scope is focused (prefer one concern per PR)

See CONTRIBUTING.md and docs/pr-workflow.md for full collaboration rules.

@github-actions

github-actions bot commented Feb 25, 2026

PR intake checks failed (blocking)

Fast safe checks found blocking safety issues:

  • Missing Linear issue key reference (RMN-<id>, CDV-<id>, or COM-<id>) in PR title/body.
  • Incomplete required PR template fields: summary problem, summary why it matters, summary what changed, validation commands, security risk/mitigation, privacy status, rollback plan

Action items:

  1. Complete required PR template sections/fields.
  2. Link this PR to exactly one active Linear issue key (RMN-xxx/CDV-xxx/COM-xxx).
  3. Remove tabs, trailing whitespace, and merge conflict markers from added lines.
  4. Re-run local checks before pushing:
    • ./scripts/ci/rust_quality_gate.sh
    • ./scripts/ci/rust_strict_delta_gate.sh
    • ./scripts/ci/docs_quality_gate.sh

Detected Linear keys: none

Run logs: https://github.com/zeroclaw-labs/zeroclaw/actions/runs/22530858009

Detected blocking line issues (sample):

  • none

Detected advisory line issues (sample):

  • none

@github-actions github-actions bot added the labels channel (auto scope: src/channels/** changed), config (auto scope: src/config/** changed), onboard (auto scope: src/onboard/** changed), and tool (auto scope: src/tools/** changed) on Feb 25, 2026
@coderabbitai

coderabbitai bot commented Feb 25, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Note

.coderabbit.yaml has unrecognized properties

CodeRabbit is using all valid settings from your configuration. Unrecognized properties (listed below) have been ignored and may indicate typos or deprecated fields that can be removed.

⚠️ Parsing warnings (1)

  • Validation error: Unrecognized key(s) in object: 'tools', 'path_filters', 'review_instructions'

⚙️ Configuration instructions

  • Please see the configuration documentation for more information.
  • You can also validate your configuration using the online YAML validator.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
πŸ“ Walkthrough

Walkthrough

Adds MCP client support: configuration schema, JSON‑RPC protocol types, a pluggable transport layer (Stdio/HTTP/SSE), a multi‑server client/registry, a Tool wrapper exposing MCP tools, and wiring into module exports and onboarding defaults.

Changes

| Cohort | File(s) | Summary |
| --- | --- | --- |
| Configuration | src/config/schema.rs, src/config/mod.rs, src/onboard/wizard.rs | Add McpTransport enum, McpServerConfig, McpConfig, and new pub mcp: McpConfig on top-level Config; initialize defaults and re-export types. |
| Protocol Types | src/tools/mcp_protocol.rs | Add JSON-RPC / MCP types, constants, serde derives, and tests (JsonRpcRequest, JsonRpcResponse, JsonRpcError, McpToolDef, McpToolsListResult, version/error constants). |
| Transport Layer | src/tools/mcp_transport.rs | Introduce McpTransportConn trait and implementations: StdioTransport, HttpTransport, SseTransport; factory create_transport(), timeouts, header handling, and tests. |
| Client & Registry | src/tools/mcp_client.rs | Add McpServer (connect, initialize, fetch tools, call tools with timeouts) and McpRegistry (connect_all, prefixed tool index, lookups, call delegation); error handling and tests. |
| Tool Integration & Exports | src/tools/mcp_tool.rs, src/tools/mod.rs | Add McpToolWrapper implementing Tool trait; expose new modules and re-export McpRegistry, McpServer, McpToolWrapper. |
| Docs & Misc | docs/i18n/README.md, Cargo.toml | Add Vietnamese paths note to i18n README; minor formatting tweak in Cargo.toml. |
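The prefixed tool index mentioned under Client & Registry can be sketched as follows; the `server__tool` naming appears in the code under review, while the helper and its collision policy (first registration wins) are illustrative:

```rust
use std::collections::HashMap;

/// Build an index mapping "server__tool" -> (server index, bare tool name).
/// Hypothetical sketch of McpRegistry's routing table; real code should log
/// or error on collisions instead of silently picking a winner.
fn build_tool_index(servers: &[(&str, Vec<&str>)]) -> HashMap<String, (usize, String)> {
    let mut index = HashMap::new();
    for (server_idx, (server_name, tools)) in servers.iter().enumerate() {
        for tool in tools.iter() {
            let prefixed = format!("{server_name}__{tool}");
            // entry().or_insert() keeps the first registration on duplicate keys.
            index.entry(prefixed).or_insert((server_idx, tool.to_string()));
        }
    }
    index
}

fn main() {
    let index = build_tool_index(&[("files", vec!["read", "write"]), ("web", vec!["fetch"])]);
    assert_eq!(index["files__read"], (0, "read".to_string()));
    assert_eq!(index["web__fetch"], (1, "fetch".to_string()));
    assert_eq!(index.len(), 3);
}
```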

Sequence Diagram(s)

sequenceDiagram
    participant Client as Client
    participant Registry as McpRegistry
    participant Server as McpServer
    participant Transport as McpTransportConn
    participant Remote as Remote MCP Server

    Client->>Registry: connect_all(configs)
    loop per server
        Registry->>Server: connect(config)
        Server->>Transport: create_transport(config)
        Transport->>Remote: spawn/connect (stdio/http/sse)
        Server->>Transport: send initialize JsonRpcRequest
        Remote-->>Transport: JsonRpcResponse (server info)
        Server->>Transport: send tools/list JsonRpcRequest
        Remote-->>Transport: JsonRpcResponse (tools)
        Server->>Registry: register prefixed tools
    end
    Client->>Registry: call_tool(prefixed_name, args)
    Registry->>Server: route by prefix
    Server->>Transport: send tool call JsonRpcRequest
    Remote-->>Transport: JsonRpcResponse (result)
    Server->>Registry: return result
    Registry->>Client: formatted result
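For the stdio transport in the diagram above, each message is typically one newline-delimited JSON object. A std-only sketch of that framing (a real implementation would use serde_json; method names follow MCP, the helper functions are hypothetical):

```rust
/// Build a JSON-RPC 2.0 request line (carries an "id", so a reply is expected).
fn request_line(id: u64, method: &str, params_json: &str) -> String {
    format!("{{\"jsonrpc\":\"2.0\",\"id\":{id},\"method\":\"{method}\",\"params\":{params_json}}}\n")
}

/// Build a JSON-RPC 2.0 notification line (no "id"; per spec the server sends
/// no reply, so the transport must not block waiting for one).
fn notification_line(method: &str, params_json: &str) -> String {
    format!("{{\"jsonrpc\":\"2.0\",\"method\":\"{method}\",\"params\":{params_json}}}\n")
}

fn main() {
    let init = request_line(1, "initialize", "{}");
    let done = notification_line("notifications/initialized", "{}");
    assert!(init.contains("\"id\":1"));
    assert!(!done.contains("\"id\""));
    assert!(init.ends_with('\n') && done.ends_with('\n'));
}
```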

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

Suggested labels

size: L, risk: medium, daemon: core, integration: core

Suggested reviewers

  • theonlyhennygod
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 58.62%, which is insufficient; the required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately and concisely describes the main feature: adding native MCP client support to zeroclaw, which aligns with the comprehensive changeset across protocol, client, transport, and configuration modules. |
| Description check | ✅ Passed | PR description is comprehensive and well-structured, covering problem, motivation, changes, and security/privacy considerations. All major required sections are addressed with appropriate detail. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions github-actions bot added the labels size: L (auto size: 501-1000 non-doc changed lines), risk: high (auto risk: security/runtime/gateway/tools/workflows), config: core (auto module: config core files changed), onboard: wizard (auto module: onboard/wizard changed), channel: core (auto module: channel core files changed), provider: groq (auto module: provider/groq changed), channel (auto scope: src/channels/** changed), config (auto scope: src/config/** changed), and onboard (auto scope: src/onboard/** changed), and removed the labels channel, config, and onboard on Feb 25, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 11

🧹 Nitpick comments (1)
src/tools/mcp_tool.rs (1)

56-57: Add a fast arg-shape guard before dispatch.

Line 56 forwards any JSON value. A simple args.is_object() check here prevents avoidable round-trips and keeps failure structured at the tool boundary.

✅ Suggested guard
 async fn execute(&self, args: serde_json::Value) -> anyhow::Result<ToolResult> {
+        if !args.is_object() {
+            return Ok(ToolResult {
+                success: false,
+                output: String::new(),
+                error: Some("MCP tool arguments must be a JSON object".to_string()),
+            });
+        }
         match self.registry.call_tool(&self.prefixed_name, args).await {

As per coding guidelines, "Implement Tool trait in src/tools/ with strict parameter schema; validate and sanitize all inputs; return structured ToolResult; avoid panics in runtime path (Adding a Tool)".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tools/mcp_tool.rs` around lines 56 - 57, Add a fast arg-shape guard in
the async fn execute(&self, args: serde_json::Value) ->
anyhow::Result<ToolResult> before calling
self.registry.call_tool(&self.prefixed_name, args).await: check args.is_object()
and if false return a structured ToolResult (or an anyhow::Result wrapping a
clear, non-panic error) indicating invalid parameters for the tool identified by
self.prefixed_name; this keeps validation at the Tool boundary and avoids
unnecessary RPC/dispatch round-trips.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/channels/mod.rs`:
- Around line 3139-3142: The error path in the MCP registry initialization (the
Err(e) arm in channels/mod.rs) currently logs the full debug/error chain using
"{e:#}", which may include sensitive transport/config details; change the log to
a sanitized message by replacing "{e:#}" with a non-verbose representation
(e.g., e.to_string() or a call to a new sanitize_error(e) helper that strips
sources/metadata), or explicitly log only a short context like "MCP registry
failed to initialize" plus a minimal error code/text; ensure the variable name e
is used and do not include error sources, backtraces, or debug formatting
anywhere in that tracing::error! call.

In `@src/config/schema.rs`:
- Around line 431-440: The MCP server struct fields env and headers are storing
secrets in plaintext; update the secret-store serialization/deserialization in
Config::save and Config::load_or_init to encrypt/decrypt
config.mcp.servers[*].env and config.mcp.servers[*].headers before persisting
and after loading. Locate the code paths that already handle other secret fields
and extend them to iterate each server in config.mcp.servers, replacing env and
headers maps with encrypted placeholders (on save) and restoring plaintext maps
(on load) using the same secret-store API/keys; ensure the transformation is
tolerant of missing values and preserves non-secret entries.

In `@src/tools/mcp_client.rs`:
- Around line 111-115: The code reads list_resp.result without checking
list_resp.error, which loses server-side JSON-RPC error details; update the code
around list_resp handling (where tools/list is called) to first check if
list_resp.error.is_some() and return or propagate an anyhow error containing
that error (including context with config.name), and only then parse
list_resp.result into McpToolsListResult (keeping the existing with_context for
parse failures) so server errors are surfaced instead of producing a generic "no
result" message.
- Around line 163-167: The current computation of tool_timeout only applies an
upper bound and can yield 0 seconds if inner.config.tool_timeout_secs is 0;
change the logic that creates tool_timeout (using
inner.config.tool_timeout_secs, DEFAULT_TOOL_TIMEOUT_SECS, and
MAX_TOOL_TIMEOUT_SECS) to also enforce a minimum of 1 second (e.g. wrap with a
max(1, ...)) so tool_timeout is never zero and MCP tool calls are not
effectively disabled.
- Around line 188-192: The code currently treats a missing resp.result as a
successful Null; instead, when resp.error is None and resp.result is None you
must treat that as a protocol failure and return an error. Replace the final
line that returns Ok(resp.result.unwrap_or(serde_json::Value::Null)) with logic
that checks resp.result: if Some(r) return Ok(r), otherwise bail! with a clear
message referencing the tool (use the existing tool_name) indicating a malformed
MCP response (missing result and no error). Keep the existing error-path (if let
Some(err) = resp.error) unchanged.
- Around line 91-94: The notification send currently awaits
transport.send_and_recv for
JsonRpcRequest::notification("notifications/initialized", ...) which blocks on
spec-compliant servers; update McpTransportConn (or the transport interface used
by transport.send_and_recv) to add a send_only method and use it for
notifications, or modify the send path to check if request.id.is_none() and
short-circuit to a non-awaiting send (e.g., call a fire-and-forget send_only on
McpTransportConn or the underlying StdioTransport/HttpTransport/SseTransport
implementations), then replace the call that awaits
transport.send_and_recv(&notif).await with the new send-only invocation so
initialization does not wait for a reply.
- Around line 218-220: The current insertion into tool_index using let prefixed
= format!("{}__{}", config.name, tool.name); and tool_index.insert(prefixed,
(server_idx, tool.name.clone())) can silently overwrite an existing entry on key
collision; update the logic in the loop that builds tool_index to check
tool_index.contains_key(&prefixed) (or use entry API) before inserting and
handle collisions explicitly, e.g. log an error including config.name, tool.name
and existing (server_idx) or return an Err so collisions are not silently
replaced; ensure you reference the prefixed key, the tool_index map, server_idx,
config.name and tool.name in the collision handling.

In `@src/tools/mcp_transport.rs`:
- Around line 209-213: The SSE branch in mcp_transport.rs currently serializes
the request and POSTs to "{}/message" (using self.base_url and
serde_json::to_string(request)), but this is only a partial/fake implementation;
replace this fake behavior with explicit error handling so unsupported SSE use
fails loudly. In the function/enum match arm handling SSE (the branch that
creates body and url), return a clear error (e.g., Err(anyhow!("SSE transport
unsupported - real SSE streaming not implemented"))) instead of attempting an
HTTP POST; keep the code path for HTTP/LongPoll unchanged and document the SSE
branch with that explicit unsupported error.
- Around line 101-103: The current serde_json::from_str call includes the raw
response string (resp_line) in the error context, which can leak secrets; change
the with_context on the JsonRpcResponse deserialization to not include resp_line
but a non-sensitive message (e.g., "invalid JSON-RPC response" or include only
safe metadata such as payload length or a fixed redaction token) for the call
that creates resp: JsonRpcResponse and mirror the same change for the other
occurrences where resp_line is used in with_context (the similar
serde_json::from_str sites referenced in the file). Ensure no raw payload,
tokens, or PII are included in any error string.
- Around line 95-104: Validate that the parsed JsonRpcResponse matches the
outbound JsonRpcRequest by comparing their id fields before returning: in
send_and_recv (and the other response-returning methods flagged) extract the
request.id and the resp.id (handling possible null/absent ids according to your
JsonRpc types) and if they differ return an error/contextual failure (use
context or with_context to include both ids and the resp_line), otherwise return
the resp; update the error message to clearly state "response id mismatch" and
include request id and response id values to aid debugging.
- Around line 98-100: The transport is imposing an inner timeout by wrapping
self.recv_raw() with timeout(Duration::from_secs(RECV_TIMEOUT_SECS), ...), which
prematurely aborts long-running tool calls; remove that wrapper so send_and_recv
uses the caller's timeout insteadβ€”replace the timeout(...) .await ...?? chain
with a direct await on self.recv_raw() (and eliminate or stop using
RECV_TIMEOUT_SECS in this receive path) inside the function where resp_line is
obtained to let the outer mcp_client timeouts control behavior.


ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d2b0593 and f3cea0c.

📒 Files selected for processing (9)
  • src/channels/mod.rs
  • src/config/mod.rs
  • src/config/schema.rs
  • src/onboard/wizard.rs
  • src/tools/mcp_client.rs
  • src/tools/mcp_protocol.rs
  • src/tools/mcp_tool.rs
  • src/tools/mcp_transport.rs
  • src/tools/mod.rs

Comment thread src/channels/mod.rs
Comment on lines +3139 to +3142
Err(e) => {
// Non-fatal: daemon continues with the tools registered above.
tracing::error!("MCP registry failed to initialize: {e:#}");
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Avoid logging full MCP error chains in Line 3141

{e:#} can include sensitive transport/config details from downstream errors. Please sanitize before logging.

🔧 Suggested patch
             Err(e) => {
                 // Non-fatal: daemon continues with the tools registered above.
-                tracing::error!("MCP registry failed to initialize: {e:#}");
+                let safe_error = providers::sanitize_api_error(&e.to_string());
+                tracing::error!("MCP registry failed to initialize: {safe_error}");
             }

As per coding guidelines: "Never log secrets, raw tokens, or sensitive payloads in any code path."


Comment thread src/config/schema.rs Outdated
Comment on lines +431 to +440
/// Optional environment variables for the spawned process (stdio only).
#[serde(default)]
pub env: std::collections::HashMap<String, String>,
/// Per-server foreground tool call timeout (seconds).
/// Default: 180s if unset. Maximum allowed: 600s (hard cap applied).
#[serde(default)]
pub tool_timeout_secs: Option<u64>,
/// Optional headers for SSE/HTTP transports.
#[serde(default)]
pub headers: std::collections::HashMap<String, String>,
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Encrypt MCP headers/env values before persisting config.

Line 431 and Line 439 introduce fields that commonly carry secrets (API tokens, bearer headers), but the current Config::save/Config::load_or_init secret-store paths do not process config.mcp.servers[*].headers or config.mcp.servers[*].env. This leaves MCP credentials in plaintext at rest.

πŸ” Suggested fix (extend existing secret-store flow)
@@ fn Config::load_or_init() -> Result<Self> {
+            for (server_idx, server) in config.mcp.servers.iter_mut().enumerate() {
+                for (key, value) in server.headers.iter_mut() {
+                    decrypt_secret(
+                        &store,
+                        value,
+                        &format!("config.mcp.servers[{server_idx}].headers.{key}"),
+                    )?;
+                }
+                for (key, value) in server.env.iter_mut() {
+                    decrypt_secret(
+                        &store,
+                        value,
+                        &format!("config.mcp.servers[{server_idx}].env.{key}"),
+                    )?;
+                }
+            }
@@ fn Config::save(&self) -> Result<()> {
+        for (server_idx, server) in config_to_save.mcp.servers.iter_mut().enumerate() {
+            for (key, value) in server.headers.iter_mut() {
+                encrypt_secret(
+                    &store,
+                    value,
+                    &format!("config.mcp.servers[{server_idx}].headers.{key}"),
+                )?;
+            }
+            for (key, value) in server.env.iter_mut() {
+                encrypt_secret(
+                    &store,
+                    value,
+                    &format!("config.mcp.servers[{server_idx}].env.{key}"),
+                )?;
+            }
+        }

Comment thread src/tools/mcp_client.rs
Comment on lines +91 to +94
let notif = JsonRpcRequest::notification("notifications/initialized", json!({}));
// Best effort - ignore errors for notifications
let _ = transport.send_and_recv(&notif).await;

Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify notification handling in MCP transport and client
rg -n -C4 'trait McpTransportConn|send_and_recv\(|id\.is_none|notifications/initialized|JsonRpcRequest::notification' src/tools/mcp_transport.rs src/tools/mcp_client.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 4949


🏁 Script executed:

#!/bin/bash
# Check RECV_TIMEOUT_SECS definition
rg -n 'RECV_TIMEOUT_SECS|const.*TIMEOUT' src/tools/mcp_transport.rs src/tools/mcp_client.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 757


Don't await notification replies; send-only notifications should not block initialization.

Line 93 awaits send_and_recv for a JSON-RPC notification (id = None). Per JSON-RPC 2.0 spec, compliant servers do not reply to notifications. All transport implementations (StdioTransport, HttpTransport, SseTransport) wait for a response with a 30-second timeout, causing unnecessary startup delay on spec-compliant servers. The error is silently ignored, masking the timeout.

Implement a send_only method on McpTransportConn for notifications, or check request.id.is_none() and short-circuit before waiting for a response.
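A tiny sketch of the id-based short-circuit this comment suggests (names are illustrative, not the PR's actual API):

```rust
#[derive(Debug, PartialEq)]
enum SendMode {
    AwaitReply,    // JSON-RPC request: has an id, server must respond
    FireAndForget, // JSON-RPC notification: no id, server sends no reply
}

/// Decide how a transport should treat an outgoing message (hypothetical helper).
fn send_mode(id: Option<u64>) -> SendMode {
    if id.is_some() {
        SendMode::AwaitReply
    } else {
        SendMode::FireAndForget
    }
}

fn main() {
    assert_eq!(send_mode(Some(1)), SendMode::AwaitReply);
    // e.g. notifications/initialized carries no id and must not block startup
    assert_eq!(send_mode(None), SendMode::FireAndForget);
}
```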


Comment thread src/tools/mcp_client.rs
Comment on lines +111 to +115
let result = list_resp
.result
.ok_or_else(|| anyhow!("tools/list returned no result from `{}`", config.name))?;
let tool_list: McpToolsListResult = serde_json::from_value(result)
.with_context(|| format!("failed to parse tools/list from `{}`", config.name))?;
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Handle tools/list JSON-RPC errors before parsing result.

On Line 111, tools/list reads result immediately and ignores list_resp.error. That collapses useful server errors into generic "no result" failures.

🔎 Suggested fix
+        if let Some(err) = list_resp.error {
+            bail!(
+                "MCP server `{}` tools/list failed {}: {}",
+                config.name,
+                err.code,
+                err.message
+            );
+        }
         let result = list_resp
             .result
             .ok_or_else(|| anyhow!("tools/list returned no result from `{}`", config.name))?;

Comment thread src/tools/mcp_client.rs
Comment on lines +163 to +167
let tool_timeout = inner
.config
.tool_timeout_secs
.unwrap_or(DEFAULT_TOOL_TIMEOUT_SECS)
.min(MAX_TOOL_TIMEOUT_SECS);
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Clamp tool timeout to a minimum of 1s.

Line 166-167 enforces only an upper bound. tool_timeout_secs = 0 becomes an immediate timeout and effectively disables all MCP tool calls for that server.

⏱️ Suggested fix
         let tool_timeout = inner
             .config
             .tool_timeout_secs
             .unwrap_or(DEFAULT_TOOL_TIMEOUT_SECS)
-            .min(MAX_TOOL_TIMEOUT_SECS);
+            .clamp(1, MAX_TOOL_TIMEOUT_SECS);
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
         let tool_timeout = inner
             .config
             .tool_timeout_secs
             .unwrap_or(DEFAULT_TOOL_TIMEOUT_SECS)
-            .min(MAX_TOOL_TIMEOUT_SECS);
+            .clamp(1, MAX_TOOL_TIMEOUT_SECS);

Comment thread src/tools/mcp_client.rs
Comment on lines +218 to +220
let prefixed = format!("{}__{}", config.name, tool.name);
tool_index.insert(prefixed, (server_idx, tool.name.clone()));
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Prevent silent tool-index overwrite on prefix collisions.

Line 219 uses HashMap::insert without collision checks. Duplicate <server>__<tool> keys silently replace existing routes, which can dispatch calls to the wrong server/tool pair.

🧭 Suggested fix
                     for tool in &tools {
                         // Prefix prevents name collisions across servers
                         let prefixed = format!("{}__{}", config.name, tool.name);
-                        tool_index.insert(prefixed, (server_idx, tool.name.clone()));
+                        if tool_index.contains_key(&prefixed) {
+                            tracing::error!(
+                                "Duplicate MCP tool prefix `{}` from server `{}`; keeping first registration",
+                                prefixed,
+                                config.name
+                            );
+                            continue;
+                        }
+                        tool_index.insert(prefixed, (server_idx, tool.name.clone()));
                     }
πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tools/mcp_client.rs` around lines 218 - 220, The current insertion into
tool_index using let prefixed = format!("{}__{}", config.name, tool.name); and
tool_index.insert(prefixed, (server_idx, tool.name.clone())) can silently
overwrite an existing entry on key collision; update the logic in the loop that
builds tool_index to check tool_index.contains_key(&prefixed) (or use entry API)
before inserting and handle collisions explicitlyβ€”e.g., log an error including
config.name, tool.name and existing (server_idx) or return an Err so collisions
are not silently replaced; ensure you reference the prefixed key, the tool_index
map, server_idx, config.name and tool.name in the collision handling.
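An alternative to the contains_key check is the HashMap entry API, which performs the lookup and insert in one step. A minimal sketch with hypothetical server and tool names:

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

// Collision-safe registration: keep the first registration for a prefixed
// key instead of silently overwriting it. Returns whether the insert won.
fn register(
    index: &mut HashMap<String, (usize, String)>,
    server: &str,
    server_idx: usize,
    tool: &str,
) -> bool {
    let prefixed = format!("{}__{}", server, tool);
    match index.entry(prefixed) {
        Entry::Occupied(_) => false, // duplicate: first registration wins
        Entry::Vacant(slot) => {
            slot.insert((server_idx, tool.to_string()));
            true
        }
    }
}

fn main() {
    let mut index = HashMap::new();
    assert!(register(&mut index, "fs", 0, "read"));
    assert!(!register(&mut index, "fs", 1, "read")); // duplicate rejected
    assert_eq!(index["fs__read"].0, 0); // original route preserved
}
```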

Comment on lines +95 to +104
async fn send_and_recv(&mut self, request: &JsonRpcRequest) -> Result<JsonRpcResponse> {
let line = serde_json::to_string(request)?;
self.send_raw(&line).await?;
let resp_line = timeout(Duration::from_secs(RECV_TIMEOUT_SECS), self.recv_raw())
.await
.context("timeout waiting for MCP response")??;
let resp: JsonRpcResponse = serde_json::from_str(&resp_line)
.with_context(|| format!("invalid JSON-RPC response: {}", resp_line))?;
Ok(resp)
}

⚠️ Potential issue | 🟠 Major


Validate JSON-RPC response id matches the request before returning.

Each transport returns parsed responses without correlating the response id to the outbound request. JSON-RPC 2.0 requires id-based correlation so responses are not mis-associated with requests; this matters especially if the server emits notifications or unsolicited messages, or if concurrent request handling is added later.

Proposed fix
+fn ensure_matching_id(request: &JsonRpcRequest, response: &JsonRpcResponse) -> Result<()> {
+    if request.id != response.id {
+        bail!("JSON-RPC response id mismatch");
+    }
+    Ok(())
+}

@@
         let resp: JsonRpcResponse = serde_json::from_str(&resp_line)
             .with_context(|| format!("invalid JSON-RPC response: {}", resp_line))?;
+        ensure_matching_id(request, &resp)?;
         Ok(resp)

@@
         let mcp_resp: JsonRpcResponse = serde_json::from_str(&resp_text)
             .with_context(|| format!("invalid JSON-RPC response: {}", resp_text))?;
+        ensure_matching_id(request, &mcp_resp)?;
         Ok(mcp_resp)

@@
         let mcp_resp: JsonRpcResponse = serde_json::from_str(&resp_text)
             .with_context(|| format!("invalid JSON-RPC response: {}", resp_text))?;
+        ensure_matching_id(request, &mcp_resp)?;
         Ok(mcp_resp)

Also applies to: 144-166, 208-235

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tools/mcp_transport.rs` around lines 95 - 104, Validate that the parsed
JsonRpcResponse matches the outbound JsonRpcRequest by comparing their id fields
before returning: in send_and_recv (and the other response-returning methods
flagged) extract the request.id and the resp.id (handling possible null/absent
ids according to your JsonRpc types) and if they differ return an
error/contextual failure (use context or with_context to include both ids and
the resp_line), otherwise return the resp; update the error message to clearly
state "response id mismatch" and include request id and response id values to
aid debugging.
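The proposed ensure_matching_id helper can be exercised standalone. A minimal sketch, assuming both request and response carry a comparable id field; the concrete id type in the PR's JsonRpcRequest/JsonRpcResponse may differ:

```rust
// Hypothetical id type; the real one may be serde_json::Value or similar.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Id(u64);

struct Request { id: Id }
struct Response { id: Id }

// Reject a response whose id does not match the outbound request,
// per the JSON-RPC 2.0 correlation rule.
fn ensure_matching_id(req: &Request, resp: &Response) -> Result<(), String> {
    if req.id != resp.id {
        return Err(format!(
            "JSON-RPC response id mismatch: sent {:?}, got {:?}",
            req.id, resp.id
        ));
    }
    Ok(())
}

fn main() {
    let req = Request { id: Id(7) };
    assert!(ensure_matching_id(&req, &Response { id: Id(7) }).is_ok());
    assert!(ensure_matching_id(&req, &Response { id: Id(8) }).is_err());
}
```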

Comment on lines +98 to +100
let resp_line = timeout(Duration::from_secs(RECV_TIMEOUT_SECS), self.recv_raw())
.await
.context("timeout waiting for MCP response")??;

⚠️ Potential issue | 🟠 Major



Hardcoded transport timeouts prematurely cut off long-running tool calls.

The stdio transport enforces a 30-second timeout, and HTTP/SSE enforce 120 seconds. Since these are inner timeouts that wrap the network/IO operation, they fire before the outer timeout set by the caller (mcp_client.rs). This means tool calls configured for longer durations (e.g., 300–600 seconds) will be interrupted by the transport's hardcoded limit instead.

For example, a tool call configured for 300 seconds on HTTP transport will timeout at 120 seconds instead.

Remove hardcoded transport-level timeouts. The caller already wraps all send_and_recv calls with appropriate timeouts (init/list: 30s, tool calls: configurable 30–600s). Transport-level timeouts should not override caller intent.

Proposed fix
-const RECV_TIMEOUT_SECS: u64 = 30;

@@
-        let resp_line = timeout(Duration::from_secs(RECV_TIMEOUT_SECS), self.recv_raw())
-            .await
-            .context("timeout waiting for MCP response")??;
+        let resp_line = self.recv_raw().await?;

@@
-        let client = reqwest::Client::builder()
-            .timeout(Duration::from_secs(120))
-            .build()
+        let client = reqwest::Client::builder()
+            .build()

@@
-        let client = reqwest::Client::builder()
-            .timeout(Duration::from_secs(120))
-            .build()
+        let client = reqwest::Client::builder()
+            .build()
πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tools/mcp_transport.rs` around lines 98 - 100, The transport is imposing
an inner timeout by wrapping self.recv_raw() with
timeout(Duration::from_secs(RECV_TIMEOUT_SECS), ...), which prematurely aborts
long-running tool calls; remove that wrapper so send_and_recv uses the caller's
timeout insteadβ€”replace the timeout(...) .await ...?? chain with a direct await
on self.recv_raw() (and eliminate or stop using RECV_TIMEOUT_SECS in this
receive path) inside the function where resp_line is obtained to let the outer
mcp_client timeouts control behavior.
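The effect of nesting timeouts is simple to state: the effective deadline is the minimum of the inner and outer values, so any hardcoded transport cap below the configured tool budget wins. A minimal sketch of that arithmetic (numbers taken from the scenario above):

```rust
// Effective deadline when an optional transport-level timeout is nested
// inside the caller's timeout: the smaller one always fires first.
fn effective_deadline(outer_secs: u64, inner_secs: Option<u64>) -> u64 {
    match inner_secs {
        Some(inner) => outer_secs.min(inner),
        None => outer_secs, // no transport cap: the caller's timeout governs
    }
}

fn main() {
    // A 300s tool call over HTTP is cut off by the hardcoded 120s cap.
    assert_eq!(effective_deadline(300, Some(120)), 120);
    // Removing the inner timeout restores the configured 300s budget.
    assert_eq!(effective_deadline(300, None), 300);
}
```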

Comment on lines +101 to +103
let resp: JsonRpcResponse = serde_json::from_str(&resp_line)
.with_context(|| format!("invalid JSON-RPC response: {}", resp_line))?;
Ok(resp)

⚠️ Potential issue | 🟠 Major

Do not include raw MCP response bodies in error context.

Current parse errors embed full payload text, which can leak secrets/PII through logs.

Proposed fix
```diff
-        let resp: JsonRpcResponse = serde_json::from_str(&resp_line)
-            .with_context(|| format!("invalid JSON-RPC response: {}", resp_line))?;
+        let resp: JsonRpcResponse = serde_json::from_str(&resp_line)
+            .with_context(|| format!("invalid JSON-RPC response ({} bytes)", resp_line.len()))?;
@@
-        let mcp_resp: JsonRpcResponse = serde_json::from_str(&resp_text)
-            .with_context(|| format!("invalid JSON-RPC response: {}", resp_text))?;
+        let mcp_resp: JsonRpcResponse = serde_json::from_str(&resp_text)
+            .with_context(|| format!("invalid JSON-RPC response ({} bytes)", resp_text.len()))?;
@@
-        let mcp_resp: JsonRpcResponse = serde_json::from_str(&resp_text)
-            .with_context(|| format!("invalid JSON-RPC response: {}", resp_text))?;
+        let mcp_resp: JsonRpcResponse = serde_json::from_str(&resp_text)
+            .with_context(|| format!("invalid JSON-RPC response ({} bytes)", resp_text.len()))?;
```

As per coding guidelines, "Never log secrets, raw tokens, or sensitive payloads in any code path."

Also applies to: 162-163, 231-232
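The pattern is simple to factor out. A minimal std-only sketch (helper name is illustrative): the error message records only non-sensitive metadata such as payload size, never the payload itself.

```rust
/// Builds an error context that records only the payload size,
/// never the payload itself, so secrets cannot leak through logs.
fn parse_error_context(payload: &str) -> String {
    format!("invalid JSON-RPC response ({} bytes)", payload.len())
}

fn main() {
    let secret_payload = r#"{"token":"xoxb-supersecret"}"#;
    let msg = parse_error_context(secret_payload);

    // The token never appears in the error message.
    assert!(!msg.contains("xoxb"));
    assert_eq!(msg, "invalid JSON-RPC response (28 bytes)");
}
```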

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tools/mcp_transport.rs` around lines 101 - 103, The current
serde_json::from_str call includes the raw response string (resp_line) in the
error context, which can leak secrets; change the with_context on the
JsonRpcResponse deserialization to not include resp_line but a non-sensitive
message (e.g., "invalid JSON-RPC response" or include only safe metadata such as
payload length or a fixed redaction token) for the call that creates resp:
JsonRpcResponse and mirror the same change for the other occurrences where
resp_line is used in with_context (the similar serde_json::from_str sites
referenced in the file). Ensure no raw payload, tokens, or PII are included in
any error string.

Comment thread src/tools/mcp_transport.rs Outdated
Comment on lines +209 to +213
// For SSE, we POST the request and the response comes via SSE stream.
// Simplified implementation: treat as HTTP for now, proper SSE would
// maintain a persistent event stream.
let body = serde_json::to_string(request)?;
let url = format!("{}/message", self.base_url.trim_end_matches('/'));

⚠️ Potential issue | 🟠 Major

SSE transport currently presents partial support instead of explicit unsupported behavior.

This path posts and parses an immediate HTTP response while noting real SSE streaming is not implemented. That can silently misbehave against actual SSE MCP servers.

Proposed fix
```diff
 impl McpTransportConn for SseTransport {
     async fn send_and_recv(&mut self, request: &JsonRpcRequest) -> Result<JsonRpcResponse> {
-        // For SSE, we POST the request and the response comes via SSE stream.
-        // Simplified implementation: treat as HTTP for now, proper SSE would
-        // maintain a persistent event stream.
-        let body = serde_json::to_string(request)?;
-        let url = format!("{}/message", self.base_url.trim_end_matches('/'));
+        let _ = request;
+        bail!("SSE transport is not fully supported yet; use stdio or HTTP transport");
```

As per coding guidelines, "Keep unsupported paths explicit with error handling rather than adding partial fake support."

Also applies to: 229-234
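The guideline can be sketched with a simplified transport selector (the enum and function here are illustrative stand-ins, not the PR's actual types): the unsupported branch returns an explicit error rather than a best-effort fallback.

```rust
/// Illustrative stand-in for the transport selector; not the PR's types.
enum Transport {
    Stdio,
    Http,
    Sse,
}

fn send_and_recv(transport: &Transport, _request: &str) -> Result<String, String> {
    match transport {
        // Supported paths do real work (elided here).
        Transport::Stdio | Transport::Http => {
            Ok(r#"{"jsonrpc":"2.0","id":1,"result":{}}"#.to_string())
        }
        // Unsupported path fails loudly instead of faking a response.
        Transport::Sse => Err(
            "SSE transport is not fully supported yet; use stdio or HTTP transport".to_string(),
        ),
    }
}

fn main() {
    assert!(send_and_recv(&Transport::Http, "{}").is_ok());
    assert!(send_and_recv(&Transport::Sse, "{}").is_err());
}
```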

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tools/mcp_transport.rs` around lines 209 - 213, The SSE branch in
mcp_transport.rs currently serializes the request and POSTs to "{}/message"
(using self.base_url and serde_json::to_string(request)), but this is only a
partial/fake implementation; replace this fake behavior with explicit error
handling so unsupported SSE use fails loudly. In the function/enum match arm
handling SSE (the branch that creates body and url), return a clear error (e.g.,
Err(anyhow!("SSE transport unsupported - real SSE streaming not implemented")))
instead of attempting an HTTP POST; keep the code path for HTTP/LongPoll
unchanged and document the SSE branch with that explicit unsupported error.

@dwillitzer dwillitzer changed the base branch from main to dev February 25, 2026 19:03
@github-actions github-actions bot added ci Auto scope: CI/workflow/hook files changed. docs Auto scope: docs/markdown/template files changed. dependencies Auto scope: dependency manifest/lock/policy changed. core Auto scope: root src/*.rs files changed. labels Feb 25, 2026
@github-actions github-actions bot added size: XL Auto size: >1000 non-doc changed lines. daemon: core Auto module: daemon core files changed. doctor: core Auto module: doctor core files changed. heartbeat: engine Auto module: heartbeat/engine changed. skills: audit Auto module: skills/audit changed. gateway: api Auto module: gateway/api changed. and removed onboard Auto scope: src/onboard/** changed. skills Auto scope: src/skills/** changed. channel: core Auto module: channel core files changed. provider: groq Auto module: provider/groq changed. labels Feb 25, 2026
@github-actions github-actions bot added config Auto scope: src/config/** changed. daemon Auto scope: src/daemon/** changed. doctor Auto scope: src/doctor/** changed. gateway Auto scope: src/gateway/** changed. heartbeat Auto scope: src/heartbeat/** changed. onboard Auto scope: src/onboard/** changed. skills Auto scope: src/skills/** changed. and removed config Auto scope: src/config/** changed. daemon Auto scope: src/daemon/** changed. doctor Auto scope: src/doctor/** changed. gateway Auto scope: src/gateway/** changed. labels Feb 25, 2026
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
src/config/schema.rs (1)

431-440: ⚠️ Potential issue | 🟠 Major

Encrypt MCP env and headers in config persistence flows.

Line 433 (env) and Line 440 (headers) can carry tokens, but Config::load_or_init / Config::save do not decrypt/encrypt these maps yet, so values are persisted plaintext.

πŸ” Proposed fix
```diff
@@
             for agent in config.agents.values_mut() {
                 decrypt_optional_secret(&store, &mut agent.api_key, "config.agents.*.api_key")?;
             }
+            for (server_idx, server) in config.mcp.servers.iter_mut().enumerate() {
+                for (key, value) in server.headers.iter_mut() {
+                    decrypt_secret(
+                        &store,
+                        value,
+                        &format!("config.mcp.servers[{server_idx}].headers.{key}"),
+                    )?;
+                }
+                for (key, value) in server.env.iter_mut() {
+                    decrypt_secret(
+                        &store,
+                        value,
+                        &format!("config.mcp.servers[{server_idx}].env.{key}"),
+                    )?;
+                }
+            }
@@
         for agent in config_to_save.agents.values_mut() {
             encrypt_optional_secret(&store, &mut agent.api_key, "config.agents.*.api_key")?;
         }
+        for (server_idx, server) in config_to_save.mcp.servers.iter_mut().enumerate() {
+            for (key, value) in server.headers.iter_mut() {
+                encrypt_secret(
+                    &store,
+                    value,
+                    &format!("config.mcp.servers[{server_idx}].headers.{key}"),
+                )?;
+            }
+            for (key, value) in server.env.iter_mut() {
+                encrypt_secret(
+                    &store,
+                    value,
+                    &format!("config.mcp.servers[{server_idx}].env.{key}"),
+                )?;
+            }
+        }
```
πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config/schema.rs` around lines 431 - 440, The env and headers HashMap
fields on the config struct are stored plaintext because Config::load_or_init
and Config::save never encrypt/decrypt them; update those two methods to
serialize and encrypt the maps before persisting and to decrypt and deserialize
them after loading (use the project's existing encryption utilities or key
management functions), ensuring Config::save transforms config.env and
config.headers into encrypted blobs and Config::load_or_init reverses that so
the in-memory Config still exposes plain maps while on-disk/config-store data is
encrypted.
πŸ€– Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/config-reference.md`:
- Around line 632-684: The new "## `[mcp]`" MCP config section was added only to
the English config-reference; synchronize this addition to the Vietnamese
localized copies (the Vietnamese config-reference variants) or, if you intend to
defer localization, add an explicit deferral note in the docs hub/README/SUMMARY
stating localization is postponed for the `[mcp]` section and why; update the
localized files to include the exact `[mcp]` section content (including the
`[[mcp.servers]]` table and examples) or add the deferral notice in the
corresponding SUMMARY/README so users know the EN section is intentionally not
yet translated.

In `@src/daemon/mod.rs`:
- Around line 13-26: Change check_port_available to return std::io::Result<()>
(no host/port tuple) and update the caller in mod.rs to explicitly discriminate
io::Error kinds: when check_port_available(&host, port).await returns Err(e)
inspect e.kind() and only treat io::ErrorKind::AddrInUse as a port-conflict
(then call is_zeroclaw_daemon_running(&host, port) and log/print as before); for
any other error kinds (e.g., AddrNotAvailable, PermissionDenied, InvalidInput)
propagate or bail with the original error context so invalid host format or
permission failures are not misreported as "already in use". Ensure references
to check_port_available and is_zeroclaw_daemon_running are updated accordingly.

---

Duplicate comments:
In `@src/config/schema.rs`:
- Around line 431-440: The env and headers HashMap fields on the config struct
are stored plaintext because Config::load_or_init and Config::save never
encrypt/decrypt them; update those two methods to serialize and encrypt the maps
before persisting and to decrypt and deserialize them after loading (use the
project's existing encryption utilities or key management functions), ensuring
Config::save transforms config.env and config.headers into encrypted blobs and
Config::load_or_init reverses that so the in-memory Config still exposes plain
maps while on-disk/config-store data is encrypted.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between f3cea0c and ee2bb63.

πŸ“’ Files selected for processing (3)
  • docs/config-reference.md
  • src/config/schema.rs
  • src/daemon/mod.rs

Comment thread docs/config-reference.md Outdated
Comment on lines +632 to +684
## `[mcp]`

MCP (Model Context Protocol) client β€” connect zeroclaw to external MCP servers and expose their tools alongside built-in tools.

| Key | Default | Description |
|---|---|---|
| `enabled` | `false` | Enable MCP client at daemon startup |
| `servers` | `[]` | List of MCP server configs (see below) |

### `[[mcp.servers]]`

| Key | Default | Description |
|---|---|---|
| `name` | _required_ | Server name; used as tool prefix: `<name>__<tool>` |
| `transport` | `"stdio"` | Transport type: `"stdio"`, `"http"`, or `"sse"` |
| `command` | `""` | Executable to spawn (stdio only, e.g. `"bunx"`, `"python"`) |
| `args` | `[]` | Arguments for the executable (stdio only) |
| `env` | `{}` | Environment variables for the spawned process (stdio only) |
| `url` | unset | Server URL for http/sse transports |
| `headers` | `{}` | HTTP headers for http/sse transports (e.g. `Authorization`) |
| `tool_timeout_secs` | `180` | Per-server tool call timeout in seconds (max 600) |

**Stdio (local child process):**

```toml
[mcp]
enabled = true

[[mcp.servers]]
name = "filesystem"
command = "bunx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
```

**HTTP/SSE (remote server):**

```toml
[[mcp.servers]]
name = "slack"
transport = "sse"
url = "https://mcp.slack.com/mcp"
[mcp.servers.headers]
Authorization = "Bearer xoxb-..."
```

Tools from MCP servers appear as `{server_name}__{tool_name}` (e.g. `filesystem__read_file`, `slack__list_channels`).

Notes:

- Daemon startup connects to all enabled MCP servers and registers their tools into the runtime tool registry.
- If an MCP server exits after startup, its tools are unavailable until the daemon restarts; reconnection is a follow-up feature.
- SSE transport is currently synchronous (HTTP POST per call); persistent event-stream support is tracked separately.
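The naming and timeout rules above can be expressed compactly (constant and function names are illustrative, not the PR's actual identifiers): the timeout defaults to 180 s when unset and is hard-capped at 600 s, and registered tools are prefixed with the server name.

```rust
const DEFAULT_TOOL_TIMEOUT_SECS: u64 = 180;
const MAX_TOOL_TIMEOUT_SECS: u64 = 600;

/// Resolves the per-server tool timeout: default 180 s when unset,
/// hard-capped at 600 s.
fn effective_tool_timeout(configured: Option<u64>) -> u64 {
    configured
        .unwrap_or(DEFAULT_TOOL_TIMEOUT_SECS)
        .min(MAX_TOOL_TIMEOUT_SECS)
}

/// Registered tool names are prefixed with the server name.
fn qualified_tool_name(server: &str, tool: &str) -> String {
    format!("{server}__{tool}")
}

fn main() {
    assert_eq!(effective_tool_timeout(None), 180);
    assert_eq!(effective_tool_timeout(Some(300)), 300);
    assert_eq!(effective_tool_timeout(Some(900)), 600); // capped
    assert_eq!(
        qualified_tool_name("filesystem", "read_file"),
        "filesystem__read_file"
    );
}
```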


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail

echo "=== Find all config-reference variants (all locales) ==="
find docs -name "config-reference*" -type f 2>/dev/null | sort

echo
echo "=== List all .md files in docs with locale patterns ==="
fd -a '\.md$' docs | grep -E '(zh-CN|ja|ru|fr|vi)' | head -20 || true

echo
echo "=== Check docs directory structure for i18n ==="
ls -la docs/ | grep -E '^d' || true

echo
echo "=== Check if localized docs exist in subdirectories ==="
find docs -type d -name '*i18n*' -o -name '*zh*' -o -name '*ja*' -o -name '*ru*' -o -name '*fr*' -o -name '*vi*' 2>/dev/null | sort || true
```

Repository: zeroclaw-labs/zeroclaw

Length of output: 2897


🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail

echo "=== Check MCP section in Vietnamese config-reference files ==="
for file in docs/config-reference.vi.md docs/i18n/vi/config-reference.md docs/vi/config-reference.md; do
  if [ -f "$file" ]; then
    echo "--- $file ---"
    grep -n "MCP\|mcp" "$file" | head -5 || echo "(no MCP references found)"
    echo
  fi
done

echo "=== Check if other locales have ANY config-reference variant ==="
fd "config-reference" docs --extension md | sort

echo
echo "=== Search for explicit deferral/i18n notes in README, SUMMARY, allowed doc locations ==="
rg -n "config-reference.*defer|deferr.*config-reference|locali.*config|i18n.*config" docs/README*.md docs/SUMMARY*.md README*.md 2>/dev/null || echo "(no deferral notes found)"

echo
echo "=== Check SUMMARY files for config-reference references across locales ==="
rg -n "config-reference" docs/SUMMARY*.md | head -20
```

Repository: zeroclaw-labs/zeroclaw

Length of output: 1720


Update Vietnamese config-reference.md files with MCP section, or explicitly document localization deferral.

The MCP config section (lines 632-684) added to docs/config-reference.md was not propagated to Vietnamese localized variants (docs/config-reference.vi.md, docs/i18n/vi/config-reference.md, docs/vi/config-reference.md). Chinese (zh-CN), Japanese (ja), Russian (ru), and French (fr) lack localized config-reference files entirely and fall back to the EN version via SUMMARY references. Per coding guidelines, either sync all localized docs in this PR or explicitly document deferral in README/SUMMARY/docs-hub.

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/config-reference.md` around lines 632 - 684, The new "## `[mcp]`" MCP
config section was added only to the English config-reference; synchronize this
addition to the Vietnamese localized copies (the Vietnamese config-reference
variants) or, if you intend to defer localization, add an explicit deferral note
in the docs hub/README/SUMMARY stating localization is postponed for the `[mcp]`
section and why; update the localized files to include the exact `[mcp]` section
content (including the `[[mcp.servers]]` table and examples) or add the deferral
notice in the corresponding SUMMARY/README so users know the EN section is
intentionally not yet translated.

Comment thread src/daemon/mod.rs
Comment on lines +13 to +26
if let Err(_e) = check_port_available(&host, port).await {
// Port is in use - check if it's our daemon
if is_zeroclaw_daemon_running(&host, port).await {
tracing::info!("ZeroClaw daemon already running on {host}:{port}");
println!("βœ“ ZeroClaw daemon already running on http://{host}:{port}");
println!(" Use 'zeroclaw restart' to restart, or 'zeroclaw status' to check health.");
return Ok(());
}
// Something else is using the port
bail!(
"Port {port} is already in use by another process. \
Run 'lsof -i :{port}' to identify it, or use a different port."
);
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Inspect preflight error handling and bind/address parsing path.
rg -n -C3 'check_port_available|AddrInUse|SocketAddr|already in use' src/daemon/mod.rs

# Inspect gateway host validation constraints in config.
rg -n -C2 'gateway\.host must not be empty|pub host: String' src/config/schema.rs

Repository: zeroclaw-labs/zeroclaw

Length of output: 1840


Explicitly discriminate port-conflict errors from other preflight failures.

At line 13, all errors from check_port_available are treated as port conflicts. Combined with address parsing at line 347, failures like invalid host format or permission denied are misreported as "already in use."

Change check_port_available to return std::io::Result<()> and bind directly with (host, port):

πŸ› οΈ Proposed fix
```diff
     if let Err(e) = check_port_available(&host, port).await {
+       if e.kind() != std::io::ErrorKind::AddrInUse {
+           bail!("Failed to bind {host}:{port} during pre-flight check: {e}");
+       }
        // Port is in use - check if it's our daemon
        if is_zeroclaw_daemon_running(&host, port).await {
            tracing::info!("ZeroClaw daemon already running on {host}:{port}");
            println!("βœ“ ZeroClaw daemon already running on http://{host}:{port}");
            println!("  Use 'zeroclaw restart' to restart, or 'zeroclaw status' to check health.");
            return Ok(());
        }
        // Something else is using the port
        bail!(
            "Port {port} is already in use by another process. \
             Run 'lsof -i :{port}' to identify it, or use a different port."
        );
    }

-async fn check_port_available(host: &str, port: u16) -> Result<()> {
-    let addr: std::net::SocketAddr = format!("{host}:{port}").parse()?;
-    match tokio::net::TcpListener::bind(addr).await {
-        Ok(listener) => {
-            drop(listener);
-            Ok(())
-        }
-        Err(e) if e.kind() == std::io::ErrorKind::AddrInUse => {
-            bail!("Port {} is already in use", port)
-        }
-        Err(e) => bail!("Failed to check port {}: {}", port, e),
-    }
+async fn check_port_available(host: &str, port: u16) -> std::io::Result<()> {
+    let listener = tokio::net::TcpListener::bind((host, port)).await?;
+    drop(listener);
+    Ok(())
 }
```
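The distinction the fix relies on can be exercised with a std-only sketch: rebinding a port that is already held fails with `ErrorKind::AddrInUse`, while other preflight failures (permission denied, invalid address) surface different kinds and must be reported separately.

```rust
use std::io::ErrorKind;
use std::net::TcpListener;

fn main() {
    // Bind an ephemeral port, then bind it again to provoke AddrInUse.
    let first = TcpListener::bind("127.0.0.1:0").expect("first bind");
    let addr = first.local_addr().expect("local addr");

    match TcpListener::bind(addr) {
        // Only this kind means "another socket already holds the port".
        Err(e) if e.kind() == ErrorKind::AddrInUse => {
            println!("port {} is in use, as expected", addr.port());
        }
        // Anything else (PermissionDenied, AddrNotAvailable, ...) is a
        // different failure and must not be reported as a port conflict.
        Err(e) => panic!("unexpected error kind: {:?}", e.kind()),
        Ok(_) => panic!("second bind unexpectedly succeeded"),
    }
}
```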
πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/daemon/mod.rs` around lines 13 - 26, Change check_port_available to
return std::io::Result<()> (no host/port tuple) and update the caller in mod.rs
to explicitly discriminate io::Error kinds: when check_port_available(&host,
port).await returns Err(e) inspect e.kind() and only treat
io::ErrorKind::AddrInUse as a port-conflict (then call
is_zeroclaw_daemon_running(&host, port) and log/print as before); for any other
error kinds (e.g., AddrNotAvailable, PermissionDenied, InvalidInput) propagate
or bail with the original error context so invalid host format or permission
failures are not misreported as "already in use". Ensure references to
check_port_available and is_zeroclaw_daemon_running are updated accordingly.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

πŸ€– Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/i18n/README.md`:
- Around line 29-34: Remove the duplicate "## Structure" block that causes
MD024: locate the second "## Structure" header in docs/i18n/README.md and delete
that redundant header and its duplicate links, then merge the unique
compatibility note (the Compatibility Vietnamese paths: `docs/vi/` and
`docs/*.vi.md`) into the remaining canonical "Structure" section (the block that
lists Docs structure map and Canonical Vietnamese tree `docs/i18n/vi/`) so
navigation is not duplicated and CI lint (MD024) will pass.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between ee2bb63 and dab0ee5.

πŸ“’ Files selected for processing (2)
  • Cargo.toml
  • docs/i18n/README.md
πŸ’€ Files with no reviewable changes (1)
  • Cargo.toml

Comment thread docs/i18n/README.md Outdated
Comment on lines +29 to +34
## Structure

- Docs structure map (language/part/function): [../structure/README.md](../structure/README.md)
- Canonical Vietnamese tree: `docs/i18n/vi/`
- Compatibility Vietnamese paths: `docs/vi/` and `docs/*.vi.md`


⚠️ Potential issue | 🟠 Major

Remove the duplicate Structure section (currently breaks CI lint).

Line 29 reintroduces ## Structure, which triggers MD024 and fails both CI and Docs Deploy. This block also duplicates nearby navigation content; keep a single structure section and fold only the unique compatibility note into it.

Proposed fix
```diff
-## Structure
-
-- Docs structure map (language/part/function): [../structure/README.md](../structure/README.md)
-- Canonical Vietnamese tree: `docs/i18n/vi/`
-- Compatibility Vietnamese paths: `docs/vi/` and `docs/*.vi.md`
+- Compatibility Vietnamese paths: `docs/vi/` and `docs/*.vi.md`
```

Based on learnings: β€œTreat docs navigation as product UX: … keep top-level nav concise; avoid duplicative links across adjacent nav blocks …”.

πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
## Structure
- Docs structure map (language/part/function): [../structure/README.md](../structure/README.md)
- Canonical Vietnamese tree: `docs/i18n/vi/`
- Compatibility Vietnamese paths: `docs/vi/` and `docs/*.vi.md`
- Compatibility Vietnamese paths: `docs/vi/` and `docs/*.vi.md`
🧰 Tools
πŸͺ› GitHub Actions: CI Run

[error] 29-29: MD024/no-duplicate-heading Multiple headings with the same content [Context: "Structure"]

πŸͺ› GitHub Actions: Docs Deploy

[error] 29-29: Markdown linting failed: MD024 no-duplicate-heading - Multiple headings with the same content. Context: 'Structure'.

πŸͺ› markdownlint-cli2 (0.21.0)

[warning] 29-29: Multiple headings with the same content

(MD024, no-duplicate-heading)

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/i18n/README.md` around lines 29 - 34, Remove the duplicate "##
Structure" block that causes MD024: locate the second "## Structure" header in
docs/i18n/README.md and delete that redundant header and its duplicate links,
then merge the unique compatibility note (the Compatibility Vietnamese paths:
`docs/vi/` and `docs/*.vi.md`) into the remaining canonical "Structure" section
(the block that lists Docs structure map and Canonical Vietnamese tree
`docs/i18n/vi/`) so navigation is not duplicated and CI lint (MD024) will pass.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
src/config/schema.rs (1)

661-670: ⚠️ Potential issue | 🟠 Major

Encrypt MCP headers/env before persisting config

Line 663 and Line 670 add secret-bearing maps, but Config::save/Config::load_or_init do not encrypt/decrypt config.mcp.servers[*].env and config.mcp.servers[*].headers, so MCP credentials are stored plaintext at rest.

πŸ” Suggested patch
```diff
@@
             decrypt_channel_secrets(&store, &mut config.channels_config)?;
+            for (server_idx, server) in config.mcp.servers.iter_mut().enumerate() {
+                for (key, value) in server.env.iter_mut() {
+                    decrypt_secret(
+                        &store,
+                        value,
+                        &format!("config.mcp.servers[{server_idx}].env.{key}"),
+                    )?;
+                }
+                for (key, value) in server.headers.iter_mut() {
+                    decrypt_secret(
+                        &store,
+                        value,
+                        &format!("config.mcp.servers[{server_idx}].headers.{key}"),
+                    )?;
+                }
+            }
@@
         encrypt_channel_secrets(&store, &mut config_to_save.channels_config)?;
+        for (server_idx, server) in config_to_save.mcp.servers.iter_mut().enumerate() {
+            for (key, value) in server.env.iter_mut() {
+                encrypt_secret(
+                    &store,
+                    value,
+                    &format!("config.mcp.servers[{server_idx}].env.{key}"),
+                )?;
+            }
+            for (key, value) in server.headers.iter_mut() {
+                encrypt_secret(
+                    &store,
+                    value,
+                    &format!("config.mcp.servers[{server_idx}].headers.{key}"),
+                )?;
+            }
+        }
```
πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config/schema.rs` around lines 661 - 670, Config currently persists MCP
secret maps plaintext: ensure Config::save encrypts each server.env and
server.headers (config.mcp.servers[*].env and config.mcp.servers[*].headers)
before writing, and Config::load_or_init decrypts them after reading; locate the
MCP server struct in src/config/schema.rs and the save/load implementations
(Config::save, Config::load_or_init) and apply symmetric encryption/decryption
using the project's crypto helper (or add one) so the stored JSON/YAML contains
encrypted strings and the in-memory Config retains plaintext maps for runtime.
🧹 Nitpick comments (1)
src/tools/mcp_client.rs (1)

32-37: Consider using plain u64 instead of AtomicU64 inside Mutex.

next_id is already protected by the Mutex<McpServerInner>, so AtomicU64 provides no additional benefit. A plain u64 would simplify the code.

Suggested refactor
```diff
 struct McpServerInner {
     config: McpServerConfig,
     transport: Box<dyn McpTransportConn>,
-    next_id: AtomicU64,
+    next_id: u64,
     tools: Vec<McpToolDef>,
 }
```

And in call_tool:

```diff
-        let id = inner.next_id.fetch_add(1, Ordering::Relaxed);
+        let id = inner.next_id;
+        inner.next_id += 1;
```
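The reasoning is that atomics only pay for themselves when data is shared without a lock; under a `Mutex` guard, a plain integer is equivalent and simpler. A minimal sketch with illustrative names (not the PR's actual types):

```rust
use std::sync::Mutex;

/// Simplified stand-in for McpServerInner: the counter is only ever
/// touched while the Mutex is held, so a plain u64 suffices.
struct Inner {
    next_id: u64,
}

/// Allocates a monotonically increasing request id under the lock.
fn next_request_id(inner: &Mutex<Inner>) -> u64 {
    let mut guard = inner.lock().expect("poisoned lock");
    let id = guard.next_id;
    guard.next_id += 1;
    id
}

fn main() {
    let inner = Mutex::new(Inner { next_id: 1 });
    assert_eq!(next_request_id(&inner), 1);
    assert_eq!(next_request_id(&inner), 2);
    assert_eq!(inner.lock().unwrap().next_id, 3);
}
```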
πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tools/mcp_client.rs` around lines 32 - 37, Replace the AtomicU64 field
with a plain u64 on McpServerInner (change the type of next_id to u64), and
update any code that used AtomicU64 methods (notably call_tool) to mutate
next_id while holding the Mutex<McpServerInner> guard (e.g., increment next_id
directly on the locked inner and use that value for IDs) instead of calling
fetch_add; ensure all accesses to next_id occur under the Mutex guard and remove
any unnecessary atomic imports/usages.
πŸ€– Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/config/schema.rs`:
- Around line 647-667: Config::validate currently doesn't enforce MCP
invariants: check McpTransport and url, and enforce tool_timeout_secs bounds. In
Config::validate (or the impl for the config struct) add validation: if
transport != McpTransport::Stdio then ensure url.is_some() and return an Err
with a clear message if missing; for tool_timeout_secs, if Some(secs) ensure
secs <= 600 (and optionally > 0) otherwise return an Err indicating the hard
cap; also document that unset tool_timeout_secs will default to 180 where the
config is consumed. Use the field names transport, url, and tool_timeout_secs
and the McpTransport enum to locate the code to modify.

---

Duplicate comments:
In `@src/config/schema.rs`:
- Around line 661-670: Config currently persists MCP secret maps plaintext:
ensure Config::save encrypts each server.env and server.headers
(config.mcp.servers[*].env and config.mcp.servers[*].headers) before writing,
and Config::load_or_init decrypts them after reading; locate the MCP server
struct in src/config/schema.rs and the save/load implementations (Config::save,
Config::load_or_init) and apply symmetric encryption/decryption using the
project's crypto helper (or add one) so the stored JSON/YAML contains encrypted
strings and the in-memory Config retains plaintext maps for runtime.

---

Nitpick comments:
In `@src/tools/mcp_client.rs`:
- Around line 32-37: Replace the AtomicU64 field with a plain u64 on
McpServerInner (change the type of next_id to u64), and update any code that
used AtomicU64 methods (notably call_tool) to mutate next_id while holding the
Mutex<McpServerInner> guard (e.g., increment next_id directly on the locked
inner and use that value for IDs) instead of calling fetch_add; ensure all
accesses to next_id occur under the Mutex guard and remove any unnecessary
atomic imports/usages.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between 7c03719 and a0f0e48.

β›” Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
πŸ“’ Files selected for processing (8)
  • docs/i18n/README.md
  • src/config/mod.rs
  • src/config/schema.rs
  • src/tools/mcp_client.rs
  • src/tools/mcp_protocol.rs
  • src/tools/mcp_tool.rs
  • src/tools/mcp_transport.rs
  • src/tools/mod.rs
🚧 Files skipped from review as they are similar to previous changes (3)
  • src/tools/mod.rs
  • docs/i18n/README.md
  • src/tools/mcp_tool.rs

Comment thread src/config/schema.rs Outdated
Comment on lines +647 to +667
/// Transport type: "stdio" (default), "sse", or "http".
#[serde(default)]
pub transport: McpTransport,
/// URL for SSE/HTTP transports (e.g. "http://localhost:8080/mcp").
/// Required for non-stdio transports.
#[serde(default)]
pub url: Option<String>,
/// Executable to spawn for stdio transport (e.g. `"npx"`, `"python"`).
/// Ignored for SSE/HTTP transports.
#[serde(default)]
pub command: String,
/// Arguments passed to the executable (stdio only).
#[serde(default)]
pub args: Vec<String>,
/// Optional environment variables for the spawned process (stdio only).
#[serde(default)]
pub env: std::collections::HashMap<String, String>,
/// Per-server foreground tool call timeout (seconds).
/// Default: 180s if unset. Maximum allowed: 600s (hard cap applied).
#[serde(default)]
pub tool_timeout_secs: Option<u64>,
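The fields above could be exercised by a config along these lines (assuming a TOML config file; adapt to JSON/YAML as the project stores it — server names, paths, and URLs are illustrative, not from this PR):

```toml
[[mcp.servers]]
name = "local-files"                # hypothetical server name
transport = "stdio"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
env = { LOG_LEVEL = "debug" }
tool_timeout_secs = 120             # optional; defaults to 180, hard cap 600

[[mcp.servers]]
name = "remote-tools"               # hypothetical
transport = "sse"
url = "http://localhost:8080/mcp"   # required for non-stdio transports
```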
Contributor


⚠️ Potential issue | 🟠 Major

Enforce MCP invariants in Config::validate

Line 651 and Line 665 define contract-level constraints (URL required for non-stdio; timeout bounded), but these are not validated in Config::validate, so invalid MCP configs pass parsing and fail later at runtime.

✅ Suggested validation block
@@
         // Delegate coordination runtime safety bounds.
         if self.coordination.enabled && self.coordination.lead_agent.trim().is_empty() {
@@
         if self.coordination.max_seen_message_ids == 0 {
             anyhow::bail!("coordination.max_seen_message_ids must be greater than 0");
         }
+
+        // MCP server config
+        for (i, server) in self.mcp.servers.iter().enumerate() {
+            if server.name.trim().is_empty() {
+                anyhow::bail!("mcp.servers[{i}].name must not be empty");
+            }
+
+            match server.transport {
+                McpTransport::Stdio => {
+                    if server.command.trim().is_empty() {
+                        anyhow::bail!(
+                            "mcp.servers[{i}].command must not be empty when transport = 'stdio'"
+                        );
+                    }
+                }
+                McpTransport::Http | McpTransport::Sse => {
+                    let url = server
+                        .url
+                        .as_deref()
+                        .map(str::trim)
+                        .filter(|v| !v.is_empty())
+                        .ok_or_else(|| {
+                            anyhow::anyhow!(
+                                "mcp.servers[{i}].url is required when transport is 'http' or 'sse'"
+                            )
+                        })?;
+                    let parsed = reqwest::Url::parse(url)
+                        .with_context(|| format!("mcp.servers[{i}].url is not a valid URL"))?;
+                    if !matches!(parsed.scheme(), "http" | "https") {
+                        anyhow::bail!("mcp.servers[{i}].url must use http/https");
+                    }
+                }
+            }
+
+            if let Some(timeout) = server.tool_timeout_secs {
+                if timeout == 0 || timeout > 600 {
+                    anyhow::bail!(
+                        "mcp.servers[{i}].tool_timeout_secs must be in range 1..=600 when set"
+                    );
+                }
+            }
+        }
 
         Ok(())
     }

As per coding guidelines: "For config/schema changes in Rust, treat keys as public contract: document defaults, compatibility impact, and migration/rollback path".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config/schema.rs` around lines 647-667, Config::validate currently
doesn't enforce the MCP invariants: checking McpTransport against url, and
bounding tool_timeout_secs. In Config::validate (or the impl for the config
struct) add validation: if transport != McpTransport::Stdio, ensure
url.is_some() and return an Err with a clear message if it is missing; for
tool_timeout_secs, if Some(secs), ensure secs <= 600 (and optionally > 0),
otherwise return an Err indicating the hard cap; also document that an unset
tool_timeout_secs defaults to 180 where the config is consumed. Use the
field names transport, url, and tool_timeout_secs and the McpTransport enum to
locate the code to modify.

@dwillitzer
Contributor Author

This PR is ready for review. Lint/format issues addressed, ready for maintainer approval. Tagging for visibility: @theonlyhennygod @willsarg

Please review when you have a chance. No further updates planned from my end.

@dwillitzer
Contributor Author

Hi @theonlyhennygod,

I noticed this PR was closed. Could you share the reasoning? I'm happy to address any concerns or resubmit under a different approach if needed.

The MCP client integration was designed to support the upcoming tool ecosystem; if there's a different direction you'd prefer, I'm glad to align with it.

Thanks for your time on #1847; glad to see that one land!

@dwillitzer
Contributor Author

Historical Reference

This PR's MCP client implementation was merged via PR #2013 (zeroclaw-labs/issue-1380-mcp-main).

Code Preservation

Comparing this PR's code with what landed in main:

| File | Status |
| --- | --- |
| mcp_protocol.rs | ✅ Merged verbatim (126 lines) |
| mcp_client.rs | ✅ Merged verbatim (357 lines, import path updates only) |
| mcp_tool.rs | ✅ Merged verbatim (68 lines) |
| mcp_transport.rs | 🔧 Enhanced (285 → 875 lines): added SSE stream state tracking, async signaling, notification handling |

Summary

Core architecture preserved. Transport layer expanded 3x for production robustness (SSE edge cases, stream state management).

Closing this PR as superseded by #2013; the maintainers integrated the feature via their internal branch.


Thanks for the collaboration on this one. 🤝


Labels

- `dependencies`: Auto scope: dependency manifest/lock/policy changed.
- `docs`: Auto scope: docs/markdown/template files changed.
- `risk: low`: Auto risk: docs/chore-only paths.
- `size: XS`: Auto size: <=80 non-doc changed lines.
- `tests`: Auto scope: tests/** changed.
- `tool`: Auto scope: src/tools/** changed.
