feat(mcp): AD4M MCP Server — AI Agent Integration with Dynamic SHACL Tools (#665)
Adds a new mcp-server crate that exposes AD4M functionality via the Model Context Protocol (MCP). This enables AI agents to interact with AD4M through a standard protocol.

Tools implemented:
- list_perspectives: List all perspectives
- get_perspective: Get a perspective snapshot with all links
- query_links: Query links with source/target/predicate filters
- add_link: Add new links to perspectives
- run_prolog: Execute Prolog queries for complex reasoning
- create_perspective: Create new perspectives

Uses rmcp (Rust MCP SDK) v0.15.0 with stdio transport for Claude Desktop compatibility. HTTP transport for remote access is planned.

Closes #663
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the review settings.
📝 Walkthrough

This PR introduces MCP (Model Context Protocol) server support to the rust-executor crate, enabling AI agents to interact with AD4M Subject operations through a standard protocol. It adds a stdio-based MCP server with handlers for perspectives, subject classes, queries, data access, creation, commands, and inference, along with authentication tools and comprehensive integration tests.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Agent as AI Agent
    participant MCP as MCP Server<br/>(stdio)
    participant Handler as Ad4mMcpHandler
    participant Auth as Auth System
    participant AD4M as AD4M Core<br/>(JS Handle)
    Agent->>MCP: Tool Call (e.g., query_subjects)
    MCP->>Handler: Route to Tool Method
    Handler->>Auth: Validate Auth Token
    Auth-->>Handler: Token Valid
    Handler->>AD4M: Query Perspective<br/>(Prolog)
    AD4M-->>Handler: Query Result
    Handler->>Handler: Format JSON Response
    Handler-->>MCP: Tool Result
    MCP-->>Agent: Response
```
```mermaid
sequenceDiagram
    participant Executor as Executor Process
    participant Server as start_mcp_server()
    participant Context as McpContext
    participant Handler as Ad4mMcpHandler
    participant Transport as Stdio Transport
    Executor->>Server: Startup with config
    Server->>Context: Create with handles
    Server->>Handler: Initialize Handler
    Server->>Transport: Create stdio transport
    Server->>Handler: serve(transport)
    Handler->>Transport: Listen for requests
    Transport-->>Handler: MCP Tool Call
    Handler->>Handler: Execute Tool Logic
    Handler-->>Transport: Return Result
    Transport-->>Handler: Next Request
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 inconclusive
Adds MCP (Model Context Protocol) support directly into the AD4M executor, enabling AI agents to work with typed Subject models instead of raw links.

MCP Tools:
- list_perspectives: List all perspectives with metadata
- list_subject_classes: Get model types defined in a perspective
- query_subjects: Query subject instances with Prolog filters
- get_subject_data: Get all properties of a subject instance
- create_subject: Create new model instances
- execute_commands: Run actions on subjects
- infer: Run Prolog queries for complex reasoning

Architecture:
- MCP module integrated into rust-executor (not a separate binary)
- Shares auth context with GraphQL layer
- Uses rmcp v0.15.0 with stdio transport
- Calls the same perspective methods as GraphQL resolvers

Next steps:
- Add HTTP transport for remote access
- Wire up MCP server startup in the main executor
- Add authentication flow for AI agents
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
patches/@mattrglobal__node-bbs-signatures@0.11.0.patch (1)

25-28: ⚠️ Potential issue | 🔴 Critical — Mixing an ESM `import` into CommonJS files will cause a runtime `SyntaxError`.

These three files are clearly CommonJS — the surrounding code uses `exports.*`, `require()`, and `__dirname` (all CJS-only globals). Inserting a top-level `import` statement is invalid in CJS and will throw a `SyntaxError` when Node.js loads the module. If the goal is to use the `node:` protocol prefix, keep the CommonJS form:

Proposed fix

```diff
-import path from "node:path"
+var path = require("node:path");
```

Apply to all three files (`bbsSignature.js`, `bls12381.js`, `bls12381toBbs.js`).

Also applies to: 38-41, 51-54
🤖 Fix all issues with AI agents
In @.github/workflows/agent-language-tests.yml:
- Line 22: Update every workflow that still references the checkout action by
replacing the literal string "uses: actions/checkout@v3" with "uses:
actions/checkout@v4"; search for the exact token "uses: actions/checkout@v3"
across the PR and change each occurrence to "uses: actions/checkout@v4" so the
workflows run on the supported Node version.
In @.github/workflows/publish_staging.yml:
- Line 56: Update all GitHub Actions usages to their current major versions:
replace occurrences of "actions/checkout@v3" with "actions/checkout@v4",
"actions/setup-node@v3" with "actions/setup-node@v4", "actions/setup-go@v4" with
"actions/setup-go@v5", and "actions/setup-python@v4" with
"actions/setup-python@v5" throughout the workflow file (search for the literal
strings "uses: actions/checkout@v3", "uses: actions/setup-node@v3", "uses:
actions/setup-go@v4", and "uses: actions/setup-python@v4"); ensure you update
every instance flagged (including the occurrences referenced in the review) so
the workflow uses Node/GitHub-supported runtime versions and re-run CI to
confirm no transient breaking changes.
- Around line 514-515: Replace the unsafe install-and-echo pattern for
cargo-workspaces with a proper presence check and conditional install (locate
the `cargo install cargo-workspaces || echo "cargo-workspaces already
installed"` command and change it to first check for `cargo-workspaces` in PATH
and only run `cargo install cargo-workspaces` if missing), and stop passing the
registry token on the command line to `cargo login` (locate the `cargo login ${{
secrets.CARGO_REGISTRY_TOKEN }}` invocation and instead set the
CARGO_REGISTRY_TOKEN environment variable in the job/step and invoke `cargo
login` without the secret argument).
In `@mcp-server/src/main.rs`:
- Around line 19-21: The CLI token argument (token: Option<String>) leaks
secrets via process listings; update the Clap arg on the token field to read
from an environment variable fallback (e.g., AD4M_TOKEN) so the value can be
supplied via environment instead of CLI. Specifically, modify the #[arg(...)]
annotation for the token field (token: Option<String>) to include env =
"AD4M_TOKEN" (and consider removing short flag or making it discouraged) so Clap
will use AD4M_TOKEN when the flag isn’t provided; keep the field name token and
Option<String> type unchanged. Ensure documentation/comments and any help text
reflect the AD4M_TOKEN env fallback.
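The fix described above boils down to "CLI flag if present, else environment variable". A std-only sketch of that fallback behavior (the real change would be clap's `#[arg(long, env = "AD4M_TOKEN")]` attribute with the `env` feature enabled; `resolve_token` is a hypothetical helper for illustration):

```rust
use std::env;

// Std-only sketch of the fallback semantics clap's `env = "AD4M_TOKEN"`
// attribute provides: use the CLI value when given, otherwise read
// AD4M_TOKEN from the environment so the secret never appears in `ps` output.
fn resolve_token(cli_token: Option<String>) -> Option<String> {
    cli_token.or_else(|| env::var("AD4M_TOKEN").ok())
}

fn main() {
    env::set_var("AD4M_TOKEN", "secret-from-env");
    // Flag provided: the CLI value wins
    assert_eq!(resolve_token(Some("cli".to_string())), Some("cli".to_string()));
    // Flag omitted: the environment fallback is used
    assert_eq!(resolve_token(None), Some("secret-from-env".to_string()));
    println!("ok");
}
```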
In `@mcp-server/src/server.rs`:
- Around line 145-165: The tool method list_perspectives currently returns a
plain String and encodes errors as Strings, which prevents MCP clients from
distinguishing failures; change its signature to return Result<String, String>
and propagate errors via map_err and ?: call self.get_client().await.map_err(|e|
format!("Error getting client: {}", e))? to get the client, call
client.perspectives.all().await.map_err(|e| format!("Error listing perspectives:
{}", e))? for perspectives, convert the perspectives to JSON and return
Ok(serde_json::to_string_pretty(&result).map_err(|e| format!("Error serializing
result: {}", e))?), ensuring all failure paths return Err(String) rather than
embedding errors in a successful String.
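The suggested signature change can be sketched with the client calls stubbed out (a minimal illustration, not the real `Ad4mMcpServer` code — the actual method is async and calls `self.get_client()`):

```rust
// Errors propagate as Err(String) — which rmcp can surface as isError —
// instead of being embedded in a "successful" String.
fn get_client(ok: bool) -> Result<&'static str, String> {
    if ok { Ok("client") } else { Err("connection refused".to_string()) }
}

fn list_perspectives(ok: bool) -> Result<String, String> {
    let _client = get_client(ok).map_err(|e| format!("Error getting client: {}", e))?;
    // ... perspectives.all() and serde_json::to_string_pretty would go here,
    // each with their own map_err + `?`
    Ok("[]".to_string())
}

fn main() {
    assert_eq!(list_perspectives(true), Ok("[]".to_string()));
    assert!(list_perspectives(false).unwrap_err().starts_with("Error getting client"));
    println!("ok");
}
```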
In `@rust-executor/src/mcp/tools.rs`:
- Around line 197-207: The code interpolates user values p.class_name and
p.query (filter) directly into the Prolog string in rust-executor::mcp::tools
(the block that builds query with subject_class and instance), which allows
breaking or altering the query; fix by escaping double-quote characters in
p.class_name before formatting (replace " with \" or an equivalent Prolog-safe
escape) and use that escaped value in the format calls that build the query
string; for p.query (the filter) either validate or clearly document that
query_subjects accepts raw Prolog/filter syntax (or implement a lightweight
sanitizer/whitelist) so consumers know it is not treated as a literal.
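The escaping step described above might look like this (a hypothetical helper; the real fix may prefer proper Prolog atom quoting or parameterized queries over string interpolation):

```rust
// Escape backslashes and double quotes before embedding a user-supplied
// value inside a Prolog string literal, so it cannot terminate the literal
// and inject extra goals.
fn escape_prolog_string(input: &str) -> String {
    input.replace('\\', "\\\\").replace('"', "\\\"")
}

fn main() {
    // Attempted break-out of the quoted class name
    let class_name = "Todo\"), evil(\"x";
    let query = format!("subject_class(\"{}\", C)", escape_prolog_string(class_name));
    // The injected quote is escaped and stays inside the literal
    assert_eq!(query, "subject_class(\"Todo\\\"), evil(\\\"x\", C)");
    println!("{}", query);
}
```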
- Around line 306-308: The parsing of p.parameters into parameters currently
swallows JSON parse errors by calling
serde_json::from_str(...).unwrap_or_default(), which can hide malformed input;
change this to surface and propagate the error instead: replace the
map/unwrap_or_default chain for p.parameters with code that attempts
serde_json::from_str(params_str) and maps any Err into a proper Result error
(e.g., using map_or_else or map + transpose or
serde_json::from_str(params_str).map_err(|e| anyhow!(...))? ) so the function
returns/propagates the parse error to the caller; reference the
variables/functions Parameter, parameters, p.parameters, and
serde_json::from_str in your change and mirror the same fix used for
initial_values handling.
- Around line 263-264: The current expression that builds initial_values
(p.initial_values.as_ref().and_then(|v| serde_json::from_str(v).ok())) swallows
JSON parse errors; replace this with parsing that surfaces errors (e.g., use
p.initial_values.as_ref().map(|v| serde_json::from_str(v)).transpose() or an
explicit match on serde_json::from_str) so that serde_json::Error is propagated
or returned from the enclosing function instead of being silently converted to
None; update the enclosing function's return type to Result if necessary and
ensure you handle or bubble up errors from serde_json::from_str when creating
initial_values.
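The `.map(...).transpose()` pattern recommended for both `parameters` and `initial_values` can be shown with a std-only stand-in for `serde_json::from_str` (here `parse_json` is a hypothetical parser used only for illustration):

```rust
// Option<Result<T, E>> becomes Result<Option<T>, E>, so a parse error
// surfaces to the caller instead of silently becoming None.
fn parse_json(s: &str) -> Result<i32, String> {
    s.trim().parse::<i32>().map_err(|e| format!("parse error: {}", e))
}

fn main() {
    let absent: Option<&str> = None;
    assert_eq!(absent.map(parse_json).transpose(), Ok(None));

    let good: Option<&str> = Some("42");
    assert_eq!(good.map(parse_json).transpose(), Ok(Some(42)));

    // With .and_then(|v| parse_json(v).ok()) this error would vanish into None
    let bad: Option<&str> = Some("{not json}");
    assert!(bad.map(parse_json).transpose().is_err());
    println!("ok");
}
```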
- Around line 170-171: Currently calls like
get_auth_token().await.unwrap_or_default() feed an empty string into
AgentContext::from_auth_token(), which via user_email_from_token() yields None
and sets is_main_agent=true; update each MCP tool site (where get_auth_token is
used, e.g., the occurrences around
get_auth_token()/AgentContext::from_auth_token() at lines ~170, ~210, ~256,
~285, ~325, ~388) to explicitly handle the None case: either return an error
when get_auth_token().await returns None, or construct a restricted AgentContext
for unauthenticated sessions (e.g., via a new helper that accepts Option<String>
and returns Result<AgentContext, Error> or builds a non-main AgentContext when
token is None), and replace the unwrap_or_default() usage with this validation
helper so unauthenticated calls cannot silently escalate to main-agent
privileges.
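The validation helper suggested above can be sketched as follows. `AgentContext` here is a stand-in struct, not the real rust-executor type; the key property is that a missing or empty token yields `Err` rather than a default context with `is_main_agent = true`:

```rust
#[derive(Debug, PartialEq)]
struct AgentContext {
    is_main_agent: bool,
    user_email: Option<String>,
}

fn agent_context_from_token(token: Option<String>) -> Result<AgentContext, String> {
    match token {
        None => Err("unauthenticated MCP session: no auth token available".to_string()),
        Some(t) if t.is_empty() => Err("unauthenticated MCP session: empty auth token".to_string()),
        Some(t) => Ok(AgentContext {
            // The real implementation would derive this from user_email_from_token(&t);
            // here the session is simply marked non-main for illustration.
            is_main_agent: false,
            user_email: Some(format!("user-for-{}", t)),
        }),
    }
}

fn main() {
    assert!(agent_context_from_token(None).is_err());
    assert!(agent_context_from_token(Some(String::new())).is_err());
    assert!(!agent_context_from_token(Some("tok".into())).unwrap().is_main_agent);
    println!("ok");
}
```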
🧹 Nitpick comments (14)
.github/workflows/publish_staging.yml (3)

70-73: Inconsistent `pnpm install` flags between Linux and macOS jobs.

Linux (lines 70, 73) uses `--no-frozen-lockfile`, while macOS (lines 225, 228) omits it. This means macOS will use `--frozen-lockfile` by default (if configured in `.npmrc` or pnpm settings), potentially causing divergent dependency resolution or build failures on one platform but not the other. Pick one approach and apply it consistently.

Also applies to: 224-228
313-389: Large commented-out Windows block — consider removing or tracking as a follow-up.

~75 lines of dead code make the workflow harder to maintain. Additionally, if this block is ever re-enabled, line 331 references `steps.changed_extract_version` instead of `steps.extract_version`, which would be a bug.

Consider removing the block and tracking Windows support re-enablement in an issue instead.
24-26: Deno version inconsistency across jobs.

`create-release` pins Deno `v1.32.4` (line 26) while `build-launcher-binary-macos` uses `v2.x` (line 185). If both jobs need Deno, this version gap (v1 vs v2) could lead to subtle behavioral differences. Align on one version unless there's a specific reason for the divergence.

Also applies to: 182-185
.github/workflows/p-diff-sync-tests.yml (1)
29-189: Consider a matrix strategy to reduce duplication across the 6 test jobs.The
pull,render,revisions,signals,stress, andtelepresencejobs are identical except for the finalpnpm run test-<name>command. A matrix strategy would collapse ~160 lines into ~30:Example matrix approach
- pull: - name: Pull Test - runs-on: ubuntu-latest - container: - image: coasys/ad4m-ci-linux:latest@sha256:3d6e8b6357224d689345eebd5f9da49ee5fd494b3fd976273d0cf5528f6903ab - env: - npm_config_build_from_source: true - steps: - ... - render: - ... + test: + name: ${{ matrix.test }} Test + runs-on: ubuntu-latest + container: + image: coasys/ad4m-ci-linux:latest@sha256:3d6e8b6357224d689345eebd5f9da49ee5fd494b3fd976273d0cf5528f6903ab + env: + npm_config_build_from_source: true + strategy: + fail-fast: false + matrix: + test: [pull, render, revisions, signals, stress, telepresence] + steps: + - uses: actions/checkout@v4 + - name: Verify Rust toolchain + run: rustc --version && cargo --version && hc --version + - name: Install dependencies + run: pnpm install --no-frozen-lockfile + - name: Build languages + run: pnpm run build-languages + - name: Run tests + run: | + cd bootstrap-languages/p-diff-sync/hc-dna/zomes/tests + pnpm install --no-frozen-lockfile + deno install + pnpm run test-${{ matrix.test }}mcp-server/src/server.rs (3)
70-76: `Arc<RwLock<Option<String>>>` for an immutable token is unnecessary overhead.

The token is set once in `new()` and only ever read via `get_client()`. An `Arc<str>` or `Option<String>` (since the struct already derives `Clone`) would be simpler. If you plan to support token refresh (e.g., via the unused `_admin_credential`), keeping `RwLock` is fine — but add a comment noting the intent.

Simplified alternative (if no mutation is planned)

```diff
 pub struct Ad4mMcpServer {
     executor_url: String,
-    token: Arc<RwLock<Option<String>>>,
+    token: Option<String>,
     tool_router: ToolRouter<Self>,
 }
```

And in `new()`:

```diff
 Ok(Self {
     executor_url,
-    token: Arc::new(RwLock::new(token)),
+    token,
     tool_router: Self::tool_router(),
 })
```

And in `get_client()`:

```diff
 async fn get_client(&self) -> Result<Ad4mClient> {
-    let token = self.token.read().await;
-    let t = token.clone()
+    let t = self.token.clone()
         .ok_or_else(|| anyhow::anyhow!("No authentication token configured"))?;
     Ok(Ad4mClient::new(self.executor_url.clone(), t))
 }
```
145-165: Duplicate link-to-JSON serialization logic.

Lines 176-183 (`get_perspective`) and 215-222 (`query_links`) contain identical link serialization. Extract a helper to reduce duplication:

Proposed helper

```diff
+    fn serialize_link(link: &ad4m_client::LinkExpression) -> serde_json::Value {
+        json!({
+            "source": link.data.source,
+            "target": link.data.target,
+            "predicate": link.data.predicate,
+            "author": link.author,
+            "timestamp": link.timestamp,
+        })
+    }
```

Then in both tool methods:

```diff
-    let links: Vec<serde_json::Value> = snapshot.links.iter().map(|link| {
-        json!({ ... })
-    }).collect();
+    let links: Vec<serde_json::Value> = snapshot.links.iter().map(Self::serialize_link).collect();
```

Also applies to: 201-231
97-120: Connection validation failure is silently swallowed.

`new()` logs a warning on connection failure (line 110) but proceeds to construct the server anyway. This means the server will happily start and then fail on every tool call if the token or URL is invalid. Consider failing fast or at least emitting a more prominent warning that tool calls will fail.

mcp-server/Cargo.toml (1)
13-37: Remove unused `thiserror` dependency.

The code uses `anyhow::Result<T>` and `anyhow::anyhow!()` for error handling throughout `main.rs` and `server.rs`. No custom error types are defined that would require `thiserror`, making this dependency redundant.

Proposed fix

```diff
 # Error handling
 anyhow = "1"
-thiserror = "1"
```

bootstrap-languages/direct-message-language/hc-dna/tests/sweettest/src/test_direct_message.rs (1)
13-13: `conductors` doesn't need to be `mut`.

`conductors` is declared as `mut` in all four tests (Lines 13, 51, 88, 112) but is never mutated — only indexed for reads and passed by reference. Remove the `mut` to avoid compiler warnings.

Example fix (repeat for all tests)

```diff
-    let (mut conductors, cells) = setup_conductors(2, true).await;
+    let (conductors, cells) = setup_conductors(2, true).await;
```

rust-executor/src/mcp/server.rs (2)
23-44: `start_mcp_server` blocks indefinitely — ensure it's spawned, not awaited directly.

`running.waiting().await?` on Line 41 blocks until the MCP server shuts down. When this is eventually wired into the executor's `run()` function, it must be `tokio::spawn`'d (or run on a separate thread) to avoid blocking the GraphQL server and other services. Consider documenting this in the function's doc comment.

Additionally, since the executor uses stdio for the MCP transport, ensure no other component writes to stdout (logging via `env_logger` defaults to stderr, so that should be fine, but worth keeping in mind).

Suggested doc comment addition

```diff
 /// Start the MCP server on stdio
 ///
 /// This is designed to be run alongside the GraphQL server, sharing the same
 /// JsCoreHandle for executing AD4M operations.
+///
+/// **Note:** This function blocks until the MCP server shuts down. Callers should
+/// spawn it in a separate task (e.g., `tokio::spawn`) to avoid blocking other services.
+/// The stdio transport assumes exclusive ownership of stdin/stdout — ensure no other
+/// component writes to stdout while the MCP server is running.
```
13-17: Consider wrapping `admin_credential` in `Arc` to avoid cloning secrets.

`McpContext` derives `Clone`, which means `admin_credential: Option<String>` gets deep-copied on every clone. Since this is a sensitive credential, wrapping it in `Arc<Option<String>>` (like `auth_token`) would limit the number of copies of the secret in memory.

rust-executor/src/lib.rs (1)
10-10: MCP server module is declared but not started.

`pub mod mcp;` is added at line 9, but `start_mcp_server` is never called in the `run()` function (lines 67–212). The server won't launch when the executor starts. Since the PR notes indicate "wiring startup" as a next step, consider adding a TODO comment to make this explicit for future work.

bootstrap-languages/direct-message-language/hc-dna/tests/sweettest/src/utils.rs (1)
79-108: Serde workaround for private fields is reasonable but fragile.

The JSON round-trip to construct `LinkExpression` (because its fields are private) will silently break if the upstream type's serialization schema changes. A brief note in the comment pointing to the upstream type would help future maintainers. Low priority.

rust-executor/src/mcp/tools.rs (1)
165-184: Repeated perspective-lookup + auth boilerplate across every tool.

Every tool duplicates the `get_perspective` → `get_auth_token` → `AgentContext::from_auth_token` → match pattern. Extracting a small helper would cut ~6 lines per tool and centralize the auth-enforcement fix.

Sketch of a shared helper

```rust
async fn resolve_perspective(&self, uuid: &str) -> Result<(PerspectiveInstance, AgentContext), String> {
    let perspective = get_perspective(uuid)
        .ok_or_else(|| format!("Perspective not found: {}", uuid))?;
    let agent_context = self.agent_context().await?;
    Ok((perspective, agent_context))
}
```

Also applies to: 186-218, 220-244, 246-287, 289-330, 332-351
- Add login tool for multi-user mode (email/password)
- Add set_token tool to set capability token directly
- Add auth_status tool to check current authentication
- Add runtimeLoginUser to rust-client
- Add MCP integration plan document

Part of MCP integration work (PR #665)
Actionable comments posted: 3
🧹 Nitpick comments (6)
docs/MCP_INTEGRATION_PLAN.md (2)
135-140: Security considerations look solid but may benefit from one addition.

The security section covers capability scoping, rate limiting, audit logging, and input validation. Given that the `login` tool accepts email/password and the `set_token` tool accepts raw tokens over MCP/stdio, consider also noting the risk of credential/token exposure in MCP transport logs (stdio is plaintext) and any mitigations planned for the HTTP transport phase (e.g., TLS requirement).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/MCP_INTEGRATION_PLAN.md` around lines 135 - 140, Add a security note about credential/token exposure risk when using the login and set_token tools over MCP/stdio (which is plaintext) and during HTTP transport; update the MCP_INTEGRATION_PLAN Security Considerations to call out that login (email/password) and set_token (raw tokens) MUST not be logged in plaintext, should be transmitted only over encrypted channels (require TLS for HTTP transport), tokens should be short-lived/scoped and rotatable, and MCP/stdio use should be avoided for secrets or augmented with an encrypted tunnel or redaction middleware to prevent credential leakage in transport or logs.
14-26: Add a language identifier to the fenced code block.

The ASCII diagram block is missing a language specifier. You can use `text` or `plaintext` to satisfy the markdownlint MD040 rule and improve rendering consistency.

```diff
-```
+```text
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/MCP_INTEGRATION_PLAN.md` around lines 14 - 26, The fenced ASCII diagram block lacks a language identifier (violates MD040); update the fenced code block surrounding the diagram (the triple-backtick block containing the ASCII diagram) to include a language specifier such as "text" or "plaintext" (e.g., change ``` to ```text) so the renderer and markdown linter recognize it as plain text while preserving the ASCII art in the OpenClaw/AD4M diagram.rust-client/src/runtime.rs (2)
437-460: Consider adding a request timeout to the login HTTP call.

The `reqwest::Client::new()` created here has no timeout configured. If the executor is unreachable or hangs, this call will block indefinitely. The existing `query` utility used by all other methods in this file may have its own timeout handling, but this raw client bypasses it.

Proposed fix

```diff
-    let response_body: graphql_client::Response<login_user::ResponseData> = reqwest::Client::new()
+    let response_body: graphql_client::Response<login_user::ResponseData> = reqwest::Client::builder()
+        .timeout(std::time::Duration::from_secs(30))
+        .build()?
         .post(&executor_url)
         .json(&query)
         .send()
```
Verify each finding against the current code and only fix it if needed. In `@rust-client/src/runtime.rs` around lines 437 - 460, The login_user function currently creates a reqwest::Client with Client::new() and makes a POST without a timeout; update login_user to construct a reqwest client with a configured timeout (e.g., Client::builder().timeout(Duration::from_secs(...)).build() ) or reuse the existing query utility's client so the call to .post(&executor_url).json(&query).send().await has a bounded timeout; ensure you reference the LoginUser GraphQL request and the login_user function and add the necessary Duration import if needed.
445-460: Login errors expose raw GraphQL error details — verify this is acceptable.

Line 458 formats `response_body.errors` with `Debug`, which may include server-side error details. For a login endpoint this typically contains messages like "invalid credentials" which is fine, but confirm that the server doesn't return stack traces or internal details in error responses that would be undesirable to surface to callers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust-client/src/runtime.rs` around lines 445 - 460, The login_user function currently includes raw GraphQL error details by formatting response_body.errors in the returned anyhow error; change this so the public error returned to callers is sanitized (e.g., a generic "Login failed" or "Invalid credentials" message) while preserving the original response_body.errors in a debug/internal log for troubleshooting; update the login_user function (inspect response_body, response_body.errors and the anyhow::anyhow! call) to redact or map sensitive details before returning and emit the full errors only to a debug logger (not in the returned error).mcp-server/src/server.rs (2)
189-224: `get_perspective` returns unbounded snapshot — consider warning about large perspectives.

The `snapshot` call retrieves all links in a perspective without any limit. For perspectives with thousands of links, this could produce very large JSON responses that exceed MCP transport limits or overwhelm the AI agent's context window. At minimum, consider documenting this in the tool description, or adding an optional `limit` parameter.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mcp-server/src/server.rs` around lines 189 - 224, get_perspective currently calls client.perspectives.snapshot with no bounds, which can return thousands of links; add an optional limit to GetPerspectiveParams (e.g., limit: Option<usize>) and use it to constrain results (either by passing the limit into snapshot if the API supports it or by truncating snapshot.links to limit after retrieval), include a field like total_link_count when truncating, and update the #[tool(description = ...)] text to warn about potentially large responses; touch the get_perspective function, Parameters<GetPerspectiveParams>, and the client.perspectives.snapshot call sites to implement this behavior.
226-268: `query_links` hardcodes `None` for `from_date`, `until_date`, and `limit` — agents can't paginate.

The `from_date`, `until_date`, and `limit` parameters are all hardcoded to `None` (lines 241-243). Exposing at least `limit` in `QueryLinksParams` would let agents control result size and implement pagination, which is important for perspectives with many links.

Proposed addition to QueryLinksParams

```diff
 pub struct QueryLinksParams {
     /// Perspective UUID
     pub perspective_id: String,
     /// Source URI filter (optional)
     pub source: Option<String>,
     /// Target URI filter (optional)
     pub target: Option<String>,
     /// Predicate URI filter (optional)
     pub predicate: Option<String>,
+    /// Maximum number of links to return (optional)
+    pub limit: Option<f64>,
+    /// Filter links from this date (ISO 8601, optional)
+    pub from_date: Option<String>,
+    /// Filter links until this date (ISO 8601, optional)
+    pub until_date: Option<String>,
 }
```
Verify each finding against the current code and only fix it if needed. In `@mcp-server/src/server.rs` around lines 226 - 268, The query_links handler currently passes None for from_date, until_date, and limit to client.perspectives.query_links, preventing pagination; update the Parameters<QueryLinksParams> struct to include at least an optional limit (and optionally from_date/until_date) and then change query_links to pass those values (e.g., Some(p.limit) / p.from_date.clone() / p.until_date.clone()) into client.perspectives.query_links instead of hardcoded None so callers can control page size and date ranges; update any deserialization/validation for QueryLinksParams and ensure the call sites use the new fields in the query_links async fn and related parameter handling.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mcp-server/src/server.rs`:
- Around line 426-465: The auth_status function currently holds the RwLock read
guard (self.token.read().await) across the HTTP call to client.agent.me().await;
change it to take the token value out and drop the guard immediately before
creating Ad4mClient and calling client.agent.me().await — for example, read and
clone/unwrap the Option into a local variable (or map to an owned String) and
let the read guard go out of scope, then create Ad4mClient::new with that owned
token and perform client.agent.me().await; ensure you still handle the None case
exactly as before and keep serde_json serialization behavior unchanged.
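The clone-then-drop pattern described above can be shown with a std-only sketch (`std::sync::RwLock` and a synchronous `call_agent_me` stand in for the tokio lock and the real `client.agent.me().await`):

```rust
use std::sync::RwLock;

// Clone the token out of the lock so the read guard is dropped before the
// (potentially slow) network call, instead of holding it across the await.
fn call_agent_me(token: &str) -> String {
    format!("agent for token of {} chars", token.len())
}

fn auth_status(token: &RwLock<Option<String>>) -> String {
    // The guard lives only for this expression; it is released before the call below.
    let t: Option<String> = token.read().unwrap().clone();
    match t {
        None => "not authenticated".to_string(),
        Some(t) => call_agent_me(&t),
    }
}

fn main() {
    let token = RwLock::new(Some("abc".to_string()));
    assert_eq!(auth_status(&token), "agent for token of 3 chars");
    *token.write().unwrap() = None;
    assert_eq!(auth_status(&token), "not authenticated");
    println!("ok");
}
```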
- Around line 354-385: The login handler currently returns a "token_preview"
containing the first 20 chars of the JWT (variable jwt) which leaks JWT content;
keep storing the token in self.token (the token write block) but remove the
token_preview field from the JSON result and replace it with a non-sensitive
indicator like "token_set": true (modify the success branch in async fn login to
build the result JSON without token_preview and instead include token_set:true),
leaving the empty-token creation via Ad4mClient::new(...) unchanged.
In `@rust-client/src/runtime.gql`:
- Around line 175-177: The LoginUser GraphQL mutation in runtime.gql references
runtimeLoginUser which is not defined in the schema; either delete the LoginUser
mutation or add a matching runtimeLoginUser mutation to the schema
(tests/js/schema.gql). Locate the LoginUser mutation in runtime.gql and remove
it if the backend should not expose login, or add a corresponding mutation
definition named runtimeLoginUser(email: String!, password: String!):
<ReturnType> to the schema and implement its resolver so graphql_client can
compile; ensure the mutation name and argument types exactly match between
runtime.gql and the schema.
---
Duplicate comments:
In `@mcp-server/src/server.rs`:
- Around line 165-187: Change tool methods (starting with list_perspectives) to
return Result<String, String> instead of String; e.g., update the signature of
async fn list_perspectives(&self, _params: Parameters<ListPerspectivesParams>)
-> Result<String, String> and for each success path return Ok(<string>) and for
error paths return Err(format!(...)). In list_perspectives specifically, convert
the successful serde_json::to_string_pretty result into Ok(json) and replace all
format!("Error...") returns with Err(format!("Error ...: {}", e)). Apply the
same pattern to get_perspective, query_links, add_link, run_prolog,
create_perspective, login, set_token, auth_status so MCP clients receive Err
values (which rmcp will mark as isError: true).
---
Nitpick comments:
In `@docs/MCP_INTEGRATION_PLAN.md`:
- Around line 135-140: Add a security note about credential/token exposure risk
when using the login and set_token tools over MCP/stdio (which is plaintext) and
during HTTP transport; update the MCP_INTEGRATION_PLAN Security Considerations
to call out that login (email/password) and set_token (raw tokens) MUST not be
logged in plaintext, should be transmitted only over encrypted channels (require
TLS for HTTP transport), tokens should be short-lived/scoped and rotatable, and
MCP/stdio use should be avoided for secrets or augmented with an encrypted
tunnel or redaction middleware to prevent credential leakage in transport or
logs.
- Around line 14-26: The fenced ASCII diagram block lacks a language identifier
(violates MD040); update the fenced code block surrounding the diagram (the
triple-backtick block containing the ASCII diagram) to include a language
specifier such as "text" or "plaintext" (e.g., change ``` to ```text) so the
renderer and markdown linter recognize it as plain text while preserving the
ASCII art in the OpenClaw/AD4M diagram.
In `@mcp-server/src/server.rs`:
- Around line 189-224: get_perspective currently calls
client.perspectives.snapshot with no bounds, which can return thousands of
links; add an optional limit to GetPerspectiveParams (e.g., limit:
Option<usize>) and use it to constrain results (either by passing the limit into
snapshot if the API supports it or by truncating snapshot.links to limit after
retrieval), include a field like total_link_count when truncating, and update
the #[tool(description = ...)] text to warn about potentially large responses;
touch the get_perspective function, Parameters<GetPerspectiveParams>, and the
client.perspectives.snapshot call sites to implement this behavior.
- Around line 226-268: The query_links handler currently passes None for
from_date, until_date, and limit to client.perspectives.query_links, preventing
pagination; update the Parameters<QueryLinksParams> struct to include at least
an optional limit (and optionally from_date/until_date) and then change
query_links to pass those values (e.g., Some(p.limit) / p.from_date.clone() /
p.until_date.clone()) into client.perspectives.query_links instead of hardcoded
None so callers can control page size and date ranges; update any
deserialization/validation for QueryLinksParams and ensure the call sites use
the new fields in the query_links async fn and related parameter handling.
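The bounding suggested for `get_perspective` and `query_links` can be sketched as follows — a minimal illustration with a hypothetical `Link` stand-in for the snapshot's link type (the real types live in the AD4M client crate):

```rust
// Hypothetical stand-in for the snapshot's link type.
#[derive(Clone, Debug)]
struct Link {
    source: String,
    predicate: String,
    target: String,
}

/// Truncate a link list to an optional limit, reporting the original
/// count so callers can tell the response was cut short.
fn truncate_links(links: Vec<Link>, limit: Option<usize>) -> (Vec<Link>, usize) {
    let total_link_count = links.len();
    let links = match limit {
        Some(n) => links.into_iter().take(n).collect(),
        None => links,
    };
    (links, total_link_count)
}

fn main() {
    let links: Vec<Link> = (0..5)
        .map(|i| Link {
            source: format!("s{i}"),
            predicate: "p".into(),
            target: "t".into(),
        })
        .collect();
    let (page, total) = truncate_links(links, Some(2));
    assert_eq!((page.len(), total), (2, 5));
}
```

Passing `limit` into the query itself (when the API supports it) is preferable, since truncating after retrieval still pays the full fetch cost.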
In `@rust-client/src/runtime.rs`:
- Around line 437-460: The login_user function currently creates a
reqwest::Client with Client::new() and makes a POST without a timeout; update
login_user to construct a reqwest client with a configured timeout (e.g.,
Client::builder().timeout(Duration::from_secs(...)).build() ) or reuse the
existing query utility's client so the call to
.post(&executor_url).json(&query).send().await has a bounded timeout; ensure you
reference the LoginUser GraphQL request and the login_user function and add the
necessary Duration import if needed.
- Around line 445-460: The login_user function currently includes raw GraphQL
error details by formatting response_body.errors in the returned anyhow error;
change this so the public error returned to callers is sanitized (e.g., a
generic "Login failed" or "Invalid credentials" message) while preserving the
original response_body.errors in a debug/internal log for troubleshooting;
update the login_user function (inspect response_body, response_body.errors and
the anyhow::anyhow! call) to redact or map sensitive details before returning
and emit the full errors only to a debug logger (not in the returned error).
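The redaction pattern described above can be sketched like this (a hypothetical helper; the real code would route the detail to its debug logger rather than `eprintln!`):

```rust
/// Return a generic public error while keeping the raw upstream
/// detail out of the caller-visible message. eprintln! stands in
/// for a real debug logger here.
fn sanitize_login_error(raw_errors: &str) -> String {
    eprintln!("login failed (internal detail): {raw_errors}");
    "Login failed: invalid credentials".to_string()
}

fn main() {
    let public = sanitize_login_error("GraphQL error: no user for email alice@example.com");
    assert!(!public.contains("alice"));
    assert_eq!(public, "Login failed: invalid credentials");
}
```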
- Add mut to perspective bindings where execute_commands, get_subject_data, and create_subject are called - These methods require &mut self on PerspectiveInstance - Addresses CodeRabbit review feedback on PR #665
🧹 Nitpick comments (3)
rust-executor/src/mcp/server.rs (1)
11-17: `McpContext` fields are all `pub` — consider restricting visibility.
`js_handle`, `admin_credential`, and `auth_token` are all public. Since `McpContext` is only consumed by `Ad4mMcpHandler::new()` within the same crate, these could be `pub(crate)` to limit the API surface.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust-executor/src/mcp/server.rs` around lines 11 - 17, The struct McpContext currently exposes all fields as pub; restrict their visibility to crate-only by changing js_handle, admin_credential, and auth_token to pub(crate) since McpContext is only consumed internally (e.g., by Ad4mMcpHandler::new()); update the field declarations for McpContext to use pub(crate) to reduce the public API surface while leaving the struct itself clonable and usable within the crate.
mcp-server/src/server.rs (1)
116-139: `_admin_credential` is accepted but unused — consider removing or documenting the plan.
The parameter is prefixed with `_` to suppress warnings, but there's no TODO or doc comment explaining its intended use. If it's planned for future HTTP transport auth, a brief comment would help.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mcp-server/src/server.rs` around lines 116 - 139, The constructor method new currently accepts an unused parameter _admin_credential; either remove this parameter from the Ad4mServer::new signature and all call sites, or keep it and add a concise doc/TODO explaining intended future use (e.g., "reserved for HTTP transport admin auth") and remove the leading underscore so the compiler won't silently allow it; update the function signature and any references to the symbol _admin_credential accordingly to reflect the chosen approach.
rust-executor/src/mcp/tools.rs (1)
164-179: `list_perspectives` acquires a `Mutex` lock inside a loop — potential for lock contention.
Each iteration calls `p.persisted.lock().await` (line 170). If `persisted` is a `tokio::sync::Mutex`, this is fine for correctness, but holding/releasing the lock per iteration while iterating over all perspectives could become a bottleneck under concurrent access. For now, with typical perspective counts, this is acceptable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust-executor/src/mcp/tools.rs` around lines 164 - 179, The loop in list_perspectives holds each p.persisted.lock().await while constructing the JSON object, which can increase contention; instead, minimize lock hold time by entering a short scope where you lock p.persisted, clone or copy only the needed fields (uuid, name, shared_url, neighbourhood.is_some()) into local variables, drop the guard immediately, then push the json! created from those cloned values; reference the function list_perspectives, the iterator variable p, and the lock call p.persisted.lock().await when making this change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@mcp-server/src/server.rs`:
- Around line 165-187: Change the tool methods to return Result<String, String>
instead of plain String so MCP clients can detect errors; for example update the
signature of list_perspectives to async fn list_perspectives(&self, _params:
Parameters<ListPerspectivesParams>) -> Result<String, String> and propagate
errors by returning Err(format!(...)) (or using ? with conversions) instead of
embedding "Error ..." in an Ok string, convert serde_json::to_string_pretty(...)
failures with .map_err(|e| format!("Error serializing perspectives: {}", e)),
and map client call failures to Err(format!("Error listing perspectives: {}",
e)); apply the same pattern to other tool methods like get_perspective and
query_links so failures become Err(String) (compatible with rmcp::Result<T,E>
where E: IntoContents).
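The return-type change can be illustrated with a simplified stand-in (a hypothetical `list_things` tool; the rmcp-specific `IntoContents` wiring is omitted):

```rust
// Failures become Err(String) instead of an Ok String that merely
// starts with "Error ...", so the protocol layer can flag the
// response as an error (isError: true) for the client.
fn list_things(raw_count: &str) -> Result<String, String> {
    let n: u32 = raw_count
        .parse()
        .map_err(|e| format!("Error parsing count: {e}"))?;
    Ok(format!("{n} things"))
}

fn main() {
    assert_eq!(list_things("3"), Ok("3 things".to_string()));
    assert!(list_things("oops").unwrap_err().starts_with("Error parsing count"));
}
```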
In `@rust-executor/src/mcp/tools.rs`:
- Around line 142-151: get_agent_context currently converts a missing/empty
token into AgentContext::from_auth_token(String::new()) which sets is_main_agent
= true and inadvertently grants main-agent privileges to unauthenticated
sessions; change this by either (A) making get_agent_context return Err when no
token is present and updating callers that perform mutations (e.g.,
create_subject, execute_commands) to handle/propagate that error, or (B) split
into two helpers — get_read_only_agent_context (returns a read-only AgentContext
or a non-main fallback) and get_write_agent_context (returns
Result<AgentContext, String> and Err when token is missing) — then update
mutating tools (create_subject, execute_commands) to use the write helper and
read-only tools to use the read helper so unauthenticated requests can no longer
receive main-agent privileges.
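Option (B) above can be sketched with a mock `AgentContext` (field names are assumptions — the real type lives in rust-executor):

```rust
#[derive(Debug)]
struct AgentContext {
    is_main_agent: bool,
    token: String,
}

/// Write helper: a missing or empty token is an error, never a
/// silent escalation to main-agent privileges.
fn get_write_agent_context(token: Option<&str>) -> Result<AgentContext, String> {
    match token {
        Some(t) if !t.is_empty() => Ok(AgentContext {
            is_main_agent: false,
            token: t.to_string(),
        }),
        _ => Err("authentication required for write operations".to_string()),
    }
}

/// Read helper: usable without a token, but explicitly non-main.
fn get_read_only_agent_context(token: Option<&str>) -> AgentContext {
    AgentContext {
        is_main_agent: false,
        token: token.unwrap_or_default().to_string(),
    }
}

fn main() {
    assert!(get_write_agent_context(None).is_err());
    assert!(get_write_agent_context(Some("")).is_err());
    assert!(get_write_agent_context(Some("jwt-abc")).is_ok());
    assert!(!get_read_only_agent_context(None).is_main_agent);
}
```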
---
Nitpick comments:
In `@mcp-server/src/server.rs`:
- Around line 116-139: The constructor method new currently accepts an unused
parameter _admin_credential; either remove this parameter from the
Ad4mServer::new signature and all call sites, or keep it and add a concise
doc/TODO explaining intended future use (e.g., "reserved for HTTP transport
admin auth") and remove the leading underscore so the compiler won't silently
allow it; update the function signature and any references to the symbol
_admin_credential accordingly to reflect the chosen approach.
In `@rust-executor/src/mcp/server.rs`:
- Around line 11-17: The struct McpContext currently exposes all fields as pub;
restrict their visibility to crate-only by changing js_handle, admin_credential,
and auth_token to pub(crate) since McpContext is only consumed internally (e.g.,
by Ad4mMcpHandler::new()); update the field declarations for McpContext to use
pub(crate) to reduce the public API surface while leaving the struct itself
clonable and usable within the crate.
In `@rust-executor/src/mcp/tools.rs`:
- Around line 164-179: The loop in list_perspectives holds each
p.persisted.lock().await while constructing the JSON object, which can increase
contention; instead, minimize lock hold time by entering a short scope where you
lock p.persisted, clone or copy only the needed fields (uuid, name, shared_url,
neighbourhood.is_some()) into local variables, drop the guard immediately, then
push the json! created from those cloned values; reference the function
list_perspectives, the iterator variable p, and the lock call
p.persisted.lock().await when making this change.
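The short-scope pattern reads roughly like this — shown with `std::sync::Mutex` for a self-contained sketch, whereas the actual code uses `tokio::sync::Mutex` and `.lock().await`:

```rust
use std::sync::{Arc, Mutex};

struct Persisted {
    uuid: String,
    name: Option<String>,
}

/// Copy only the needed fields under the lock, dropping the guard
/// before building the output entry.
fn summarize(perspectives: &[Arc<Mutex<Persisted>>]) -> Vec<String> {
    let mut out = Vec::new();
    for p in perspectives {
        let (uuid, name) = {
            let guard = p.lock().unwrap();
            (guard.uuid.clone(), guard.name.clone())
        }; // guard dropped here — lock held only for the clones
        out.push(format!("{uuid}:{}", name.unwrap_or_default()));
    }
    out
}

fn main() {
    let ps = vec![
        Arc::new(Mutex::new(Persisted { uuid: "a".into(), name: Some("one".into()) })),
        Arc::new(Mutex::new(Persisted { uuid: "b".into(), name: None })),
    ];
    assert_eq!(summarize(&ps), vec!["a:one", "b:"]);
}
```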
- Fix ESM import in CommonJS patch: use require() not import - Remove short flag for token CLI arg, document AD4M_TOKEN env var preference - Add warning about Prolog injection in query_subjects tool description Security improvements for MCP integration per PR #665 review.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
patches/@mattrglobal__node-bbs-signatures@0.11.0.patch (1)
12-13: ⚠️ Potential issue | 🟡 Minor
Behavioral change: native compilation fallback is now enabled.
Changing `--fallback-to-build=false` → `--fallback-to-build` means `node-pre-gyp` will now attempt to compile the native addon from source when a prebuilt binary isn't available, instead of failing. This requires a C++ toolchain (Python, `make`, a C++ compiler) on the host. If this is intentional (e.g., to support platforms/architectures without prebuilt binaries), this is fine — just be aware it may cause slower or failing installs for users without build tools.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@patches/`@mattrglobal__node-bbs-signatures@0.11.0.patch around lines 12 - 13, The install script change enables node-pre-gyp's fallback-to-build behavior (the "install" npm script was changed from "node-pre-gyp install --fallback-to-build=false" to "node-pre-gyp install --fallback-to-build"), which will attempt native compilation when no prebuilt binary is found; either revert this patch to restore the previous strict behavior or make the intent explicit: if you want to allow source builds, keep the new flag but document the requirement for a C++ toolchain and update packaging/CI to ensure builds succeed on target platforms, otherwise restore "--fallback-to-build=false" in the "install" script to prevent automatic native compilation failures for users without build tools.
🧹 Nitpick comments (2)
rust-executor/src/mcp/tools.rs (2)
398-422: Consider adding a length/complexity limit on raw Prolog queries.
The `infer` tool passes user-supplied Prolog directly to the engine. While the description warns it's raw Prolog, an unbounded query could cause excessive computation. A simple length check or timeout would add a layer of defense.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust-executor/src/mcp/tools.rs` around lines 398 - 422, The infer tool currently forwards raw Prolog from Parameters<InferParams> to perspective.prolog_query_with_context without limits; add a defensive check in the infer method to reject or truncate overly large/complex queries (e.g., enforce a MAX_QUERY_LEN on p.query and return a clear error message if exceeded) and also wrap the async call to perspective.prolog_query_with_context in a timeout (using tokio::time::timeout or the engine's built-in timeout if available) to prevent long-running executions; update the error branches to indicate whether the failure was due to length limit or timeout so callers can react accordingly.
154-158: Single-quote escaping with `\'` is not portable to all ISO Prolog implementations.
ISO Prolog (ISO/IEC 13211-1) specifies that single quotes inside single-quoted atoms are escaped by doubling (`''`), not with a backslash. While some implementations (e.g., SWI-Prolog, SICStus) support `\'` as an extension when `character_escapes` is enabled, this is not guaranteed across all ISO-compliant Prolog systems. Since `escaped_class_name` is interpolated inside double-quoted strings, the single-quote escape may be unnecessary for your specific usage. Consider either removing the single-quote escape (if not needed) or using `''` (if you need to support stricter ISO Prolog portability).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust-executor/src/mcp/tools.rs` around lines 154 - 158, The escape_prolog_string function currently escapes single quotes with backslash (replace('\'', "\\'")), which is not ISO-Prolog portable; update escape_prolog_string to either drop the single-quote backslash escape (remove the .replace('\'', "\\'") call) if single quotes are not being interpolated into single-quoted Prolog atoms, or change it to double the single-quote (replace('\'', "''")) to follow ISO/IEC 13211-1 portability; adjust the implementation in the escape_prolog_string function accordingly and keep the existing backslash and double-quote escapes as-is.
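A portable version incorporating both concerns (ISO `''` doubling plus the newline/carriage-return escapes flagged elsewhere in this review) might look like this — a sketch, not the exact function in tools.rs:

```rust
/// Escape a string for interpolation into Prolog source. Backslashes
/// are handled first so later escapes aren't double-processed;
/// single quotes are doubled ('') per ISO/IEC 13211-1 rather than
/// backslash-escaped, for portability across Prolog systems.
fn escape_prolog_string(s: &str) -> String {
    s.replace('\\', "\\\\")
        .replace('"', "\\\"")
        .replace('\'', "''")
        .replace('\n', "\\n")
        .replace('\r', "\\r")
}

fn main() {
    assert_eq!(escape_prolog_string("it's"), "it''s");
    assert_eq!(escape_prolog_string("a\nb"), "a\\nb");
    assert_eq!(escape_prolog_string("back\\slash"), "back\\\\slash");
}
```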
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@patches/`@mattrglobal__node-bbs-signatures@0.11.0.patch:
- Around line 12-13: The install script change enables node-pre-gyp's
fallback-to-build behavior (the "install" npm script was changed from
"node-pre-gyp install --fallback-to-build=false" to "node-pre-gyp install
--fallback-to-build"), which will attempt native compilation when no prebuilt
binary is found; either revert this patch to restore the previous strict
behavior or make the intent explicit: if you want to allow source builds, keep
the new flag but document the requirement for a C++ toolchain and update
packaging/CI to ensure builds succeed on target platforms, otherwise restore
"--fallback-to-build=false" in the "install" script to prevent automatic native
compilation failures for users without build tools.
---
Duplicate comments:
In `@rust-executor/src/mcp/tools.rs`:
- Around line 142-151: The current get_agent_context silently returns
AgentContext::from_auth_token(String::new()) when no token is present,
effectively granting main-agent privileges; change get_agent_context to return
Err(...) when the auth token is missing (i.e., require a non-empty token) so
write operations cannot escalate, and create a separate explicit helper (e.g.,
get_read_only_agent_context or get_optional_agent_context) that callers may use
for read-only operations if needed; update callers such as create_subject and
execute_commands to use the strict get_agent_context (and propagate the Err)
while read-only callers use the explicit read-only helper.
---
Nitpick comments:
In `@rust-executor/src/mcp/tools.rs`:
- Around line 398-422: The infer tool currently forwards raw Prolog from
Parameters<InferParams> to perspective.prolog_query_with_context without limits;
add a defensive check in the infer method to reject or truncate overly
large/complex queries (e.g., enforce a MAX_QUERY_LEN on p.query and return a
clear error message if exceeded) and also wrap the async call to
perspective.prolog_query_with_context in a timeout (using tokio::time::timeout
or the engine's built-in timeout if available) to prevent long-running
executions; update the error branches to indicate whether the failure was due to
length limit or timeout so callers can react accordingly.
- Around line 154-158: The escape_prolog_string function currently escapes
single quotes with backslash (replace('\'', "\\'")), which is not ISO-Prolog
portable; update escape_prolog_string to either drop the single-quote backslash
escape (remove the .replace('\'', "\\'") call) if single quotes are not being
interpolated into single-quoted Prolog atoms, or change it to double the
single-quote (replace('\'', "''")) to follow ISO/IEC 13211-1 portability; adjust
the implementation in the escape_prolog_string function accordingly and keep the
existing backslash and double-quote escapes as-is.
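The pre-check half of the `infer` suggestion can be sketched as follows (`MAX_QUERY_LEN` is an assumed value; the runtime half would wrap the engine call in `tokio::time::timeout`, omitted here to keep the sketch dependency-free):

```rust
const MAX_QUERY_LEN: usize = 8_192; // assumed limit, tune as needed

/// Reject oversized raw Prolog queries before they reach the engine,
/// with an error message that names the cause so callers can react.
fn validate_query(query: &str) -> Result<(), String> {
    if query.len() > MAX_QUERY_LEN {
        return Err(format!(
            "query rejected: {} bytes exceeds the {}-byte limit",
            query.len(),
            MAX_QUERY_LEN
        ));
    }
    Ok(())
}

fn main() {
    assert!(validate_query("triple(S, P, O).").is_ok());
    assert!(validate_query(&"a".repeat(10_000)).is_err());
}
```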
Per design review, removing the standalone mcp-server binary. The embedded MCP server in rust-executor provides more powerful direct access and will use HTTP transport for Claude Desktop integration. The rust-client LoginUser addition is kept as a useful general-purpose client function.
HTTP transport implementation is blocked by hyper version conflict: - Project uses hyper 0.14 via reqwest - rmcp StreamableHttpService needs hyper 1.x For now, MCP clients can use stdio transport by running executor as subprocess. Added notes on future HTTP implementation options.
These were for the standalone mcp-server CLI path which has been removed. - Revert patch file to dev state - Revert rust-client LoginUser addition (not needed for embedded MCP)
- Add login_email tool for multi-user mode (email + password → JWT) - Add set_token tool for setting existing JWT or admin credential - Add auth_status tool for checking current authentication status - Add unit tests for auth functionality (5 tests) - Add JS integration test file for full login flow testing Supports both: 1. Local executor with admin credential 2. Remote multi-user executor with email/password login
|
@coderabbitai review Development has stabilized on this branch — the MCP server implementation is complete and all 4 CI checks pass. Please resume the review. — Data 🤖 |
|
✅ Actions performed: Review triggered.
|
Actionable comments posted: 5
🧹 Nitpick comments (2)
tests/js/tests/mcp-auth.test.ts (2)
34-72: `mcpRequest` helper is defined but never used; it also has issues that should be fixed before adoption.
The tests exercise authentication via GraphQL (`Ad4mClient`) rather than the MCP stdio transport. If this helper is intended for future use:
- No request-ID matching (Line 52): resolves on any response with `result`/`error`, which could return a mismatched response if notifications or other messages arrive first.
- Timeout not cleared on success (Line 67): `setTimeout` is never cancelled after `resolve()`, causing a spurious `off('data', onData)` call.
- `Date.now()` as ID can collide if two requests are sent within the same millisecond.
Consider removing this dead code or fixing these issues if it's planned for upcoming tests.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/js/tests/mcp-auth.test.ts` around lines 34 - 72, The mcpRequest helper is unused and has three bugs: it resolves on any result/error rather than matching the JSON-RPC id, it never clears the timeout on success, and it uses Date.now() which can collide; to fix, update mcpRequest to generate a unique request id (e.g., incrementing counter or UUID), only resolve when a parsed message's id matches the request id and has result/error, clear the timeout and remove the stdout listener before resolving or rejecting, and ensure the timeout is cleared on both success and failure (and consider removing the helper if you intend to keep it unused).
20-31: Tests don't exercise MCP stdio transport despite the file header describing it.
The docblock outlines a 4-step approach including "Spawning the MCP server as a subprocess" and "Sending MCP tool calls via stdin," but no test actually does this — they all use the GraphQL API. This means the MCP tools (`login_email`, `set_token`, `auth_status` in `tools.rs`) have no integration test coverage through the MCP protocol path.
Consider adding at least one test that spawns the MCP server process and exercises `set_token` + `auth_status` via stdio to validate the end-to-end MCP flow.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/js/tests/mcp-auth.test.ts` around lines 20 - 31, Add an actual stdio-based integration test to mcp-auth.test.ts that spawns the MCP server process and exercises the MCP tools via stdin/stdout: after creating the multi-user executor and a test user via the existing GraphQL helper calls, spawn the MCP server subprocess (same binary/entry used in production), write MCP tool JSON requests for "set_token" with the user's token and then "auth_status" to the subprocess stdin, read/parse the subprocess stdout responses, and assert the expected success/identity fields; reference the MCP tool names login_email, set_token, and auth_status and the test file tests/js/tests/mcp-auth.test.ts to locate where to add the new test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@rust-executor/src/mcp/tools.rs`:
- Around line 624-629: The auth_status response currently returns
"authenticated": true for tokens that can't be decoded; update the logic in the
auth_status handler (the function producing that json!({...}).to_string()) so
that when a stored token is neither a valid JWT nor the admin credential (i.e.,
decoding/validation fails) the JSON sets "authenticated": false (or alternately
include a distinct field like "status": "invalid") and keep "token_type":
"unknown" and the explanatory message; locate the json!({...}).to_string() block
shown in the diff and change the authenticated value and/or add a status field
so callers do not treat unvalidated tokens as authenticated.
- Around line 183-188: The escape_prolog_string function currently only escapes
backslashes, double quotes and single quotes, which allows newline/carriage
characters in class_name to break the Prolog double-quoted string; update
escape_prolog_string to also replace '\n' with "\\n" and '\r' with "\\r"
(keeping the existing order so backslashes are escaped first) so that newlines
and carriage returns are safely represented in the Prolog string.
- Around line 194-209: list_perspectives currently returns all perspectives
without authenticating; call get_agent_context() at the start of
list_perspectives and require a valid AgentContext in multi-user mode, returning
an error or empty list for unauthenticated sessions, then filter the
perspectives returned by all_perspectives() to only include those owned by the
authenticated agent (compare the agent's DID/ID to the appropriate field on the
persisted handle, e.g., handle.owner / handle.agent / handle.uuid owner
metadata) before serializing the result; use the existing get_agent_context,
list_perspectives, and all_perspectives symbols to locate where to add the auth
check and filtering.
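The fixed `auth_status` decision logic can be sketched as follows (JSON assembled with `format!` here; the real handler uses serde_json's `json!` macro):

```rust
enum TokenType {
    Jwt,
    Admin,
    Unknown,
}

/// Tokens that decode as neither a JWT nor the admin credential are
/// reported as NOT authenticated, instead of defaulting to true.
fn auth_status_json(token_type: TokenType) -> String {
    let (authenticated, type_str) = match token_type {
        TokenType::Jwt => (true, "jwt"),
        TokenType::Admin => (true, "admin"),
        TokenType::Unknown => (false, "unknown"),
    };
    format!(
        "{{\"authenticated\": {}, \"token_type\": \"{}\"}}",
        authenticated, type_str
    )
}

fn main() {
    assert!(auth_status_json(TokenType::Unknown).contains("\"authenticated\": false"));
    assert!(auth_status_json(TokenType::Jwt).contains("\"authenticated\": true"));
}
```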
In `@tests/js/tests/mcp-auth.test.ts`:
- Around line 138-144: Rename the misleading test or implement MCP interaction:
either change the it(...) description "should be able to set admin token via
MCP" to something like "should report unlocked status via GraphQL admin client"
to match the actual check that calls adminAd4mClient!.agent.status(), or replace
the body to exercise MCP stdio by invoking the mcpRequest helper and the
set_token tool (simulate MCP input/output, send set_token via mcpRequest and
assert token set) so the test actually verifies MCP behavior; update references
to the test name string and the call sites adminAd4mClient!.agent.status() or
add usage of mcpRequest accordingly.
- Around line 121-128: The unbounded loop in the after() hooks that repeatedly
calls executorProcess?.kill() while waiting for executorProcess?.killed can hang
CI; modify both after() blocks to use a bounded retry loop (e.g., maxAttempts =
10) that calls executorProcess.kill() and await sleep(...) each iteration, and
if the process is still not killed after maxAttempts escalate by sending a hard
kill (executorProcess.kill('SIGKILL')) and then break; ensure you still check
executorProcess for existence and clear the loop after the process is killed or
after the escalation attempt.
---
Duplicate comments:
In `@rust-executor/src/mcp/tools.rs`:
- Around line 171-181: get_agent_context currently maps None/empty tokens to
AgentContext::from_auth_token(String::new()) which sets is_main_agent=true;
change it so the None/empty branch returns an Err instead of creating a
main-agent context, and update write-only tools (e.g., create_subject and
execute_commands) to call require_agent_context() (or otherwise propagate the
error) instead of get_agent_context so unauthenticated sessions cannot silently
escalate to main-agent privileges; keep read-only callers using a safe
non-privileged path or an explicit allowlist if needed.
---
Nitpick comments:
In `@tests/js/tests/mcp-auth.test.ts`:
- Around line 34-72: The mcpRequest helper is unused and has three bugs: it
resolves on any result/error rather than matching the JSON-RPC id, it never
clears the timeout on success, and it uses Date.now() which can collide; to fix,
update mcpRequest to generate a unique request id (e.g., incrementing counter or
UUID), only resolve when a parsed message's id matches the request id and has
result/error, clear the timeout and remove the stdout listener before resolving
or rejecting, and ensure the timeout is cleared on both success and failure (and
consider removing the helper if you intend to keep it unused).
- Around line 20-31: Add an actual stdio-based integration test to
mcp-auth.test.ts that spawns the MCP server process and exercises the MCP tools
via stdin/stdout: after creating the multi-user executor and a test user via the
existing GraphQL helper calls, spawn the MCP server subprocess (same
binary/entry used in production), write MCP tool JSON requests for "set_token"
with the user's token and then "auth_status" to the subprocess stdin, read/parse
the subprocess stdout responses, and assert the expected success/identity
fields; reference the MCP tool names login_email, set_token, and auth_status and
the test file tests/js/tests/mcp-auth.test.ts to locate where to add the new
test.
- Fix get_agent_context to require auth, add get_agent_context_for_read for read-only ops - Fix escape_prolog_string to also escape newlines and carriage returns - Fix auth_status to return authenticated:false for invalid tokens - Update read-only tools to use get_agent_context_for_read - Write operations (create_subject, execute_commands, run_prolog) still require auth - Add escape tests for newline characters
When enable_mcp is true, the executor runs in MCP mode (AI agent subprocess mode): - Reads MCP JSON-RPC requests from stdin - Writes responses to stdout - Designed for Claude Desktop and similar AI tools to spawn as subprocess Normal mode (enable_mcp false/unset) runs GraphQL server as before.
- Add mcp-integration.test.ts with full workflow simulation - Test subject class introspection (Channel, Message from Flux) - Test subject instance creation and querying - Test property getting/setting - Test link relationships between subjects - Add Prolog query tests - Include proper MCP request helper with ID matching - Skipped stdio transport tests (require special setup)
BREAKING: MCP server now uses HTTP transport instead of stdio. Changes: - Updated rmcp to 0.15.0 with transport-streamable-http-server feature - Added axum 0.8 for HTTP server - MCP server listens on port 3001 by default (configurable via --mcp-port) - Added --enable-mcp and --mcp-port CLI flags - Added enable_mcp and mcp_port to Ad4mConfig - Created HTTP-based test file (mcp-http.test.ts) The MCP server is now accessible via HTTP POST requests to http://localhost:3001/ using standard MCP JSON-RPC protocol. This follows Nico's directive: HTTP only, no stdio.
# Conflicts: # deno.lock
- Add MCP tools: add_link, query_links, add_sdna, add_perspective - Run MCP server alongside GraphQL (not instead of) - Add enableMcp parameter to test utils startExecutor - Full 15-step integration test: init → auth → create perspective → register Channel/Message SHACL classes → create instances → link message to channel → query and verify round-trip - All operations via HTTP MCP protocol (no AD4M client fallback)
…ject_collection, add_to_collection, remove_from_collection Addresses Nico's review: MCP tools should work at the subject class abstraction level, not raw links. LLMs can now set properties by name, query collections, and add/remove items without knowing predicates. Tools resolve property/collection names to predicates via SHACL shape links stored in the perspective. Test updated to use high-level tools instead of manual add_link calls. Channel SHACL definition extended with messages collection.
…s main agent
In multi-user mode (admin_credential configured), the call_tool() auth gate
was ineffective: capabilities_from_token("", Some(cred)) returns Ok with
minimal caps (not Err), so the 'capabilities.is_err()' check passed for
unauthenticated requests. An attacker with no JWT could then reach
get_readable_perspective(), which treated no-JWT as main-agent context
(user_email: None → is_main_agent: true), exposing all main-agent perspectives.
Fixes:
1. call_tool(): when admin_credential is set and stored token is empty, return
auth error immediately — before falling through to capabilities check.
2. get_readable_perspective(): defense-in-depth auth check in admin mode via
get_agent_context() (requires non-empty stored token).
3. Remove get_agent_context_for_read(): this function explicitly fell back to
main-agent context when unauthenticated. Remove all call sites (the results
were always discarded — _agent_context) and delete the function. Auth is
now enforced at the call_tool() and get_readable_perspective() layer.
Single-user mode (no admin_credential) is unaffected: capabilities_from_token
still grants ALL_CAPABILITY for empty tokens, matching GQL localhost behavior.
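The fixed gate boils down to this check (a distilled sketch; the real code still runs the capabilities check after it):

```rust
/// In multi-user mode (admin_credential configured), an empty stored
/// token is rejected up front — capabilities_from_token("", ...)
/// would otherwise return Ok with minimal caps and slip through an
/// is_err() check.
fn check_call_tool_auth(admin_credential: Option<&str>, stored_token: &str) -> Result<(), String> {
    if admin_credential.is_some() && stored_token.is_empty() {
        return Err("authentication required: call login_email or set_token first".to_string());
    }
    Ok(())
}

fn main() {
    // Multi-user mode without a token: rejected.
    assert!(check_call_tool_auth(Some("cred"), "").is_err());
    // Multi-user mode with a token: passes this gate.
    assert!(check_call_tool_auth(Some("cred"), "some-jwt").is_ok());
    // Single-user mode: empty token still allowed (matches GQL localhost behavior).
    assert!(check_call_tool_auth(None, "").is_ok());
}
```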
…proofs Profile links were manually constructed with empty/invalid signatures. Now uses create_signed_expression() to properly sign each link with the agent's key, matching how link signing works elsewhere in the codebase. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Deduplicate mcpHttpRequest, callMcpTool, parseSSEStream, initializeMcp, and listMcpTools from mcp-http.test.ts and mcp-neighbourhood.test.ts into a shared mcp-utils.ts module. All functions take mcpBaseUrl as the first parameter for portability across test suites. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add test section 5 to mcp-auth.test.ts that enables multi-user mode, signs up a user via MCP, logs in with email+password to get a JWT, and verifies the JWT works for authenticated operations. Also switches the file to use the shared mcp-utils.ts helpers. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…xt, TLS/SMTP docs, remove misplaced content
Regression tests verifying the security fix in call_tool() + get_readable_perspective():
unauthenticated sessions must not be able to read perspective data even when they
know a valid perspective UUID.
Before the fix, query_links / get_models / query_subjects would succeed with no JWT
in admin mode because capabilities_from_token('', Some(cred)) returned Ok(minimal_caps)
and the call_tool() gate only checked .is_err(). get_readable_perspective() then treated
the missing JWT as main-agent context, exposing all main-agent perspectives.
New tests (section 6):
- query_links rejects unauthenticated access to a known perspective UUID
- get_models rejects unauthenticated access to a known perspective UUID
- query_subjects rejects unauthenticated access to a known perspective UUID
Each test: authenticates + creates a perspective, then opens a fresh unauthenticated
session and tries to read the same perspective — expects an auth error in the response.
callMcpTool throws on JSON-RPC error responses (after auth bypass fix). Tests now catch both thrown errors and returned error objects/strings.
marvin-bot-coasys
left a comment
✅ Approved
Final review pass — branch is ready to merge.
Auth Security (the main concern)
All three auth vulnerabilities are fixed and regression-tested:
- `call_tool()` gate — in multi-user mode (admin_credential set), an empty token now gets an explicit rejection before the capabilities check. Previously `capabilities_from_token("", Some(cred))` returned `Ok(minimal_caps)` and the `is_err()` check let unauthenticated requests through.
- `get_readable_perspective()` defense-in-depth — requires a valid token in admin mode before checking perspective ownership. Previously, missing JWT → `user_email: None` → main-agent context → all main-agent perspectives exposed.
- `get_agent_context_for_read()` removed — the function explicitly fell back to main-agent context on missing token. Gone.
- Regression tests (section 6 in `mcp-auth.test.ts`) — three tests proving a fresh unauthenticated session cannot read an authenticated user's perspective via `query_links`, `get_models`, or `query_subjects`.
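The essence of the gate fix can be sketched as a small decision function (illustrative TypeScript — the real gate lives in Rust inside `call_tool()`; names here are assumptions for the sketch):

```typescript
// Sketch of the fixed multi-user auth gate: when an admin credential is
// configured, an empty session token is rejected outright instead of being
// handed to the capabilities check, which used to return minimal
// capabilities for "" and slip past the .is_err() check.
function shouldReject(token: string | null, adminCredentialSet: boolean): boolean {
  if (adminCredentialSet && (!token || token.length === 0)) {
    // Explicit rejection before any capabilities lookup.
    return true;
  }
  // Otherwise fall through to the normal capabilities check.
  return false;
}
```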
Other Review Items Addressed
- Profile links use `create_signed_expression()` ✅
- Perspective isolation reuses `can_access_perspective()` from GQL resolvers ✅
- No hardcoded Flux types — subscriptions.rs derives from SHACL; dynamic.rs from constructor actions ✅
- DRY access patterns via `get_perspective_with_auth()` ✅
- SHACL module extracted ✅
- Test utils shared in `mcp-utils.ts` ✅
- Email auth tested (signup + login without SMTP) ✅
- Docs: mcp.mdx well-structured, GraphQL as appendix, SHACL in JSON format ✅
- Skill: Holochain context, TLS/SMTP docs, subject-class-first approach ✅
Minor (not blocking — follow-up)
- `dynamic.rs` at ~1200 lines — down from 3500+ but could split further
- Subscription SHACL path reading noted as future work in comments
Ship it.
Summary
Adds a full MCP (Model Context Protocol) server embedded in the AD4M executor, enabling AI agents to interact with AD4M through a standardized protocol. The core innovation is dynamic tool generation from SHACL subject class schemas — register a data model and the MCP server auto-generates typed CRUD tools for it.
This is designed as the primary interface for AI agents working with AD4M. Agents work with structured subject classes through dynamically generated tools rather than raw GraphQL.
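The class-first naming scheme described above can be sketched as a name generator (illustrative TypeScript — the real generator is Rust in `dynamic.rs`, and this function's name and signature are assumptions):

```typescript
// Sketch of class-first dynamic tool naming: per registered SHACL class,
// one CRUD set plus per-property setters and per-collection accessors.
function toolNamesForClass(
  className: string,
  properties: string[],
  collections: string[],
): string[] {
  const base = [
    `${className}_create`,
    `${className}_query`,
    `${className}_get`,
    `${className}_delete`,
  ];
  const props = properties.map((p) => `${className}_set_${p}`);
  const colls = collections.flatMap((c) => [
    `${className}_get_${c}`,
    `${className}_add_${c}`,
    `${className}_remove_${c}`,
  ]);
  return [...base, ...props, ...colls];
}
```

For a `channel` class with a `name` property and a `messages` collection this yields eight tools, matching the examples listed under Dynamic Tools below.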
What AI Agents Can Do
Architecture
Embedded HTTP MCP Server — runs inside the rust-executor when `enable_mcp: true`:
- MCP served over `POST /mcp`
- Calls into `JsCoreHandle` with GraphQL for consistent state

MCP Tool Modules (~5,100 lines Rust)
`mod.rs`, `dynamic.rs`, `subjects.rs`, `profiles.rs`, `perspectives.rs`, `auth.rs`, `flows.rs`, `children.rs`, `neighbourhoods.rs`, `subscriptions.rs`, `shacl.rs` (`ShaclClass`, `ShaclProperty`, `load_classes()`), `server.rs`

Static MCP Tools (38 tools)
Authentication (7)
`auth_status`, `login_email`, `signup`, `request_login_verification`, `verify_email_code`, `request_capability`, `generate_jwt`

Perspectives & Links (5)
`list_perspectives`, `add_perspective`, `add_link`, `query_links`, `infer`

Schema & Models (2)
`get_models`, `add_model`

Subject Operations (9)
`query_subjects`, `get_subject_data`, `create_subject`, `set_subject_property`, `get_subject_collection`, `add_to_collection`, `remove_from_collection`, `get_subject_children` (via `ad4m://has_child`, filterable by class), `delete_subject`

Children (2)
`add_child`, `get_children`

Neighbourhoods (2)
`neighbourhood_publish_from_perspective`, `neighbourhood_join_from_url`

Flows (6)
`add_flow`, `get_flows`, `flow_state`, `flow_actions`, `flow_start`, `flow_run_action`

Agent & Profile (5)
`get_agent_profile`, `set_agent_profile`, `set_agent_profile_picture`, `get_agent_public_perspective`, `set_agent_public_perspective`

Subscriptions & Commands (2)
`generate_waker_query`, `execute_commands`

Dynamic Tools (auto-generated per SHACL class)
When a model is registered via `add_model` (JSON format), tools are auto-generated with class-first naming:

| Tool pattern | Example |
| --- | --- |
| `{class}_create` | `channel_create` |
| `{class}_query` | `channel_query` |
| `{class}_get` | `channel_get` |
| `{class}_delete` | `channel_delete` |
| `{class}_set_{prop}` | `channel_set_name` |
| `{class}_get_{coll}` | `channel_get_messages` |
| `{class}_add_{coll}` | `channel_add_messages` |
| `{class}_remove_{coll}` | `channel_remove_messages` |

AD4M Waker Bridge
Standalone Node.js module (`waker-bridge/`) that watches AD4M perspectives and wakes an OpenClaw agent when data changes:
- Listens for `perspectiveLinkAdded` over the GraphQL WebSocket
- Configured via `waker-config.json`

OpenClaw Skill (`skills/ad4m/`)

Bot-first skill file for AI agents to onboard onto AD4M:
- Covers TLS (`--tls-cert`/`--tls-key`) and SMTP for multi-user setups

Security
- Session auth token held in `Arc<RwLock<Option<String>>>` — no cross-session leaks
- `create_signed_expression` for proper cryptographic signing
- `escape_prolog_string()` escapes special characters

Tests (~1,928 lines TypeScript)
`mcp-http.test.ts`, `mcp-auth.test.ts`, `mcp-neighbourhood.test.ts`, `mcp-utils.ts`

Launcher Integration
- MCP server enabled via `Ad4mConfig` on startup
`docs-src/pages/developer-guides/mcp.mdx` — Setup, authentication, tool reference, SHACL examples (JSON format)
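To give a feel for the SHACL-in-JSON examples those docs describe, here is a hypothetical minimal model of the kind passed to `add_model` — the field names (`name`, `properties`, `collections`, `datatype`) are illustrative assumptions, not the exact schema:

```typescript
// Hypothetical minimal subject-class definition: a "Channel" class with one
// property and one collection, from which the dynamic generator would derive
// tools such as channel_create, channel_set_name, and channel_get_messages.
const channelModel = {
  name: "Channel",
  properties: [{ name: "name", datatype: "string", required: true }],
  collections: [{ name: "messages" }],
};

const payload = JSON.stringify(channelModel);
```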
CLI
Claude Desktop / OpenClaw
```json
{ "mcpServers": { "ad4m": { "url": "http://localhost:3001/mcp" } } }
```
Known Limitations
- `flow_start` and `flow_run_action` are stubs (return not-yet-implemented)
- `subscribe_to_model` generates waker config, not a live subscription
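For clients talking to the endpoint directly rather than through an MCP SDK, a `tools/call` request is a plain JSON-RPC 2.0 envelope POSTed to `/mcp` (per the MCP spec); the sketch below builds one, using a tool name from this PR's static set as the example:

```typescript
// Build a JSON-RPC 2.0 envelope for an MCP tools/call request.
function buildToolCall(id: number, tool: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
}

// Example: serialize a list_perspectives call for POSTing to /mcp.
const body = JSON.stringify(buildToolCall(1, "list_perspectives", {}));
```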