feat(jobs): per-job MCP server filtering and max_iterations cap #1243
serrrfirat merged 6 commits into nearai:staging
Conversation
Summary of Changes (Gemini Code Assist)
This pull request enhances job configuration flexibility and security by introducing per-job control over MCP server access and worker agent iteration limits. These changes allow more granular resource management and improved isolation between job types, ensuring that jobs only access the tools they need and operate within defined computational boundaries.
Code Review
This pull request introduces per-job MCP server filtering and a max_iterations cap, which are valuable additions for scoping job permissions and managing resources. The implementation is solid, with new parameters correctly propagated through the job creation flow and handled during container setup. The temporary file management for per-job configurations is also well-implemented.
My review includes a few suggestions to enhance maintainability and robustness, primarily by addressing hardcoded paths and refining error handling. Additionally, I've noted that several functions now have a large number of arguments; while acceptable for this change, it would be beneficial to consider using a builder pattern or an options struct in the future to improve code clarity.
Code Review
This pull request introduces per-job configuration for MCP servers and iteration limits, which is a great enhancement for security and resource management. The implementation is generally solid, with good use of feature flags and temporary file handling. I've included a few suggestions to improve error handling and code maintainability.
zmanian left a comment
Review: REQUEST CHANGES
Good architecture -- per-job MCP filtering via temp config bind-mounted read-only into containers, behind a feature gate. But two critical bugs in the max_iterations implementation.
Critical
1. max_iterations is completely dead code
The cap is injected as IRONCLAW_MAX_ITERATIONS env var, but the worker binary reads --max-iterations from clap CLI args (default 50). The env var is never consumed anywhere in the codebase. Jobs always use 50 iterations regardless of what the user specifies.
Fix: add env = "IRONCLAW_MAX_ITERATIONS" to the clap arg definition in src/cli/mod.rs:
```rust
#[arg(long, env = "IRONCLAW_MAX_ITERATIONS", default_value = "50")]
```
2. `max_iterations: 0` is allowed
No minimum bound in the parsing at job.rs: `max_iterations: 0` creates a job that immediately terminates with zero loop iterations. Fix: add `.max(1)` after `.min(500)`.
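Both fixes can be sketched together. This is a minimal stand-in without the clap dependency (the env var name and the 50/500 bounds come from the review; the function names are illustrative):

```rust
use std::env;

// Fix #2: clamp the requested cap to [1, 500]. A bare `.min(500)` lets 0
// through, producing a job that exits before its first loop iteration.
fn clamp_iterations(requested: u64) -> u64 {
    requested.clamp(1, 500)
}

// Fix #1 (sketched with std::env instead of clap's `env = "..."` attribute):
// the worker must actually consume the env var the orchestrator injects,
// falling back to the default of 50 when it is absent or malformed.
fn effective_max_iterations() -> u64 {
    env::var("IRONCLAW_MAX_ITERATIONS")
        .ok()
        .and_then(|v| v.parse::<u64>().ok())
        .map(clamp_iterations)
        .unwrap_or(50)
}

fn main() {
    assert_eq!(clamp_iterations(0), 1);
    assert_eq!(clamp_iterations(600), 500);
    assert_eq!(clamp_iterations(50), 50);
    println!("effective cap: {}", effective_max_iterations());
}
```

With the clap `env` attribute in place, the CLI layer does the equivalent of `effective_max_iterations` automatically, so the env var stops being dead code.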
Important
3. Hardcoded /tmp/ironclaw-mcp-configs/
Per .claude/rules/review-discipline.md: "Never hardcode /tmp/... paths." Use std::env::temp_dir() or store the temp dir path on ContainerJobManager.
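A sketch of the suggested replacement (the directory name is illustrative):

```rust
use std::env;
use std::path::PathBuf;

// Derive the per-job config directory from the platform temp dir instead of
// a hardcoded "/tmp/ironclaw-mcp-configs/". Respects TMPDIR and works on
// non-Unix hosts.
fn mcp_config_dir() -> PathBuf {
    env::temp_dir().join("ironclaw-mcp-configs")
}

fn main() {
    let dir = mcp_config_dir();
    // On a typical Linux host this is still /tmp/ironclaw-mcp-configs,
    // so behavior is unchanged in the common case.
    assert!(dir.ends_with("ironclaw-mcp-configs"));
    println!("{}", dir.display());
}
```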
4. No test for max_iterations plumbing
The generate_worker_mcp_config tests are good, but there's no test verifying max_iterations reaches the container. Would have caught bug #1.
Suggestions
- Consider a `CreateJobRequest` struct instead of `#[allow(clippy::too_many_arguments)]`
- MCP server name matching should be case-insensitive per project style rules
What's good
- Feature gate (`MCP_PER_JOB_ENABLED`, default false) is the right safety choice
- `generate_worker_mcp_config` semantics are clean: `None` (full), `Some([])` (none), `Some([...])` (filtered)
- Disabled servers correctly excluded from filtered configs
- Good test coverage for the MCP filtering path
- `schema_version` preserved in filtered configs
CI all green.
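The three filter modes described in the review can be sketched as follows. The struct and function shapes here are simplified stand-ins, not the PR's actual types:

```rust
use std::collections::BTreeMap;

// Minimal stand-in for a server entry; the real McpServersFile has more fields.
#[derive(Clone)]
struct ServerEntry {
    enabled: bool,
}

// The three modes:
//   None        -> full master config, unchanged
//   Some([])    -> no MCP servers at all
//   Some([...]) -> only the named servers, matched case-insensitively,
//                  and only if their `enabled` flag is set
fn filter_servers(
    master: &BTreeMap<String, ServerEntry>,
    filter: Option<&[String]>,
) -> BTreeMap<String, ServerEntry> {
    match filter {
        None => master.clone(),
        Some(names) => master
            .iter()
            .filter(|(name, entry)| {
                entry.enabled
                    && names.iter().any(|n| n.eq_ignore_ascii_case(name.as_str()))
            })
            .map(|(k, v)| (k.clone(), v.clone()))
            .collect(),
    }
}

fn main() {
    let mut master = BTreeMap::new();
    master.insert("serpstat".to_string(), ServerEntry { enabled: true });
    master.insert("github".to_string(), ServerEntry { enabled: false });

    // Case-insensitive match; disabled servers excluded even when named.
    let names = vec!["Serpstat".to_string(), "github".to_string()];
    assert_eq!(filter_servers(&master, Some(&names)).len(), 1);

    // Empty filter means no servers; None means the full config.
    assert!(filter_servers(&master, Some(&[][..])).is_empty());
    assert_eq!(filter_servers(&master, None).len(), 2);
}
```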
Thanks for the thorough review @zmanian. All four issues addressed in 2c564df (critical fixes, important fixes, and suggestions).
Pull request overview
Adds per-job scoping controls for sandboxed job execution by extending the create_job tool with optional MCP server filtering and a worker loop iteration cap, plus orchestrator support for enforcing those settings at container creation time.
Changes:
- Extend `create_job` tool schema and parameter parsing to accept `mcp_servers` and `max_iterations`.
- Add orchestrator plumbing to optionally mount a filtered per-job MCP config and inject `IRONCLAW_MAX_ITERATIONS`.
- Add tests for MCP config filtering behavior and an env-var wiring assertion for `max_iterations`.
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| src/tools/builtin/job.rs | Adds mcp_servers / max_iterations tool parameters and forwards them into sandbox job creation. |
| src/orchestrator/mod.rs | Adds MCP_PER_JOB_ENABLED env-var gate into ContainerJobConfig. |
| src/orchestrator/job_manager.rs | Implements MCP config filtering/mounting, IRONCLAW_MAX_ITERATIONS injection, cleanup, and unit tests. |
| src/cli/mod.rs | Allows worker max_iterations to be set via IRONCLAW_MAX_ITERATIONS. |
| src/channels/web/handlers/jobs.rs | Updates restart path to pass new create_job arguments. |
Force-pushed: 2c564df → 04e6e6a
Addressed Copilot review feedback in 79aecc8:
- Re: temp file leak on Docker create failure
- Re: predictable temp path

@zmanian ready for re-review when you get a chance.
zmanian left a comment
Feature logic is sound and well-designed, but CI is blocking on missing regression tests.
Strengths:
- Per-job MCP filtering gated behind `MCP_PER_JOB_ENABLED` (opt-in, backward compatible)
- Three clean modes: `None` (full config), `Some([])` (no MCP), `Some(["name"])` (filtered)
- Case-insensitive server name matching, respects `enabled` flag
- `max_iterations` correctly clamped to [1, 500] and injected via env var
- 8 unit tests for core filtering logic
Required changes:
- Regression test enforcement failing — add integration tests covering:
  - Create job with `mcp_servers: ["serpstat"]` → verify mounted config contains only serpstat
  - Create job with `max_iterations: 5` → verify env var injected
  - Create job with `MCP_PER_JOB_ENABLED=false` and `mcp_servers` param → verify param ignored
  - Verify temp file cleanup after job completion
- Security hardening (minor): set restrictive permissions (0o700) on the /tmp/ironclaw-mcp-configs/ directory
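The requested permissions hardening can be sketched like this (the directory name is illustrative; on non-Unix hosts the sketch falls back to default permissions):

```rust
use std::env;
use std::fs;
use std::io;
use std::path::Path;

// Create the per-job config directory with owner-only access (0o700) so
// other local users cannot read filtered MCP configs.
fn create_private_dir(dir: &Path) -> io::Result<()> {
    #[cfg(unix)]
    {
        use std::os::unix::fs::DirBuilderExt;
        fs::DirBuilder::new().recursive(true).mode(0o700).create(dir)
    }
    #[cfg(not(unix))]
    {
        fs::create_dir_all(dir)
    }
}

fn main() -> io::Result<()> {
    let dir = env::temp_dir().join("ironclaw-mcp-configs-demo");
    let _ = fs::remove_dir_all(&dir); // fresh dir so the mode check is meaningful
    create_private_dir(&dir)?;
    #[cfg(unix)]
    {
        use std::os::unix::fs::PermissionsExt;
        let mode = fs::metadata(&dir)?.permissions().mode() & 0o777;
        assert_eq!(mode, 0o700);
    }
    fs::remove_dir_all(&dir)
}
```

Note that `DirBuilder::mode` is still subject to the process umask, which is why setting an explicit 0o700 (rather than relying on the default 0o777 minus umask) matters here.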
Will approve once the regression tests are added and CI passes.
Force-pushed: 79aecc8 → 237c7dc
Addressed all review feedback: regression tests added (5 new tests, 18 total) and security hardening applied. All 18 tests pass locally.
Force-pushed: 3f473ab → 552a16c → ed205c3 → 721b649
Commit: fix: address review feedback on per-job MCP filtering
- Fix max_iterations dead code: add `env = "IRONCLAW_MAX_ITERATIONS"` to the clap arg so the worker CLI reads the env var injected by the orchestrator
- Fix `max_iterations: 0` allowed: use `.clamp(1, 500)` instead of `.min(500)`
- Replace hardcoded /tmp/ironclaw-mcp-configs with `std::env::temp_dir()`
- Make MCP server name matching case-insensitive
- Add test for case-insensitive matching
- Add test verifying the max_iterations env var name matches the clap definition

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Commit: fix: address Copilot review feedback on per-job MCP filtering
- Guard `IRONCLAW_MAX_ITERATIONS` injection to Worker mode only (ClaudeCode uses max_turns)
- Extract `WORKER_MCP_CONFIG_PATH` as a constant (no more hardcoded path)
- Fix TOCTOU race in cleanup_job: use remove_file directly, match on NotFound
- Fix schema_version default: 0 → 1 to match the McpServersFile default
- Propagate serialization errors instead of silently writing an empty config
- Add type validation warnings for mcp_servers and max_iterations params
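The TOCTOU fix mentioned in this commit follows a standard pattern: instead of `if path.exists() { remove_file(...) }` (the file can vanish between the check and the call), remove unconditionally and treat NotFound as success. A sketch (the function name is illustrative, not the PR's actual `cleanup_job`):

```rust
use std::fs;
use std::io;
use std::path::Path;

// Idempotent cleanup: a missing file is not an error, so concurrent or
// repeated cleanup calls never race or panic.
fn cleanup_config(path: &Path) -> io::Result<()> {
    match fs::remove_file(path) {
        Ok(()) => Ok(()),
        Err(e) if e.kind() == io::ErrorKind::NotFound => Ok(()),
        Err(e) => Err(e),
    }
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("mcp-demo-config.json");
    fs::write(&path, b"{}")?;
    cleanup_config(&path)?; // removes the file
    cleanup_config(&path)?; // second call is a no-op, not an error
    assert!(!path.exists());
    Ok(())
}
```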
Commit: test: add regression tests and security hardening for per-job MCP filtering
Add 5 regression tests covering CI-required scenarios:
- Filtered config contains only the requested server (no leaks)
- Feature flag disabled skips MCP filtering entirely
- Temp file cleanup removes the per-job config
- cleanup_job is idempotent (no panic on missing file/handle)
- Temp directory has restrictive 0o700 permissions (unix)

Security: set 0o700 permissions on /tmp/ironclaw-mcp-configs/ to prevent other users on the host from reading filtered MCP server configs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Commit: fix: address code review — server-side clamp, JobCreationParams, name validation
Critical:
1. Server-side max_iterations clamp in create_job_inner — defense no longer relies solely on tool parameter parsing. Uses the MAX_WORKER_ITERATIONS constant (matching worker/job.rs) so the cap is enforced even for direct API calls.
2. Introduce JobCreationParams struct to bundle credential_grants, mcp_servers, and max_iterations. Removes #[allow(clippy::too_many_arguments)] from both create_job and execute_sandbox (7→5 and 9→7 positional args).
Important:
3. Validate MCP server names: reject path separators (/ and \), null bytes, and names longer than 128 chars to prevent future misuse.
5. Add test verifying max_iterations is NOT injected for ClaudeCode mode. Add test verifying the server-side clamp uses the MAX_WORKER_ITERATIONS constant. Add test verifying name validation rejects path separators and null bytes.
Force-pushed: 69cc96e → 87eb293
Addressed all review items in 87eb293:
- Critical #1: server-side `max_iterations` clamp in `create_job_inner`
- Critical #2: `JobCreationParams` struct
- Important #3: MCP server name validation
- Important #5: test that ClaudeCode mode does not inject `IRONCLAW_MAX_ITERATIONS`
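The name validation might look roughly like this. The error type and the extra empty-name check are assumptions, not taken from the PR; the rejected character set and 128-char limit are from the review:

```rust
// Reject MCP server names that could be abused when building filesystem
// paths or secret names: path separators, null bytes, over-long names.
fn validate_server_name(name: &str) -> Result<(), String> {
    if name.is_empty() || name.len() > 128 {
        return Err(format!("invalid server name length: {}", name.len()));
    }
    if name.contains(['/', '\\', '\0']) {
        return Err("server name contains forbidden characters".to_string());
    }
    Ok(())
}

fn main() {
    assert!(validate_server_name("serpstat").is_ok());
    assert!(validate_server_name("../etc/passwd").is_err());
    assert!(validate_server_name("a\\b").is_err());
    assert!(validate_server_name("a\0b").is_err());
    assert!(validate_server_name(&"x".repeat(129)).is_err());
}
```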
zmanian left a comment
Re-review: APPROVE
All critical and important issues from previous reviews have been addressed across the four fix commits.
Verified fixes
Critical (all resolved):
- Server-side clamp added in `create_job_inner`: `iters.clamp(1, MAX_WORKER_ITERATIONS)` enforces the cap even when tool parameter parsing is bypassed (e.g., direct API calls via the web restart handler). This was our primary blocking concern.
- `env = "IRONCLAW_MAX_ITERATIONS"` added to the clap arg in src/cli/mod.rs, closing the dead-code bug where the env var was injected but never consumed.
- `JobCreationParams` struct introduced, eliminating positional argument proliferation on `create_job` and `execute_sandbox`.
Important (all resolved):
- MCP server name validation rejects path separators, null bytes, and names >128 chars.
- Temp directory uses `std::env::temp_dir()` instead of hardcoded /tmp/.
- Temp directory permissions hardened to 0o700 on Unix.
- Test added for ClaudeCode mode not injecting `IRONCLAW_MAX_ITERATIONS`.
Regression tests added:
- test_mcp_config_none_filter_returns_master_path
- test_mcp_config_empty_filter_returns_none
- test_mcp_config_missing_master_returns_none
- test_mcp_config_filters_to_named_servers
- test_mcp_config_no_match_returns_none
- test_mcp_config_case_insensitive_match
- test_max_iterations_env_var_injected
- test_max_iterations_not_injected_for_claude_code
- test_server_side_max_iterations_clamp
- test_mcp_server_name_validation_rejects_path_separators
- test_filtered_config_contains_only_requested_server
- test_feature_flag_disabled_skips_mcp_filtering
- test_temp_file_cleanup_removes_per_job_config
- test_cleanup_job_is_idempotent
- test_temp_dir_has_restrictive_permissions (unix-only)
Remaining minor items (non-blocking)
- Synchronous I/O in async context: `generate_worker_mcp_config` still uses `std::fs` rather than `tokio::fs`. Low risk for small config files but technically violates the project's async I/O convention. Fine as a follow-up.
- Source-scanning tests: `test_max_iterations_env_var_injected` and `test_feature_flag_disabled_skips_mcp_filtering` use `include_str!` to grep source code. These are brittle if formatting changes. Consider replacing with behavioral tests in a follow-up, but they serve their purpose for now.
- Dual MAX_WORKER_ITERATIONS constants: the 500 cap exists in both src/orchestrator/job_manager.rs and src/worker/job.rs. A shared constant in ironclaw_common would prevent drift. Non-blocking since the test_server_side_max_iterations_clamp test catches mismatches.
Commit: fix: async I/O in generate_worker_mcp_config, shared MAX_WORKER_ITERATIONS
1. Convert generate_worker_mcp_config from sync std::fs to async tokio::fs. The function is called from async create_job_inner — sync I/O was blocking the tokio runtime thread. All test callers converted to #[tokio::test].
2. Move MAX_WORKER_ITERATIONS (500) to ironclaw_common as the single source of truth. Both src/orchestrator/job_manager.rs and src/worker/job.rs now import from the shared crate, preventing drift.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Addressed remaining items in 79da64f.

@zmanian Your last review (APPROVE) was dismissed by the rebase force-push. All critical and important issues from your four review rounds are addressed; could you re-approve when you get a chance? CI is all green.
zmanian left a comment
Re-Review (4th round) -- APPROVE
All items from the previous 3 review cycles are resolved:
- max_iterations: env var wired to clap, server-side clamp with shared `MAX_WORKER_ITERATIONS` constant
- Temp file handling: `std::env::temp_dir()`, directory permissions 0o700, server name validation (rejects `/`, `\`, `\0`, >128 chars)
- Async I/O: `generate_worker_mcp_config` converted to async with `tokio::fs`
- `JobCreationParams` struct introduced
- 15 regression tests added
Suggestions (non-blocking)
- `cleanup_job` still uses sync `std::fs::remove_file` in async context — should use `tokio::fs::remove_file` for consistency
- Tool parsing at job.rs:740 uses magic number `500` instead of the `MAX_WORKER_ITERATIONS` constant — could drift
- Consider setting 0o600 on individual config files, not just the directory
Follow-up: the MCP config mounted by PR nearai#1243 was never consumed inside containers. This wires up the full MCP client lifecycle so sandboxed jobs can use MCP tools, including OAuth-authenticated servers.

Container worker changes (src/worker/):
- load_mcp_tools() reads the mounted mcp-servers.json, creates McpSessionManager + McpProcessManager, and registers tools into the ToolRegistry alongside dev tools
- ProxySecretsStore implements SecretsStore by proxying to the orchestrator, enabling MCP OAuth token refresh inside containers
- System prompt updated to mention MCP tools when present

Orchestrator changes (src/orchestrator/):
- Three new endpoints, POST /worker/{job_id}/mcp/secrets/{get,exists,create}, gated by per-job bearer token + exact-match HashSet allowlist
- compute_mcp_secret_allowlist() pre-computes the 5 canonical secret names per OAuth-capable MCP server at job creation time
- ContainerJobConfig gains configurable mcp_config_path and user_id
- IRONCLAW_USER_ID injected into container env vars

Security: closed-world allowlist model — containers can only access the exact set of mcp_{name}_{token_type} secrets for their scoped servers.

Closes nearai#2180
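The allowlist computation might look roughly like this. The five token types below are invented examples — the description says five canonical secret names exist per OAuth-capable server but does not list them; only the `mcp_{name}_{token_type}` naming scheme is from the description:

```rust
use std::collections::HashSet;

// Hypothetical token types; the real canonical names are not given in the
// PR description.
const TOKEN_TYPES: [&str; 5] = [
    "access_token",
    "refresh_token",
    "client_id",
    "client_secret",
    "expires_at",
];

// Pre-compute the exact secret names a job's containers may touch.
// Closed-world model: anything outside this set is denied.
fn compute_mcp_secret_allowlist(servers: &[String]) -> HashSet<String> {
    servers
        .iter()
        .flat_map(|s| TOKEN_TYPES.iter().map(move |t| format!("mcp_{s}_{t}")))
        .collect()
}

fn main() {
    let allow = compute_mcp_secret_allowlist(&["serpstat".to_string()]);
    assert_eq!(allow.len(), 5);
    // Exact-match lookup at request time, no prefix or pattern matching.
    assert!(allow.contains("mcp_serpstat_access_token"));
    assert!(!allow.contains("mcp_github_access_token"));
}
```

Exact-match HashSet lookup (rather than prefix matching) is what makes the model closed-world: a container cannot reach secrets for servers outside its job's scope even if the names share a prefix.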
Summary
- Add `mcp_servers` and `max_iterations` optional parameters to the `create_job` tool
- `mcp_servers` filters which MCP servers are mounted into worker containers, gated behind the `MCP_PER_JOB_ENABLED` env var (default false)
- `max_iterations` caps the worker agent loop iteration count (default 50, max 500)

Motivation
Deployments running multiple MCP servers need per-job scoping — a research job should only access research tools, not production APIs. Simple data-gathering jobs shouldn't burn 50 iterations when 10 suffice.
Test plan
- `cargo test --lib` passes (3155 tests)
- `cargo clippy --all --all-features`: zero warnings
- `cargo fmt --check` clean
- `mcp_servers: ["serpstat"]` → only serpstat in container config
- `max_iterations: 5` → worker exits at 5
- `MCP_PER_JOB_ENABLED=false` (default) ignores the `mcp_servers` param

🤖 Generated with Claude Code