chore: promote staging to main (2026-03-31 04:47 UTC) #1809

Merged

henrypark133 merged 33 commits into main from staging-promote/42623ed1-23780941831 on Apr 9, 2026
Conversation
…#1125)

* feat(context): add approval_context field to JobContext

  Add approval_context to JobContext so tools can propagate approval information when executing sub-tools. This enables tools like build_software to properly check approvals for shell, write_file, etc.

  - Add approval_context: Option<ApprovalContext> field to JobContext
  - Add with_approval_context() builder method
  - Add check_approval_in_context() helper for tools to verify permissions
  - Default JobContext now includes autonomous approval context

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(worker): check job-level approval context before executing tools

  Move the job context fetch before the approval check and add job-level approval context checking. Job-level context takes precedence over worker-level, allowing tools like build_software to set specific allowed sub-tools while maintaining the fallback to worker-level approval for normal operations.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(scheduler): propagate approval_context to JobContext

  Store approval_context from dispatch into JobContext so it's available to tools during execution. This completes the chain: scheduler -> job context -> tools -> sub-tools.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(builder): use approval context for sub-tool execution

  Update build_software to create a JobContext with build-specific approval permissions and check approval before executing sub-tools. This allows the builder to work in autonomous contexts (web UI, routines) while maintaining security by only allowing specific build-related tools.

  Allowed tools: shell, read_file, write_file, list_dir, apply_patch, http

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(db): initialize approval_context as None in job restoration

  When restoring jobs from the database, set approval_context to None. The context will be populated by the scheduler on the next dispatch if needed.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* test: add comprehensive approval context tests

  Add tests for:
  - JobContext default includes approval_context
  - with_approval_context() builder method
  - Autonomous context blocks Always-approved tools unless explicitly allowed
  - autonomous_with_tools allows specific tools
  - Builder tool approval context configuration

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(security): address critical approval context security issues

  This commit addresses all security concerns raised in PR review:

  1. Revert JobContext::default() to approval_context: None
     - Previously set ApprovalContext::autonomous(), which was too permissive
     - A secure default requires explicit opt-in for autonomous execution
     - Any code using JobContext::default() now correctly blocks non-Never tools

  2. Fix check_approval_in_context() to match worker behavior
     - Previously returned Ok(()) when approval_context was None (insecure)
     - Now uses ApprovalContext::is_blocked_or_default() for consistency
     - Prevents privilege escalation through sub-tool execution paths

  3. Remove "http" from the builder's allowed tools
     - Building software doesn't require direct http tool access
     - Shell commands (cargo, npm, pip) handle dependency fetching
     - Reduces the attack surface for builder tool execution

  4. Update tests to reflect the new secure defaults
     - Tests now verify JobContext::default() blocks non-Never tools
     - New test added for secure default behavior

  Security review references:
  - Issue #1: JobContext::default() behavioral change
  - Issue #3: check_approval_in_context more permissive than worker check
  - Issue #4: Builder allows http without justification

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(worker): implement additive approval semantics for job + worker checks

  This addresses the remaining security review concern from PR #1125. Previously, the worker used "precedence" semantics where a job-level approval context would completely bypass worker-level checks. This meant a tool's job-level context could potentially override worker-level restrictions.

  Changes:
  - Worker now checks BOTH job-level AND worker-level approval contexts
  - A tool is blocked if EITHER level blocks it (additive/intersection semantics)
  - Maintains defense in depth: job-level cannot bypass worker-level restrictions

  Tests added:
  - test_additive_approval_semantics_both_levels_must_approve: verifies job-level blocks take effect even when worker-level allows
  - test_additive_approval_worker_block_overrides_job_allow: verifies worker-level blocks take effect even when job-level allows
  - test_additive_approval_both_levels_allow: verifies the tool is allowed only when both levels approve

  Security review references:
  - Issue #3 from @G7CNF: "document or enforce additive semantics for job + worker approval checks"
  - Issue #2 from @zmanian: "Job-level context bypasses worker-level entirely"

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(security): address PR #1125 review feedback

  - Restore requirement-aware is_blocked() semantics: Never and UnlessAutoApproved tools pass in autonomous context; only Always tools require an explicit allowlist entry
  - Use AutonomousUnavailable error (with a descriptive reason) instead of generic AuthRequired for approval blocking in the worker
  - Deduplicate approval_context propagation in scheduler dispatch (a single update_context_and_get call instead of duplicated blocks)
  - Remove http from the builder tool allowlist (shell handles network)
  - Add TODO comments for serde(skip) losing approval_context on DB restore in both the libsql and postgres backends
  - Add tests: Never tools in the additive model, builder unlisted tool blocking

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(worker): remove duplicate approval check and use normalized params

  - Remove the pre-existing worker-level-only approval check (lines 561-567) that duplicated the new additive check, used a different error type, and missed job-level context
  - Use normalized_params (not raw params) for requires_approval() so parameter-dependent approval (e.g. shell destructive detection) works correctly with coerced values
  - Remove the unused autonomous_unavailable_error import
  - Add a comment documenting the unreachable else branch in the scheduler

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: ilblackdragon@gmail.com <ilblackdragon@gmail.com>
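The additive semantics described above can be sketched as a minimal model: a tool runs only when neither the job-level nor the worker-level context blocks it, and a missing job-level context falls through to the worker-level check. The struct fields and method names here are illustrative, not the actual IronClaw API.

```rust
/// Minimal model of an approval context: in an autonomous context,
/// only explicitly allowlisted tools pass.
#[derive(Clone, Default)]
struct ApprovalContext {
    autonomous: bool,
    allowed_tools: Vec<String>,
}

impl ApprovalContext {
    fn is_blocked(&self, tool: &str) -> bool {
        self.autonomous && !self.allowed_tools.iter().any(|t| t == tool)
    }
}

/// Additive check: the tool is blocked if EITHER level blocks it,
/// so a job-level context can never bypass worker-level restrictions.
fn tool_allowed(job: Option<&ApprovalContext>, worker: &ApprovalContext, tool: &str) -> bool {
    let job_blocks = job.map(|c| c.is_blocked(tool)).unwrap_or(false);
    !job_blocks && !worker.is_blocked(tool)
}

fn main() {
    let worker = ApprovalContext { autonomous: true, allowed_tools: vec!["shell".into()] };
    // Worker allows shell, but the job-level context does not: blocked.
    let job_empty = ApprovalContext { autonomous: true, allowed_tools: vec![] };
    assert!(!tool_allowed(Some(&job_empty), &worker, "shell"));
    // Both levels allow shell: permitted.
    let job_shell = ApprovalContext { autonomous: true, allowed_tools: vec!["shell".into()] };
    assert!(tool_allowed(Some(&job_shell), &worker, "shell"));
    // No job-level context: falls back to the worker-level check alone.
    assert!(tool_allowed(None, &worker, "shell"));
    assert!(!tool_allowed(None, &worker, "write_file"));
    println!("additive semantics ok");
}
```

The "precedence" model this replaced would have returned true in the first assertion, which is exactly the bypass the review flagged.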
* fix(security): block cross-channel approval thread hijacking (#1485)

  Add source_channel to Thread and verify channel authorization before allowing approval messages to target threads by UUID. The web gateway channel is allowed as a trusted approval UI. Threads without source_channel (deserialized from older DB records) are permitted for backward compatibility.

  Closes #1485

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: run cargo fmt

  https://claude.ai/code/session_01Mdiz3XwyZcjqMkqicaynGs

* fix(security): address review feedback on source_channel

  - hydrate_thread_from_db now passes message.channel as source_channel instead of None, ensuring DB-hydrated threads get proper channel auth
  - Replace is_none_or (unstable) with map_or(true, ...) for MSRV compat
  - Add "gateway" to trusted approval channels alongside "web"
  - Document why the bootstrap thread uses None for source_channel

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(clippy): use is_none_or instead of map_or for Option check

  is_none_or is stable since Rust 1.82 and preferred by clippy over the map_or(true, ...) pattern.

  https://claude.ai/code/session_012nbbEyFXjDwdZZrHg7gFNK

* fix(security): persist source_channel to DB, harden cross-channel authorization

  Address PR #1590 review feedback:

  1. Persist source_channel to DB: add a source_channel column to the conversations table in both PostgreSQL (V14 migration) and libSQL (incremental migration + base schema). Add a get_conversation_source_channel trait method to ConversationStore with both backend implementations.
  2. Fix hydrate_thread_from_db: read source_channel from the DB instead of stamping the requesting message's channel, preventing channel confusion after a server restart.
  3. Reject reserved WASM channel names: validate that WASM channels cannot register as "web", "gateway", "cli", or "repl" to prevent authorization bypass via name spoofing.
  4. Require pending_approval to exist: the authorization check now verifies thread.pending_approval.is_some() before allowing approval-shaped messages to target a thread.
  5. Fail closed for a None source_channel: use a "__bootstrap__" sentinel for bootstrap threads (authorized from any channel). None now means "deny by default" instead of "allow by default".
  6. Extract and test the authorization predicate: is_approval_authorized() helper with 6 unit tests covering same-channel, cross-channel blocked, web/gateway always allowed, None denied, and the bootstrap sentinel.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve merge conflicts from staging rebase

  - Fix Thread::with_id calls to include the source_channel parameter
  - Fix ensure_conversation calls to include the source_channel parameter
  - Bump the libsql source_channel migration to V15 (V14 taken by users)
  - Remove stale conflict markers
  - Fix a clippy warning in users.rs

  https://claude.ai/code/session_01Esh8QQzHACYyfsVwCb479F

* style: fix cargo fmt formatting

  https://claude.ai/code/session_01Ci7CAdGaHhssYdio7wxVvd

* fix(security): address review feedback on cross-channel approval checks

  1. thread_ops.rs: remove the .or(Some(&*message.channel)) fallback in maybe_hydrate_thread() so that when source_channel is NULL in the DB, it stays None rather than being stamped with the requesting channel. This preserves the fail-closed behavior of is_approval_authorized().
  2. libsql_migrations.rs: remove source_channel from the base SCHEMA to eliminate a duplicate column definition. The column is now added solely by the V14 migration, preventing fresh databases from failing on startup.
  3. wasm/setup.rs: expand RESERVED_CHANNEL_NAMES to cover all built-in channels (http, signal, slack-relay, secret_save) and add a dynamic collision check against already-registered channel names passed from the startup sequence.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(security): harden cross-channel approval authorization

  - Fix migration number collisions (V14 already taken by the users migration; rename to V15 for PostgreSQL, bump to V16 for libSQL)
  - Extract a TRUSTED_APPROVAL_CHANNELS constant to replace the hardcoded "web"/"gateway" in is_approval_authorized(); WASM setup imports it
  - Add the __bootstrap__ sentinel to the WASM reserved channel names to prevent impersonation from granting universal approval rights
  - Fix TenantScope::ensure_conversation passing None for source_channel, which silently blocked approvals for tenant-created threads
  - Add 11 regression tests: authorization logic, WASM reserved name validation, libSQL source_channel DB round-trip and upsert invariant

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address code review findings for cross-channel approval security

  1. Add "telegram" to the WASM channel name blocklist -- bundled channels like telegram were claimable by malicious WASM modules that load before the bundled one, bypassing cross-channel approval auth.
  2. Make the V16 libSQL migration (ADD COLUMN source_channel) idempotent -- the runner now checks pragma_table_info before executing ALTER TABLE, preventing startup failures if the base schema already includes the column.
  3. Replace the silent .unwrap_or(None) in thread hydration with an explicit match on the DB result -- legacy threads without a stored source_channel now log a warning, and DB errors log an error. Both cases remain fail-closed (approvals denied) but are no longer silent.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: ilblackdragon@gmail.com <ilblackdragon@gmail.com>
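The fail-closed authorization predicate described above can be sketched as follows. The constants and the "__bootstrap__" sentinel follow the commit text; the exact function signature is an assumption.

```rust
// Channels trusted as approval UIs regardless of a thread's origin.
const TRUSTED_APPROVAL_CHANNELS: &[&str] = &["web", "gateway"];
// Sentinel for bootstrap threads, which accept approvals from any channel.
const BOOTSTRAP_SENTINEL: &str = "__bootstrap__";

/// Sketch of is_approval_authorized(): signature assumed for illustration.
fn is_approval_authorized(
    source_channel: Option<&str>,
    requesting_channel: &str,
    has_pending_approval: bool,
) -> bool {
    // An approval-shaped message may only target a thread that is
    // actually waiting on an approval.
    if !has_pending_approval {
        return false;
    }
    match source_channel {
        // Fail closed: legacy threads with no stored channel are denied.
        None => false,
        // Bootstrap threads are authorized from any channel.
        Some(BOOTSTRAP_SENTINEL) => true,
        // Otherwise: same channel, or a trusted approval UI channel.
        Some(src) => {
            src == requesting_channel
                || TRUSTED_APPROVAL_CHANNELS.contains(&requesting_channel)
        }
    }
}

fn main() {
    assert!(is_approval_authorized(Some("telegram"), "telegram", true));
    assert!(!is_approval_authorized(Some("telegram"), "slack-relay", true));
    assert!(is_approval_authorized(Some("telegram"), "web", true));
    assert!(!is_approval_authorized(None, "telegram", true));
    assert!(is_approval_authorized(Some("__bootstrap__"), "slack-relay", true));
    assert!(!is_approval_authorized(Some("telegram"), "telegram", false));
    println!("authorization predicate ok");
}
```

Note how the None arm encodes the later "deny by default" decision: an earlier revision allowed None for backward compatibility, which the hardening commits reversed.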
* feat(setup): build ironclaw-worker Docker image in setup wizard

  After confirming Docker is available, the setup wizard now checks whether the ironclaw-worker:latest image exists locally. If it is not found, the wizard offers to build it from Dockerfile.worker or provides manual build instructions. This fixes the job failures caused by missing Docker images when users enable the sandbox feature through the setup wizard.

  Fixes #459

  Co-Authored-By: Claude <noreply@anthropic.com>

* fix(sandbox): address PR #714 review feedback

  - Use tokio::process::Command instead of std::process::Command in build_image() to avoid blocking the async runtime during Docker builds
  - Add a security doc warning that dockerfile_path must be trusted (Docker builds execute arbitrary RUN commands)
  - Use settings.sandbox.image instead of the hardcoded "ironclaw-worker:latest" to respect the SANDBOX_IMAGE env var configuration

  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(sandbox): address remaining PR #714 review feedback

  - Fix a path resolution bug in build_image(): canonicalize the Dockerfile path before deriving context_dir, preventing double resolution for nested paths like "docker/sandbox.Dockerfile"
  - Use SetupError::Sandbox (via the From impl) instead of SetupError::Auth for connect_docker() failures
  - Add a ContainerRunner::for_image_ops() constructor to avoid passing a bogus proxy_port=0 when only image operations are needed
  - Replace .unwrap_or(-1) with .map_or() to avoid unwrap in production
  - Add a unit test for build_image() error handling on a nonexistent path

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(sandbox): address PR #1757 review comments [skip-regression-check]

  - Stream docker build output via a tokio BufReader instead of a silent .output(), providing real-time build progress via tracing::info
  - Remove docker/sandbox.Dockerfile from the candidates (wrong image, no worker entrypoint); only Dockerfile.worker produces the correct image
  - Respect the auto_pull_image config: attempt a pull before offering a local build; skip the build prompt entirely for registry-style images (those containing '/')
  - Fall back gracefully when connect_docker() fails in ensure_worker_image (handles the Windows check_docker/connect_docker mismatch)
  - Fix the test to use Docker::connect_with_http_defaults() so it runs without a Docker daemon (canonicalize fails before any daemon call)
  - Cap stderr capture at 4 KB for build error messages

  A regression test for the build_image() error path was added in the prior commit.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Tianer Zhou <ezhoureal@gmail.com>
Co-authored-by: Claude <noreply@anthropic.com>
* feat(telegram): add sendVoice support for audio/ogg attachments

  When an agent response includes an attachment with MIME type audio/ogg or audio/opus, the Telegram channel now sends it via sendVoice instead of sendDocument. This renders the audio as an in-chat voice note with waveform and playback controls rather than a file download.

  Adds:
  - VOICE_MIME_TYPES constant for ogg/opus detection
  - send_voice() function mirroring send_document() but calling sendVoice
  - Updated send_attachment() routing: photo → sendPhoto, ogg/opus → sendVoice, other → sendDocument

  This is the channel-side prerequisite for TTS voice replies (issue #90). The TTS provider infrastructure (TTS_PROVIDER, TTS_BASE_URL, etc.) is tracked separately in that issue.

* docs: update FEATURE_PARITY.md for sendVoice support

* fix(telegram): address review feedback on sendVoice PR

  - Add a MAX_VOICE_SIZE (50 MB) guard with fallback to send_document
  - Extract base_mime_type() to handle parameterized MIME types (e.g. "audio/ogg; codecs=opus")
  - Extract classify_attachment() as a pure function for testable routing
  - Add unit tests for MIME routing and base_mime_type parsing
  - Bump the telegram channel version to 0.2.6

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(telegram): extract send_multipart_upload shared helper

  Replace three near-identical multipart upload functions (send_photo, send_document, send_voice) with a shared send_multipart_upload() that takes the API method and field name as parameters. Each public function now handles only its size guard and delegates to the shared helper.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: TheWolfOfWalmart <tenny@tenn-lab.xyz>
Co-authored-by: ilblackdragon@gmail.com <ilblackdragon@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
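The routing and MIME handling described above can be sketched as a pure function, following the names in the commit text (classify_attachment, base_mime_type, MAX_VOICE_SIZE, VOICE_MIME_TYPES); the exact signatures are assumptions.

```rust
// Telegram's sendVoice cap per the commit text: 50 MB.
const MAX_VOICE_SIZE: usize = 50 * 1024 * 1024;
const VOICE_MIME_TYPES: &[&str] = &["audio/ogg", "audio/opus"];

#[derive(Debug, PartialEq)]
enum SendMethod {
    Photo,    // sendPhoto
    Voice,    // sendVoice
    Document, // sendDocument
}

/// Strip MIME parameters: "audio/ogg; codecs=opus" -> "audio/ogg".
fn base_mime_type(mime: &str) -> &str {
    mime.split(';').next().unwrap_or(mime).trim()
}

/// Pure routing: photo -> sendPhoto, ogg/opus under the size cap ->
/// sendVoice, everything else (including oversized voice) -> sendDocument.
fn classify_attachment(mime: &str, size: usize) -> SendMethod {
    let base = base_mime_type(mime);
    if base.starts_with("image/") {
        SendMethod::Photo
    } else if VOICE_MIME_TYPES.contains(&base) && size <= MAX_VOICE_SIZE {
        SendMethod::Voice
    } else {
        SendMethod::Document
    }
}

fn main() {
    assert_eq!(base_mime_type("audio/ogg; codecs=opus"), "audio/ogg");
    assert_eq!(classify_attachment("image/png", 1024), SendMethod::Photo);
    assert_eq!(classify_attachment("audio/ogg; codecs=opus", 1024), SendMethod::Voice);
    // Over the 50 MB cap: falls back to sendDocument.
    assert_eq!(classify_attachment("audio/ogg", 60 * 1024 * 1024), SendMethod::Document);
    assert_eq!(classify_attachment("application/pdf", 1024), SendMethod::Document);
    println!("routing ok");
}
```

Keeping the routing pure (no I/O) is what makes the unit tests mentioned in the review commit possible.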
…DMs (#1845)

* fix(relay): route async Slack messages to the correct channel instead of DMs

  Fixes three bugs causing async/cross-channel Slack messages to land in DMs or fail silently:

  1. routing_target_from_metadata now extracts channel_id for Slack relay messages, so proactive broadcasts target the originating channel instead of falling back to sender_id (the user's DM)
  2. The lightweight routine JobContext now carries notify metadata (owner_id, notify_channel, notify_user) so the message tool can resolve the correct delivery target; previously ..Default::default() left metadata as null
  3. Routine creation auto-captures the source channel/target from ctx.metadata when the LLM omits delivery params, so routines created from a Slack channel know where to send results

  Also:
  - Clarified the message tool's channel/target parameter descriptions to prevent LLM confusion between transport names and Slack channel IDs
  - The IronClaw proxy_provider now checks Slack's ok=false and surfaces errors instead of silently succeeding

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: apply cargo fmt

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…on calls (#1752)

* fix(gemini): preserve and echo thoughtSignature for Gemini 3.x function calls

  Gemini 3.x models return a `thoughtSignature` field alongside `functionCall` parts and require it to be echoed back when replaying conversation history. Without this, all tool-calling requests fail with HTTP 400 "Function call is missing a thought_signature".

  Changes:
  - Add `thought_signature: Option<String>` to the `ToolCall` struct
  - Capture `thoughtSignature` from the Gemini response in `from_gemini_response()`
  - Echo it back on `functionCall` parts in `to_gemini_request()`
  - Add 4 tests covering roundtrip, capture, and omission

  Fixes #1510

* fix(gemini): address review feedback — remove .unwrap(), document DB round-trip limitation

  M1: Replace `part.as_object_mut().unwrap().insert(...)` with an `if let Some(obj)` pattern per the CLAUDE.md no-unwrap policy.

  M2: Add comments in thread_ops.rs and session.rs documenting that thought_signature is lost on a DB round-trip. The synthetic fallback in ensure_thought_signatures() covers this at request time.

* refactor(gemini): move thought_signature from ToolCall to provider-local storage

  Instead of adding a Gemini-specific `thought_signature` field to the shared `ToolCall` struct (which required `thought_signature: None` in 18 files across every provider and consumer), store captured thought signatures in a `HashMap<String, String>` on `GeminiOauthProvider` keyed by tool-call ID.

  - `from_gemini_response()` returns captured signatures as a third tuple element; `complete_with_tools()` stores them on the provider instance
  - `to_gemini_request()` accepts the signatures map and injects real signatures before `ensure_thought_signatures()` fills synthetic gaps
  - Zero changes outside `gemini_oauth.rs` except removing the reverted `thought_signature: None` lines

  The `ensure_thought_signatures()` fallback continues to work for history entries loaded from the DB (where real signatures aren't available).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(gemini): prune the thought_signatures map to prevent unbounded growth

  Address review feedback from Copilot:
  - Prune stale entries after each response by retaining only signatures for tool-call IDs present in the conversation history or the just-received response. This prevents unbounded map growth and O(n) clone overhead.
  - Fix an inaccurate test comment to reflect the actual condition.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: ilblackdragon@gmail.com <ilblackdragon@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
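The provider-local storage and pruning step described above can be sketched as follows. The struct, field, and method names are illustrative stand-ins for the real `GeminiOauthProvider` internals.

```rust
use std::collections::{HashMap, HashSet};

/// Illustrative provider-local store: tool-call ID -> thoughtSignature.
/// In the real code this map lives on the provider instance, so nothing
/// outside the Gemini module needs to know about signatures.
#[derive(Default)]
struct SignatureStore {
    thought_signatures: HashMap<String, String>,
}

impl SignatureStore {
    /// Capture a signature returned alongside a functionCall part.
    fn capture(&mut self, call_id: &str, signature: &str) {
        self.thought_signatures
            .insert(call_id.to_string(), signature.to_string());
    }

    /// Retain only signatures for tool-call IDs still present in the
    /// conversation history or the just-received response, preventing
    /// unbounded map growth across a long session.
    fn prune(&mut self, live_ids: &HashSet<String>) {
        self.thought_signatures.retain(|id, _| live_ids.contains(id));
    }
}

fn main() {
    let mut store = SignatureStore::default();
    store.capture("call_1", "sig_a");
    store.capture("call_2", "sig_b");
    // Only call_1 remains live in the history; call_2 is stale.
    let live: HashSet<String> = ["call_1".to_string()].into_iter().collect();
    store.prune(&live);
    assert_eq!(store.thought_signatures.len(), 1);
    assert!(store.thought_signatures.contains_key("call_1"));
    println!("prune ok");
}
```

When replaying history, real signatures from this map would be injected first, with the synthetic `ensure_thought_signatures()` fallback covering entries restored from the DB.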
…rom LLM (#1748)

* fix(builder): accept inline-table and object-map dependency formats from the LLM
* review: return Option from flatten_dep to skip invalid TOML values

---------

Co-authored-by: Illia Polosukhin <ilblackdragon@gmail.com>
* feat(config): unify all settings to DB > env > default priority

  Previously only LLM settings used DB-first priority, while all other subsystems (agent, channels, tunnel, heartbeat, embeddings, sandbox, wasm, safety, builder, transcription, routines, skills, hygiene, search) used env-first. This made web UI settings changes unreliable for non-LLM config: env vars would silently override DB values. Now all subsystems follow the same priority: DB > env > TOML > default.

  - Add db_first_or_default, db_first_bool, db_first_optional_string, and db_first_option helpers to config/helpers.rs with shadow warnings
  - Flip 10 Group 1 resolvers (agent, channels, tunnel, heartbeat, embeddings, sandbox, wasm, safety, builder, transcription) from parse_optional_env/parse_bool_env to their db_first_* equivalents
  - Add Settings structs for 4 Group 2 resolvers (routines, skills, hygiene, search) that previously had no DB persistence
  - Update Config::build() call sites and the cli/doctor.rs caller
  - Security-sensitive fields stay env-only: allow_local_tools, allow_full_access, cost/rate limits, auth tokens, API keys
  - Bootstrap configs (database, secrets) stay env-only

  Closes #1119 (partial — config unification phases 1-2)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address PR review feedback

  - Stop logging raw values in shadow warnings to prevent leaking sensitive data (tunnel tokens, API keys) to logs
  - Propagate optional_env errors for OLLAMA_BASE_URL instead of silently swallowing them with .ok().flatten()
  - Make tunnel auth tokens (cf_token, ngrok_token) env-only like gateway_auth_token; sensitive credentials should not come from the DB
  - Fix the transcription enabled tri-state: an explicit DB false now correctly overrides the TRANSCRIPTION_ENABLED env var (it was collapsing to "unset")
  - Switch SearchSettings fts_weight/vector_weight to Option<f32> so 0.5 can be explicitly configured without being treated as "unset"

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address @ilblackdragon review feedback

  - Document the default-equality heuristic limitation in db_first_or_default (a DB value equal to the default is treated as "unset")
  - Remove the dead _db_value parameter from warn_if_db_shadows_env
  - Replace the misleading db_first_or_default for embedding dimension with a direct parse_optional_env (the dimension depends on the model, not the DB)
  - Use db_first_option for search weights to emit shadow warnings consistently with the other resolvers
  - Add migration warnings for auth tokens (gateway_auth_token, cf_token, ngrok_token) that are now env-only: warn at startup if these fields are set in DB/TOML but being ignored
  - Improve the signal error message to mention the signal_enabled setting
  - Clarify the module docs and TOML header about the default-equality caveat

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: minor cleanups from self-review

  - Simplify warn_if_db_shadows_env: use is_ok_and() instead of binding + drop
  - Add a comment explaining the u32→usize cast for max_parallel_jobs

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
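The DB > env > default resolution order, including the documented default-equality caveat, can be sketched as a generic helper. The name db_first_or_default follows the commit text; the signature is an assumption (the real helper presumably also handles TOML and emits structured shadow warnings).

```rust
/// Sketch of DB > env > default resolution with the default-equality
/// caveat: a DB value equal to the default is indistinguishable from
/// "unset", so the env value wins in that case.
fn db_first_or_default<T: PartialEq + std::fmt::Debug>(
    db_value: Option<T>,
    env_value: Option<T>,
    default: T,
) -> T {
    match db_value {
        Some(v) if v != default => {
            // A set DB value shadows the env var; warn without logging
            // the raw values (they may be sensitive).
            if env_value.is_some() {
                eprintln!("warning: DB setting shadows an env var");
            }
            v
        }
        // DB unset (or equal to the default): fall back to env, then default.
        _ => env_value.unwrap_or(default),
    }
}

fn main() {
    // DB wins over env.
    assert_eq!(db_first_or_default(Some(10), Some(5), 0), 10);
    // No DB value: env wins over the default.
    assert_eq!(db_first_or_default(None, Some(5), 0), 5);
    // Caveat: a DB value equal to the default is treated as unset.
    assert_eq!(db_first_or_default(Some(0), Some(5), 0), 5);
    // Nothing set anywhere: the default applies.
    assert_eq!(db_first_or_default(None, None::<i32>, 0), 0);
    println!("resolution ok");
}
```

The third assertion is exactly the limitation the review asked to document, and why tri-state fields like the transcription toggle needed Option-typed settings instead of this helper.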
* feat(jobs): per-job MCP server filtering and max_iterations cap

  Add mcp_servers and max_iterations optional params to create_job. mcp_servers filters which MCP servers are mounted into worker containers (gated behind MCP_PER_JOB_ENABLED, default false). max_iterations caps the worker agent loop (default 50, max 500).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address review feedback on per-job MCP filtering

  - Fix max_iterations dead code: add env = "IRONCLAW_MAX_ITERATIONS" to the clap arg so the worker CLI reads the env var injected by the orchestrator
  - Fix max_iterations: 0 being allowed: use .clamp(1, 500) instead of .min(500)
  - Replace the hardcoded /tmp/ironclaw-mcp-configs with std::env::temp_dir()
  - Make MCP server name matching case-insensitive
  - Add a test for case-insensitive matching
  - Add a test verifying the max_iterations env var name matches the clap definition

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address Copilot review feedback on per-job MCP filtering

  - Guard IRONCLAW_MAX_ITERATIONS injection to Worker mode only (ClaudeCode uses max_turns)
  - Extract WORKER_MCP_CONFIG_PATH as a constant (no more hardcoded path)
  - Fix the TOCTOU race in cleanup_job: use remove_file directly and match on NotFound
  - Fix the schema_version default: 0 → 1 to match the McpServersFile default
  - Propagate serialization errors instead of silently writing an empty config
  - Add type validation warnings for the mcp_servers and max_iterations params

* test: add regression tests and security hardening for per-job MCP filtering

  Add 5 regression tests covering CI-required scenarios:
  - The filtered config contains only the requested server (no leaks)
  - Disabling the feature flag skips MCP filtering entirely
  - Temp file cleanup removes the per-job config
  - cleanup_job is idempotent (no panic on a missing file/handle)
  - The temp directory has restrictive 0o700 permissions (unix)

  Security: set 0o700 permissions on /tmp/ironclaw-mcp-configs/ to prevent other users on the host from reading filtered MCP server configs.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address code review — server-side clamp, JobCreationParams, name validation

  Critical:
  1. Server-side max_iterations clamp in create_job_inner: the defense no longer relies solely on tool parameter parsing. Uses the MAX_WORKER_ITERATIONS constant (matching worker/job.rs) so the cap is enforced even for direct API calls.
  2. Introduce a JobCreationParams struct to bundle credential_grants, mcp_servers, and max_iterations. Removes #[allow(clippy::too_many_arguments)] from both create_job and execute_sandbox (7→5 and 9→7 positional args).

  Important:
  3. Validate MCP server names: reject path separators (/\), null bytes, and names longer than 128 chars to prevent future misuse.

  Tests:
  - Add a test verifying max_iterations is NOT injected for ClaudeCode mode
  - Add a test verifying the server-side clamp uses the MAX_WORKER_ITERATIONS constant
  - Add a test verifying name validation rejects path separators and null bytes

* fix: async I/O in generate_worker_mcp_config, shared MAX_WORKER_ITERATIONS

  1. Convert generate_worker_mcp_config from sync std::fs to async tokio::fs. The function is called from the async create_job_inner, so sync I/O was blocking the tokio runtime thread. All test callers were converted to #[tokio::test].
  2. Move MAX_WORKER_ITERATIONS (500) to ironclaw_common as the single source of truth. Both src/orchestrator/job_manager.rs and src/worker/job.rs now import from the shared crate, preventing drift.

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
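The server-side clamp and the name validation rules described above can be sketched as follows; the constant value and the rejection rules follow the commit text, while the function names and error type are illustrative.

```rust
/// Single source of truth for the iteration cap (per the commit text,
/// the real constant lives in ironclaw_common).
const MAX_WORKER_ITERATIONS: u32 = 500;
const DEFAULT_WORKER_ITERATIONS: u32 = 50;

/// Server-side clamp: .clamp(1, 500) so that a requested 0 cannot
/// disable the cap, and the bound holds even for direct API calls.
fn clamp_max_iterations(requested: Option<u32>) -> u32 {
    requested
        .unwrap_or(DEFAULT_WORKER_ITERATIONS)
        .clamp(1, MAX_WORKER_ITERATIONS)
}

/// Reject path separators, null bytes, and over-long names so server
/// names can never be abused as filesystem path components.
fn validate_mcp_server_name(name: &str) -> Result<(), String> {
    if name.is_empty() || name.len() > 128 {
        return Err("server name must be 1..=128 characters".to_string());
    }
    if name.contains(|c| c == '/' || c == '\\' || c == '\0') {
        return Err("server name must not contain path separators or null bytes".to_string());
    }
    Ok(())
}

fn main() {
    assert_eq!(clamp_max_iterations(None), 50);
    assert_eq!(clamp_max_iterations(Some(0)), 1);
    assert_eq!(clamp_max_iterations(Some(9_999)), 500);
    assert!(validate_mcp_server_name("github").is_ok());
    assert!(validate_mcp_server_name("../etc/passwd").is_err());
    assert!(validate_mcp_server_name("a\\b").is_err());
    assert!(validate_mcp_server_name("a\0b").is_err());
    assert!(validate_mcp_server_name("").is_err());
    println!("clamp and validation ok");
}
```

Enforcing the clamp in create_job_inner rather than only in the tool parameter parser is the defense-in-depth point: the cap survives any caller that bypasses the tool layer.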
* Improve command execution parameter validation

  Enhance workdir and timeout parameter handling for command execution.

* fix(worker): refine timeout parameter validation logic

  Refactor timeout parameter handling to ensure it is a positive integer.

* fix(shell): add timeout and workdir validation with regression tests

  Parse timeout strictly: reject non-integer (float/string), zero, and null values; normalize a blank or whitespace-only workdir to None. Add six regression tests covering each edge case flagged in review.

* fix(shell): improve parameter validation consistency

  - Add "minimum": 1 to the timeout schema so LLMs get the constraint upfront
  - Reject non-string workdir types (previously silently ignored)
  - Clarify the error message: "positive integer" instead of "integer"
  - Add a test for non-string workdir rejection
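The strict parsing rules above can be sketched as follows. A small Param enum stands in for the JSON value type the shell tool actually receives; the function names are illustrative.

```rust
/// Stand-in for the JSON parameter value the shell tool receives.
#[derive(Debug)]
enum Param {
    Null,
    Int(i64),
    Float(f64),
    Str(String),
}

/// Timeout must be a positive integer: floats, strings, zero, and
/// null are all rejected with the same explicit message.
fn parse_timeout(v: &Param) -> Result<u64, String> {
    match v {
        Param::Int(t) if *t > 0 => Ok(*t as u64),
        _ => Err("timeout must be a positive integer".to_string()),
    }
}

/// Workdir must be a string if present; blank or whitespace-only
/// values normalize to None, and non-string types are rejected.
fn normalize_workdir(v: Option<&Param>) -> Result<Option<String>, String> {
    match v {
        None | Some(Param::Null) => Ok(None),
        Some(Param::Str(s)) => {
            let trimmed = s.trim();
            Ok((!trimmed.is_empty()).then(|| trimmed.to_string()))
        }
        Some(_) => Err("workdir must be a string".to_string()),
    }
}

fn main() {
    assert_eq!(parse_timeout(&Param::Int(30)), Ok(30));
    assert!(parse_timeout(&Param::Int(0)).is_err());
    assert!(parse_timeout(&Param::Float(1.5)).is_err());
    assert!(parse_timeout(&Param::Str("30".into())).is_err());
    assert!(parse_timeout(&Param::Null).is_err());
    assert_eq!(normalize_workdir(Some(&Param::Str("   ".into()))), Ok(None));
    assert_eq!(
        normalize_workdir(Some(&Param::Str(" /tmp ".into()))),
        Ok(Some("/tmp".to_string()))
    );
    assert!(normalize_workdir(Some(&Param::Int(3))).is_err());
    println!("validation ok");
}
```

Pairing the runtime checks with `"minimum": 1` in the schema gives the LLM the constraint up front, so most invalid values never reach the parser.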
…#1848) For channel mentions, the relay channel used event.channel_id (e.g. "C088K6C3SQZ") as the thread_id fallback. Slack requires thread_ts to be a message timestamp, so it silently ignored this and posted a top-level message instead of threading. Now uses event.id (the Slack message ts, e.g. "1609459200.000100") as the fallback, so responses are always threaded under the user's message. Also fixes metadata["thread_id"] to use the same value. Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
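The fallback fix above can be sketched in a few lines. The event struct is a hypothetical stand-in for the relay's Slack event type; the point is that the fallback must be the message timestamp (event.id), never the channel ID.

```rust
/// Hypothetical shape of a relayed Slack event, for illustration only.
struct SlackEvent {
    id: String,              // the Slack message ts, e.g. "1609459200.000100"
    channel_id: String,      // e.g. "C088K6C3SQZ" (NOT a valid thread_ts)
    thread_ts: Option<String>,
}

/// Pick the thread target for a reply: an explicit thread_ts if the
/// message was already in a thread, else the message's own ts, so the
/// response always threads under the user's message.
fn thread_id_for_reply(event: &SlackEvent) -> String {
    event.thread_ts.clone().unwrap_or_else(|| event.id.clone())
}

fn main() {
    let event = SlackEvent {
        id: "1609459200.000100".to_string(),
        channel_id: "C088K6C3SQZ".to_string(),
        thread_ts: None,
    };
    // The fallback is the message ts, never the channel id, which Slack
    // would silently ignore as a thread_ts.
    assert_eq!(thread_id_for_reply(&event), "1609459200.000100");
    assert_ne!(thread_id_for_reply(&event), event.channel_id);
    println!("thread fallback ok");
}
```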
* test(routines): add issue 1781 coverage
* fix: address PR review feedback

* test(e2e): cover chat approval parity across channels
* fix: harden chat approval prompt rendering

* Expand GitHub WASM tool surface
* Tighten GitHub tool input validation

* test(e2e): add agent loop recovery coverage
* test(e2e): harden mock message text parsing
chore: promote staging to staging-promote/2f2ad260-23866616993 (2026-04-01 22:10 UTC)
chore: promote staging to staging-promote/510fe19a-23863770631 (2026-04-01 19:23 UTC)
chore: promote staging to staging-promote/eb3fa0e6-23858863254 (2026-04-01 18:14 UTC)
chore: promote staging to staging-promote/27a2fab1-23853914907 (2026-04-01 15:27 UTC)
chore: promote staging to staging-promote/73759253-23837266309 (2026-04-01 14:31 UTC)
chore: promote staging to staging-promote/684a9d30-23835295614 (2026-04-01 07:30 UTC)
chore: promote staging to staging-promote/f441d788-23825523544 (2026-04-01 06:31 UTC)
chore: promote staging to staging-promote/b6b3ffa1-23819569437 (2026-04-01 00:16 UTC)
chore: promote staging to staging-promote/78e448df-23807837438 (2026-03-31 21:10 UTC)
chore: promote staging to staging-promote/27fa292b-23782121704 (2026-03-31 16:20 UTC)
chore: promote staging to staging-promote/42623ed1-23780941831 (2026-03-31 05:32 UTC)
…channels Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Auto-promotion from staging CI

Batch range: ffff743dfcd9355ea2297da0891e4f145b6fd4da..42623ed1113dd026bb95ef68e6142ea4d9978f74
Promotion branch: staging-promote/42623ed1-23780941831
Base: main
Triggered by: Staging CI batch at 2026-03-31 04:47 UTC

Commits in this batch (1):

Current commits in this promotion (20)

Current base: main
Current head: staging-promote/42623ed1-23780941831
Current range: origin/main..origin/staging-promote/42623ed1-23780941831

Auto-updated by staging promotion metadata workflow

Waiting for gates:

Auto-created by staging-ci workflow