fix: resolve dependency audit findings precisely #361
claude-code-best merged 14 commits into claude-code-best:main from
Conversation
Harden ACP communication boundaries

Remote ACP sessions now cannot widen permission mode through untrusted metadata or client payloads. WebSocket ACP ingress measures payloads by bytes before binary decode, and prompt queue handoff keeps exactly one prompt active while queued prompts are drained FIFO.

Constraint: ACP remote clients must not be able to open bypassPermissions without local launch intent
Constraint: WebSocket payload limits must be byte-based and checked before binary decode
Rejected: Keep promptToQueryContent wrapper | no production consumers remained after prompt conversion single-sourcing
Confidence: high
Scope-risk: moderate
Directive: Do not re-enable remote bypassPermissions from _meta unless a local launch gate is verified in both acp-link and agent
Tested: targeted ACP/RCS/acp-link prompt queue, bridge, permission, payload, and prompt conversion tests; bun run typecheck; bun run build
Not-tested: Manual live ACP/RCS session against an external client
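The byte-based limit above can be sketched as follows. This is a hypothetical illustration, not the repository's actual code: the constant, function names, and frame handling are assumptions; the key point is that size is measured in bytes and checked before any decode or parse.

```typescript
// Hypothetical sketch of a byte-based payload gate; names and the 10 MiB
// constant are assumptions, not the project's real values.
const MAX_PAYLOAD_BYTES = 10 * 1024 * 1024; // 10 MiB, checked before any decode

function payloadByteLength(data: string | Uint8Array): number {
  // Strings are measured in UTF-8 bytes, not UTF-16 code units.
  return typeof data === "string"
    ? new TextEncoder().encode(data).byteLength
    : data.byteLength;
}

// Returns the parsed message, or null when the frame must be rejected
// (an oversized payload would map to a 1009 close in the real server).
function acceptPayload(data: string | Uint8Array): unknown | null {
  if (payloadByteLength(data) > MAX_PAYLOAD_BYTES) return null;
  const text = typeof data === "string" ? data : new TextDecoder().decode(data);
  try {
    return JSON.parse(text);
  } catch {
    return null;
  }
}
```

Measuring before decode matters because a binary frame can expand during decoding; rejecting on raw byte length keeps the limit independent of the payload's content.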
Keep the full repository test, typecheck, build, and Biome lint gates usable after the ACP fix pass. This commit is intentionally separate from the ACP behavior change: it fixes Windows-safe Langfuse home redaction, removes stale lint suppressions, resolves Biome warning/info diagnostics, and keeps env expansion tests explicit without template-placeholder lint noise.

Constraint: The project completion contract requires full typecheck, lint, test, and build evidence
Rejected: Leave warning/info diagnostics as historical noise | they obscure future gate regressions and weaken flow-impact claims
Confidence: high
Scope-risk: narrow
Directive: Keep repository gate cleanup separate from feature fixes when it is not part of the same runtime path
Tested: bunx biome lint src/; bunx tsc --noEmit; bun test src/services/mcp/__tests__/envExpansion.test.ts src/utils/__tests__/sliceAnsi.test.ts src/utils/__tests__/stringUtils.test.ts; bun test; bun run build
Not-tested: Manual Langfuse export against a real external Langfuse service
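The Windows-safe home redaction mentioned above can be sketched like this. It is a minimal standalone illustration, assuming the real module derives the home paths from `process.env.HOME`/`USERPROFILE`; here they are passed in explicitly, and the function and placeholder names are hypothetical.

```typescript
// Hypothetical sketch of Windows-safe home-directory redaction.
// Escapes regex metacharacters (notably Windows backslashes) and matches
// both separator styles so C:\Users\me also hits C:/Users/me.
function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

function redactHomePaths(text: string, homes: string[]): string {
  const parts = homes
    .filter((h) => h.length > 0)
    .flatMap((h) => [escapeRegExp(h), escapeRegExp(h.replace(/\\/g, "/"))]);
  if (parts.length === 0) return text;
  return text.replace(new RegExp(`(?:${parts.join("|")})`, "g"), "<home>");
}
```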
Deep review found several paths that made ACP communication failures look normal: prompt errors could finish as end_turn, permission pipeline exceptions could fall through to client approval, tool rawInput was deep-copied with JSON, and acp-link accepted unbounded or unvalidated WebSocket payloads. This keeps the behavior fail-closed, validates WS payloads before dispatch, caps payload size before JSON parse, and preserves cancellation intent with a generation counter.

Constraint: User explicitly rejected pseudo-fixes, fallback behavior, and unbounded payload handling
Rejected: Keep JSON stringify/parse rawInput copy | duplicates large payloads and silently drops non-JSON inputs
Rejected: Delegate permission pipeline errors to client approval | allows a broken local permission check to be bypassed
Confidence: high
Scope-risk: moderate
Directive: Do not convert ACP errors into normal end_turn responses without a protocol-level reason and regression tests
Tested: bun test src/services/acp/__tests__/agent.test.ts src/services/acp/__tests__/bridge.test.ts src/services/acp/__tests__/permissions.test.ts
Tested: bun test packages/acp-link/src/__tests__/server.test.ts
Tested: bunx tsc --noEmit
Tested: bunx biome lint src/ packages/acp-link/src/
Tested: bun run test:all
Tested: bun run build
Not-tested: Manual end-to-end ACP client session over a real editor WebSocket
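The generation-counter idea for preserving cancellation intent can be sketched as below. This is an assumed shape, not the project's implementation: each new prompt bumps a generation, cancellation marks the current generation, and a completion only counts if it belongs to the latest, uncancelled generation, so a stale async result can never override a newer cancel.

```typescript
// Hypothetical sketch: a generation counter that preserves cancellation intent.
class PromptSession {
  private generation = 0;
  private cancelledAt = -1;

  // A new prompt starts a new generation and returns its token.
  start(): number {
    return ++this.generation;
  }

  // Cancellation marks everything up to the current generation as dead.
  cancel(): void {
    this.cancelledAt = this.generation;
  }

  // A completion counts only for the latest, uncancelled generation.
  accept(gen: number): boolean {
    return gen === this.generation && gen > this.cancelledAt;
  }
}
```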
📝 Walkthrough

Comprehensive update introducing WebSocket subprotocol-based authentication for ACP endpoints, migrating UUID generation from an external package to Node's crypto API, tightening API key validation via timing-safe comparison, adding CORS configuration, refactoring permission-mode handling, updating CI workflows with pinned action versions, and addressing code quality improvements (exponentiation syntax, lint comment cleanup, radix-explicit parsing).

Changes
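The UUID migration the walkthrough mentions amounts to using the built-in crypto API instead of an external package. A minimal standalone illustration (the surrounding usage is assumed, not taken from the repository):

```typescript
// The walkthrough notes UUID generation moved from an external package to
// the built-in crypto API; this snippet only illustrates that call.
import { randomUUID } from "node:crypto";

const id: string = randomUUID();

// RFC 4122 version-4 shape: 8-4-4-4-12 hex groups with version nibble "4".
const UUID_V4 =
  /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/;
```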
Sequence Diagram(s)

sequenceDiagram
autonumber
participant Client
participant ACP-Link
participant RCS as Remote Control Server
participant Backend
Client->>ACP-Link: WebSocket upgrade (with Sec-WebSocket-Protocol: rcs.auth.<token>)
activate ACP-Link
ACP-Link->>ACP-Link: extractWebSocketAuthToken()
ACP-Link->>ACP-Link: authTokensEqual(extracted, configured)
alt Token Valid
ACP-Link->>ACP-Link: Accept connection
ACP-Link->>Client: WebSocket connected
else Token Invalid
ACP-Link->>Client: Close with 1008 (auth failed)
end
deactivate ACP-Link
Client->>RCS: POST /acp/agents (Authorization: Bearer <token>)
activate RCS
RCS->>RCS: extractBearerToken()
RCS->>RCS: validateApiKey() with timing-safe comparison
alt Valid API Key
RCS->>Backend: Fetch agent list
Backend-->>RCS: Agent data
RCS-->>Client: 200 OK with agents
else Invalid Key
RCS-->>Client: 401 Unauthorized
end
deactivate RCS
Client->>RCS: WebSocket upgrade to /acp/relay (Sec-WebSocket-Protocol: rcs.auth.<token>)
activate RCS
RCS->>RCS: extractWebSocketAuthToken()
RCS->>RCS: validateApiKey() with timing-safe hash comparison
alt Token Valid
RCS->>Client: Upgrade accepted
Client->>RCS: JSON message (decodeAcpWsMessageData)
RCS->>RCS: handleSizedWsPayload() - validate size ≤ 10MiB
alt Size Valid & Valid JSON
RCS->>Backend: Process message
Backend-->>RCS: Response
RCS-->>Client: Forward response
else Payload Too Large
RCS-->>Client: Close with 1009 (message too big)
end
else Token Invalid
RCS-->>Client: Close with 1008 (auth failed)
end
deactivate RCS
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
GitHub Actions failed under bun test --coverage because permissions.test.ts replaced ../bridge.js with a partial mock that omitted forwardSessionUpdates. Coverage worker ordering on Linux let sibling tests observe that incomplete module. This isolates ACP test mocks by snapshotting real exports, overriding only requested symbols, and restoring mocks in LIFO order. The shared helper also keeps the same behavior in agent.test.ts without duplicating mock infrastructure.

Constraint: bun:test mock.module is process-global inside a worker.
Rejected: Add fallback exports or production guards | the bridge export exists; the failure was test mock pollution.
Rejected: Keep per-file helper copies | duplication would let restore semantics drift again.
Confidence: high
Scope-risk: narrow
Directive: Prefer safeMockModule for partial mocks of real modules in ACP tests; plain mock.module is only appropriate for fully synthetic modules or isolated tests.
Tested: bun test src/services/acp/__tests__/agent.test.ts src/services/acp/__tests__/bridge.test.ts src/services/acp/__tests__/permissions.test.ts
Tested: bun test --coverage --coverage-reporter=lcov
Tested: bunx tsc --noEmit
Tested: bun run lint
Tested: git diff --check
Not-tested: Linux runner directly before push
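The snapshot/override/restore pattern behind safeMockModule can be sketched without bun:test, using a plain module registry stand-in. Everything here is illustrative (the real helper wraps bun's process-global mock.module); the core idea is the same: keep every real export, override only the requested symbols, and unwind restores in LIFO order.

```typescript
// Hypothetical sketch of the safeMockModule idea against a registry stand-in.
type ModuleExports = Record<string, unknown>;

const restoreStack: Array<() => void> = [];

function safeMock(
  registry: Map<string, ModuleExports>,
  path: string,
  overrides: ModuleExports,
): void {
  const real = registry.get(path) ?? {};
  const snapshot = { ...real };
  registry.set(path, { ...real, ...overrides }); // preserve every real export
  restoreStack.push(() => registry.set(path, snapshot));
}

function restoreAllMocks(): void {
  while (restoreStack.length > 0) restoreStack.pop()!(); // LIFO unwind
}
```

Because the merged object spreads the real exports first, a partial mock can no longer hide a symbol such as forwardSessionUpdates from sibling tests.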
The previous CI repair removed the failing partial bridge mock, but it also added a shared safeMockModule helper and left the acp-link bypass normalization warning in the real new_session path. This tightens the fix: acp-link now treats an unauthorized client bypass request as normal permission-mode normalization without emitting a warning, and the ACP permission test explicitly preserves the real bridge and permission exports instead of using a shared helper. The agent test keeps its local mock preservation but names it by behavior and restores mocks in LIFO order.

Constraint: CI output should not contain expected warning noise for covered policy branches.
Rejected: Silence the test only | the normal new_session path would still warn for an expected normalization branch.
Rejected: Keep the shared safeMockModule helper | the failing module was specific and should be fixed by preserving real exports at the mocking site.
Confidence: high
Scope-risk: narrow
Directive: Treat client-requested bypassPermissions as data to normalize unless the local default explicitly enables bypass.
Tested: bun test packages/acp-link/src/__tests__/server.test.ts
Tested: bun test src/services/acp/__tests__/agent.test.ts src/services/acp/__tests__/bridge.test.ts src/services/acp/__tests__/permissions.test.ts
Tested: bun test --coverage --coverage-reporter=lcov with UPPER_WARN_COUNT=0
Tested: bun run test:all
Tested: bun run lint
Tested: bunx tsc --noEmit
Tested: git diff --check
ACP clients must not be able to enter bypassPermissions unless the local ACP gate and process environment both allow it. The same gate now controls session creation, explicit mode changes, and the ExitPlanMode option list, while session setup restores process.cwd so coverage and later work do not inherit ACP session state.

Constraint: CI must stay warning-clean without hiding real ACP permission failures
Rejected: Logging rejected bypass requests on the normal new_session path | it preserves audit text but reintroduces warning noise the runtime should not emit
Rejected: Broad CI=true postinstall skip | it hides explicit Chrome MCP setup checks outside the install path
Confidence: high
Scope-risk: moderate
Directive: Keep bypassPermissions gated through one ACP availability decision before exposing it to clients
Tested: bun test src/services/acp/__tests__/permissions.test.ts src/services/acp/__tests__/agent.test.ts packages/acp-link/src/__tests__/server.test.ts
Tested: bun run test:all
Tested: bun run lint
Tested: bun run build:vite with zero warning matches
Tested: bun test --coverage --coverage-reporter lcov --coverage-dir coverage produced non-empty lcov with SF records and zero filtered warning matches
Not-tested: GitHub Actions result after this push
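The single-gate idea can be sketched as one availability decision feeding a fail-closed mode resolver. The names and mode list below are illustrative assumptions, not the repository's API; the point is that bypass is exposed only when the local launch gate and the process environment both agree, and that unknown input fails closed.

```typescript
// Hypothetical sketch of the single bypass gate described above.
const KNOWN_MODES = ["default", "acceptEdits", "plan", "bypassPermissions"] as const;
type PermissionMode = (typeof KNOWN_MODES)[number];

// One decision point: both the local launch gate and the process
// environment must agree before bypass is ever exposed to a client.
function bypassAvailable(localOptIn: boolean, envAllows: boolean): boolean {
  return localOptIn && envAllows;
}

// Fail closed: malformed or unknown input yields null; an unauthorized
// bypass request is normalized down rather than widened.
function resolvePermissionMode(
  requested: unknown,
  canBypass: boolean,
): PermissionMode | null {
  if (
    typeof requested !== "string" ||
    !(KNOWN_MODES as readonly string[]).includes(requested)
  ) {
    return null;
  }
  if (requested === "bypassPermissions" && !canBypass) return "default";
  return requested as PermissionMode;
}
```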
The CI log still had three non-failing warnings after the ACP hardening commit: git init default-branch advice from checkout, a Node 20 action-runtime deprecation, and one additional known Vite dynamic-import diagnostic that only surfaced on Linux. The workflow now provides explicit git config and opts actions into Node 24, while Vite keeps a narrow allowlist for acknowledged optimizer diagnostics.

Constraint: Do not use shell log filtering to hide warnings after they happen
Rejected: Grep warning lines out of CI output | it would make future diagnostics harder to find
Confidence: high
Scope-risk: narrow
Directive: Add new Vite warning allowlist entries only after checking that they are existing optimizer diagnostics, not new application defects
Tested: bunx tsc --noEmit --pretty false
Tested: bunx biome lint .github/workflows/ci.yml vite.config.ts
Tested: bun run build:vite with zero warning matches
Not-tested: GitHub Actions result after this push
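A workflow fragment along these lines would address the first two warnings. This is a sketch under stated assumptions: the step placement is invented, and `FORCE_JAVASCRIPT_ACTIONS_TO_NODE24` is GitHub's documented opt-in for the Node 24 action-runtime transition at the time of writing; verify the variable name against the current runner documentation before relying on it.

```yaml
# Hypothetical sketch; the repository's real workflow structure differs.
env:
  # Opt runner-executed JavaScript actions into the Node 24 runtime,
  # silencing the Node 20 deprecation warning.
  FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: true

steps:
  - name: Silence git init default-branch advice
    run: git config --global init.defaultBranch main
  - uses: actions/checkout@v5
```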
ACP clients now fail closed when permissionMode is malformed, unknown, or requests bypass without a local bypass opt-in. acp-link validates new_session input before forwarding to the agent and returns client error frames for expected unauthorized requests without logging create-failed noise. The direct AcpAgent path independently rejects invalid _meta.permissionMode and unauthorized bypass instead of falling back to settings. CI workflows and generated GitHub App templates now use Node 24-compatible actions pinned to immutable commit SHAs, and acp-link startup output no longer prints the auth token.

Constraint: Must not hide warnings with test isolation or log filtering
Rejected: Silent fallback to local permission mode | accepts invalid client intent and masks boundary behavior
Rejected: Broad dependency churn from bun update | audit remained failing while package and lockfile churn expanded scope
Confidence: high
Scope-risk: moderate
Directive: Client-provided permissionMode must stay fail-closed before reaching AcpAgent; only local settings.defaultMode may fall back to default on invalid local config
Tested: bun test packages/acp-link/src/__tests__/server.test.ts src/services/acp/__tests__/agent.test.ts src/services/acp/__tests__/permissions.test.ts src/services/skillLearning/__tests__/skillLifecycle.test.ts src/utils/settings/__tests__/config.test.ts
Tested: bunx tsc -p packages/acp-link/tsconfig.json --noEmit --pretty false
Tested: bunx tsc --noEmit --pretty false
Tested: bun run lint
Tested: bun run test:all
Tested: local CI equivalent install/typecheck/coverage/build with warning_scan=0
Not-tested: Pre-existing bun audit vulnerabilities require a separate dependency-hardening PR
Use dependency-native upgrades and lockfile resolution to close the audit findings without suppressions. Keep the chrome MCP setup aligned with the new dependency graph and add real integration coverage so the override behavior stays verified.

Constraint: no audit ignores or warning suppression
Rejected: broad google-auth/protobuf overrides | replaced with upstream-compatible resolution
Confidence: high
Scope-risk: moderate
Directive: keep dependency fixes upstream-compatible; do not reintroduce blanket overrides unless the audit surface changes materially
Tested: bun audit; bun audit --json; bun install --frozen-lockfile with CLAUDE_CODE_SKIP_CHROME_MCP_SETUP=1; bunx tsc --noEmit --pretty false; bun run lint; targeted tests; bun run test:all; bun test --coverage --coverage-reporter lcov --coverage-dir coverage; bun run build:vite
Not-tested: unrelated pre-existing ACP/CORS/token fallback residual risks
Replace the ad hoc URL-token flow with crypto UUID-backed transport identifiers so the bearer token stays in structured request data instead of query strings. Keep the server, web client, and transport helpers aligned so the ACP/RCS handshake remains compatible after the API shape change.

Constraint: token must not be embedded in the URL
Rejected: token-as-uuid query fallback | leaked bearer tokens in URLs
Confidence: high
Scope-risk: moderate
Directive: preserve the structured auth path; do not reintroduce query-token fallback when adjusting ACP transport code
Tested: targeted ACP/RCS transport tests
Not-tested: unrelated pre-existing ACP/CORS/token fallback residual risks
Normalize WebFetch headers before dispatch so canonicalization preserves auth semantics and duplicate forms do not slip through. Keep the behavior locked with a focused header test instead of broadening the request pipeline.

Constraint: preserve header semantics without widening the fetch surface
Rejected: ad hoc caller-side normalization | too easy to bypass in future call sites
Confidence: high
Scope-risk: narrow
Directive: keep header normalization close to the WebFetch utility so future callers inherit the same behavior automatically
Tested: targeted WebFetch header tests
Not-tested: unrelated fetch backend behavior beyond header normalization
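The normalization described above can be sketched as a small canonicalizer. The function name and error behavior are assumptions; the essential properties are that header names are lowercased once, close to the fetch utility, and that duplicates differing only by case are refused instead of silently merged.

```typescript
// Hypothetical sketch: canonicalize header names to lowercase and refuse
// duplicates that only differ by case, so auth semantics survive dispatch.
function normalizeHeaders(headers: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    const canonical = name.toLowerCase();
    // hasOwnProperty guard avoids false positives from prototype keys.
    if (Object.prototype.hasOwnProperty.call(out, canonical)) {
      throw new Error(`duplicate header after canonicalization: ${canonical}`);
    }
    out[canonical] = value;
  }
  return out;
}
```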
Tighten the remaining Claude security artifact items by requiring API keys on ACP global reads and relay upgrades, moving WebSocket tokens out of URLs, and replacing open web CORS with an explicit allowlist.

Constraint: Browser WebSocket clients cannot set arbitrary Authorization headers, so the token is carried in a selected subprotocol instead of a query string.
Rejected: Keep UUID auth for ACP channel groups | any caller can mint a UUID and read global ACP data.
Rejected: Preserve ?token= compatibility | secrets leak into logs, history, referrers, and intermediaries.
Confidence: high
Scope-risk: moderate
Directive: Do not reintroduce query-string bearer tokens; use Authorization or rcs.auth.<base64url-token>.
Tested: bunx tsc --noEmit --pretty false
Tested: bun run typecheck in packages/remote-control-server
Tested: bun run build in packages/acp-link
Tested: bun run lint
Tested: bun audit
Tested: focused RCS/acp-link/web tests, 160 pass
Tested: Edge headless browser WebSocket subprotocol handshake
Tested: bun run test:all, 3669 pass
Tested: bun run build:vite
Tested: bun run build
Not-tested: Manual end-to-end relay with a live external ACP agent
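Extracting a token carried this way can be sketched as below. The `rcs.auth.` prefix follows the convention named in this commit; the parsing details and exact function names in the project are assumptions. The `Sec-WebSocket-Protocol` header is a comma-separated list of offered subprotocols, so the server scans for the auth entry and validates that the remainder is base64url.

```typescript
// Sketch of pulling a bearer token out of a WebSocket subprotocol offer,
// following the rcs.auth.<base64url-token> convention described above.
const AUTH_PROTOCOL_PREFIX = "rcs.auth.";

function extractWebSocketAuthToken(
  secWebSocketProtocol: string | undefined,
): string | null {
  if (!secWebSocketProtocol) return null;
  for (const offered of secWebSocketProtocol.split(",")) {
    const candidate = offered.trim();
    if (candidate.startsWith(AUTH_PROTOCOL_PREFIX)) {
      const token = candidate.slice(AUTH_PROTOCOL_PREFIX.length);
      // base64url alphabet only; anything else is rejected.
      return /^[A-Za-z0-9_-]+$/.test(token) ? token : null;
    }
  }
  return null;
}
```

A server accepting this handshake must also echo the selected subprotocol back in the upgrade response, or compliant clients will close the connection.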
Resolve the PR merge conflict against upstream/main while preserving the ACP auth hardening on the feature branch. The only manual conflict was in the RCS store: keep the upstream removal of ACP machineName record reuse so reconnects do not inherit stale agent/channel-group state, while retaining the branch's node:crypto UUID generation path.

Constraint: PR claude-code-best#361 was merge-conflicting against claude-code-best/claude-code main, not failing CI.
Constraint: Do not merge into upstream main; update only the fork PR branch for review.
Rejected: Restore ACP machineName reuse | upstream removed it and it can retain stale ACP state.
Rejected: Suppress warnings or tests | local lint, audit, focused tests, full test suite, and builds pass without suppression.
Confidence: high
Scope-risk: moderate
Directive: Do not reintroduce query-string bearer tokens, UUID-only ACP access, open /web/* CORS, or ACP machineName environment reuse without new threat-model evidence.
Tested: bunx tsc --noEmit --pretty false; packages/remote-control-server bun run typecheck; packages/acp-link bun run build; bun run lint; bun audit; focused ACP/RCS tests 165 pass; bun run test:all 3672 pass; bun run build; bun run build:vite; git diff --check
The CI runner does not expose @grpc/proto-loader as a root-resolvable package, and the test was relying on local hoisting rather than the real dependency owner. Resolve proto-loader through @opentelemetry/exporter-trace-otlp-grpc and @grpc/grpc-js so the smoke test follows the package graph it is validating.

Constraint: Do not add a new root dependency for a transitive smoke test.
Rejected: Skip or weaken the test | the test protects the protobuf 7 override path and should keep exercising loadSync.
Rejected: Add @grpc/proto-loader directly to root package.json | that hides the owning-package resolution issue and broadens dependency surface.
Confidence: high
Scope-risk: narrow
Directive: Dependency override smoke tests should resolve from the package that actually owns the dependency, not from incidental root hoisting.
Tested: bun test tests/integration/dependency-overrides.test.ts; bunx tsc --noEmit --pretty false; bun run lint; bun audit; bun run test:all; git diff --check
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/services/api/filesApi.ts (1)
110-110: ⚠️ Potential issue | 🟠 Major
Remove `as any` from retry result handling.
`RetryResult<T>` already narrows correctly in the `!result.done` path, so this cast weakens type safety unnecessarily.
Suggested fix:
- lastError = (result as any).error || `${operation} failed`
+ lastError = result.error ?? `${operation} failed`
As per coding guidelines, "Prohibit `as any` type assertions in production code; ... Use type guards (type narrowing) with union types instead of forced type casting".
Also applies to: 116-116
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/api/filesApi.ts` at line 110, The assignment to lastError uses a forced cast "(result as any).error" which weakens type safety; update the retry result handling to rely on the RetryResult<T> discriminated union instead — when checking the result (the variable named result, returned from the retry helper) ensure you narrow on the discriminant (e.g., result.done === false or result.success === false depending on the RetryResult shape) and then read result.error directly (and fall back to `${operation} failed`), removing the "as any" cast; apply the same change to the other occurrence that sets lastError.
🧹 Nitpick comments (21)
src/utils/__tests__/stringUtils.test.ts (1)
25-25: Simplify the string concatenation.
The concatenation `"^$" + "{}()|[]\\.*+?"` produces the same result as a single literal. Consider simplifying for clarity.
♻️ Simplified constant definition
- const allSpecialChars = "^$" + "{}()|[]\\.*+?";
+ const allSpecialChars = "^${}()|[]\\.*+?";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/__tests__/stringUtils.test.ts` at line 25, The constant allSpecialChars is built via unnecessary concatenation ("^$" + "{}()|[]\\.*+?"); replace it with a single string literal containing the same sequence (allSpecialChars) to simplify and improve clarity—update the declaration of allSpecialChars in the test to use one combined string instead of concatenating two literals.
src/services/mcp/__tests__/envExpansion.test.ts (2)
110-120: Nit: nested case can reuse `envExpr` for consistency.
`${ENV_OPEN}${envExpr("VAR")}${ENV_CLOSE}` is equivalent to `envExpr(envExpr("VAR"))`, which keeps the placeholder construction uniform with every other test in the file. Behavior and assertions remain unchanged.
♻️ Optional simplification
- const nestedExpr = `${ENV_OPEN}${envExpr("VAR")}${ENV_CLOSE}`;
+ const nestedExpr = envExpr(envExpr("VAR"));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/mcp/__tests__/envExpansion.test.ts` around lines 110 - 120, Replace the manual nested placeholder construction `${ENV_OPEN}${envExpr("VAR")}${ENV_CLOSE}` with the helper call envExpr(envExpr("VAR")) in the test's nestedExpr declaration so the test uses the same envExpr helper consistently; update the nestedExpr assignment (used by expandEnvVarsInString and assertions on result.missingVars/result.expanded) to call envExpr(envExpr("VAR")).
4-6: Consider a brief comment explaining the `"$" + "{"` split.
The concatenation is clearly intentional (to prevent build/scan tooling from interpreting the literal `${...}` as a template expression or env placeholder when reading this test source), but the rationale isn't obvious to future readers. A one-liner above `ENV_OPEN` would prevent someone from "simplifying" it back to `"${"` and reintroducing the original problem this refactor solves.
📝 Suggested clarifying comment
+// Build "${...}" placeholders dynamically so tooling that scans this file
+// (bundlers, env-expansion utilities, etc.) does not interpret the literals.
 const ENV_OPEN = "$" + "{";
 const ENV_CLOSE = "}";
 const envExpr = (value: string): string => `${ENV_OPEN}${value}${ENV_CLOSE}`;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/mcp/__tests__/envExpansion.test.ts` around lines 4 - 6, Add a one-line clarifying comment above ENV_OPEN explaining why the string is split as "$" + "{" to prevent source-level tools or editors from interpreting the literal `${...}` (e.g., template/placeholder scanners) in tests; keep ENV_OPEN, ENV_CLOSE and envExpr unchanged but ensure the comment references ENV_OPEN so future readers understand not to collapse it to "${".
src/commands/model/model.tsx (1)
81-81: Consider adding explicit type annotation for tri-state variable.
The removal of `= undefined` is functionally correct since uninitialized variables default to `undefined`. However, the tri-state semantics (true/false/undefined) are important here and would be clearer with an explicit type annotation.
📝 Optional: Add type annotation for clarity
At line 81:
- let wasFastModeToggledOn
+ let wasFastModeToggledOn: boolean | undefined
At line 217:
- let wasFastModeToggledOn
+ let wasFastModeToggledOn: boolean | undefined
This makes the tri-state nature explicit: `true` = fast mode turned on, `false` = fast mode turned off, `undefined` = no change.
Also applies to: 217-217
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/model/model.tsx` at line 81, The tri-state variable wasFastModeToggledOn should have an explicit type to clarify its true/false/undefined semantics; change its declaration (the one currently written as "let wasFastModeToggledOn") to include a type annotation "boolean | undefined" and apply the same annotation where it's declared/redeclared (the second occurrence around the later usage) so the intent (true = on, false = off, undefined = no change) is explicit to readers and TypeScript.
src/services/langfuse/sanitize.ts (1)
28-28: Module-load-time evaluation of `HOME_DIR_PATTERN`.
Patterns are snapshotted once when this module is first imported. Tests (or any code) that mutate `process.env.HOME`/`USERPROFILE` after import won't see the change reflected in redaction. Consider computing lazily on first use (or per-call when cheap) so behavior tracks the current environment.
♻️ Optional refactor: lazy initialization
-const HOME_DIR_PATTERN = new RegExp(`(?:${homePathPatterns().join('|')})`, 'g')
+let _homeDirPattern: RegExp | undefined
+function getHomeDirPattern(): RegExp {
+  if (!_homeDirPattern) {
+    _homeDirPattern = new RegExp(`(?:${homePathPatterns().join('|')})`, 'g')
+  }
+  return _homeDirPattern
+}
Then replace `HOME_DIR_PATTERN` usages below with `getHomeDirPattern()`. Add a reset hook if tests need to invalidate the cache.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/langfuse/sanitize.ts` at line 28, The current HOME_DIR_PATTERN is computed at module-load time so changes to environment vars like process.env.HOME/USERPROFILE after import aren’t reflected; replace the static const with a lazy getter (e.g., implement getHomeDirPattern()) that computes and returns new RegExp(`(?:${homePathPatterns().join('|')})`, 'g') on first call (or on every call if cheap), update all usages of HOME_DIR_PATTERN to call getHomeDirPattern(), and optionally expose a resetHomeDirPatternForTests() to clear any cached pattern so tests can mutate environment variables and get updated redaction behavior; reference HOME_DIR_PATTERN, homePathPatterns(), and getHomeDirPattern() when making the change.
vite.config.ts (1)
25-41: Consider matching INEFFECTIVE_DYNAMIC_IMPORT on `warning.id` and document each entry.
Two small improvements worth considering:
- Rollup populates `warning.id` with the importer module id for `INEFFECTIVE_DYNAMIC_IMPORT`, so matching on `warning.id` (suffix/equality) is more precise than substring-matching against the human-readable `warning.message`, which can change wording across Rollup versions and may incidentally contain unrelated path tokens.
- Since `acknowledgedBuildWarnings` is a static allowlist, entries silently stop suppressing if the underlying files are renamed/moved. A short comment per entry (or grouped by reason) helps future maintainers prune the list rather than blindly carrying it forward.
♻️ Proposed refactor
-const acknowledgedBuildWarnings = [
-  "src/utils/sandbox/sandbox-adapter.ts",
-  "packages/builtin-tools/src/tools/ToolSearchTool/prompt.ts",
-  "src/utils/claudemd.ts",
-  "src/services/SessionMemory/sessionMemoryUtils.ts",
-  "src/commands/logout/logout.tsx",
-  "src/utils/sessionStorage.ts",
-  "src/utils/swarm/backends/registry.ts",
-  "src/utils/toolSearch.ts",
-  "src/utils/hooks.ts",
-  "src/services/skillLearning/sessionObserver.ts",
-  "src/utils/settings/changeDetector.ts",
-];
+// Files with intentional dynamic imports that Rollup flags as
+// INEFFECTIVE_DYNAMIC_IMPORT (lazy-loaded by design / runtime-resolved).
+// Keep this list minimal; remove an entry as soon as the underlying
+// import is restructured.
+const acknowledgedBuildWarnings = [
+  "src/utils/sandbox/sandbox-adapter.ts",
+  "packages/builtin-tools/src/tools/ToolSearchTool/prompt.ts",
+  "src/utils/claudemd.ts",
+  "src/services/SessionMemory/sessionMemoryUtils.ts",
+  "src/commands/logout/logout.tsx",
+  "src/utils/sessionStorage.ts",
+  "src/utils/swarm/backends/registry.ts",
+  "src/utils/toolSearch.ts",
+  "src/utils/hooks.ts",
+  "src/services/skillLearning/sessionObserver.ts",
+  "src/utils/settings/changeDetector.ts",
+];
@@
   if (
     warning.code === "EVAL" &&
     warning.id?.includes("@protobufjs+inquire")
   ) {
     return true;
   }
-  return (
-    warning.code === "INEFFECTIVE_DYNAMIC_IMPORT" &&
-    acknowledgedBuildWarnings.some((id) => warning.message?.includes(id))
-  );
+  if (warning.code !== "INEFFECTIVE_DYNAMIC_IMPORT") return false;
+  const target = warning.id ?? warning.message ?? "";
+  return acknowledgedBuildWarnings.some((id) => target.includes(id));
 }
To confirm `warning.id` is populated for `INEFFECTIVE_DYNAMIC_IMPORT` in the Rollup version Vite resolves here:
Rollup INEFFECTIVE_DYNAMIC_IMPORT warning shape id property
And to confirm the listed paths still exist (any drift would mean the suppression no longer matches anything):
#!/bin/bash
set -euo pipefail
paths=(
  "src/utils/sandbox/sandbox-adapter.ts"
  "packages/builtin-tools/src/tools/ToolSearchTool/prompt.ts"
  "src/utils/claudemd.ts"
  "src/services/SessionMemory/sessionMemoryUtils.ts"
  "src/commands/logout/logout.tsx"
  "src/utils/sessionStorage.ts"
  "src/utils/swarm/backends/registry.ts"
  "src/utils/toolSearch.ts"
  "src/utils/hooks.ts"
  "src/services/skillLearning/sessionObserver.ts"
  "src/utils/settings/changeDetector.ts"
)
for p in "${paths[@]}"; do
  if [[ -f "$p" ]]; then echo "OK $p"; else echo "MISS $p"; fi
done
Based on learnings: "Applies to **/*.{ts,tsx} : All imports, builds, and execution must use Bun APIs, not Node.js APIs. Runtime is Bun, not Node.js" — this Vite config file is build-time tooling and not affected by the Bun runtime constraint, so Node imports here remain appropriate.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@vite.config.ts` around lines 25 - 41, The current isAcknowledgedBuildWarning should be updated to check INEFFECTIVE_DYNAMIC_IMPORT against warning.id (using equality or suffix match) instead of searching warning.message, so modify the branch that returns when warning.code === "INEFFECTIVE_DYNAMIC_IMPORT" to compare warning.id (e.g., acknowledgedBuildWarnings.some(id => warning.id?.endsWith(id) || warning.id === id)) and guard for undefined id; additionally, add a short comment for each entry in the acknowledgedBuildWarnings array (or group comments) explaining why that path is suppressed so future maintainers can verify/remove stale entries.
packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts (1)
56-59: Use the shared `settings` mock instead of inlining one here.
Per the project's testing conventions, `settings.js` is one of the modules that must be mocked via `tests/mocks/` rather than redefined per-test. Inlining `getSettings_DEPRECATED`/`getInitialSettings` here creates a divergent stub surface that won't track future changes to the real exports.
♻️ Suggested refactor
-mock.module('src/utils/settings/settings.js', () => ({
-  getInitialSettings: () => ({}),
-  getSettings_DEPRECATED: () => ({ skipWebFetchPreflight: true }),
-}))
+import { settingsMock } from '../../../../../../tests/mocks/settings'
+// ...
+mock.module('src/utils/settings/settings.js', () => ({
+  ...settingsMock,
+  getSettings_DEPRECATED: () => ({ skipWebFetchPreflight: true }),
+}))
If the shared mock doesn't yet expose a `skipWebFetchPreflight` toggle, extend `tests/mocks/settings.ts` rather than redefining the module surface here.
Based on learnings: "Use shared mocks from `tests/mocks/` for `log.ts`, `debug.ts`, `bun:bundle`, `settings.js`, `config.ts`, and `auth.ts`" and "Never inline mock definitions; update shared mock files instead".
Verify each finding against the current code and only fix it if needed. In `@packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts` around lines 56 - 59, Replace the inline mock of settings in headers.test.ts (the mock.module call that defines getInitialSettings and getSettings_DEPRECATED) with the project’s shared settings mock from tests/mocks/settings.ts; if the shared mock lacks skipWebFetchPreflight, add that toggle to tests/mocks/settings.ts (exporting getInitialSettings and getSettings_DEPRECATED appropriately) so tests import the common stub instead of redefining getInitialSettings/getSettings_DEPRECATED locally.
.github/workflows/update-contributors.yml (1)
18-18: Heads-up: pinned to a main-branch SHA, not a release tag.
`jaywcjlove/github-action-contributors` doesn't expose a stable semver tag here, so the SHA pin is correct but updates require manually following `main`. Consider opening an upstream issue to request tagged releases, or wrap this step with `continue-on-error: true` since the contributors graphic is non-critical.
Verify each finding against the current code and only fix it if needed. In @.github/workflows/update-contributors.yml at line 18, The workflow step using the action jaywcjlove/github-action-contributors is pinned to a commit SHA on main (not a release tag); update the step to be resilient by adding continue-on-error: true to that step (so failures won't break the run) and optionally open an upstream issue requesting tagged releases; locate the step referencing jaywcjlove/github-action-contributors and add continue-on-error: true under that step to implement the quick fix.

packages/remote-control-server/src/auth/api-key.ts (1)
9-13: Constant-time comparison looks correct — optional: cache hashed configured keys.

Both operands are 32-byte SHA-256 digests so `timingSafeEqual` won't throw on length mismatch. Hashing every configured key on every request is negligible for small lists, but if `config.apiKeys` is bounded and stable you could precompute the digests once at module load:

Optional micro-optimization
```diff
-import { createHash, timingSafeEqual } from "node:crypto";
-import { config } from "../config";
-
-function sha256(value: string): Buffer {
-  return createHash("sha256").update(value).digest();
-}
-
-/** Validate a raw API key token string */
-export function validateApiKey(token: string | undefined): boolean {
-  if (!token) return false;
-  const tokenHash = sha256(token);
-  return config.apiKeys.some((key) => timingSafeEqual(tokenHash, sha256(key)));
-}
+import { createHash, timingSafeEqual } from "node:crypto";
+import { config } from "../config";
+
+function sha256(value: string): Buffer {
+  return createHash("sha256").update(value).digest();
+}
+
+let cachedKeyHashes: Buffer[] | undefined;
+function getKeyHashes(): Buffer[] {
+  if (!cachedKeyHashes) cachedKeyHashes = config.apiKeys.map(sha256);
+  return cachedKeyHashes;
+}
+
+/** Validate a raw API key token string */
+export function validateApiKey(token: string | undefined): boolean {
+  if (!token) return false;
+  const tokenHash = sha256(token);
+  return getKeyHashes().some((keyHash) => timingSafeEqual(tokenHash, keyHash));
+}
```

Skip if `config.apiKeys` can be reloaded at runtime (cache would go stale).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/auth/api-key.ts` around lines 9 - 13, The validateApiKey function currently re-hashes each configured key on every call; to micro-optimize, precompute and store the SHA-256 digests for config.apiKeys at module load (e.g., create a hashedApiKeys array by mapping config.apiKeys through sha256) and update validateApiKey to compare the incoming tokenHash against hashedApiKeys using timingSafeEqual, and if config.apiKeys may change at runtime provide a refreshApiKeyCache function or skip caching when reloading is required; keep function names validateApiKey, sha256, timingSafeEqual and config.apiKeys as the reference points.

scripts/setup-chrome-mcp.mjs (1)
17-19: Consider accepting common truthy values for the skip flag.

Right now only the literal string `"1"` skips setup. If a contributor sets `CLAUDE_CODE_SKIP_CHROME_MCP_SETUP=true` (a common convention) the gate silently no-ops. Optional tweak:

Proposed broadening
```diff
-if (process.env.CLAUDE_CODE_SKIP_CHROME_MCP_SETUP === "1") {
+const skipFlag = process.env.CLAUDE_CODE_SKIP_CHROME_MCP_SETUP?.toLowerCase();
+if (skipFlag === "1" || skipFlag === "true" || skipFlag === "yes") {
   process.exit(0);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/setup-chrome-mcp.mjs` around lines 17 - 19, The skip gate currently only treats the exact string "1" as truthy; update the check on process.env.CLAUDE_CODE_SKIP_CHROME_MCP_SETUP to accept common truthy values (e.g., "1", "true", "yes", "on" — case-insensitive, trimmed) before calling process.exit(0). Locate the existing if that references process.env.CLAUDE_CODE_SKIP_CHROME_MCP_SETUP and replace the equality check with a small normalized membership test (lowercasing/trimming the env value and comparing against a set of accepted truthy tokens) so contributors can use values like "true" or "yes" to skip setup.

packages/remote-control-server/src/transport/acp-ws-handler.ts (1)
89-89: Truncating `randomUUID()` to 12 hex chars leaves only ~48 bits of entropy.

`randomUUID().replace(/-/g, "").slice(0, 12)` yields ~48 bits of randomness for the fallback `channelGroupId`. With many concurrent unregistered agents reaching the fallback path, birthday-bound collisions become non-negligible (≈ 1 in 16M). The previous `uuid()` call was likely truncated too, so this is a pre-existing concern rather than a regression — but worth either (a) keeping the full 32-char hex, or (b) adding a uniqueness guard against `getAcpEventBus` keys to avoid cross-tenant event leakage if a collision ever occurs.

Optional: keep full entropy

```diff
-  const channelGroupId = (msg.channel_group_id as string) || `group_${randomUUID().replace(/-/g, "").slice(0, 12)}`;
+  const channelGroupId = (msg.channel_group_id as string) || `group_${randomUUID().replace(/-/g, "")}`;
```

(and similarly at Line 157)
Also applies to: 157-157
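If option (b) is preferred, here is a minimal sketch of the generate-and-check loop — the `Set` registry below is a stand-in assumption for the real `getAcpEventBus` key space:

```typescript
import { randomUUID } from "node:crypto";

// Stand-in for the real getAcpEventBus key registry (assumption).
const activeGroupIds = new Set<string>();

// Regenerate until the candidate id is unused. With the full 32-hex-char
// UUID (~122 bits of entropy) the loop effectively never repeats, so the
// guard is cheap insurance against cross-tenant event-bus collisions.
export function allocateChannelGroupId(): string {
  for (;;) {
    const id = `group_${randomUUID().replace(/-/g, "")}`;
    if (!activeGroupIds.has(id)) {
      activeGroupIds.add(id);
      return id;
    }
  }
}
```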
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/transport/acp-ws-handler.ts` at line 89, The fallback channelGroupId truncates randomUUID() to 12 chars, reducing entropy and risking collisions; update the logic in acp-ws-handler where channelGroupId is created (and the similar occurrence near the other fallback at the second occurrence) to either use the full uuid string (keep all hex chars from randomUUID().replace(/-/g, "")) or implement a uniqueness guard: generate a candidate id, check getAcpEventBus (or the map/registry it uses) for existence, and loop/regenerate until a unique id is produced before assigning channelGroupId to avoid event-bus key collisions across concurrent unregistered agents.

.github/workflows/release-rcs.yml (1)
20-20: SHA pinning looks correct.

Verified `actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd` corresponds to the v6.0.2 tag against multiple independent references. The other three Docker action SHAs follow the same convention with version+date comments. Good supply-chain hardening.

Optional: consider adding a `# renovate: ...` or Dependabot comment so these pinned digests get updated automatically, otherwise they'll silently drift.

Also applies to: 23-23, 30-30, 50-50
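As a sketch of what that convention looks like: Renovate's github-actions manager (and Dependabot) read the trailing version comment on a SHA-pinned `uses:` line and keep digest and comment in sync. The step shown reuses the checkout pin from this workflow; the exact directive-comment syntax, if you prefer one, is tool-specific.

```yaml
steps:
  # The "# v6.0.2" comment lets Renovate/Dependabot map the digest to a
  # release and propose pinned-digest bumps automatically.
  - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
```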
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release-rcs.yml at line 20, Add an automation hint so pinned action digests (e.g. the actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd line and the other Docker action SHA lines) are picked up by Renovate or Dependabot; modify each pinned-SHA comment to include a tool-specific directive like "# renovate: datasource=github-actions depName=actions/checkout" or add a Dependabot-friendly comment (e.g. "# dependabot use the latest" or similar) immediately after the action lines so automated tooling will detect and update these pinned digests.

docs/features/remote-control-self-hosting.md (1)
228-231: LGTM — accurately captures the EventSource limitation.

Browser `EventSource` indeed cannot set `Authorization` headers, so calling out the `fetch` + `ReadableStream` workaround for external SSE subscribers is the correct guidance to pair with the move away from query-string tokens. Consider a follow-up to add a small JS snippet showing the recommended pattern (Bearer header + manual line splitting) so consumers don't reinvent it — non-blocking.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/features/remote-control-self-hosting.md` around lines 228 - 231, The doc correctly notes EventSource can't send Authorization headers; add a short example snippet demonstrating how to subscribe to the `/acp/channel-groups/:id/events` SSE endpoint using fetch + a ReadableStream with an `Authorization: Bearer <api-key>` header and simple manual line-splitting/dispatch logic so readers can copy-paste the recommended pattern; reference the EventSource limitation and the `/acp/channel-groups/:id/events` endpoint in the example text and keep the snippet minimal and focused on header usage and simple newline parsing.

docs/features/acp-link.md (1)
104-106: Nit: specify a language for this fenced code block.
`markdownlint-cli2` flags MD040 on this block. Adding `text` (or `bash`) keeps the rule satisfied without changing rendered output.

📝 Suggested fix
````diff
-```
+```text
 ws://localhost:9315/ws
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @docs/features/acp-link.md around lines 104 - 106, The fenced code block containing the WebSocket URL "ws://localhost:9315/ws" in docs/features/acp-link.md is missing a language tag; update the fenced block to include a language (e.g., "text" or "bash") so markdownlint MD040 is satisfied while preserving rendering by changing the block delimiter to start with `` ```text `` (or `` ```bash ``) and keeping the same "ws://localhost:9315/ws" content.

packages/remote-control-server/web/src/__tests__/api-client.test.ts (1)

159-172: Consider restoring `globalThis.window` after the test.

The stub assigned at line 161 is never torn down, so any later test in the same Bun process that inspects `window.location` will inherit `rcs.example.test`. Saving and restoring it (e.g. via `afterEach`) keeps this suite hermetic.

♻️ Optional cleanup

```diff
 describe("ACP relay client", () => {
+  const originalWindow = (globalThis as any).window;
+  afterEach(() => {
+    (globalThis as any).window = originalWindow;
+  });
+
   test("builds relay URLs without UUID or token query params", () => {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/web/src/__tests__/api-client.test.ts` around lines 159 - 172, The test sets a fake globalThis.window.location for relayClient.buildRelayUrl but never restores it, causing leakage to other tests; update the test block to save the original globalThis.window (e.g., const _origWindow = globalThis.window) before stubbing and restore it in an afterEach/teardown (or finally) so that globalThis.window is reset after the test(s) run, ensuring hermetic tests that don't mutate window.location across the process.

packages/remote-control-server/src/__tests__/routes.test.ts (1)

1310-1685: LGTM — strong end-to-end coverage of the auth/payload changes.

The new ACP block exercises every meaningful path:

- 401 on missing/UUID-only auth, 200 on `Authorization: Bearer`, and parity for `Sec-WebSocket-Protocol` on `/relay-auth`.
- Real `Bun.serve` upgrade tests asserting `4003 unauthorized` for legacy `?token=` and successful registration over subprotocol — this is the most credible regression guard for the URL→header migration.
- Payload-guard tests exercise both the byte-size path for multibyte strings (4M `"你"` chars ≈ 12 MiB) and the binary `Uint8Array` path, confirming `1009` close + `handleMessage` is *not* invoked.

Minor readability nit (Line 1641): `expect(decoded.ok && decoded.size).toBeGreaterThan(...)` is a compound assertion that works (false coerces to 0 and fails the comparison) but two assertions would be clearer:

```diff
-      expect(decoded.ok && decoded.size).toBeGreaterThan(10 * 1024 * 1024);
+      expect(decoded.ok).toBe(true);
+      if (decoded.ok) expect(decoded.size).toBeGreaterThan(10 * 1024 * 1024);
```

Not a defect — feel free to ignore.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/__tests__/routes.test.ts` around lines 1310 - 1685, The compound assertion in the "rejects oversized multibyte text by byte size" test makes the intent unclear; update the assertions around decodeAcpWsMessageData by first asserting the decode succeeded (e.g., expect(decoded.ok).toBe(true) or toBeTruthy()) and then asserting the size (e.g., expect(decoded.size).toBeGreaterThan(10 * 1024 * 1024)); locate the test in the "ACP WebSocket payload guards" block and the calls to decodeAcpWsMessageData and decoded to implement this split assertion.

packages/acp-link/src/rcs-upstream.ts (1)

14-24: Minor refactor: unreachable branch + duplicated HTTP-base derivation.

1. After `path = url.pathname.replace(/\/+$/, "")`, `path` can be `""` but never `"/"`, so the `path === "/"` check at Line 19 is dead code.
2. The 4-step regex chain to convert the configured RCS URL into an HTTP base URL is duplicated verbatim in `registerViaRest` (Lines 66–70) and the `identified` handler (Lines 160–164). Now that `buildRcsWsUrl` exists, mirror it with a `buildRcsHttpBaseUrl` so both call sites stay in sync.

♻️ Proposed refactor

```diff
 export function buildRcsWsUrl(rcsUrl: string): string {
   let raw = rcsUrl;
   raw = raw.replace(/^http:\/\//, "ws://").replace(/^https:\/\//, "wss://");
   const url = new URL(raw);
-  const path = url.pathname.replace(/\/+$/, "");
-  if (!path || path === "/") {
-    url.pathname = "/acp/ws";
-  }
+  if (url.pathname.replace(/\/+$/, "") === "") {
+    url.pathname = "/acp/ws";
+  }
   url.searchParams.delete("token");
   return url.toString();
 }
+
+export function buildRcsHttpBaseUrl(rcsUrl: string): string {
+  return rcsUrl
+    .replace(/^ws:\/\//, "http://")
+    .replace(/^wss:\/\//, "https://")
+    .replace(/\/acp\/ws.*$/, "")
+    .replace(/\/$/, "");
+}
```

Then replace both inline chains with `buildRcsHttpBaseUrl(this.config.rcsUrl)`.

Also applies to: 66-70, 160-164

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/acp-link/src/rcs-upstream.ts` around lines 14 - 24, The pathname check "path === '/'" in buildRcsWsUrl is unreachable because path is produced via replace(/\/+$/, "") and can be "" but not "/", so remove that redundant condition and keep only the empty-string branch; then extract the duplicated URL-scheme conversion logic used in registerViaRest and the identified handler into a new helper function buildRcsHttpBaseUrl(rcsUrl: string): string that performs the same http/https→http(s) base conversion (the same regex replacements and trimming) and return the base HTTP URL, and replace the inline 4-step regex chains in registerViaRest and the identified handler with calls to buildRcsHttpBaseUrl(this.config.rcsUrl) to ensure both places stay in sync with buildRcsWsUrl.

packages/remote-control-server/src/auth/cors.ts (1)

11-27: Optional: cache the allowlist and use Set membership.

`resolveWebCorsOrigin` is invoked on every cross-origin/preflight request, but each call rebuilds the `Set`, parses `config.baseUrl` via `new URL`, and falls back to `Array.includes` (linear). Since `config.webCorsOrigins`, `config.baseUrl`, and `config.port` are effectively immutable at runtime, you can compute once and use `Set.has` for O(1) lookups.

♻️ Proposed refactor

```diff
-export function getAllowedWebCorsOrigins(): string[] {
-  const origins = new Set<string>(config.webCorsOrigins);
-
-  const baseOrigin = config.baseUrl ? originFromUrl(config.baseUrl) : undefined;
-  if (baseOrigin) {
-    origins.add(baseOrigin);
-  }
-
-  origins.add(`http://localhost:${config.port}`);
-  origins.add(`http://127.0.0.1:${config.port}`);
-
-  return [...origins];
-}
-
-export function resolveWebCorsOrigin(origin: string): string | undefined {
-  return getAllowedWebCorsOrigins().includes(origin) ? origin : undefined;
-}
+let cachedAllowedOrigins: Set<string> | undefined;
+
+function computeAllowedOrigins(): Set<string> {
+  const origins = new Set<string>(config.webCorsOrigins);
+  const baseOrigin = config.baseUrl ? originFromUrl(config.baseUrl) : undefined;
+  if (baseOrigin) origins.add(baseOrigin);
+  origins.add(`http://localhost:${config.port}`);
+  origins.add(`http://127.0.0.1:${config.port}`);
+  return origins;
+}
+
+export function getAllowedWebCorsOrigins(): string[] {
+  cachedAllowedOrigins ??= computeAllowedOrigins();
+  return [...cachedAllowedOrigins];
+}
+
+export function resolveWebCorsOrigin(origin: string): string | undefined {
+  cachedAllowedOrigins ??= computeAllowedOrigins();
+  return cachedAllowedOrigins.has(origin) ? origin : undefined;
+}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/auth/cors.ts` around lines 11 - 27, resolveWebCorsOrigin currently rebuilds the allowlist on every request (via getAllowedWebCorsOrigins), reparses config.baseUrl, and uses Array.includes; instead, compute the allowlist once (a Set of origins) when module loads or on first access by calling originFromUrl(config.baseUrl) and adding config.webCorsOrigins plus localhost/127.0.0.1 entries, store it in a module-level Set (e.g., cachedAllowedOrigins) and change resolveWebCorsOrigin to use cachedAllowedOrigins.has(origin) for O(1) lookups; keep getAllowedWebCorsOrigins to return [...cachedAllowedOrigins] if needed.

packages/acp-link/src/ws-auth.ts (1)

13-38: Empty `rcs.auth.` short-circuits the loop and ignores subsequent valid tokens.

When `protocolHeader` contains multiple comma-separated protocols (e.g., `"rcs.auth., rcs.auth.<validBase64Url>"`), the first iteration with empty payload returns `undefined` immediately rather than continuing the loop. Same with the `catch` branch on a malformed entry. Browsers don't typically emit multiple `rcs.auth.*` tokens, but per the test on Line 154 (`"other, rcs.auth."`) the function is meant to scan all tokens — being lenient here also avoids future regressions if additional sub-protocols are introduced.

♻️ Proposed fix

```diff
     const encoded = trimmed.slice(WS_AUTH_PROTOCOL_PREFIX.length);
     if (!encoded) {
-      return undefined;
+      continue;
     }
     try {
       const token = Buffer.from(encoded, "base64url").toString("utf8");
-      return token.length > 0 ? token : undefined;
+      if (token.length > 0) return token;
     } catch {
-      return undefined;
+      // try next protocol entry
     }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/acp-link/src/ws-auth.ts` around lines 13 - 38, The decodeWebSocketAuthProtocol function currently returns undefined as soon as it encounters an rcs.auth. entry with an empty payload or a decoding exception, which short-circuits scanning remaining protocols; update decodeWebSocketAuthProtocol so that when encoded is empty or when Buffer.from(..., "base64url") throws, the code continues the for-loop instead of returning, allowing later valid rcs.auth.<token> entries to be found (refer to the function name decodeWebSocketAuthProtocol and the WS_AUTH_PROTOCOL_PREFIX usage to locate the logic to change).

src/services/acp/agent.ts (1)

460-468: Silently swallowing the `process.chdir` failure may leave the agent in a non-existent CWD downstream.

`process.chdir(cwd)` is best-effort here, but tools and `getCommands(cwd)` later still receive `cwd` as if it were valid. If the directory truly doesn't exist, downstream file ops and command discovery will throw with less actionable errors. Consider at least logging the swallowed error so debugging an empty-CWD scenario doesn't require a stack trace from a downstream caller.

♻️ Proposed fix

```diff
   try {
     process.chdir(cwd)
     processCwdChanged = true
-  } catch {
+  } catch (err) {
+    console.error('[ACP] Failed to chdir to session cwd', { cwd, error: (err as Error).message })
     // CWD may not exist yet; best-effort
   }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/services/acp/agent.ts` around lines 460 - 468, The catch block swallowing process.chdir(cwd) should at minimum log the error so downstream failures are easier to diagnose: update the try/catch around process.chdir(cwd) (the block that sets processCwdChanged and calls setOriginalCwd) to catch (err) and emit a clear log entry including cwd and the error message (use the existing logger if available), then keep processCwdChanged=false; this will make failures visible when getCommands(cwd) or other file ops later fail while preserving the existing best-effort behavior.

tests/integration/dependency-overrides.test.ts (1)

113-133: Move temp dir setup inside `try` so cleanup runs on `writeFileSync` failure.

`mkdtempSync` succeeds before the `try` block, so a `writeFileSync` failure (rare but possible: ENOSPC, permission edge cases on some CI runners) leaves the temp directory orphaned because `rmSync` only runs in the `finally`.

♻️ Proposed fix

```diff
   const tempDir = mkdtempSync(join(tmpdir(), 'proto-loader-smoke-'))
-  const protoPath = join(tempDir, 'service.proto')
-
-  writeFileSync(
-    protoPath,
-    [
-      'syntax = "proto3";',
-      'package smoke;',
-      'message Ping { string id = 1; }',
-      'service PingService { rpc Send(Ping) returns (Ping); }',
-    ].join('\n'),
-  )
-
   try {
+    const protoPath = join(tempDir, 'service.proto')
+    writeFileSync(
+      protoPath,
+      [
+        'syntax = "proto3";',
+        'package smoke;',
+        'message Ping { string id = 1; }',
+        'service PingService { rpc Send(Ping) returns (Ping); }',
+      ].join('\n'),
+    )
     const loaded = protoLoader.loadSync(protoPath, { keepCase: true })
     expect(loaded['smoke.Ping']).toBeDefined()
     expect(loaded['smoke.PingService']).toBeDefined()
   } finally {
     rmSync(tempDir, { force: true, recursive: true })
   }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/integration/dependency-overrides.test.ts` around lines 113 - 133, Move the temporary directory creation and proto file write into the try block so rmSync in the finally always runs even if writeFileSync throws: call mkdtempSync(...) to create tempDir and compute protoPath, then call writeFileSync(protoPath, ...) inside the try (before calling protoLoader.loadSync), keeping the existing finally that calls rmSync(tempDir, { force: true, recursive: true }); ensure references to mkdtempSync, protoPath, writeFileSync, protoLoader.loadSync, and rmSync are updated accordingly.

🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.
Inline comments:
In @packages/acp-link/src/rcs-upstream.ts:
- Around line 128-130: The WebSocket upgrade handlers must echo the negotiated
Sec-WebSocket-Protocol in the 101 response; modify the upgradeWebSocket flow so
the negotiated protocol (e.g., the validated protocol or the first offered one)
is set on the response headers and returned with the upgrade. Concretely, change
the upgradeWebSocket handler signature or use the Hono upgrade API to allow
returning response headers from the callbacks used in the /acp/ws,
/relay/:agentId and /ingress/ses_123 handlers so you can set
"Sec-WebSocket-Protocol" to the chosen protocol before sending the upgrade;
ensure the code paths that currently validate the header (the auth validation
logic) pass the validated protocol back and that the onOpen/onMessage-style
handler return includes that header in the 101 Switching Protocols response.

In @packages/remote-control-server/src/auth/middleware.ts:
- Around line 9-38: The catch around Buffer.from(encoded, "base64url") in
decodeWebSocketAuthProtocol is dead code because Buffer.from(..., "base64url")
doesn't throw; remove the try/catch and either (A) drop the wrapper and simply
decode (return token if token.length>0), or (B) add real validation before
accepting the token (for example validate encoded with a base64url regex or
re-encode the decoded bytes and ensure Buffer.from(token,
"utf8").toString("base64url") === encoded) so malformed subprotocols are
rejected; update decodeWebSocketAuthProtocol accordingly, referencing the
variables protocolHeader, trimmed, encoded and the WS_AUTH_PROTOCOL_PREFIX.

In @packages/remote-control-server/src/index.ts:
- Line 14: Add a unit test in middleware.test.ts that covers the empty
webCorsOrigins path: construct a config with webCorsOrigins: [] (and set the
server port), call getAllowedWebCorsOrigins() and assert it returns only
localhost and 127.0.0.1 entries on the configured port, and then call
resolveWebCorsOrigin() with an external origin (e.g., https://dashboard.example)
and assert it returns undefined (fail-closed); ensure the test uses the same
helpers/setup as the existing CORS tests so it exercises the actual middleware
resolution logic.

In @packages/remote-control-server/src/routes/acp/index.ts:
- Around line 130-135: The current parsing of from_sequence_num/Last-Event-ID
uses parseInt directly and can produce NaN; update the logic around lastEventId,
fromSeq and fromSeqNum so parsed values are validated (Number.isNaN or isNaN)
and any NaN falls back to 0 before calling createAcpSSEStream; specifically,
after parsing with parseInt for fromSeq and lastEventId, check for NaN and set
fromSeqNum = 0 when invalid so createAcpSSEStream always receives a real number.

In @packages/remote-control-server/web/src/acp/client.ts:
- Around line 23-34: The server must echo the client's subprotocol (the string
produced by encodeWebSocketAuthProtocol) in the WebSocket upgrade response;
update the Hono handler that calls upgradeWebSocket (or the upgradeWebSocket
helper itself) to parse the incoming Sec-WebSocket-Protocol header, select the
matching protocol (e.g., the one that starts with "rcs.auth."), and include it
in the upgrade response by setting Sec-WebSocket-Protocol (or providing the
protocol to Bun/Hono's WebSocket upgrade API) so the returned upgrade object
includes the selected protocol and browsers accept the connection.

In @packages/remote-control-server/web/src/acp/relay-client.ts:
- Around line 5-12: Update the JSDoc for buildRelayUrl to remove the outdated
"UUID auth" reference and instead describe that the function simply constructs
the WebSocket URL for an agent (no query auth) and that authentication is
handled separately by forwarding the API token as a WebSocket subprotocol (see
createRelayClient) rather than embedding it in the URL; ensure the top comment
reflects the current auth flow and mentions createRelayClient handling the
token.

In @src/constants/github-app.ts:
- Line 34: Update the stale SHA pins for anthropics/claude-code-action: replace
occurrences of 567fe954a4527e81f132d87d1bdbcc94f7737434 with the current v1
resolved SHA a9ac9eb3a874922a7e34246afae3914113c6626f (these appear in the
workflow where anthropics/claude-code-action is referenced, specifically the two
occurrences you noted); leave the already-correct actions/checkout pin
(de0fac2e4500dabe0009e67214ff5f5447ce83dd) unchanged. Ensure both instances of
the old SHA are updated so the workflow uses the intended released version.

In @src/services/acp/agent.ts:
- Around line 256-276: When detecting the cancel-generation mismatch after
dequeuing (the check `if (session.cancelGeneration !== promptCancelGeneration)`), explicitly reset the session running flag before returning to avoid leaving `session.promptRunning` true with no active prompt; i.e., set `session.promptRunning = false` (and ensure any relevant cleanup like removing this prompt's UUID from `session.pendingQueue`/`session.pendingMessages` if necessary) so future prompts are not deadlocked — update the logic around `promptCancelGeneration`, `session.cancelGeneration`, `session.cancelled`, and the early `return { stopReason: 'cancelled' }` to perform this reset/cleanup before returning.

In @src/services/langfuse/sanitize.ts:
- Around line 14-28: The Windows path-redaction is case-sensitive: update
homePathPatterns() and HOME_DIR_PATTERN so Windows-style paths are matched
case-insensitively by (1) when iterating values from process.env.HOME /
process.env.USERPROFILE / homedir() add a lower-cased variant
(value.toLowerCase() and its slash-normalized form) to homes so escaped
env-derived Windows home paths will match lowercased logs, and (2) change the
Windows fallback literal from '[A-Za-z]:[/\\]Users[/\\][^/\\\\]+' to
'[A-Za-z]:[/\\][Uu]sers[/\\][^/\\\\]+' (i.e. replace "Users" with [Uu]sers)
so mixed-cased drive roots are covered; keep HOME_DIR_PATTERN construction
unchanged (no global /i flag) so Unix '/Users/' literal behavior is preserved.
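A minimal sketch of point (2) — the `[Uu]sers` fallback literal from the prompt above, isolated into a hypothetical helper for illustration:

```typescript
// Windows fallback home pattern with [Uu]sers so both "C:\Users\..." and
// lowercased "c:\users\..." log forms are caught; the surrounding
// HOME_DIR_PATTERN assembly in sanitize.ts is assumed, not reproduced.
const WINDOWS_HOME_FALLBACK = /[A-Za-z]:[/\\][Uu]sers[/\\][^/\\]+/g;

// Replace any matched Windows home-directory prefix with a placeholder.
export function redactWindowsHome(text: string): string {
  return text.replace(WINDOWS_HOME_FALLBACK, "<home>");
}
```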
Outside diff comments:
In @src/services/api/filesApi.ts:
- Line 110: The assignment to lastError uses a forced cast `(result as any).error` which weakens type safety; update the retry result handling to rely
on the RetryResult discriminated union instead — when checking the result
(the variable named result, returned from the retry helper) ensure you narrow on
the discriminant (e.g., result.done === false or result.success === false
depending on the RetryResult shape) and then read result.error directly (and
fall back to `${operation} failed`), removing the "as any" cast; apply the same
change to the other occurrence that sets lastError.
Nitpick comments:
In @.github/workflows/release-rcs.yml:
- Line 20: Add an automation hint so pinned action digests (e.g. the
actions/checkout@de0fac2 line and the other
Docker action SHA lines) are picked up by Renovate or Dependabot; modify each
pinned- SHA comment to include a tool-specific directive like "# renovate:
datasource=github-actions depName=actions/checkout" or add a Dependabot-friendly
comment (e.g. "# dependabot use the latest" or similar) immediately after the
action lines so automated tooling will detect and update these pinned digests.In @.github/workflows/update-contributors.yml:
- Line 18: The workflow step using the action
jaywcjlove/github-action-contributors is pinned to a commit SHA on main (not a
release tag); update the step to be resilient by adding continue-on-error: true
to that step (so failures won’t break the run) and optionally open an upstream
issue requesting tagged releases; locate the step referencing
jaywcjlove/github-action-contributors and add continue-on-error: true under that
step to implement the quick fix.In
@docs/features/acp-link.md:
- Around line 104-106: The fenced code block containing the WebSocket URL
"ws://localhost:9315/ws" in docs/features/acp-link.md is missing a language tag;
update the fenced block to include a language (e.g., "text" or "bash") so
markdownlint MD040 is satisfied while preserving rendering by changing the block
delimiter to start withtext (orbash) and keeping the same
"ws://localhost:9315/ws" content.In
@docs/features/remote-control-self-hosting.md:
- Around line 228-231: The doc correctly notes EventSource can't send
Authorization headers; add a short example snippet demonstrating how to
subscribe to the /acp/channel-groups/:id/events SSE endpoint using fetch + a
ReadableStream with an Authorization: Bearer <api-key> header and simple
manual line-splitting/dispatch logic so readers can copy-paste the recommended
pattern; reference the EventSource limitation and the
/acp/channel-groups/:id/events endpoint in the example text and keep the
snippet minimal and focused on header usage and simple newline parsing.
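The snippet this comment asks for could look roughly like the following TypeScript sketch. The endpoint path comes from the finding itself; the helper names (createSseParser, subscribeToAcpEvents) and the minimal field handling are illustrative, not the repository's actual code:

```typescript
// Minimal SSE line parser: accumulates chunks, splits on newlines, and
// dispatches complete "data:" payloads. Field handling is intentionally simple.
type SseEvent = { id?: string; data: string };

function createSseParser(onEvent: (event: SseEvent) => void) {
  let buffer = "";
  let currentId: string | undefined;
  return (chunk: string) => {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep the trailing partial line for next chunk
    for (const line of lines) {
      const trimmed = line.replace(/\r$/, "");
      if (trimmed.startsWith("id:")) {
        currentId = trimmed.slice(3).trim();
      } else if (trimmed.startsWith("data:")) {
        onEvent({ id: currentId, data: trimmed.slice(5).trim() });
      }
    }
  };
}

// Subscribe with an Authorization header, which EventSource cannot send.
async function subscribeToAcpEvents(
  groupId: string,
  apiKey: string,
  onEvent: (e: SseEvent) => void,
): Promise<void> {
  const res = await fetch(`/acp/channel-groups/${groupId}/events`, {
    headers: { Authorization: `Bearer ${apiKey}`, Accept: "text/event-stream" },
  });
  if (!res.ok || !res.body) throw new Error(`SSE subscribe failed: ${res.status}`);
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  const parse = createSseParser(onEvent);
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    parse(decoder.decode(value, { stream: true }));
  }
}
```

The parser is kept pure so the newline-splitting logic can be unit-tested without a live server.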
In @packages/acp-link/src/rcs-upstream.ts:
- Around line 14-24: The pathname check "path === '/'" in buildRcsWsUrl is
unreachable because path is produced via replace(//+$/, "") and can be "" but
not "/", so remove that redundant condition and keep only the empty-string
branch; then extract the duplicated URL-scheme conversion logic used in
registerViaRest and the identified handler into a new helper function
buildRcsHttpBaseUrl(rcsUrl: string): string that performs the same
http/https→http(s) base conversion (the same regex replacements and trimming)
and return the base HTTP URL, and replace the inline 4-step regex chains in
registerViaRest and the identified handler with calls to
buildRcsHttpBaseUrl(this.config.rcsUrl) to ensure both places stay in sync with
buildRcsWsUrl.
In @packages/acp-link/src/ws-auth.ts:
- Around line 13-38: The decodeWebSocketAuthProtocol function currently returns
undefined as soon as it encounters an rcs.auth. entry with an empty payload or a
decoding exception, which short-circuits scanning remaining protocols; update
decodeWebSocketAuthProtocol so that when encoded is empty or when
Buffer.from(..., "base64url") throws, the code continues the for-loop instead of
returning, allowing later valid rcs.auth. entries to be found (refer to
the function name decodeWebSocketAuthProtocol and the WS_AUTH_PROTOCOL_PREFIX
usage to locate the logic to change).
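A hedged sketch of the suggested control flow (not the repository's exact implementation): invalid or empty rcs.auth. entries now continue the scan instead of aborting it. The explicit alphabet check is an assumption, added because Buffer.from(..., "base64url") does not throw on malformed input:

```typescript
const WS_AUTH_PROTOCOL_PREFIX = "rcs.auth.";

function decodeWebSocketAuthProtocol(protocolHeader: string | undefined): string | undefined {
  if (!protocolHeader) return undefined;

  for (const protocol of protocolHeader.split(",")) {
    const trimmed = protocol.trim();
    if (!trimmed.startsWith(WS_AUTH_PROTOCOL_PREFIX)) continue;

    const encoded = trimmed.slice(WS_AUTH_PROTOCOL_PREFIX.length);
    // Validate the base64url alphabet explicitly instead of relying on a
    // try/catch: Buffer.from(..., "base64url") does not throw on bad input.
    if (!encoded || !/^[A-Za-z0-9_-]+$/.test(encoded)) continue; // keep scanning

    const token = Buffer.from(encoded, "base64url").toString("utf8");
    if (token.length > 0) return token;
  }
  return undefined;
}
```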
In @packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts:
- Around line 56-59: Replace the inline mock of settings in headers.test.ts (the
mock.module call that defines getInitialSettings and getSettings_DEPRECATED)
with the project’s shared settings mock from tests/mocks/settings.ts; if the
shared mock lacks skipWebFetchPreflight, add that toggle to
tests/mocks/settings.ts (exporting getInitialSettings and getSettings_DEPRECATED
appropriately) so tests import the common stub instead of redefining
getInitialSettings/getSettings_DEPRECATED locally.
In @packages/remote-control-server/src/__tests__/routes.test.ts:
- Around line 1310-1685: The compound assertion in the "rejects oversized
multibyte text by byte size" test makes the intent unclear; update the
assertions around decodeAcpWsMessageData by first asserting the decode succeeded
(e.g., expect(decoded.ok).toBe(true) or toBeTruthy()) and then asserting the
size (e.g., expect(decoded.size).toBeGreaterThan(10 * 1024 * 1024)); locate the
test in the "ACP WebSocket payload guards" block and the calls to
decodeAcpWsMessageData and decoded to implement this split assertion.
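For context, the byte-based guard this test exercises can be sketched as follows; the decodeWsPayload name, result shape, and 10 MiB limit are illustrative stand-ins, not the server's actual API:

```typescript
// Measure payload size in bytes (not UTF-16 code units) before any binary
// decode, so multibyte text cannot slip under a character-based limit.
const MAX_ACP_WS_PAYLOAD_BYTES = 10 * 1024 * 1024; // 10 MiB, illustrative

type DecodedMessage =
  | { ok: true; size: number; bytes: Uint8Array }
  | { ok: false; reason: string };

function decodeWsPayload(data: string | ArrayBuffer | Uint8Array): DecodedMessage {
  const bytes =
    typeof data === "string"
      ? new TextEncoder().encode(data)
      : data instanceof ArrayBuffer
        ? new Uint8Array(data)
        : data;
  if (bytes.byteLength > MAX_ACP_WS_PAYLOAD_BYTES) {
    return { ok: false, reason: `payload of ${bytes.byteLength} bytes exceeds limit` };
  }
  return { ok: true, size: bytes.byteLength, bytes };
}
```

A three-byte UTF-8 character like "€" counts as three bytes here even though it is one character, which is the distinction the test asserts on.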
In @packages/remote-control-server/src/auth/api-key.ts:
- Around line 9-13: The validateApiKey function currently re-hashes each
configured key on every call; to micro-optimize, precompute and store the
SHA-256 digests for config.apiKeys at module load (e.g., create a hashedApiKeys
array by mapping config.apiKeys through sha256) and update validateApiKey to
compare the incoming tokenHash against hashedApiKeys using timingSafeEqual, and
if config.apiKeys may change at runtime provide a refreshApiKeyCache function or
skip caching when reloading is required; keep function names validateApiKey,
sha256, timingSafeEqual and config.apiKeys as the reference points.
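A minimal sketch of the suggested caching, assuming a factory shape rather than module-level state (createApiKeyValidator and refreshApiKeyCache are illustrative names; only the precomputed digests, timingSafeEqual comparison, and refresh hook mirror the finding):

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

function sha256(value: string): Buffer {
  return createHash("sha256").update(value, "utf8").digest();
}

// Digests are computed once up front, not on every validation call.
function createApiKeyValidator(apiKeys: string[]) {
  let hashedApiKeys = apiKeys.map(sha256);
  return {
    validateApiKey(token: string): boolean {
      const tokenHash = sha256(token);
      // timingSafeEqual requires equal lengths; SHA-256 digests always match.
      return hashedApiKeys.some((key) => timingSafeEqual(key, tokenHash));
    },
    // For configs whose key list can change at runtime.
    refreshApiKeyCache(nextKeys: string[]): void {
      hashedApiKeys = nextKeys.map(sha256);
    },
  };
}
```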
In @packages/remote-control-server/src/auth/cors.ts:
- Around line 11-27: resolveWebCorsOrigin currently rebuilds the allowlist on
every request (via getAllowedWebCorsOrigins), reparses config.baseUrl, and uses
Array.includes; instead, compute the allowlist once (a Set of origins) when
module loads or on first access by calling originFromUrl(config.baseUrl) and
adding config.webCorsOrigins plus localhost/127.0.0.1 entries, store it in a
module-level Set (e.g., cachedAllowedOrigins) and change resolveWebCorsOrigin to
use cachedAllowedOrigins.has(origin) for O(1) lookups; keep
getAllowedWebCorsOrigins to return [...cachedAllowedOrigins] if needed.
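The cached-allowlist idea can be sketched as follows; the config values and the buildAllowedOrigins helper are hypothetical stand-ins for config.baseUrl / config.webCorsOrigins:

```typescript
function originFromUrl(url: string): string {
  return new URL(url).origin;
}

// Build the allowlist once instead of on every request.
function buildAllowedOrigins(baseUrl: string, webCorsOrigins: string[], port: number): Set<string> {
  return new Set([
    originFromUrl(baseUrl),
    ...webCorsOrigins,
    `http://localhost:${port}`,
    `http://127.0.0.1:${port}`,
  ]);
}

const cachedAllowedOrigins = buildAllowedOrigins(
  "https://rcs.example.com", // stand-in for config.baseUrl
  ["https://dashboard.example"], // stand-in for config.webCorsOrigins
  9315,
);

function resolveWebCorsOrigin(origin: string): string | undefined {
  // Fail closed: unknown origins resolve to undefined, never "*".
  return cachedAllowedOrigins.has(origin) ? origin : undefined;
}
```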
In @packages/remote-control-server/src/transport/acp-ws-handler.ts:
- Line 89: The fallback channelGroupId truncates randomUUID() to 12 chars,
reducing entropy and risking collisions; update the logic in acp-ws-handler
where channelGroupId is created (and the similar occurrence near the other
fallback at the second occurrence) to either use the full uuid string (keep all
hex chars from randomUUID().replace(/-/g, "")) or implement a uniqueness guard:
generate a candidate id, check getAcpEventBus (or the map/registry it uses) for
existence, and loop/regenerate until a unique id is produced before assigning
channelGroupId to avoid event-bus key collisions across concurrent unregistered
agents.
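A minimal sketch of the uniqueness-guard option, assuming the registry lookup is injectable (the isTaken callback is an illustrative stand-in for consulting getAcpEventBus):

```typescript
import { randomUUID } from "node:crypto";

// Regenerate until the candidate id is unused, instead of truncating the
// UUID to 12 chars and hoping collisions never happen.
function generateChannelGroupId(isTaken: (id: string) => boolean): string {
  for (;;) {
    const candidate = randomUUID().replace(/-/g, ""); // keep all 32 hex chars
    if (!isTaken(candidate)) return candidate;
  }
}
```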
In @packages/remote-control-server/web/src/__tests__/api-client.test.ts:
- Around line 159-172: The test sets a fake globalThis.window.location for
relayClient.buildRelayUrl but never restores it, causing leakage to other tests;
update the test block to save the original globalThis.window (e.g., const
_origWindow = globalThis.window) before stubbing and restore it in an
afterEach/teardown (or finally) so that globalThis.window is reset after the
test(s) run, ensuring hermetic tests that don't mutate window.location across
the process.
In @scripts/setup-chrome-mcp.mjs:
- Around line 17-19: The skip gate currently only treats the exact string "1" as
truthy; update the check on process.env.CLAUDE_CODE_SKIP_CHROME_MCP_SETUP to
accept common truthy values (e.g., "1", "true", "yes", "on" — case-insensitive,
trimmed) before calling process.exit(0). Locate the existing if that references
process.env.CLAUDE_CODE_SKIP_CHROME_MCP_SETUP and replace the equality check
with a small normalized membership test (lowercasing/trimming the env value and
comparing against a set of accepted truthy tokens) so contributors can use
values like "true" or "yes" to skip setup.
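The normalized membership test could be sketched like this (function and set names are illustrative):

```typescript
// Accept common truthy spellings, case-insensitive and trimmed.
const TRUTHY_ENV_TOKENS = new Set(["1", "true", "yes", "on"]);

function isTruthyEnv(value: string | undefined): boolean {
  if (value === undefined) return false;
  return TRUTHY_ENV_TOKENS.has(value.trim().toLowerCase());
}
```

The skip gate would then read `if (isTruthyEnv(process.env.CLAUDE_CODE_SKIP_CHROME_MCP_SETUP)) process.exit(0);`.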
In @src/commands/model/model.tsx:
- Line 81: The tri-state variable wasFastModeToggledOn should have an explicit
type to clarify its true/false/undefined semantics; change its declaration (the
one currently written as "let wasFastModeToggledOn") to include a type
annotation "boolean | undefined" and apply the same annotation where it's
declared/redeclared (the second occurrence around the later usage) so the intent
(true = on, false = off, undefined = no change) is explicit to readers and
TypeScript.
In @src/services/acp/agent.ts:
- Around line 460-468: The catch block swallowing process.chdir(cwd) should at
minimum log the error so downstream failures are easier to diagnose: update the
try/catch around process.chdir(cwd) (the block that sets processCwdChanged and
calls setOriginalCwd) to catch (err) and emit a clear log entry including cwd
and the error message (use the existing logger if available), then keep
processCwdChanged=false; this will make failures visible when getCommands(cwd)
or other file ops later fail while preserving the existing best-effort behavior.
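A hedged sketch of the logged best-effort chdir (console.warn stands in for the project's logger, and the helper name is illustrative):

```typescript
// Best-effort chdir that logs instead of silently swallowing failures,
// so later getCommands(cwd)/file-op errors are easier to diagnose.
function tryChangeCwd(cwd: string): boolean {
  try {
    process.chdir(cwd);
    return true;
  } catch (err) {
    console.warn(
      `process.chdir(${cwd}) failed: ${err instanceof Error ? err.message : String(err)}`,
    );
    return false; // preserve best-effort behavior
  }
}
```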
In @src/services/langfuse/sanitize.ts:
- Line 28: The current HOME_DIR_PATTERN is computed at module-load time so
changes to environment vars like process.env.HOME/USERPROFILE after import
aren’t reflected; replace the static const with a lazy getter (e.g., implement
getHomeDirPattern()) that computes and returns new
RegExp((?:${homePathPatterns().join('|')}), 'g') on first call (or on every
call if cheap), update all usages of HOME_DIR_PATTERN to call
getHomeDirPattern(), and optionally expose a resetHomeDirPatternForTests() to
clear any cached pattern so tests can mutate environment variables and get
updated redaction behavior; reference HOME_DIR_PATTERN, homePathPatterns(), and
getHomeDirPattern() when making the change.
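The lazy-getter shape the comment describes might look like this sketch; the regex escaping inside homePathPatterns is an assumption, since the real helper's contents are not shown here:

```typescript
// Lazily build the redaction pattern from the environment on first use,
// with a reset hook so tests can mutate HOME/USERPROFILE and re-derive it.
let cachedHomeDirPattern: RegExp | undefined;

function homePathPatterns(): string[] {
  const homes = [process.env.HOME, process.env.USERPROFILE].filter(
    (p): p is string => typeof p === "string" && p.length > 0,
  );
  // Escape regex metacharacters in each literal path.
  return homes.map((p) => p.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"));
}

function getHomeDirPattern(): RegExp | undefined {
  const patterns = homePathPatterns();
  if (patterns.length === 0) return undefined;
  cachedHomeDirPattern ??= new RegExp(`(?:${patterns.join("|")})`, "g");
  return cachedHomeDirPattern;
}

function resetHomeDirPatternForTests(): void {
  cachedHomeDirPattern = undefined;
}
```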
In @src/services/mcp/__tests__/envExpansion.test.ts:
- Around line 110-120: Replace the manual nested placeholder construction
${ENV_OPEN}${envExpr("VAR")}${ENV_CLOSE} with the helper call
envExpr(envExpr("VAR")) in the test's nestedExpr declaration so the test uses
the same envExpr helper consistently; update the nestedExpr assignment (used by
expandEnvVarsInString and assertions on result.missingVars/result.expanded) to
call envExpr(envExpr("VAR")).
- Around line 4-6: Add a one-line clarifying comment above ENV_OPEN explaining
why the string is split as "$" + "{" to prevent source-level tools or editors
from interpreting the literal ${...} (e.g., template/placeholder scanners) in
tests; keep ENV_OPEN, ENV_CLOSE and envExpr unchanged but ensure the comment
references ENV_OPEN so future readers understand not to collapse it to "${".
In @src/utils/__tests__/stringUtils.test.ts:
- Line 25: The constant allSpecialChars is built via unnecessary concatenation
("^$" + "{}()|[]\.*+?"); replace it with a single string literal containing the
same sequence (allSpecialChars) to simplify and improve clarity—update the
declaration of allSpecialChars in the test to use one combined string instead of
concatenating two literals.
In @tests/integration/dependency-overrides.test.ts:
- Around line 113-133: Move the temporary directory creation and proto file
write into the try block so rmSync in the finally always runs even if
writeFileSync throws: call mkdtempSync(...) to create tempDir and compute
protoPath, then call writeFileSync(protoPath, ...) inside the try (before
calling protoLoader.loadSync), keeping the existing finally that calls
rmSync(tempDir, { force: true, recursive: true }); ensure references to
mkdtempSync, protoPath, writeFileSync, protoLoader.loadSync, and rmSync are
updated accordingly.
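The try/finally shape the comment asks for can be sketched as a small helper (withTempProtoFile is an illustrative name; the real test inlines this logic around protoLoader.loadSync):

```typescript
import { mkdtempSync, writeFileSync, rmSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Create the temp dir and write the file inside the try, so the finally
// cleanup runs even when writeFileSync throws.
function withTempProtoFile<T>(contents: string, use: (protoPath: string) => T): T {
  let tempDir: string | undefined;
  try {
    tempDir = mkdtempSync(join(tmpdir(), "proto-test-"));
    const protoPath = join(tempDir, "test.proto");
    writeFileSync(protoPath, contents);
    return use(protoPath); // e.g. protoLoader.loadSync(protoPath) in the real test
  } finally {
    if (tempDir) rmSync(tempDir, { force: true, recursive: true });
  }
}
```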
In @vite.config.ts:
- Around line 25-41: The current isAcknowledgedBuildWarning should be updated to
check INEFFECTIVE_DYNAMIC_IMPORT against warning.id (using equality or suffix
match) instead of searching warning.message, so modify the branch that returns
when warning.code === "INEFFECTIVE_DYNAMIC_IMPORT" to compare warning.id (e.g.,
acknowledgedBuildWarnings.some(id => warning.id?.endsWith(id) || warning.id ===
id)) and guard for undefined id; additionally, add a short comment for each
entry in the acknowledgedBuildWarnings array (or group comments) explaining why
that path is suppressed so future maintainers can verify/remove stale entries.</details>
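The id-based matching suggested for vite.config.ts could be sketched like this; the BuildWarning shape and the sample acknowledged entry are illustrative:

```typescript
// Compare against warning.id (a file path) rather than searching the
// free-form warning.message. Entry values below are hypothetical.
interface BuildWarning {
  code?: string;
  id?: string;
}

const acknowledgedBuildWarnings = [
  // Dynamic import is intentionally bundled; document each suppression.
  "src/services/telemetry/lazyLoad.ts",
];

function isAcknowledgedBuildWarning(warning: BuildWarning): boolean {
  if (warning.code !== "INEFFECTIVE_DYNAMIC_IMPORT") return false;
  const id = warning.id;
  if (!id) return false; // guard: the bundler may omit id
  return acknowledgedBuildWarnings.some((entry) => id === entry || id.endsWith(entry));
}
```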
this.ws = new WebSocket(wsUrl, [
  encodeWebSocketAuthProtocol(this.config.apiToken),
]);
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find where the server processes Sec-WebSocket-Protocol on the upgrade path
rg -nP -C3 '(Sec-WebSocket-Protocol|protocols\s*[:=]|upgradeWebSocket\s*\()' \
--type=ts -g '!**/__tests__/**' packages/remote-control-server packages/acp-link
# Check if encodeWebSocketAuthProtocol is also used to echo back the negotiated value
rg -nP -C3 '\bencodeWebSocketAuthProtocol\b' --type=ts \
-g '!**/__tests__/**' packages/remote-control-server packages/acp-link
Repository: claude-code-best/claude-code
Length of output: 7015
🌐 Web query:
Bun WebSocket server upgrade Sec-WebSocket-Protocol echo response
💡 Result:
To create a Bun WebSocket echo server that handles Sec-WebSocket-Protocol negotiation during the upgrade:
1. In the fetch handler, check the client's offered protocols from req.headers.get('Sec-WebSocket-Protocol').
2. Select a supported protocol (it must be one offered by the client).
3. Pass it in the server.upgrade(req, { headers: { 'Sec-WebSocket-Protocol': selectedProtocol } }) options. Bun's underlying uWebSockets.js handles setting the response header in the 101 Switching Protocols response.
4. In the websocket.message handler, echo the message with ws.send(message).

Example code:

Bun.serve({
  port: 3000,
  fetch(req, server) {
    const offered = req.headers.get("Sec-WebSocket-Protocol");
    if (
      server.upgrade(req, {
        headers: {
          // Select protocol, e.g. first offered or a specific one
          "Sec-WebSocket-Protocol": offered?.split(",")[0]?.trim() || undefined,
        },
        data: { protocol: offered }, // store offered protocol for logging
      })
    ) {
      return;
    }
    return new Response("Upgrade failed", { status: 400 });
  },
  websocket: {
    open(ws) {
      console.log(`Connected with protocol: ${ws.data.protocol}`);
    },
    message(ws, message) {
      ws.send(message); // echo
    },
  },
});

Notes:
- Use a Headers instance if needed for complex headers.
- Recent fixes (PR #25774, issues #18243, #25773) resolved duplicate/mismatch issues; use latest Bun (1.3+).
- Bun auto-echoes the first protocol if none is specified (issue #18243 open as of 2026-01).
- There is no direct ws.protocol property on ServerWebSocket; store it in data during upgrade if needed.
- The client sees the selected protocol in ws.protocol after the upgrade.
Citations:
- 1: sec-websocket-protocol is automatically returned during an upgrade to a WebSocket connection oven-sh/bun#18243
- 2: Fix WebSocket upgrade with custom Sec-WebSocket-Protocol header oven-sh/bun#25774
- 3: WebSocket server.upgrade() with Sec-WebSocket-Protocol header fails with "Mismatch client protocol" oven-sh/bun#25773
- 4: https://bun.com/docs/runtime/http/websockets
- 5: https://bun.sh/docs/api/websockets
- 6: https://bun.sh/reference/bun/ServerWebSocket
- 7: https://bun.com/docs/guides/websocket/simple
🏁 Script executed:
# Find the upgradeWebSocket implementation
rg -nP -A5 'function upgradeWebSocket|const upgradeWebSocket' --type=ts packages/
# Check full context of the ACP WebSocket handler - the one that matters for this client
rg -nP -B2 -A15 'app.get.*"/ws"' packages/remote-control-server/src/routes/acp/index.ts
Repository: claude-code-best/claude-code
Length of output: 54
🏁 Script executed:
# First, identify where upgradeWebSocket comes from (likely Hono)
rg -nP 'upgradeWebSocket' --type=ts packages/remote-control-server/src/routes/acp/index.ts | head -20
# Search for the import of upgradeWebSocket
rg -nP '^import.*upgradeWebSocket' --type=ts packages/
Repository: claude-code-best/claude-code
Length of output: 470
🏁 Script executed:
# Check if upgradeWebSocket is from Hono and examine handler return
rg -nP 'from.*hono|import.*upgradeWebSocket' --type=ts packages/ | head -30
Repository: claude-code-best/claude-code
Length of output: 2961
🏁 Script executed:
# Get the full ACP WebSocket handler implementation
sed -n '140,170p' packages/remote-control-server/src/routes/acp/index.ts
Repository: claude-code-best/claude-code
Length of output: 990
🏁 Script executed:
# Also check the session-ingress handler for reference
sed -n '87,115p' packages/remote-control-server/src/routes/v1/session-ingress.ts
Repository: claude-code-best/claude-code
Length of output: 997
🏁 Script executed:
# Check the acp-link server handler in detail
sed -n '1000,1025p' packages/acp-link/src/server.ts
Repository: claude-code-best/claude-code
Length of output: 899
🏁 Script executed:
# Check extractWebSocketAuthToken implementation
sed -n '45,50p' packages/remote-control-server/src/auth/middleware.ts
Repository: claude-code-best/claude-code
Length of output: 340
🏁 Script executed:
# Check Hono version and see if there's any pattern of passing headers in upgradeWebSocket
grep -r "version" packages/remote-control-server/package.json | grep hono || \
cat packages/remote-control-server/package.json | grep -A2 -B2 '"hono"'
Repository: claude-code-best/claude-code
Length of output: 197
🏁 Script executed:
# Search for any pattern where headers are set during WebSocket upgrade
rg -nP '(headers.*protocol|protocol.*headers|Sec-WebSocket-Protocol.*=)' --type=ts packages/ | grep -v test
Repository: claude-code-best/claude-code
Length of output: 159
🏁 Script executed:
# Search for WebSocket tests that verify protocol negotiation
rg -nP 'Sec-WebSocket-Protocol|protocol.*echo|negotiat' --type=ts packages/acp-link/src/__tests__/ packages/remote-control-server/src/__tests__/ | head -40
Repository: claude-code-best/claude-code
Length of output: 514
🏁 Script executed:
# Look for any documentation or comments about headers in WebSocket upgrade
rg -nP 'upgradeWebSocket|headers|respond' --type=ts packages/remote-control-server/src/transport/ws-shared.ts
# Also check if there are any examples in tests of protocol echo
rg -nP -B5 -A10 'Sec-WebSocket-Protocol' packages/remote-control-server/src/__tests__/middleware.test.ts | head -50
Repository: claude-code-best/claude-code
Length of output: 1321
🏁 Script executed:
# Search Hono documentation pattern: check if upgradeWebSocket callback returns response headers
rg -nP 'return.*header|response.*header|Sec-WebSocket' --type=ts packages/remote-control-server/src/routes/ | grep -i upgrade
Repository: claude-code-best/claude-code
Length of output: 54
🏁 Script executed:
# Check if there's a pattern in the codebase for returning response headers from handlers
rg -nP 'return.*\{' packages/remote-control-server/src/routes/acp/index.ts | head -10
# Look for any handler that might set custom response headers
rg -nP 'header\(|setHeader|response' --type=ts packages/remote-control-server/src/routes/ | grep -i upgrade
Repository: claude-code-best/claude-code
Length of output: 365
🏁 Script executed:
# Specifically look at what the upgradeWebSocket handlers can return
cat packages/remote-control-server/src/routes/acp/index.ts | sed -n '137,180p'
Repository: claude-code-best/claude-code
Length of output: 1356
🏁 Script executed:
# Check if there are any other WebSocket implementations in the codebase that might handle headers
rg -nP 'Bun.serve|fetch.*upgrade|server\.upgrade' --type=ts packages/ | head -20
Repository: claude-code-best/claude-code
Length of output: 648
🏁 Script executed:
# Get the full test for WebSocket protocol handling
sed -n '150,160p' packages/remote-control-server/src/__tests__/middleware.test.ts
# Check if there are any tests that verify the response headers
rg -nP -B2 -A8 'Sec-WebSocket-Protocol.*test|protocol.*header.*test' --type=ts packages/remote-control-server/src/__tests__/
Repository: claude-code-best/claude-code
Length of output: 3802
🏁 Script executed:
# Check if Hono's upgradeWebSocket in version 4.12 supports headers in return
cat packages/remote-control-server/package.json | grep -A5 -B5 'hono'
Repository: claude-code-best/claude-code
Length of output: 390
🏁 Script executed:
# Look for Hono type definitions or examples showing upgrade response structure
rg -nP 'type.*Upgrade|interface.*Handler|onOpen.*ws.*Context' --type=ts packages/ | grep -i ws | head -15
Repository: claude-code-best/claude-code
Length of output: 820
🏁 Script executed:
# Check if there's any pattern where headers are set before upgrade
rg -nP 'c\.header|res\.header|response.*header' --type=ts packages/remote-control-server/src/routes/ | grep -C2 upgradeWebSocket
Repository: claude-code-best/claude-code
Length of output: 54
🏁 Script executed:
# Check if there's documentation or comments in the codebase about Hono upgradeWebSocket
rg -nP 'hono/bun|upgradeWebSocket' --type=ts -B2 -A2 packages/remote-control-server/src/transport/ws-shared.ts
Repository: claude-code-best/claude-code
Length of output: 130
Server must echo negotiated Sec-WebSocket-Protocol in the WebSocket upgrade response (101 Switching Protocols).
Per RFC 6455 §4.2.2, when a client offers Sec-WebSocket-Protocol, the server MUST select and echo one back in the 101 response header. The current handlers in /acp/ws, /relay/:agentId, and /ingress/ses_123 read the Sec-WebSocket-Protocol header for auth validation but do not return it in the upgrade response. This works in Bun's test environment (which is permissive) but will fail against spec-compliant peers (browsers, stricter WS implementations).
The upgradeWebSocket callbacks (packages/remote-control-server/src/routes/acp/index.ts:140, session-ingress.ts:87) return event handlers (onOpen, onMessage, etc.) but currently have no mechanism to set response headers. You need to modify the Hono upgradeWebSocket handler signature or use an alternative approach to explicitly set the Sec-WebSocket-Protocol response header to the negotiated value (e.g., the first protocol offered by the client, or the one successfully validated).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/acp-link/src/rcs-upstream.ts` around lines 128 - 130, The WebSocket
upgrade handlers must echo the negotiated Sec-WebSocket-Protocol in the 101
response; modify the upgradeWebSocket flow so the negotiated protocol (e.g., the
validated protocol or the first offered one) is set on the response headers and
returned with the upgrade. Concretely, change the upgradeWebSocket handler
signature or use the Hono upgrade API to allow returning response headers from
the callbacks used in the /acp/ws, /relay/:agentId and /ingress/ses_123 handlers
so you can set "Sec-WebSocket-Protocol" to the chosen protocol before sending
the upgrade; ensure the code paths that currently validate the header (the auth
validation logic) pass the validated protocol back and that the
onOpen/onMessage-style handler return includes that header in the 101 Switching
Protocols response.
export function encodeWebSocketAuthProtocol(token: string): string {
  return `${WS_AUTH_PROTOCOL_PREFIX}${Buffer.from(token, "utf8").toString("base64url")}`;
}

function decodeWebSocketAuthProtocol(protocolHeader: string | undefined): string | undefined {
  if (!protocolHeader) {
    return undefined;
  }

  for (const protocol of protocolHeader.split(",")) {
    const trimmed = protocol.trim();
    if (!trimmed.startsWith(WS_AUTH_PROTOCOL_PREFIX)) {
      continue;
    }

    const encoded = trimmed.slice(WS_AUTH_PROTOCOL_PREFIX.length);
    if (!encoded) {
      return undefined;
    }

    try {
      const token = Buffer.from(encoded, "base64url").toString("utf8");
      return token.length > 0 ? token : undefined;
    } catch {
      return undefined;
    }
  }

  return undefined;
}
🧩 Analysis chain
🌐 Web query:
Node.js Buffer.from base64url throws on invalid input
💡 Result:
Node.js Buffer.from(str, 'base64url') does not reliably throw on invalid base64url text. Instead, per the Buffer API docs, Buffer.from for 'base64'/'base64url'/'hex' "assumes valid input"; with non-base64 characters, the decoded result may be incorrect (and in some cases may be longer than expected), rather than raising. So if you observe "throws" behavior, it's usually due to something else (e.g., you're not actually using Node's Buffer implementation, or the error is coming from code paths like converting the Buffer to an ArrayBuffer, not from the decoding itself), not from Buffer.from itself failing strictly on bad base64url.

Practical guidance:
1) Validate before decoding (e.g., check that the string matches the base64url alphabet and has a sane length), then decode.
2) If you need strict validation semantics, don't rely on Buffer.from base64url; use a stricter base64url validator/decoder, or convert to standard base64 and validate with a stricter routine.

Example of what to rely on, from the Node docs: "For 'base64', 'base64url', and 'hex', this function assumes valid input… For strings that contain non-base64/hex-encoded data (e.g. whitespace), the return value might be greater than the length of a Buffer created from the string."

Also note that there are known base64url pitfalls in adjacent areas: in nodejs/node issue #55564, a reported "incorrect" result was attributed to how the ArrayBuffer is obtained from a pooled FastBuffer, not to a decoding failure.
Citations:
- 1: https://nodejs.org/api/buffer.html
- 2: https://nodejs.org/download/release/v21.0.0/docs/api/buffer.html
- 3: Buffer.from('base64') decodes some white space as data nodejs/node#11987
- 4: Incorrect result from Buffer.from(...).buffer when using base64url encoding nodejs/node#55564
- 5: node: Buffer.from(str, "base64") should ignore data after invalid characters denoland/std#1875
🏁 Script executed:
head -20 packages/remote-control-server/src/auth/middleware.ts
Repository: claude-code-best/claude-code
Length of output: 825
🏁 Script executed:
# Check if Buffer is used elsewhere
rg "Buffer\.from|Buffer\.alloc" --type ts --type tsx | head -20
Repository: claude-code-best/claude-code
Length of output: 100
🏁 Script executed:
# Check if there's a tsconfig or bun config that might allow Node.js Buffer
find . -name "tsconfig*.json" -o -name "bunfig.toml" | xargs cat 2>/dev/null | head -30
Repository: claude-code-best/claude-code
Length of output: 735
🏁 Script executed:
# Search for Buffer usage in packages/remote-control-server
rg "Buffer\." packages/remote-control-server/src --type ts -B 2 -A 2
Repository: claude-code-best/claude-code
Length of output: 3754
🏁 Script executed:
# Check the actual content around lines 9-38 to confirm the code matches the review
sed -n '1,50p' packages/remote-control-server/src/auth/middleware.ts | cat -n
Repository: claude-code-best/claude-code
Length of output: 2071
Remove the dead try/catch wrapper on Buffer.from("base64url") decoding.
Buffer.from(value, "base64url") does not throw on invalid input — it silently ignores non-base64url characters and returns whatever bytes it can parse. The catch block (lines 32–34) is unreachable.
Either drop the wrapper or add validation to genuinely reject malformed subprotocols:
♻️ Proposed cleanup
const encoded = trimmed.slice(WS_AUTH_PROTOCOL_PREFIX.length);
- if (!encoded) {
+ if (!encoded || !/^[A-Za-z0-9_-]+$/.test(encoded)) {
return undefined;
}
-
- try {
- const token = Buffer.from(encoded, "base64url").toString("utf8");
- return token.length > 0 ? token : undefined;
- } catch {
- return undefined;
- }
+ const token = Buffer.from(encoded, "base64url").toString("utf8");
+ return token.length > 0 ? token : undefined;🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/src/auth/middleware.ts` around lines 9 - 38,
The catch around Buffer.from(encoded, "base64url") in
decodeWebSocketAuthProtocol is dead code because Buffer.from(..., "base64url")
doesn't throw; remove the try/catch and either (A) drop the wrapper and simply
decode (return token if token.length>0), or (B) add real validation before
accepting the token (for example validate encoded with a base64url regex or
re-encode the decoded bytes and ensure Buffer.from(token,
"utf8").toString("base64url") === encoded) so malformed subprotocols are
rejected; update decodeWebSocketAuthProtocol accordingly, referencing the
variables protocolHeader, trimmed, encoded and the WS_AUTH_PROTOCOL_PREFIX.
import { existsSync } from "node:fs";
import { fileURLToPath } from "node:url";
import acpRoutes from "./routes/acp";
import { webCorsOptions } from "./auth/cors";
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Inspect webCorsOptions resolver and the empty-origins default behavior.
fd -t f 'cors.ts' packages/remote-control-server/src/auth -x cat {}
echo '--- middleware.test.ts CORS assertions ---'
rg -nP -C2 'webCorsOptions|webCorsOrigins|Access-Control-Allow-Origin' packages/remote-control-server/src/__tests__/middleware.test.ts
echo '--- config.webCorsOrigins definition ---'
rg -nP -C3 'webCorsOrigins' packages/remote-control-server/src/config.ts
Repository: claude-code-best/claude-code
Length of output: 2209
🏁 Script executed:
#!/bin/bash
# Check for test cases with empty webCorsOrigins
rg -n 'webCorsOrigins.*\[\]|webCorsOrigins.*undefined' packages/remote-control-server/src/__tests__/middleware.test.ts
# Show all test cases in the middleware test file to see if empty origins is tested
echo '--- All describe/it blocks in middleware.test.ts ---'
rg -n '^\s*(describe|it)\(' packages/remote-control-server/src/__tests__/middleware.test.ts | head -30
# Full CORS test section
echo '--- Full CORS test section ---'
sed -n '250,300p' packages/remote-control-server/src/__tests__/middleware.test.ts
Repository: claude-code-best/claude-code
Length of output: 1939
Test coverage gap: empty webCorsOrigins scenario is not explicitly tested.
The security concern is resolved. resolveWebCorsOrigin() correctly returns undefined when an origin is not in the allowed list—fail-closed, not *. Even with empty config.webCorsOrigins (the production default), getAllowedWebCorsOrigins() safely defaults to localhost and 127.0.0.1 on the configured port, denying external origins.
However, the test suite in middleware.test.ts does not explicitly verify this empty-origins default path. All CORS tests use webCorsOrigins: ["https://dashboard.example"]. Add a test case with an empty webCorsOrigins array to confirm the fail-closed behavior in the actual default configuration.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/src/index.ts` at line 14, Add a unit test in
middleware.test.ts that covers the empty webCorsOrigins path: construct a config
with webCorsOrigins: [] (and set the server port), call
getAllowedWebCorsOrigins() and assert it returns only localhost and 127.0.0.1
entries on the configured port, and then call resolveWebCorsOrigin() with an
external origin (e.g., https://dashboard.example) and assert it returns
undefined (fail-closed); ensure the test uses the same helpers/setup as the
existing CORS tests so it exercises the actual middleware resolution logic.
  const lastEventId = c.req.header("Last-Event-ID");
  const fromSeq = c.req.query("from_sequence_num");
- const fromSeqNum = fromSeq ? parseInt(fromSeq) : lastEventId ? parseInt(lastEventId) : 0;
+ const fromSeqNum = fromSeq ? parseInt(fromSeq, 10) : lastEventId ? parseInt(lastEventId, 10) : 0;

  return createAcpSSEStream(c, groupId, fromSeqNum);
});
Guard parseInt against NaN so malformed from_sequence_num / Last-Event-ID falls back to 0.
If a client sends from_sequence_num=abc or Last-Event-ID=foo, parseInt(..., 10) yields NaN, which then propagates to createAcpSSEStream. Comparisons against NaN are always false, which can silently subvert event replay/skipping logic.
🛡️ Proposed fix
- const fromSeqNum = fromSeq ? parseInt(fromSeq, 10) : lastEventId ? parseInt(lastEventId, 10) : 0;
+ const parseSeq = (s: string | undefined): number => {
+ const n = s ? parseInt(s, 10) : NaN;
+ return Number.isFinite(n) ? n : 0;
+ };
+ const fromSeqNum = parseSeq(fromSeq) || parseSeq(lastEventId);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/src/routes/acp/index.ts` around lines 130 -
135, The current parsing of from_sequence_num/Last-Event-ID uses parseInt
directly and can produce NaN; update the logic around lastEventId, fromSeq and
fromSeqNum so parsed values are validated (Number.isNaN or isNaN) and any NaN
falls back to 0 before calling createAcpSSEStream; specifically, after parsing
with parseInt for fromSeq and lastEventId, check for NaN and set fromSeqNum = 0
when invalid so createAcpSSEStream always receives a real number.
function encodeWebSocketAuthProtocol(token: string): string {
  const bytes = new TextEncoder().encode(token);
  let binary = "";
  for (const byte of bytes) {
    binary += String.fromCharCode(byte);
  }
  const encoded = btoa(binary)
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
  return `rcs.auth.${encoded}`;
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Confirm the server echoes the selected rcs.auth.* subprotocol on the upgrade response.
rg -nP -C5 'Sec-WebSocket-Protocol|rcs\.auth\.' --type=ts
Repository: claude-code-best/claude-code
Length of output: 9408
🏁 Script executed:
# Search for WebSocket upgrade response construction and protocol header handling
rg -n 'upgradeWebSocket|ws\.accept|sendUpgrade|Sec-WebSocket-Protocol.*response|setHeader.*Sec-WebSocket-Protocol' --type=ts -A 3Repository: claude-code-best/claude-code
Length of output: 2723
🏁 Script executed:
# Check the Hono upgradeWebSocket implementation behavior regarding protocol echoing
rg -n 'decodeWebSocketAuthToken|extractWebSocketAuthToken' --type=ts -B 2 -A 5Repository: claude-code-best/claude-code
Length of output: 11096
🏁 Script executed:
# Look for server-side WebSocket configuration that handles the subprotocol
fd -e ts -path '*/server*' | head -20Repository: claude-code-best/claude-code
Length of output: 241
🏁 Script executed:
# Check what the upgradeWebSocket handler returns/accepts for protocol handling
rg -n 'upgradeWebSocket.*\(async' --type=ts -A 20 packages/remote-control-server/src/routes/acp/index.ts | head -50Repository: claude-code-best/claude-code
Length of output: 1525
🏁 Script executed:
# Look for WebSocket response object structure that might include protocol
rg -n 'onOpen|onMessage|onClose' --type=ts -B 2 -A 8 packages/remote-control-server/src/routes/acp/index.ts | head -80Repository: claude-code-best/claude-code
Length of output: 2406
🏁 Script executed:
# Check if Hono documentation or types show protocol support
fd -e ts -path '*hono*' 2>/dev/null | head -10Repository: claude-code-best/claude-code
Length of output: 54
🏁 Script executed:
# Search for any explicit protocol response handling in the codebase
rg -n 'protocol.*response|response.*protocol|protocolSelected' --type=tsRepository: claude-code-best/claude-code
Length of output: 391
🏁 Script executed:
# Search for type definitions or examples of upgradeWebSocket return structure
rg -n 'type.*upgradeWebSocket|interface.*WebSocketUpgrade' --type=tsRepository: claude-code-best/claude-code
Length of output: 54
🏁 Script executed:
# Check Bun's WebSocket documentation or type hints in the code
rg -n 'WSContext|WsMessageEvent' --type=ts -B 2 -A 5 | head -40Repository: claude-code-best/claude-code
Length of output: 3528
🏁 Script executed:
# Look for any protocol specification in response handling
rg -n 'protocols?:|subprotocol' --type=ts -iRepository: claude-code-best/claude-code
Length of output: 3239
🏁 Script executed:
# Check the ACP WebSocket test to see if it validates protocol echo
sed -n '1510,1550p' packages/remote-control-server/src/__tests__/routes.test.tsRepository: claude-code-best/claude-code
Length of output: 1392
🏁 Script executed:
# Look for MCP client WebSocket connection to see protocol specification pattern
rg -n 'new WebSocket|websocket.*protocol' --type=ts -B 3 -A 3Repository: claude-code-best/claude-code
Length of output: 8692
🏁 Script executed:
# Check for any Hono WebSocket examples in the code with protocol
rg -n 'upgradeWebSocket.*return.*protocol' --type=ts -B 2 -A 5Repository: claude-code-best/claude-code
Length of output: 54
🏁 Script executed:
# Check the Bun.serve websocket configuration in the test
sed -n '1510,1520p' packages/remote-control-server/src/__tests__/routes.test.tsRepository: claude-code-best/claude-code
Length of output: 338
🏁 Script executed:
# Look for sessionIngressWebsocket definition
rg -n 'sessionIngressWebsocket.*=' --type=ts -B 3 -A 10Repository: claude-code-best/claude-code
Length of output: 54
🏁 Script executed:
# Check if protocol can be specified in Bun's websocket handler
rg -n 'websocket.*:.*{' --type=ts -A 15 | head -60Repository: claude-code-best/claude-code
Length of output: 166
🏁 Script executed:
# Find sessionIngressWebsocket definition
rg -n 'sessionIngressWebsocket' --type=ts packages/remote-control-server/src/__tests__/routes.test.ts -B 5 -A 15 | head -50Repository: claude-code-best/claude-code
Length of output: 1917
🏁 Script executed:
# Search for websocket handler config in test file
rg -n 'websocket:' --type=ts packages/remote-control-server/src/__tests__/routes.test.ts -B 3 -A 10Repository: claude-code-best/claude-code
Length of output: 2897
🏁 Script executed:
# Find the session-ingress export to see websocket definition
rg -n 'export.*websocket|export const websocket' --type=ts packages/remote-control-server/src/routes/v1/session-ingress.ts -B 2 -A 15Repository: claude-code-best/claude-code
Length of output: 133
🏁 Script executed:
# Look for websocket handler definition in session-ingress
fd -e ts session-ingress && cat packages/remote-control-server/src/routes/v1/session-ingress.ts | head -150Repository: claude-code-best/claude-code
Length of output: 4418
🏁 Script executed:
# Find the websocket definition in ws-shared
cat packages/remote-control-server/src/transport/ws-shared.tsRepository: claude-code-best/claude-code
Length of output: 128
🏁 Script executed:
# Check Bun's websocket handler to understand protocol support
rg -n 'export.*websocket' packages/remote-control-server/src/transport/ws-shared.ts -A 20Repository: claude-code-best/claude-code
Length of output: 130
RFC 6455 violation: Server does not echo selected WebSocket subprotocol.
The client correctly encodes the auth token and sends it as a subprotocol (rcs.auth.{base64url}), but the server reads this header without echoing it back in the upgrade response. Per RFC 6455 Section 4.1, the server must include Sec-WebSocket-Protocol in the HTTP upgrade response containing the selected subprotocol; otherwise, browsers will reject the connection with a protocol mismatch error. The upgradeWebSocket return object does not currently support protocol specification. This requires adding protocol echo support to the Hono handler or Bun WebSocket configuration.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/web/src/acp/client.ts` around lines 23 - 34,
The server must echo the client's subprotocol (the string produced by
encodeWebSocketAuthProtocol) in the WebSocket upgrade response; update the Hono
handler that calls upgradeWebSocket (or the upgradeWebSocket helper itself) to
parse the incoming Sec-WebSocket-Protocol header, select the matching protocol
(e.g., the one that starts with "rcs.auth."), and include it in the upgrade
response by setting Sec-WebSocket-Protocol (or providing the protocol to
Bun/Hono's WebSocket upgrade API) so the returned upgrade object includes the
selected protocol and browsers accept the connection.
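The selection step the prompt describes — picking the matching rcs.auth.* entry from the client's Sec-WebSocket-Protocol header so it can be echoed back — could be sketched as below, assuming the header arrives as a comma-separated list per RFC 6455:

```typescript
// Sketch: choose the subprotocol to echo in the upgrade response.
// The client may list several protocols separated by commas.
function selectAuthSubprotocol(header: string | null): string | undefined {
  if (!header) return undefined;
  return header
    .split(",")
    .map((p) => p.trim())
    .find((p) => p.startsWith("rcs.auth."));
}
```

How the chosen value is then attached to the 101 response depends on the Hono/Bun adapter in use (Bun's native upgrade accepts response headers); the function above only covers the selection half.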
```diff
 /**
  * Build the RCS relay WebSocket URL for a given agent.
  * Uses UUID auth (same as /code/ pages).
  */
 export function buildRelayUrl(agentId: string): string {
   const protocol = window.location.protocol === "https:" ? "wss:" : "ws:";
-  const uuid = getUuid();
-  return `${protocol}//${window.location.host}/acp/relay/${agentId}?uuid=${encodeURIComponent(uuid)}`;
+  return `${protocol}//${window.location.host}/acp/relay/${agentId}`;
 }
```
Stale JSDoc — UUID auth is no longer used.
The comment still claims buildRelayUrl uses UUID auth, but the implementation removed the ?uuid=... query and now relies on the API token forwarded as a WebSocket subprotocol via createRelayClient. Update the comment so it matches the new auth path.
📝 Suggested doc fix

```diff
 /**
  * Build the RCS relay WebSocket URL for a given agent.
- * Uses UUID auth (same as /code/ pages).
+ * Authentication is performed by `createRelayClient` via the
+ * `rcs.auth.<base64url-token>` WebSocket subprotocol, not via URL query.
  */
 export function buildRelayUrl(agentId: string): string {
   const protocol = window.location.protocol === "https:" ? "wss:" : "ws:";
   return `${protocol}//${window.location.host}/acp/relay/${agentId}`;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/web/src/acp/relay-client.ts` around lines 5 -
12, Update the JSDoc for buildRelayUrl to remove the outdated "UUID auth"
reference and instead describe that the function simply constructs the WebSocket
URL for an agent (no query auth) and that authentication is handled separately
by forwarding the API token as a WebSocket subprotocol (see createRelayClient)
rather than embedding it in the URL; ensure the top comment reflects the current
auth flow and mentions createRelayClient handling the token.
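For reference, a testable, parameterized variant of the URL builder — the window-free signature here is an illustration, not the repo's actual API:

```typescript
// Illustration: same URL shape as buildRelayUrl, with the location
// dependencies made explicit so it can run outside a browser.
function relayUrlFor(agentId: string, host: string, secure: boolean): string {
  const protocol = secure ? "wss:" : "ws:";
  return `${protocol}//${host}/acp/relay/${encodeURIComponent(agentId)}`;
}

// The token then travels as a subprotocol, never in the URL, e.g.:
//   new WebSocket(relayUrlFor(id, location.host, true),
//                 [encodeWebSocketAuthProtocol(token)])
```

Keeping the token out of the URL avoids leaking it through access logs, proxies, and browser history, which is the point of the subprotocol-based auth path.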
```diff
 steps:
   - name: Checkout repository
-    uses: actions/checkout@v4
+    uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2, 2026-04-25
```
🧩 Analysis chain

```shell
# Verify pinned SHAs match the documented tags.
set -e
gh api repos/actions/checkout/git/ref/tags/v6.0.2 --jq '.object.sha'
gh api repos/anthropics/claude-code-action/git/ref/tags/v1 --jq '.object.sha'

# Inspect the pinned workflow lines.
cat -n src/constants/github-app.ts | sed -n '30,45p; 125,140p'
```
Update pinned SHAs for anthropics/claude-code-action — they are outdated.
The actions/checkout pinning is correct: de0fac2e4500dabe0009e67214ff5f5447ce83dd matches v6.0.2. However, the anthropics/claude-code-action pins are stale. Lines 40 and 135 use 567fe954a4527e81f132d87d1bdbcc94f7737434, but v1 currently resolves to a9ac9eb3a874922a7e34246afae3914113c6626f. Update these pins to the current SHA to ensure the workflow uses the intended released version.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/constants/github-app.ts` at line 34, Update the stale SHA pins for
anthropics/claude-code-action: replace occurrences of
567fe954a4527e81f132d87d1bdbcc94f7737434 with the current v1 resolved SHA
a9ac9eb3a874922a7e34246afae3914113c6626f (these appear in the workflow where
anthropics/claude-code-action is referenced, specifically the two occurrences
you noted); leave the already-correct actions/checkout pin
(de0fac2e4500dabe0009e67214ff5f5447ce83dd) unchanged. Ensure both instances of
the old SHA are updated so the workflow uses the intended released version.
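A shell sketch of the comparison this finding implies. The `resolved` value is hard-coded here for illustration; in practice it would come from the `gh api .../git/ref/tags/<tag>` call shown in the analysis chain above:

```shell
# Compare a workflow's pinned SHA against the SHA a release tag resolves to.
pinned="567fe954a4527e81f132d87d1bdbcc94f7737434"
# In practice:
#   resolved="$(gh api repos/anthropics/claude-code-action/git/ref/tags/v1 --jq '.object.sha')"
resolved="a9ac9eb3a874922a7e34246afae3914113c6626f"

if [ "$pinned" = "$resolved" ]; then
  echo "pin OK"
else
  echo "pin STALE: update $pinned -> $resolved"
fi
```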
```diff
 const promptCancelGeneration = session.cancelGeneration

 // Handle prompt queuing — if a prompt is already running, queue this one
 if (session.promptRunning) {
-  const order = session.nextPendingOrder++
   const promptUuid = randomUUID()
   const cancelled = await new Promise<boolean>((resolve) => {
-    session.pendingMessages.set(promptUuid, { resolve, order })
+    session.pendingQueue.push(promptUuid)
+    session.pendingMessages.set(promptUuid, { resolve })
   })
   if (cancelled) {
     return { stopReason: 'cancelled' }
   }
 }

 if (session.cancelGeneration !== promptCancelGeneration) {
   return { stopReason: 'cancelled' }
 }

 // Reset cancellation only when this prompt is about to run. Queued prompts
 // must not clear the cancellation state for the active prompt.
 session.cancelled = false
```
Race: queued prompt may leave promptRunning=true with no active prompt, deadlocking future prompts.
The previous prompt's finally (Line 339-345) optimistically sets session.promptRunning = true before calling nextPrompt.resolve(false). The resolution and the resumed await in this prompt are split across microtasks, so an externally-queued cancel() microtask scheduled before this resume can land between them: it bumps cancelGeneration but finds pendingMessages already empty (this prompt was popped), so it doesn't re-resolve us. When this prompt resumes with cancelled=false, the cancelGeneration !== promptCancelGeneration branch returns early without entering try, so the finally never runs and promptRunning is never reset. Any subsequent prompt sees promptRunning=true, queues itself, and waits forever (cancel cleared the queue, and nothing dequeues again).
Reset the running flag explicitly on this branch since the previous handoff is no longer valid.
🛡️ Proposed fix

```diff
 if (session.cancelGeneration !== promptCancelGeneration) {
+  // Previous prompt's finally optimistically set promptRunning=true expecting
+  // us to take over. Cancel landed between handoff and resumption, so undo.
+  session.promptRunning = false
   return { stopReason: 'cancelled' }
 }
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
const promptCancelGeneration = session.cancelGeneration

// Handle prompt queuing — if a prompt is already running, queue this one
if (session.promptRunning) {
  const promptUuid = randomUUID()
  const cancelled = await new Promise<boolean>((resolve) => {
    session.pendingQueue.push(promptUuid)
    session.pendingMessages.set(promptUuid, { resolve })
  })
  if (cancelled) {
    return { stopReason: 'cancelled' }
  }
}

if (session.cancelGeneration !== promptCancelGeneration) {
  // Previous prompt's finally optimistically set promptRunning=true expecting
  // us to take over. Cancel landed between handoff and resumption, so undo.
  session.promptRunning = false
  return { stopReason: 'cancelled' }
}

// Reset cancellation only when this prompt is about to run. Queued prompts
// must not clear the cancellation state for the active prompt.
session.cancelled = false
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/acp/agent.ts` around lines 256 - 276, When detecting the
cancel-generation mismatch after dequeuing (the check `if
(session.cancelGeneration !== promptCancelGeneration)`), explicitly reset the
session running flag before returning to avoid leaving `session.promptRunning`
true with no active prompt; i.e., set `session.promptRunning = false` (and
ensure any relevant cleanup like removing this prompt's UUID from
`session.pendingQueue`/`session.pendingMessages` if necessary) so future prompts
are not deadlocked — update the logic around `promptCancelGeneration`,
`session.cancelGeneration`, `session.cancelled`, and the early `return {
stopReason: 'cancelled' }` to perform this reset/cleanup before returning.
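To make the handoff/cancel interaction concrete, here is a self-contained sketch of the queue discipline, including the explicit promptRunning reset the review proposes. The types and names are simplified for illustration; the real session object in agent.ts carries more state:

```typescript
// Simplified model of the prompt queue (illustrative, not the repo's code).
type Session = {
  promptRunning: boolean;
  cancelGeneration: number;
  pendingQueue: Array<(cancelled: boolean) => void>;
};

async function runPrompt(
  session: Session,
  work: () => Promise<void>,
): Promise<"end_turn" | "cancelled"> {
  const gen = session.cancelGeneration;

  // Exactly one prompt runs; later prompts wait in FIFO order.
  if (session.promptRunning) {
    const cancelled = await new Promise<boolean>((resolve) => {
      session.pendingQueue.push(resolve);
    });
    if (cancelled) return "cancelled";
  }

  if (session.cancelGeneration !== gen) {
    // A cancel landed between the previous prompt's handoff and our
    // resumption: undo the optimistic promptRunning=true so future
    // prompts are not deadlocked (the fix this review proposes).
    session.promptRunning = false;
    return "cancelled";
  }

  session.promptRunning = true;
  try {
    await work();
    return "end_turn";
  } finally {
    const next = session.pendingQueue.shift();
    if (next) {
      next(false); // optimistic handoff: promptRunning stays true
    } else {
      session.promptRunning = false;
    }
  }
}

function cancelAll(session: Session): void {
  session.cancelGeneration++;
  // Resolve every queued prompt as cancelled and drop the queue.
  for (const resolve of session.pendingQueue.splice(0)) resolve(true);
}
```

The deadlock the review describes arises because `next(false)` and the queued prompt's resumption sit in different microtasks; `cancelAll` scheduled between them bumps the generation but finds the queue already empty, so only the generation check — with the explicit reset — can clean up.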
```typescript
  for (const value of [process.env.HOME, process.env.USERPROFILE, homedir()]) {
    if (value) {
      homes.add(value)
      homes.add(value.replace(/\\/g, '/'))
    }
  }

  return [
    ...Array.from(homes, escapeRegExp),
    '/Users/[^/\\\\]+',
    '[A-Za-z]:[/\\\\]Users[/\\\\][^/\\\\]+',
  ]
}

const HOME_DIR_PATTERN = new RegExp(`(?:${homePathPatterns().join('|')})`, 'g')
```
Windows path matching is case-sensitive — lowercased users paths won't redact.
The Windows fallback [A-Za-z]:[/\\\\]Users[/\\\\][^/\\\\]+ hard-codes Users, and the escaped values from HOME/USERPROFILE retain their original casing. Because Windows filesystems are case-insensitive, the same logical path can surface in logs as C:\Users\Bob, c:\users\bob, or mixed casing — and only the exactly-cased variants will be redacted. That defeats the redaction on Windows hosts where some library lowercases paths before logging.
🛡️ Proposed fix — case-insensitive matching

```diff
-const HOME_DIR_PATTERN = new RegExp(`(?:${homePathPatterns().join('|')})`, 'g')
+const HOME_DIR_PATTERN = new RegExp(`(?:${homePathPatterns().join('|')})`, 'gi')
```

(Or, if a global case-insensitive match is undesirable for the Unix /Users/ literal, replace Users in the Windows fallback with [Uu]sers and lower-case the escaped Windows home variants explicitly.)
🧰 Tools
🪛 ast-grep (0.42.1)
[warning] 27-27: Regular expression constructed from variable input detected. This can lead to Regular Expression Denial of Service (ReDoS) attacks if the variable contains malicious patterns. Use libraries like 'recheck' to validate regex safety or use static patterns.
Context: new RegExp(`(?:${homePathPatterns().join('|')})`, 'g')
Note: [CWE-1333] Inefficient Regular Expression Complexity
References:
- https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
- https://cwe.mitre.org/data/definitions/1333.html
(regexp-from-variable)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/langfuse/sanitize.ts` around lines 14 - 28, The Windows
path-redaction is case-sensitive: update homePathPatterns() and HOME_DIR_PATTERN
so Windows-style paths are matched case-insensitively by (1) when iterating
values from process.env.HOME / process.env.USERPROFILE / homedir() add a
lower-cased variant (value.toLowerCase() and its slash-normalized form) to homes
so escaped env-derived Windows home paths will match lowercased logs, and (2)
change the Windows fallback literal from '[A-Za-z]:[/\\\\]Users[/\\\\][^/\\\\]+'
to '[A-Za-z]:[/\\\\][Uu]sers[/\\\\][^/\\\\]+' (i.e. replace "Users" with
[Uu]sers) so mixed-cased drive roots are covered; keep HOME_DIR_PATTERN
construction unchanged (no global /i flag) so Unix '/Users/' literal behavior is
preserved.
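A small sketch of the case-insensitive direction (the `gi`-flag variant), with the home value hard-coded for illustration rather than derived from the environment:

```typescript
// Escape a literal for safe use inside a RegExp.
function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Illustrative home set; the real code derives these from HOME,
// USERPROFILE, and homedir().
const patterns = [
  escapeRegExp("C:\\Users\\Bob"),
  "/Users/[^/\\\\]+",
  "[A-Za-z]:[/\\\\]Users[/\\\\][^/\\\\]+",
];
const HOME_DIR_PATTERN = new RegExp(`(?:${patterns.join("|")})`, "gi");

// Lower-cased Windows paths are now redacted too.
const redacted = "log: c:\\users\\bob\\app.log".replace(HOME_DIR_PATTERN, "<home>");
```

Note the trade-off the review flags: with the global `i` flag, the Unix `/Users/…` literal also matches `/users/…`, which is why the alternative of scoping case-insensitivity to just the Windows patterns may be preferable.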
* fix: bound agent communication memory growth

  UDS messaging now uses private local capabilities instead of exposing auth tokens through SDK metadata, environment variables, session registry, peer listing, or tool output. The receive path bounds NDJSON frames, response buffers, active clients, and pending inbox bytes, and strips auth metadata before messages enter the prompt queue. Teammate mailboxes now validate file and message sizes, fail closed on corrupt mutation inputs, compact by count and retained bytes, and use stable message identity for in-process acknowledgements. Agent summaries now fork only a bounded recent context using lazy size estimation and content fingerprints instead of retaining or serializing unbounded histories.

  Constraint: PR #361 was already merged; this branch is based on upstream/main@c2ac9a74.
  Rejected: Default-disabling COORDINATOR_MODE/TEAMMEM only | explicit feature enablement still hit unbounded paths.
  Rejected: Persisting UDS auth in SDK/env/session registry | bridge/remote metadata can leak local capability secrets.
  Rejected: Inline uds #token addresses | observable/tool/classifier paths can reflect raw addresses outside the UDS request frame.
  Rejected: Positional mailbox marking after compaction | compaction can shift indices across the lock boundary.
  Confidence: high
  Scope-risk: moderate
  Directive: Do not expose UDS capability tokens through SDK messages, environment variables, session registry, peer-list output, or SendMessage result/classifier surfaces.
  Directive: Do not reintroduce positional mailbox acknowledgements unless compaction is removed or read+mark is atomic under one lock.

  Tested: bun test src/utils/__tests__/ndjsonFramer.test.ts src/utils/__tests__/udsMessaging.test.ts packages/builtin-tools/src/tools/SendMessageTool/__tests__/udsRecipientSanitization.test.ts
  Tested: bunx tsc --noEmit --pretty false
  Tested: bun run lint
  Tested: bunx biome lint modified src/package files
  Tested: bun run test:all (3704 pass, 0 fail, 6734 expects)
  Tested: bun audit (No vulnerabilities found)
  Tested: bun run build
  Tested: bun run build:vite
  Tested: git diff --check
  Not-tested: End-to-end external UDS client driving a full production headless model turn.

* fix: harden bounded agent communication review fixes

  CodeRabbit and Codecov surfaced real gaps in UDS framing, peer discovery, mailbox retention, and summary context coverage. This tightens those paths without suppressing review or coverage signals.

  Constraint: PR #369 must address CodeRabbit and Codecov findings without warning suppression or fake fallbacks
  Rejected: Suppress Codecov or CodeRabbit warnings | leaves real receive-path and test-isolation gaps
  Rejected: Add unreachable feature-gated tests | bun:bundle keeps those branches compile-time gated in local tests
  Confidence: high
  Scope-risk: moderate
  Directive: Keep UDS auth-token rejection outside feature flags; do not reintroduce inline token fallbacks

  Tested: bun test --coverage --coverage-reporter lcov --coverage-dir coverage; bun run test:all; bun run lint; bun run build; bun run build:vite; bun audit; git diff --cached --check
  Not-tested: Remote Codecov/CodeRabbit refreshed reports until pushed

* fix: prevent agent communication bounds from hiding CI regressions

  Tighten the UDS auth, framing, and response-reader boundaries while keeping the AgentSummary lifecycle covered so Codecov and CI fail on real regressions instead of missing coverage. The poorMode settings mock mirrors unrelated real settings defaults to avoid Bun mock retention changing later permission tests.

  Constraint: PR #369 must fix Codecov/CI precisely without warning suppression, fallback masking, or mock pollution
  Rejected: Delete AgentSummary lifecycle coverage | would hide Codecov loss and stale-summary behavior
  Rejected: Store inline UDS rejection in a hidden input sentinel | cloned observable inputs can drop it and bypass rejection
  Rejected: Ignore malformed UDS frames until timeout | leaves client slots and SendMessage calls open to exhaustion
  Confidence: high
  Scope-risk: moderate
  Directive: Keep empty #token= markers rejected; do not require a non-empty token value in hasInlineUdsToken

  Tested: bun test packages/builtin-tools/src/tools/SendMessageTool/__tests__/udsRecipientSanitization.test.ts src/utils/__tests__/udsMessaging.test.ts src/utils/__tests__/udsResponseReader.test.ts src/utils/__tests__/ndjsonFramer.test.ts
  Tested: bunx tsc --noEmit --pretty false
  Tested: bun run lint
  Tested: bun test --coverage --coverage-reporter lcov --coverage-dir coverage
  Tested: bun run test:all
  Tested: bun audit
  Tested: bun run build
  Tested: bun run build:vite
  Not-tested: GitHub-hosted Codecov upload until pushed PR checks rerun

---------

Co-authored-by: unraid <local@unraid.local>
Dependency-native upgrades clear the bun audit findings without broadening scope.

Verification:
- bun run typecheck
- bun run test:all
- bun run lint
- bun run build
- git diff --check

Summary by CodeRabbit

Release Notes
- New Features
- Security Improvements
- Bug Fixes
- Documentation
- Dependencies