
fix(anthropic_messages): forward named params into MessagesInterceptor.handle #27810

Open
samagana wants to merge 1 commit into BerriAI:litellm_internal_staging from samagana:fix-interceptor-named-params

Conversation

@samagana

@samagana samagana commented May 13, 2026

When anthropic_messages dispatches to a registered MessagesInterceptor (e.g. AdvisorOrchestrationHandler), it currently splats only **kwargs plus a handful of explicit positional/named args. Top-level parameters that are bound as named arguments on the public anthropic_messages function — thinking, metadata, stop_sequences, system, temperature, tool_choice, top_k, top_p — are silently dropped, because they live in local variables, not in kwargs.

This loses request fields on every interceptor sub-call. The most visible breakage: thinking={"type": "adaptive"} sent by clients (Claude Code, Anthropic SDK callers, etc.) is dropped on the executor sub-call, so downstream providers whose validation depends on thinking reject the request. Concretely, Vertex AI returns:

invalid_request_error: clear_thinking_20251015 strategy requires thinking to be enabled or adaptive

even though the caller correctly sent thinking: {type: adaptive}.

Fix: forward every named parameter explicitly into interceptor.handle, so the advisor (and any future interceptor) preserves the full request shape on its internal sub-calls.
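A heavily simplified, hypothetical sketch of the bug and the fix (the real `anthropic_messages` in litellm's `handler.py` has many more parameters, and the interceptor class here is a stand-in):

```python
# Hypothetical sketch; FakeInterceptor stands in for a registered
# MessagesInterceptor such as AdvisorOrchestrationHandler.

class FakeInterceptor:
    def handle(self, **params):
        return params  # echo back what actually arrived

interceptor = FakeInterceptor()

def anthropic_messages_buggy(messages, thinking=None, top_p=None, **kwargs):
    # Named params are bound to locals, not captured in **kwargs,
    # so the splat below silently drops them on the sub-call.
    return interceptor.handle(messages=messages, **kwargs)

def anthropic_messages_fixed(messages, thinking=None, top_p=None, **kwargs):
    # Forward every named parameter explicitly.
    return interceptor.handle(
        messages=messages, thinking=thinking, top_p=top_p, **kwargs
    )

before = anthropic_messages_buggy([], thinking={"type": "adaptive"})
after = anthropic_messages_fixed([], thinking={"type": "adaptive"})
# "thinking" is absent in `before` but preserved in `after`
```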

Tests: added regression test_named_params_forwarded_into_advisor_executor_subcall that drives the full anthropic_messages → interceptor → executor path and asserts all named params arrive in the executor sub-call. Verified the test fails on master (None vs {type: adaptive}) and passes with this fix. All 4 tests in test_advisor_integration.py pass.
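An illustrative, standalone sketch of the assertion pattern such a regression test uses (the real test patches litellm internals and drives the actual handler; every name below is a stand-in, not one of litellm's test fixtures):

```python
# Mock-only illustration: capture the kwargs the "executor" sub-call
# receives and assert the named params survived the dispatch.
from unittest.mock import MagicMock

executor = MagicMock()

def interceptor_handle(**params):
    executor(**params)  # the interceptor's internal sub-call

def anthropic_messages(messages, thinking=None, top_p=None, **kwargs):
    # fixed dispatch: named params forwarded explicitly
    interceptor_handle(messages=messages, thinking=thinking, top_p=top_p, **kwargs)

anthropic_messages(
    [{"role": "user", "content": "hello"}],
    thinking={"type": "adaptive"},
    top_p=0.9,
)

sub_call = executor.call_args.kwargs
assert sub_call["thinking"] == {"type": "adaptive"}  # None on master
assert sub_call["top_p"] == 0.9
```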

Relevant issues

None filed - discovered while debugging Claude Code + advisor against a litellm proxy fronting Vertex AI.

Linear ticket

N/A (external contributor)

Pre-Submission checklist

  • I have added testing in the tests/test_litellm/ directory
  • My PR passes all unit tests - make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem
  • I have requested a Greptile review by commenting @greptileai and received a Confidence Score of at least 4/5 before requesting a maintainer review

CI (LiteLLM team)

  • Branch creation CI run — Link:
  • CI run for the last commit — Link:
  • Merge / cherry-pick CI run — Links:

Screenshots / Proof of Fix

Reproduction (against an unpatched litellm proxy fronting vertex_ai/claude-sonnet-4-6):

Send a request that mirrors what Claude Code with /advisor enabled produces — thinking: {type: adaptive} + context_management.clear_thinking_20251015 + an advisor tool registration:

curl http://localhost:4000/v1/messages \
  -H "x-api-key: sk-local-test" \
  -H "anthropic-version: 2023-06-01" \
  -H "anthropic-beta: context-management-2025-06-27,advisor-tool-2026-03-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "max_tokens": 8000,
    "thinking": {"type": "adaptive"},
    "context_management": {"edits":[{"type":"clear_thinking_20251015","keep":"all"}]},
    "tools": [{"name":"advisor","type":"advisor_20260301","model":"claude-opus-4-7"}],
    "messages": [{"role":"user","content":"hello"}]
  }'

Before fix — 400:

invalid_request_error: `clear_thinking_20251015` strategy requires `thinking` to be enabled or adaptive

After fix — 200, response carries a thinking block from the model.

Where the drop happens (root cause): litellm/llms/anthropic/experimental_pass_through/messages/handler.py — the interceptor dispatch splats **kwargs but never forwards the named parameters of anthropic_messages, so thinking (and 7 others) are silently lost on the executor sub-call.
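The underlying Python behavior: an argument that matches a declared named parameter binds to that parameter and never lands in `**kwargs`, so a bare `**kwargs` splat cannot carry it forward:

```python
# Any caller value matching a declared parameter name binds to that
# parameter; only unmatched keys reach **kwargs.
def f(messages, thinking=None, **kwargs):
    return kwargs

leftover = f([], thinking={"type": "adaptive"}, extra_header="x")
# leftover == {"extra_header": "x"}; "thinking" never reaches kwargs
```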

Regression test (fails on master, passes with fix):

$ pytest tests/test_litellm/llms/anthropic/experimental_pass_through/messages/test_advisor_integration.py -v
test_anthropic_provider_bypasses_interceptor                            PASSED
test_full_dispatch_interceptor_fires_and_loop_completes                 PASSED
test_max_uses_enforced_through_full_handler                             PASSED
test_named_params_forwarded_into_advisor_executor_subcall               PASSED
4 passed


Type

🐛 Bug Fix

Changes

  • litellm/llms/anthropic/experimental_pass_through/messages/handler.py — forward thinking, metadata, stop_sequences, system, temperature, tool_choice, top_k, top_p explicitly into interceptor.handle(...) so they're not dropped by the **kwargs splat.
  • tests/test_litellm/llms/anthropic/experimental_pass_through/messages/test_advisor_integration.py — add test_named_params_forwarded_into_advisor_executor_subcall regression test.

@CLAassistant

CLAassistant commented May 13, 2026

CLA assistant check
All committers have signed the CLA.

@samagana samagana marked this pull request as ready for review May 13, 2026 03:40
@codecov

codecov Bot commented May 13, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


@samagana
Author

@greptileai

@greptile-apps
Contributor

greptile-apps Bot commented May 13, 2026

Greptile Summary

This PR fixes a silent parameter drop in the anthropic_messages interceptor dispatch path: named params bound to the function signature (thinking, metadata, system, temperature, stop_sequences, tool_choice, top_k, top_p) were never forwarded to interceptor.handle, causing downstream providers (e.g. Vertex AI) to reject requests that depend on those fields.

  • handler.py: Each of the 8 named params is now popped from request_kwargs immediately after the pre-request hook extraction step (mirroring the existing pattern for tools and stream), then passed explicitly into interceptor.handle. Popping before kwargs.update(request_kwargs) also prevents the TypeError: got multiple values for keyword argument that would fire when a hook returns one of these keys.
  • test_advisor_integration.py: Adds test_named_params_forwarded_into_advisor_executor_subcall (asserts all 8 params reach the executor) and test_pre_request_hook_override_does_not_collide_with_explicit_kwargs (asserts hook-returned overrides propagate without collisions). Both are purely mock-based.
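The extraction-before-merge pattern described above can be sketched as follows (simplified; the real code in handler.py covers all 8 keys plus tools and stream, and `handle` here stands in for interceptor.handle):

```python
# Simplified sketch: a pre-request hook may return overrides inside
# request_kwargs. Popping named keys out BEFORE merging lets the hook
# override win and avoids "got multiple values for keyword argument"
# when the explicit kwarg and the hook-returned key would both reach
# the splat.

def handle(**params):
    return params  # stand-in for interceptor.handle

def dispatch(thinking=None, temperature=None, request_kwargs=None, **kwargs):
    request_kwargs = dict(request_kwargs or {})
    thinking = request_kwargs.pop("thinking", thinking)
    temperature = request_kwargs.pop("temperature", temperature)
    kwargs.update(request_kwargs)  # safe: named keys already removed
    return handle(thinking=thinking, temperature=temperature, **kwargs)

# Hook override for "thinking" propagates without a TypeError.
result = dispatch(
    thinking={"type": "enabled"},
    request_kwargs={"thinking": {"type": "adaptive"}, "stream": True},
)
```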

Confidence Score: 5/5

The change is narrowly scoped to the interceptor dispatch path and follows the existing extraction pattern for tools/stream; no existing behavior is altered for callers that do not use an interceptor.

The extraction-before-merge pattern is already established in the codebase for tools and stream; extending it to the remaining 8 named params is straightforward. The two new regression tests cover forwarding correctness and hook-collision prevention, and are mock-only with no external dependencies.

No files require special attention.

Important Files Changed

Filename Overview
litellm/llms/anthropic/experimental_pass_through/messages/handler.py Pops all 8 named params from request_kwargs before merging into kwargs, then forwards them explicitly to interceptor.handle — correctly fixes both the silent-drop bug and the potential duplicate-keyword TypeError.
tests/test_litellm/llms/anthropic/experimental_pass_through/messages/test_advisor_integration.py Adds two regression tests: one asserting all 8 named params reach the executor sub-call, and one asserting that hook-returned overrides of named params don't cause a duplicate-keyword TypeError. All mocked — no real network calls.


@greptile-apps
Contributor

greptile-apps Bot commented May 13, 2026

Greptile Summary

Fixes silent dropping of 8 named parameters (thinking, metadata, stop_sequences, system, temperature, tool_choice, top_k, top_p) when anthropic_messages dispatches to a MessagesInterceptor. Because these are bound as named params — not captured in **kwargs — they were never forwarded to interceptor.handle, causing providers like Vertex AI to reject requests that depended on them.

  • handler.py: Adds 8 explicit keyword arguments to the interceptor.handle(...) call so they are preserved on every interceptor sub-call, matching the forwarding already done for api_key and api_base.
  • test_advisor_integration.py: Adds test_named_params_forwarded_into_advisor_executor_subcall, a mock-only regression test that exercises the full anthropic_messages → interceptor → executor path and asserts 7 of the 8 newly-forwarded params arrive in the executor sub-call (metadata is captured but not asserted).

Confidence Score: 4/5

Safe to merge; the change is a targeted two-line-group addition with a passing regression test and no structural rewrites.

The fix is correct and well-scoped: forwarding named params to the interceptor mirrors the existing pattern already used for api_key and api_base. The regression test validates the critical path. The only gap is that metadata is captured in the test but never asserted, meaning a future regression that drops only metadata would go unnoticed by the new test.

The test file's metadata assertion gap is the only item worth a second look before merging.

Important Files Changed

Filename Overview
litellm/llms/anthropic/experimental_pass_through/messages/handler.py Adds explicit forwarding of 8 named params (thinking, metadata, stop_sequences, system, temperature, tool_choice, top_k, top_p) into interceptor.handle() so they aren't silently dropped on sub-calls. Change is minimal and correct.
tests/test_litellm/llms/anthropic/experimental_pass_through/messages/test_advisor_integration.py Adds regression test for named-param forwarding; all params asserted except metadata, leaving a small coverage gap for that one forwarded field.


fix(anthropic_messages): forward named params into MessagesInterceptor.handle

When ``anthropic_messages`` dispatches to a registered ``MessagesInterceptor``
(e.g. ``AdvisorOrchestrationHandler``), it currently splats only ``**kwargs``
plus a handful of explicit positional/named args. Top-level parameters bound
as named arguments on ``anthropic_messages`` — ``thinking``, ``metadata``,
``stop_sequences``, ``system``, ``temperature``, ``tool_choice``, ``top_k``,
``top_p`` — are silently dropped, because they live in local variables, not
in ``kwargs``.

This loses request fields on every interceptor sub-call. The most visible
breakage: ``thinking={"type": "adaptive"}`` sent by clients (Claude Code,
Anthropic SDK callers, etc.) is dropped on the executor sub-call, so
downstream providers whose validation depends on ``thinking`` reject the
request. Concretely, Vertex AI returns:

    invalid_request_error: ``clear_thinking_20251015`` strategy requires
    ``thinking`` to be enabled or adaptive

even though the caller correctly sent ``thinking: {type: adaptive}``.

Fix
---
1. Extend the existing ``request_kwargs.pop()`` extraction (already used for
   ``tools`` and ``stream``) to cover all named params we forward to the
   interceptor. This honors pre-request hook overrides for any of those
   fields and prevents duplicate-keyword conflicts when ``**kwargs`` is
   splatted into ``interceptor.handle(...)``.
2. Forward every named parameter explicitly into ``interceptor.handle``, so
   the advisor (and any future interceptor) preserves the full request
   shape on its internal sub-calls.

Tests
-----
- ``test_named_params_forwarded_into_advisor_executor_subcall`` — drives the
  full ``anthropic_messages`` -> interceptor -> executor path and asserts
  all 8 named params arrive in the executor sub-call. Verified to fail on
  master (None vs caller-supplied values) and pass with this fix.
- ``test_pre_request_hook_override_does_not_collide_with_explicit_kwargs`` —
  simulates a ``CustomLogger.async_pre_request_hook`` returning ``thinking``,
  ``system``, ``temperature``. Without the new pops, the explicit-kwarg
  forwarding raises ``TypeError: got multiple values for keyword argument``.
  This test locks in the pop extraction.

All 5 tests in ``test_advisor_integration.py`` pass.
@samagana samagana force-pushed the fix-interceptor-named-params branch from 29a931e to fba2ea2 on May 13, 2026 at 03:53
@samagana
Author

@greptileai

@oss-pr-review-agent-shin
Contributor

🤖 litellm-agent: This PR is currently BLOCKED from merge.

Score: 4/5

Why blocked:

  • 1 unresolved reviewer concern (greptile) (unresolved_concern, -1 pts)

Details: Score docked for: 1 unresolved reviewer concern (greptile).

Fix the issues above and push an update — the bot will re-review automatically.

Note: This bot is still in beta and might not always work as expected. Please share any feedback via Slack.

@oss-pr-review-agent-shin
Contributor

🤖 litellm-agent: Auto-merge skipped — the staging branch shin_agent_oss_staging_05_13_2026 has 1 commit(s) not in your branch. Merging as-is would produce a confusing diff on the staging PR.

Please rebase your branch onto shin_agent_oss_staging_05_13_2026 and push; the agent will re-review automatically.
