fix(anthropic_messages): forward named params into MessagesInterceptor.handle #27810
samagana wants to merge 1 commit
Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.
Greptile Summary: This PR fixes a silent parameter drop in the interceptor dispatch path.
Confidence Score: 5/5 — The change is narrowly scoped to the interceptor dispatch path and follows the existing extraction pattern for tools/stream; no existing behavior is altered for callers that do not use an interceptor. The extraction-before-merge pattern is already established in the codebase for tools and stream; extending it to the remaining 8 named params is straightforward. The two new regression tests cover forwarding correctness and hook-collision prevention, and are mock-only with no external dependencies. No files require special attention.
| Filename | Overview |
|---|---|
| litellm/llms/anthropic/experimental_pass_through/messages/handler.py | Pops all 8 named params from request_kwargs before merging into kwargs, then forwards them explicitly to interceptor.handle — correctly fixes both the silent-drop bug and the potential duplicate-keyword TypeError. |
| tests/test_litellm/llms/anthropic/experimental_pass_through/messages/test_advisor_integration.py | Adds two regression tests: one asserting all 8 named params reach the executor sub-call, and one asserting that hook-returned overrides of named params don't cause a duplicate-keyword TypeError. All mocked — no real network calls. |
Reviews (3): Last reviewed commit: "fix(anthropic_messages): forward named p..."
Greptile Summary: Fixes silent dropping of 8 named parameters (`thinking`, `metadata`, `stop_sequences`, `system`, `temperature`, `tool_choice`, `top_k`, `top_p`) on interceptor sub-calls.
Confidence Score: 4/5 — Safe to merge; the change is a targeted two-line-group addition with a passing regression test and no structural rewrites. The fix is correct and well-scoped: forwarding named params to the interceptor mirrors the existing pattern already used for api_key and api_base. The regression test validates the critical path. The only gap is that metadata is captured in the test but never asserted, meaning a future regression that drops only metadata would go unnoticed by the new test. The test file's metadata assertion gap is the only item worth a second look before merging.
| Filename | Overview |
|---|---|
| litellm/llms/anthropic/experimental_pass_through/messages/handler.py | Adds explicit forwarding of 8 named params (thinking, metadata, stop_sequences, system, temperature, tool_choice, top_k, top_p) into interceptor.handle() so they aren't silently dropped on sub-calls. Change is minimal and correct. |
| tests/test_litellm/llms/anthropic/experimental_pass_through/messages/test_advisor_integration.py | Adds regression test for named-param forwarding; all params asserted except metadata, leaving a small coverage gap for that one forwarded field. |
Reviews (2): Last reviewed commit: "fix(anthropic_messages): forward named p..."
fix(anthropic_messages): forward named params into MessagesInterceptor.handle
When ``anthropic_messages`` dispatches to a registered ``MessagesInterceptor``
(e.g. ``AdvisorOrchestrationHandler``), it currently splats only ``**kwargs``
plus a handful of explicit positional/named args. Top-level parameters bound
as named arguments on ``anthropic_messages`` — ``thinking``, ``metadata``,
``stop_sequences``, ``system``, ``temperature``, ``tool_choice``, ``top_k``,
``top_p`` — are silently dropped, because they live in local variables, not
in ``kwargs``.
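The drop follows directly from Python's argument binding; a minimal, self-contained sketch (function and parameter names are illustrative, not the actual handler signature):

```python
# Named parameters are bound to local variables, so they never appear
# in **kwargs -- forwarding only the splat silently drops them.
def anthropic_messages_sketch(thinking=None, temperature=None, **kwargs):
    # `thinking` and `temperature` live in locals here, NOT in `kwargs`.
    return dict(kwargs)  # what a **kwargs-only forward would pass on

forwarded = anthropic_messages_sketch(
    thinking={"type": "adaptive"}, temperature=0.2, extra_body={"x": 1}
)
print(forwarded)  # {'extra_body': {'x': 1}} -- thinking/temperature lost
```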
This loses request fields on every interceptor sub-call. The most visible
breakage: ``thinking={"type": "adaptive"}`` sent by clients (Claude Code,
Anthropic SDK callers, etc.) is dropped on the executor sub-call, so
downstream providers whose validation depends on ``thinking`` reject the
request. Concretely, Vertex AI returns:
invalid_request_error: ``clear_thinking_20251015`` strategy requires
``thinking`` to be enabled or adaptive
even though the caller correctly sent ``thinking: {type: adaptive}``.
Fix
---
1. Extend the existing ``request_kwargs.pop()`` extraction (already used for
``tools`` and ``stream``) to cover all named params we forward to the
interceptor. This honors pre-request hook overrides for any of those
fields and prevents duplicate-keyword conflicts when ``**kwargs`` is
splatted into ``interceptor.handle(...)``.
2. Forward every named parameter explicitly into ``interceptor.handle``, so
the advisor (and any future interceptor) preserves the full request
shape on its internal sub-calls.
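Steps 1–2 can be sketched as follows; this is a hedged illustration of the pop-then-forward pattern, with hypothetical helper and variable names rather than the actual handler code:

```python
# Illustrative names; the real dispatch lives in
# litellm/llms/anthropic/experimental_pass_through/messages/handler.py.
NAMED_PARAMS = (
    "thinking", "metadata", "stop_sequences", "system",
    "temperature", "tool_choice", "top_k", "top_p",
)

def dispatch_to_interceptor(handle, request_kwargs, caller_values):
    # Pop each named param out of request_kwargs first: a pre-request
    # hook override (if present) wins over the caller's original value,
    # and nothing is left behind to collide with the explicit keywords.
    explicit = {
        name: request_kwargs.pop(name, caller_values.get(name))
        for name in NAMED_PARAMS
    }
    # Safe splat: no duplicate-keyword TypeError is possible because
    # every NAMED_PARAMS key was removed from request_kwargs above.
    return handle(**explicit, **request_kwargs)
```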
Tests
-----
- ``test_named_params_forwarded_into_advisor_executor_subcall`` — drives the
full ``anthropic_messages`` -> interceptor -> executor path and asserts
all 8 named params arrive in the executor sub-call. Verified to fail on
master (None vs caller-supplied values) and pass with this fix.
- ``test_pre_request_hook_override_does_not_collide_with_explicit_kwargs`` —
simulates a ``CustomLogger.async_pre_request_hook`` returning ``thinking``,
``system``, ``temperature``. Without the new pops, the explicit-kwarg
forwarding raises ``TypeError: got multiple values for keyword argument``.
This test locks in the pop extraction.
All 5 tests in ``test_advisor_integration.py`` pass.
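The collision the second test guards against can be reproduced in isolation; a minimal sketch with all names hypothetical and no litellm imports:

```python
def buggy_dispatch(handle, request_kwargs, thinking=None):
    # Pre-fix shape: `thinking` is passed explicitly AND may still be
    # present in request_kwargs (e.g. returned by a pre-request hook).
    return handle(thinking=thinking, **request_kwargs)

def fixed_dispatch(handle, request_kwargs, thinking=None):
    # Post-fix shape: pop first, so a hook override wins and the
    # explicit keyword cannot collide with the splat.
    thinking = request_kwargs.pop("thinking", thinking)
    return handle(thinking=thinking, **request_kwargs)

captured = {}
def fake_handle(**kw):
    captured.update(kw)

hook_overrides = {"thinking": {"type": "adaptive"}}

try:
    buggy_dispatch(fake_handle, dict(hook_overrides))
except TypeError as exc:
    # duplicate keyword: "got multiple values for keyword argument"
    print(exc)

fixed_dispatch(fake_handle, dict(hook_overrides))
print(captured["thinking"])  # {'type': 'adaptive'}
```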
Force-pushed from 29a931e to fba2ea2.
🤖 litellm-agent: This PR is currently BLOCKED from merge. Score: 4/5 ❌
Why blocked: score docked for 1 unresolved reviewer concern (greptile). Fix the issues above and push an update — the bot will re-review automatically.
🤖 litellm-agent: Auto-merge skipped — the staging branch has moved ahead. Please rebase your branch onto it.
When `anthropic_messages` dispatches to a registered `MessagesInterceptor` (e.g. `AdvisorOrchestrationHandler`), it currently splats only `**kwargs` plus a handful of explicit positional/named args. Top-level parameters that are bound as named arguments on the public `anthropic_messages` function — `thinking`, `metadata`, `stop_sequences`, `system`, `temperature`, `tool_choice`, `top_k`, `top_p` — are silently dropped, because they live in local variables, not in `kwargs`.

This loses request fields on every interceptor sub-call. The most visible breakage: `thinking={"type": "adaptive"}` sent by clients (Claude Code, Anthropic SDK callers, etc.) is dropped on the executor sub-call, so downstream providers whose validation depends on `thinking` reject the request. Concretely, Vertex AI returns an `invalid_request_error` ("`clear_thinking_20251015` strategy requires `thinking` to be enabled or adaptive") even though the caller correctly sent `thinking: {type: adaptive}`.

Fix: forward every named parameter explicitly into `interceptor.handle`, so the advisor (and any future interceptor) preserves the full request shape on its internal sub-calls.

Tests: added regression test `test_named_params_forwarded_into_advisor_executor_subcall` that drives the full `anthropic_messages` → interceptor → executor path and asserts all named params arrive in the executor sub-call. Verified the test fails on master (`None` vs `{type: adaptive}`) and passes with this fix. All 4 tests in `test_advisor_integration.py` pass.

Relevant issues
None filed - discovered while debugging Claude Code + advisor against a litellm proxy fronting Vertex AI.
Linear ticket
N/A (external contributor)
Pre-Submission checklist
- Added a relevant unit test in the `tests/test_litellm/` directory
- Ran `make test-unit`
- Ran `@greptileai` and received a Confidence Score of at least 4/5 before requesting a maintainer review
- CI (LiteLLM team)
Screenshots / Proof of Fix
Reproduction (against an unpatched litellm proxy fronting `vertex_ai/claude-sonnet-4-6`):

Send a request that mirrors what Claude Code with `/advisor` enabled produces — `thinking: {type: adaptive}` + `context_management.clear_thinking_20251015` + an advisor tool registration.

Before fix — 400:
After fix — 200, response carries a thinking block from the model.
Where the drop happens (root cause): `litellm/llms/anthropic/experimental_pass_through/messages/handler.py` — the interceptor dispatch splats `**kwargs` but never forwards the named parameters of `anthropic_messages`, so `thinking` (and 7 others) are silently lost on the executor sub-call.

Regression test (fails on master, passes with fix):
Screenshot

Type
🐛 Bug Fix
Changes
- `litellm/llms/anthropic/experimental_pass_through/messages/handler.py` — forward `thinking`, `metadata`, `stop_sequences`, `system`, `temperature`, `tool_choice`, `top_k`, `top_p` explicitly into `interceptor.handle(...)` so they're not dropped by the `**kwargs` splat.
- `tests/test_litellm/llms/anthropic/experimental_pass_through/messages/test_advisor_integration.py` — add `test_named_params_forwarded_into_advisor_executor_subcall` regression test.