
fix(core): propagate BeforeModel hook model override end-to-end#24784

Merged
SandyTao520 merged 3 commits into google-gemini:main from krishdef7:fix/before-model-hook-model-override-e2e
Apr 7, 2026

Conversation

@krishdef7
Contributor

Summary

Follow-up to #22326 (merged). That PR fixed the crash when a BeforeModel hook returns a partial llm_request with only a model field. This PR completes the fix so the model override actually takes effect at the API call site.

Details

fromHookLLMRequest() correctly preserved the model in the translated GenerateContentParameters after #22326, but the value was silently dropped further up the call chain, as @SandyTao520 identified in the review of #22326:

  1. BeforeModelHookResult had no model field
  2. fireBeforeModelEvent() only forwarded modifiedConfig and modifiedContents; the model was never returned to the caller
  3. makeApiCallAndProcessStream() always used the original modelToUse, ignoring any model the hook intended to set

Fix: Three targeted changes, no public API breakage:

  • Added modifiedModel?: string to BeforeModelHookResult
  • Forward modifiedRequest?.model from fireBeforeModelEvent()
  • Apply modifiedModel to both modelToUse and lastModelToUse in geminiChat.ts before the API call; updating lastModelToUse ensures AfterModel hooks and request tracking also reflect the overridden model
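The three changes above can be sketched as follows. This is a minimal illustration, not the actual source: the shapes of LlmRequestOverride and resolveModel() are simplified assumptions, and only the field and function names mentioned in this PR (modifiedModel, BeforeModelHookResult, fireBeforeModelEvent) are taken from it.

```typescript
// Simplified stand-in for the hook's partial llm_request (assumed shape).
interface LlmRequestOverride {
  model?: string;
  config?: Record<string, unknown>;
  contents?: unknown[];
}

interface BeforeModelHookResult {
  modifiedConfig?: Record<string, unknown>;
  modifiedContents?: unknown[];
  modifiedModel?: string; // field added by this PR
}

// Previously only config/contents were forwarded; the model was dropped here.
function fireBeforeModelEvent(
  modifiedRequest?: LlmRequestOverride,
): BeforeModelHookResult {
  return {
    modifiedConfig: modifiedRequest?.config,
    modifiedContents: modifiedRequest?.contents,
    modifiedModel: modifiedRequest?.model, // now forwarded to the caller
  };
}

// Hypothetical call-site helper: in geminiChat.ts this value is assigned to
// both modelToUse and lastModelToUse, so AfterModel hooks and request
// tracking see the overridden model.
function resolveModel(
  originalModel: string,
  hook: BeforeModelHookResult,
): string {
  return hook.modifiedModel ?? originalModel;
}
```

With this in place, a hook that returns only a model field changes which model the subsequent API call is attributed to, while hooks that return nothing leave the original model untouched.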

Related Issues

Fixes #21847
Follow-up to #22326

How to Validate

  1. Add a BeforeModel hook returning:
    { hookSpecificOutput: { hookEventName: "BeforeModel", llm_request: { model: "gemini-2.5-flash" } } }
  2. Set active model to gemini-2.5-flash-lite via /model set gemini-2.5-flash-lite
  3. Send a message
  4. Run /stats session - requests now correctly show against gemini-2.5-flash, not gemini-2.5-flash-lite
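The payload from step 1 can be built programmatically; a small sketch, assuming the hook emits this object as JSON (the helper name buildBeforeModelOverride is hypothetical, the field names are copied verbatim from the example above):

```typescript
// Builds the BeforeModel hook output from step 1 of the validation recipe.
function buildBeforeModelOverride(model: string) {
  return {
    hookSpecificOutput: {
      hookEventName: 'BeforeModel',
      llm_request: { model },
    },
  };
}

const payload = buildBeforeModelOverride('gemini-2.5-flash');
console.log(JSON.stringify(payload));
```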

Pre-Merge Checklist

  • No breaking changes - hooks returning full llm_request objects are unaffected
  • All hook tests pass (146/146)
  • Updated relevant documentation and README (not needed)
  • Added/updated tests (not needed - behavior covered by existing hookSystem.test.ts)

@krishdef7 krishdef7 requested a review from a team as a code owner April 6, 2026 20:25
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request completes the implementation of model overrides within the BeforeModel hook system. By ensuring that the model specified by a hook is correctly propagated through the event system and applied at the API call site, it resolves an issue where model overrides were being silently ignored, ensuring consistent behavior for request tracking and subsequent hook processing.

Highlights

  • Interface Update: Added the modifiedModel field to the BeforeModelHookResult interface to allow hooks to specify model overrides.
  • Event Propagation: Updated fireBeforeModelEvent in hookSystem.ts to correctly extract and forward the modified model from the request.
  • API Call Integration: Modified geminiChat.ts to apply the modifiedModel override to both modelToUse and lastModelToUse before executing the API call.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request enables hooks to override the Gemini model by adding a modifiedModel field to the BeforeModelHookResult interface and implementing the corresponding logic in GeminiChat. Feedback indicates that the current implementation needs to re-evaluate content compatibility (e.g., modern feature support) and resolve model aliases when a model override occurs to prevent potential API errors.

Signed-off-by: krishdef7 <gargkrish06@gmail.com>
@krishdef7 krishdef7 force-pushed the fix/before-model-hook-model-override-e2e branch from 51ae1d0 to 04c9e4f on April 6, 2026 20:37
@gemini-cli gemini-cli bot added priority/p1 Important and should be addressed in the near term. help wanted We will accept PRs from all issues marked as "help wanted". Thanks for your support! labels Apr 6, 2026
Contributor

@SandyTao520 SandyTao520 left a comment


LGTM, tested this works. thanks for your help!

@SandyTao520 SandyTao520 added this pull request to the merge queue Apr 6, 2026
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Apr 7, 2026
@gemini-cli gemini-cli bot added the area/agent Issues related to Core Agent, Tools, Memory, Sub-Agents, Hooks, Agent Quality label Apr 7, 2026
@krishdef7
Contributor Author

@SandyTao520 The merge queue ejected the PR due to a timeout in sandboxManager.integration.test.ts > blocks access outside the workspace (60s timeout on Windows sandbox). This is a pre-existing flaky test; it's unrelated to the hook changes and passes on every other platform. The branch itself shows all 26 checks passing. Could you re-add it to the merge queue when you get a chance?

@SandyTao520 SandyTao520 enabled auto-merge April 7, 2026 16:48
@SandyTao520 SandyTao520 added this pull request to the merge queue Apr 7, 2026
Merged via the queue into google-gemini:main with commit 68fef87 Apr 7, 2026
43 of 47 checks passed
warrenzhu25 pushed a commit to warrenzhu25/gemini-cli that referenced this pull request Apr 9, 2026
…le-gemini#24784)

Signed-off-by: krishdef7 <gargkrish06@gmail.com>
Co-authored-by: Sandy Tao <sandytao520@icloud.com>

Labels

  • area/agent: Issues related to Core Agent, Tools, Memory, Sub-Agents, Hooks, Agent Quality
  • help wanted: We will accept PRs from all issues marked as "help wanted". Thanks for your support!
  • priority/p1: Important and should be addressed in the near term.


Development

Successfully merging this pull request may close these issues.

[Bug] BeforeModel hook ignores llm_request.model override

2 participants