fix(core): propagate BeforeModel hook model override end-to-end #24784
Summary of Changes (Gemini Code Assist)

This pull request completes the implementation of model overrides within the BeforeModel hook system. By ensuring that the model specified by a hook is correctly propagated through the event system and applied at the API call site, it resolves an issue where model overrides were silently ignored, and keeps request tracking and subsequent hook processing consistent.
Code Review
This pull request enables hooks to override the Gemini model by adding a `modifiedModel` field to the `BeforeModelHookResult` interface and implementing the corresponding logic in `GeminiChat`. The review feedback notes that when a model override occurs, the implementation should also re-evaluate content compatibility (e.g., modern feature support) and resolve model aliases to prevent potential API errors.
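The alias concern raised in the review could look roughly like the sketch below. `resolveModelAlias` and the alias table are hypothetical illustrations for this comment, not gemini-cli APIs: the point is only that a hook-supplied model string should pass through the same resolution step as a user-supplied one before reaching the API.

```typescript
// Illustrative alias table; the mappings here are assumptions, not the
// canonical gemini-cli alias list.
const MODEL_ALIASES: Record<string, string> = {
  flash: 'gemini-2.5-flash',
  'flash-lite': 'gemini-2.5-flash-lite',
};

// Hypothetical resolution step applied to a hook's model override so an
// unresolved alias never reaches the API call.
function resolveModelAlias(model: string): string {
  return MODEL_ALIASES[model] ?? model;
}

console.log(resolveModelAlias('flash'));         // resolves the alias
console.log(resolveModelAlias('gemini-2.5-pro')); // full names pass through
```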
Signed-off-by: krishdef7 <gargkrish06@gmail.com>
SandyTao520 left a comment:
LGTM, tested this works. thanks for your help!
@SandyTao520 The merge queue ejected the PR due to a timeout in …
fix(core): propagate BeforeModel hook model override end-to-end (google-gemini#24784)
Signed-off-by: krishdef7 <gargkrish06@gmail.com>
Co-authored-by: Sandy Tao <sandytao520@icloud.com>
Summary
Follow-up to #22326 (merged). That PR fixed the crash when a BeforeModel hook returns a partial `llm_request` with only a `model` field. This PR completes the fix so the model override actually takes effect at the API call site.

Details
`fromHookLLMRequest()` correctly preserved the model in the translated `GenerateContentParameters` after #22326, but the value was silently dropped further up the call chain, as identified by @SandyTao520 in the review of #22326:

- `BeforeModelHookResult` had no `model` field
- `fireBeforeModelEvent()` only forwarded `modifiedConfig` and `modifiedContents`; the model was never returned to the caller
- `makeApiCallAndProcessStream()` always used the original `modelToUse`, ignoring any model the hook intended to set

Fix: three targeted changes, no public API breakage:
- Add `modifiedModel?: string` to `BeforeModelHookResult`
- Forward `modifiedRequest?.model` from `fireBeforeModelEvent()`
- Apply `modifiedModel` to both `modelToUse` and `lastModelToUse` in `geminiChat.ts` before the API call; updating `lastModelToUse` ensures AfterModel hooks and request tracking also reflect the overridden model

Related Issues
Fixes #21847
Follow-up to #22326
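The propagation path described under Details can be sketched end-to-end as follows. This is a minimal stand-in, assuming simplified types: the interfaces and function bodies mirror the names in the PR but are not the real gemini-cli implementation.

```typescript
// Simplified stand-in for the hook result type; modifiedModel is the
// field this PR adds.
interface BeforeModelHookResult {
  modifiedConfig?: Record<string, unknown>;
  modifiedContents?: unknown[];
  modifiedModel?: string;
}

// Stand-in for fireBeforeModelEvent(): forward the model from the hook's
// modified request instead of dropping it.
function fireBeforeModelEvent(modifiedRequest?: { model?: string }): BeforeModelHookResult {
  return {
    modifiedModel: modifiedRequest?.model, // previously never returned to the caller
  };
}

// Stand-in for the call-site logic: apply the override to both modelToUse
// and lastModelToUse so AfterModel hooks and request tracking agree.
function resolveModel(originalModel: string, hookResult: BeforeModelHookResult) {
  const modelToUse = hookResult.modifiedModel ?? originalModel;
  const lastModelToUse = modelToUse;
  return { modelToUse, lastModelToUse };
}

const hookResult = fireBeforeModelEvent({ model: 'gemini-2.5-flash' });
const { modelToUse, lastModelToUse } = resolveModel('gemini-2.5-flash-lite', hookResult);
console.log(modelToUse, lastModelToUse); // both reflect the hook's override
```

When the hook returns no model, `modifiedModel` is `undefined` and the original model is used unchanged, which is why existing hooks are unaffected.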
How to Validate
1. Configure a BeforeModel hook that returns `{ hookSpecificOutput: { hookEventName: "BeforeModel", llm_request: { model: "gemini-2.5-flash" } } }`
2. Switch to `gemini-2.5-flash-lite` via `/model set gemini-2.5-flash-lite`
3. Run `/stats session` - requests now correctly show against `gemini-2.5-flash`, not `gemini-2.5-flash-lite`

Pre-Merge Checklist
- Hooks that do not return `llm_request` objects are unaffected
- Unit tests added (`hookSystem.test.ts`)
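For the validation steps above, a BeforeModel hook might emit the override payload like this. The JSON shape matches the PR's example; how hooks deliver it (here, JSON on stdout) is an assumption for illustration, not the documented hook contract.

```typescript
// Hypothetical BeforeModel hook script: prints the override payload from
// the PR's "How to Validate" section as JSON on stdout.
const hookOutput = {
  hookSpecificOutput: {
    hookEventName: 'BeforeModel',
    llm_request: { model: 'gemini-2.5-flash' }, // model the hook forces
  },
};

console.log(JSON.stringify(hookOutput));
```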