feat(core): add per-model token usage to stream-json output #21839
NTaylorMullen merged 2 commits into google-gemini:main from
Conversation
Changelog
Activity
Code Review
This pull request is a great enhancement, adding per-model token usage statistics to the `stream-json` output format, which brings it to parity with the `json` format. The changes are well-implemented, including updates to the necessary types and tests. My review includes one suggestion to refactor the `convertToStreamStats` function for improved efficiency and code clarity.
Add a `models` field to `StreamStats` with per-model token breakdowns, bringing stream-json output to parity with json output format. Aggregated totals are derived from per-model stats to avoid duplicate logic.
2c0aa53 to 84b0e3b
Head branch was pushed to by a user without write access

Add a `models` field to `StreamStats` with per-model token breakdowns, bringing stream-json output to parity with json output format. Aggregated totals are derived from per-model stats to avoid duplicate logic.

Summary

The `stream-json` output format aggregates token usage into a single flat `stats` object, losing the per-model granularity that `json` output provides. This PR adds a `models` field to the `result` event's `StreamStats` with per-model token breakdowns. Existing aggregated fields are unchanged, making this a backward-compatible addition.

Details

- Add a `ModelStreamStats` interface with per-model token fields (`total_tokens`, `input_tokens`, `output_tokens`, `cached`, `input`).
- Add `models: Record<string, ModelStreamStats>` to `StreamStats`.
- Refactor `convertToStreamStats()` to build per-model stats first, then derive aggregated totals from them to avoid duplicate logic.

Related Issues
#21833
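The "derive aggregated totals from per-model stats" refactor can be sketched roughly as follows. This is a hedged illustration, not the actual `convertToStreamStats()` implementation: field names follow the PR description, but the exact shapes are assumptions (in particular, the `input` field is treated as a plain number here since the PR text does not show its type), and `aggregateFromModels` is a hypothetical helper name.

```typescript
// Hypothetical sketch: per-model stats are built first, then the flat
// aggregated totals are derived by summing them, so the two views
// cannot drift apart. Shapes are assumptions based on the PR summary.
interface ModelStreamStats {
  total_tokens: number;
  input_tokens: number;
  output_tokens: number;
  cached: number;
  input: number; // assumed numeric; the PR does not show its exact shape
}

interface StreamStats {
  total_tokens: number;
  input_tokens: number;
  output_tokens: number;
  cached: number;
  models: Record<string, ModelStreamStats>;
}

function aggregateFromModels(
  models: Record<string, ModelStreamStats>,
): StreamStats {
  const stats: StreamStats = {
    total_tokens: 0,
    input_tokens: 0,
    output_tokens: 0,
    cached: 0,
    models,
  };
  // Sum each per-model entry into the aggregated totals.
  for (const m of Object.values(models)) {
    stats.total_tokens += m.total_tokens;
    stats.input_tokens += m.input_tokens;
    stats.output_tokens += m.output_tokens;
    stats.cached += m.cached;
  }
  return stats;
}
```

Deriving the totals this way avoids the duplicate-accumulation logic the PR description mentions: there is a single loop, and the aggregated fields are guaranteed to equal the sum of the per-model breakdowns.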
How to Validate

Verify the `result` event includes per-model stats:

Pre-Merge Checklist
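One way to spot-check a `result` line might look like the following. The sample payload and model name are made up for illustration, assuming the field shapes described in the PR summary; this is not actual CLI output.

```typescript
// Illustrative check of a stream-json `result` line for per-model stats.
// The payload below is a fabricated example, not real gemini-cli output.
const line =
  '{"type":"result","stats":{"total_tokens":30,"models":' +
  '{"example-model":{"total_tokens":30,"input_tokens":10,' +
  '"output_tokens":20,"cached":0,"input":10}}}}';

const event = JSON.parse(line);
const models: Record<string, { total_tokens: number }> =
  event.stats?.models ?? {};

// The per-model totals should be present and should sum to the
// aggregated total on the flat stats object.
const sum = Object.values(models).reduce(
  (acc, m) => acc + m.total_tokens,
  0,
);
console.log(sum === event.stats.total_tokens); // expected: true
```

Because the PR derives the aggregated fields from the per-model entries, this sum check should hold for any `result` event emitted after the change.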