
feat(core): add per-model token usage to stream-json output#21839

Merged
NTaylorMullen merged 2 commits into google-gemini:main from yongruilin:stream-json-stats
Mar 10, 2026

Conversation

@yongruilin
Contributor

Add a models field to StreamStats with per-model token breakdowns, bringing stream-json output to parity with json output format. Aggregated totals are derived from per-model stats to avoid duplicate logic.

Summary

The stream-json output format aggregates token usage into a single flat stats object, losing per-model granularity that json output provides. This PR adds a models field to the result event's StreamStats with per-model token breakdowns. Existing aggregated fields are unchanged, making this a backward-compatible addition.

Details

  • Added ModelStreamStats interface with per-model token fields (total_tokens, input_tokens, output_tokens, cached, input).
  • Added models: Record<string, ModelStreamStats> to StreamStats.
  • Refactored convertToStreamStats() to build per-model stats first, then derive aggregated totals from them to avoid duplicate logic.
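The derive-totals-from-per-model-stats approach described above can be sketched as follows. This is a minimal illustration of the pattern, not the actual source: the field names (`total_tokens`, `input_tokens`, `output_tokens`, `cached`, `input`) come from this PR's description, but the surrounding types and the `aggregate` helper are assumptions.

```typescript
// Sketch of the shapes described in this PR. Field names follow the PR
// text; the exact upstream definitions are assumptions.
interface ModelStreamStats {
  total_tokens: number;
  input_tokens: number;
  output_tokens: number;
  cached: number;
  input: number;
}

interface StreamStats {
  total_tokens: number;
  input_tokens: number;
  output_tokens: number;
  models: Record<string, ModelStreamStats>;
}

// Build per-model stats first, then derive aggregated totals from them,
// so the per-model breakdown is the single source of truth.
function aggregate(models: Record<string, ModelStreamStats>): StreamStats {
  const stats: StreamStats = {
    total_tokens: 0,
    input_tokens: 0,
    output_tokens: 0,
    models,
  };
  for (const m of Object.values(models)) {
    stats.total_tokens += m.total_tokens;
    stats.input_tokens += m.input_tokens;
    stats.output_tokens += m.output_tokens;
  }
  return stats;
}
```

Because the totals are summed from the `models` record rather than tracked separately, the aggregated fields can never drift out of sync with the per-model breakdown.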

Related Issues

#21833

How to Validate

  1. Run with stream-json output and verify the result event includes per-model stats:
    NODE_ENV=development node scripts/start.js -p "hello" --output-format stream-json 2>/dev/null | grep '"type":"result"' | python3 -m json.tool
  2. Run unit tests:
    npx vitest run --coverage.enabled=false packages/core/src/output/stream-json-formatter.test.ts
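For step 1, the filtered result event should contain a `models` record alongside the existing aggregated fields, roughly along these lines. This is an illustrative sketch rather than captured output; the model names, token values, and any fields beyond those listed in this PR are assumptions.

```json
{
  "type": "result",
  "stats": {
    "total_tokens": 42,
    "input_tokens": 28,
    "output_tokens": 14,
    "models": {
      "gemini-2.5-pro": {
        "total_tokens": 30,
        "input_tokens": 20,
        "output_tokens": 10,
        "cached": 0,
        "input": 20
      },
      "gemini-2.5-flash": {
        "total_tokens": 12,
        "input_tokens": 8,
        "output_tokens": 4,
        "cached": 0,
        "input": 8
      }
    }
  }
}
```

Note that the aggregated fields equal the sums of the corresponding per-model fields, since they are derived from the `models` record.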

Pre-Merge Checklist

  • Updated relevant documentation and README (if needed)
  • Added/updated tests (if needed)
  • Noted breaking changes (if any)
  • Validated on required platforms/methods:
  • macOS
      • npm run
      • npx
      • Docker
      • Podman
      • Seatbelt
    • Windows
      • npm run
      • npx
      • Docker
    • Linux
      • npm run
      • npx
      • Docker

@yongruilin yongruilin requested a review from a team as a code owner March 10, 2026 06:53
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the stream-json output by integrating detailed per-model token usage statistics. Previously, this output only provided aggregated token counts, lacking the granularity available in other formats. The changes introduce a new models field within the StreamStats object, allowing users to see token breakdowns for each model involved in a session. This improvement provides more comprehensive data for analysis and aligns the stream-json output with the richer information provided by the standard json output.

Highlights

  • Enhanced Stream-JSON Output: The stream-json output format now includes a models field within the StreamStats object, providing granular per-model token usage breakdowns, bringing it to parity with the json output format.
  • New ModelStreamStats Interface: A new TypeScript interface, ModelStreamStats, has been introduced to define the structure for per-model token statistics, including total_tokens, input_tokens, output_tokens, cached, and input.
  • Refactored Token Aggregation Logic: The convertToStreamStats() function has been refactored to first build detailed per-model statistics and then derive the aggregated total token counts from these per-model stats, ensuring consistency and avoiding duplicate logic.
  • Updated Test Coverage: Existing unit tests have been updated, and new test cases added, to validate the correct inclusion and aggregation of per-model token usage in the stream-json output.
Changelog
  • packages/cli/src/utils/errors.test.ts
    • Updated mock StreamStats object to include the new models field, ensuring compatibility with the updated StreamStats interface.
  • packages/core/src/output/stream-json-formatter.test.ts
    • Updated mock StreamStats objects in various test cases to include the new models field.
    • Added new test assertions to verify the presence and correctness of per-model token statistics within the result event's StreamStats.
  • packages/core/src/output/stream-json-formatter.ts
    • Imported the newly defined ModelStreamStats type.
    • Modified the convertToStreamStats method to construct a models object containing ModelStreamStats for each model.
    • Refactored the aggregation logic within convertToStreamStats to calculate total token counts by summing values from the newly created per-model stats.
    • Added the models field to the StreamStats object returned by convertToStreamStats.
  • packages/core/src/output/types.ts
    • Defined a new TypeScript interface ModelStreamStats to represent token usage for individual models.
    • Extended the StreamStats interface by adding a models property, which is a record mapping model names to ModelStreamStats objects.
Activity
  • The pull request was created by yongruilin.
  • The author provided a detailed summary, specific implementation details, and a link to a related GitHub issue (stream-json should include per-model usage stats like --output-format json #21833).
  • Validation steps, including a bash command to test stream-json output and instructions for running unit tests, were provided.
  • The pre-merge checklist indicates that tests were added/updated and validated on Linux using npm run.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request is a great enhancement, adding per-model token usage statistics to the stream-json output format, which brings it to parity with the json format. The changes are well-implemented, including updates to the necessary types and tests. My review includes one suggestion to refactor the convertToStreamStats function for improved efficiency and code clarity.

@gemini-cli gemini-cli bot added the priority/p1 Important and should be addressed in the near term. label Mar 10, 2026
@jacob314 jacob314 left a comment

lgtm

@NTaylorMullen NTaylorMullen enabled auto-merge March 10, 2026 17:12
auto-merge was automatically disabled March 10, 2026 17:23

Head branch was pushed to by a user without write access

@NTaylorMullen NTaylorMullen enabled auto-merge March 10, 2026 17:25
@NTaylorMullen NTaylorMullen added this pull request to the merge queue Mar 10, 2026
Merged via the queue into google-gemini:main with commit 4da0366 Mar 10, 2026
27 checks passed
JaisalJain pushed a commit to JaisalJain/gemini-cli that referenced this pull request Mar 11, 2026
kunal-10-cloud pushed a commit to kunal-10-cloud/gemini-cli that referenced this pull request Mar 12, 2026
liamhelmer pushed a commit to badal-io/gemini-cli that referenced this pull request Mar 12, 2026
yashodipmore pushed a commit to yashodipmore/geemi-cli that referenced this pull request Mar 21, 2026
