
[router][grpc] Refine streaming processes #11277

Merged
CatherineSue merged 1 commit into main from chang/stream on Oct 6, 2025

Conversation

@CatherineSue
Collaborator

@CatherineSue CatherineSue commented Oct 6, 2025

Motivation

In streaming, if the model is still in the reasoning stage, we can skip tool call parsing.

Modifications

  • Add an is_in_reasoning method to the reasoning parsers.
  • Extract is_in_reasoning after calling reasoning_parser.parse_reasoning_streaming_incremental.
  • Check in_reasoning, in addition to tool_choice and tools, before tool call parsing.
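The modifications above can be sketched as follows. This is a minimal, hypothetical stand-in, not the PR's actual implementation: only the trait method name `is_in_reasoning` and the incremental-parsing idea come from the PR; `SimpleReasoningParser` and its `<think>`-marker handling are invented for illustration.

```rust
// Hypothetical sketch: a reasoning parser that can report whether the
// stream is still inside reasoning content, so tool call parsing can
// be skipped while is_in_reasoning() returns true.
trait ReasoningParser {
    /// Returns true while the parser is still inside reasoning content.
    fn is_in_reasoning(&self) -> bool;
}

/// Toy parser that tracks a `<think>...</think>` style block (assumed
/// markers; real parsers are model-specific).
struct SimpleReasoningParser {
    in_reasoning: bool,
}

impl SimpleReasoningParser {
    fn new() -> Self {
        Self { in_reasoning: false }
    }

    /// Toy incremental step: flips state on open/close markers.
    fn parse_chunk(&mut self, chunk: &str) {
        if chunk.contains("<think>") {
            self.in_reasoning = true;
        }
        if chunk.contains("</think>") {
            self.in_reasoning = false;
        }
    }
}

impl ReasoningParser for SimpleReasoningParser {
    fn is_in_reasoning(&self) -> bool {
        self.in_reasoning
    }
}

fn main() {
    let mut parser = SimpleReasoningParser::new();
    parser.parse_chunk("<think>step 1");
    assert!(parser.is_in_reasoning());
    parser.parse_chunk("done</think>answer");
    assert!(!parser.is_in_reasoning());
}
```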

Accuracy Tests

Benchmarking and Profiling

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @CatherineSue, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request optimizes the streaming process within the router by integrating a mechanism to detect when a language model is in a 'reasoning stage.' By exposing this state, the system can intelligently bypass tool call parsing during reasoning, which streamlines operations and enhances overall performance for streaming responses.

Highlights

  • New Reasoning State Method: Introduced an is_in_reasoning method to the ReasoningParser trait and its concrete implementations, allowing parsers to report if they are currently processing reasoning content.
  • Enhanced Reasoning Stream Processing: The process_reasoning_stream function in both pd_router.rs and router.rs was updated to return the current reasoning state (in_reasoning boolean) in addition to the processed text and optional chunk.
  • Optimized Tool Call Parsing: Tool call parsing is now conditionally skipped when the model is identified as being in a reasoning stage, preventing unnecessary processing and improving streaming efficiency.
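The gating described in the last highlight reduces to a simple predicate. The sketch below is hypothetical: `in_reasoning`, `tools`, and `tool_choice` mirror names from the PR description, but the function and its parameter shapes are invented for illustration.

```rust
// Hedged sketch of the conditional skip: tool call parsing runs only
// when the model has left the reasoning stage AND the request actually
// configured tools with a usable tool_choice.
fn should_parse_tool_calls(
    in_reasoning: bool,
    tool_choice_is_none: bool,
    has_tools: bool,
) -> bool {
    !in_reasoning && !tool_choice_is_none && has_tools
}

fn main() {
    // While reasoning, parsing is skipped regardless of tools.
    assert!(!should_parse_tool_calls(true, false, true));
    // After reasoning, parsing proceeds when tools are configured.
    assert!(should_parse_tool_calls(false, false, true));
    // No tools declared: nothing to parse.
    assert!(!should_parse_tool_calls(false, false, false));
}
```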
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces an optimization to skip tool call parsing during the model's reasoning phase in streaming mode. This is achieved by adding an is_in_reasoning method to the ReasoningParser trait and its implementations. The gRPC routers then use this state to conditionally bypass tool parsing logic.

The changes are logical and correctly implement the intended optimization. My review includes a couple of suggestions to improve maintainability and robustness:

  • Refactoring duplicated code between the two gRPC router implementations.
  • Improving error handling for mutex locks to prevent potential panics.

Overall, this is a good enhancement to the streaming process.

```diff
 if let Some(pooled_parser) = reasoning_parsers.get(&index) {
-    let parse_result = {
+    let (parse_result, in_reasoning) = {
         let mut parser = pooled_parser.lock().unwrap();
```


Severity: high

Using .unwrap() on a Mutex lock can lead to a panic if the lock is poisoned (i.e., if another thread panics while holding the lock). This could crash the entire router service. It's safer to handle the PoisonError gracefully to make the service more robust.

Consider recovering from the poisoned state by taking ownership of the mutex data. This same issue exists in sgl-router/src/routers/grpc/router.rs.

Suggested change:

```diff
-let mut parser = pooled_parser.lock().unwrap();
+let mut parser = match pooled_parser.lock() {
+    Ok(guard) => guard,
+    Err(poisoned) => {
+        warn!("Reasoning parser lock was poisoned. Recovering by taking ownership of the data.");
+        poisoned.into_inner()
+    }
+};
```
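The recovery pattern in the suggestion can be demonstrated in isolation. The sketch below is self-contained and hypothetical with respect to the router code: it deliberately poisons a Mutex and then recovers via `PoisonError::into_inner`, substituting `eprintln!` for the `warn!` macro (which in the router presumably comes from a logging crate).

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Demonstration of recovering from a poisoned Mutex instead of
// panicking via .unwrap(), as the suggested change does.
fn main() {
    let data = Arc::new(Mutex::new(0u32));

    // A thread panics while holding the lock, poisoning the mutex.
    let poisoner = Arc::clone(&data);
    let _ = thread::spawn(move || {
        let _guard = poisoner.lock().unwrap();
        panic!("simulated parser panic");
    })
    .join();

    // unwrap() here would propagate the panic; instead, recover by
    // taking ownership of the guard from the PoisonError.
    let mut guard = match data.lock() {
        Ok(g) => g,
        Err(poisoned) => {
            eprintln!("lock was poisoned; recovering");
            poisoned.into_inner()
        }
    };
    *guard += 1;
    assert_eq!(*guard, 1);
}
```

Note that `into_inner` simply hands back the guard; whether the data behind it is still in a consistent state after the other thread's panic is application-specific, which is why the suggestion pairs recovery with a warning log.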

```diff
 if let Some(pooled_parser) = reasoning_parsers.get(&index) {
-    let parse_result = {
+    let (parse_result, in_reasoning) = {
         let mut parser = pooled_parser.lock().unwrap();
```


Severity: high

Calling .unwrap() on a Mutex lock is risky in a concurrent environment. If another thread panics while holding the lock, this unwrap() will also panic, potentially bringing down the service. To improve robustness, you should handle the PoisonError that lock() can return.

Suggested change:

```diff
-let mut parser = pooled_parser.lock().unwrap();
+let mut parser = match pooled_parser.lock() {
+    Ok(guard) => guard,
+    Err(poisoned) => {
+        warn!("Reasoning parser lock was poisoned. Recovering by taking ownership of the data.");
+        poisoned.into_inner()
+    }
+};
```

```diff
     model: &str,
     created: u64,
-) -> (String, Option<ChatCompletionStreamResponse>) {
+) -> (String, Option<ChatCompletionStreamResponse>, bool) {
```


Severity: medium

The implementation of process_reasoning_stream (lines 486-549) is identical to the one in sgl-router/src/routers/grpc/pd_router.rs (lines 1172-1232). To avoid code duplication and improve maintainability, consider moving this logic to a shared utility function, for example in sgl-router/src/routers/grpc/utils.rs.

You could define a free function that takes &ReasoningParserFactory as an argument and then call it from both GrpcRouter and GrpcPDRouter.
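The suggested free function might look roughly like the sketch below. This is hypothetical: the real `ReasoningParserFactory` and `ChatCompletionStreamResponse` types live in sgl-router, so empty stand-in types are used here, and the body is a stub rather than the actual parsing logic.

```rust
// Stand-ins for sgl-router types (assumed names from the review comment).
struct ReasoningParserFactory;
struct ChatCompletionStreamResponse;

/// Possible shape of a shared helper in grpc/utils.rs that both
/// GrpcRouter and GrpcPDRouter could call, returning the processed
/// text, an optional stream chunk, and the in_reasoning flag.
fn process_reasoning_stream(
    _factory: &ReasoningParserFactory,
    text: &str,
    _model: &str,
    _created: u64,
) -> (String, Option<ChatCompletionStreamResponse>, bool) {
    // Real logic would run the pooled reasoning parser; this stub
    // just echoes its input to keep the sketch compilable.
    (text.to_string(), None, false)
}

fn main() {
    let factory = ReasoningParserFactory;
    let (text, chunk, in_reasoning) =
        process_reasoning_stream(&factory, "hello", "model-x", 0);
    assert_eq!(text, "hello");
    assert!(chunk.is_none());
    assert!(!in_reasoning);
}
```

Taking `&ReasoningParserFactory` as a parameter (rather than a `&self` method) is what lets both router structs share one implementation without a common trait.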

@CatherineSue CatherineSue merged commit b07c9c7 into main Oct 6, 2025
65 of 70 checks passed
@CatherineSue CatherineSue deleted the chang/stream branch October 6, 2025 22:15
ch-tiger1 pushed a commit to ch-tiger1/sglang that referenced this pull request Oct 9, 2025
lpc0220 pushed a commit to lpc0220/sglang that referenced this pull request Oct 29, 2025