
Conversation

@fjybiocs
Contributor

Motivation

Piecewise CUDA graphs can be used to accelerate prefill computation.

But previously, CUDA graph was always disabled for the prefill server in disaggregation mode. This PR allows using CUDA graph when piecewise CUDA graph is enabled.

Modifications

  • Modified the logic in server_args.py so that CUDA graph is disabled for the prefill server only when enable_piecewise_cuda_graph is not set (a sketch of the adjusted check follows below)
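
A minimal sketch of the adjusted check, assuming attribute names like disaggregation_mode, enable_piecewise_cuda_graph, and disable_cuda_graph; the actual code in server_args.py may differ:

```python
import logging

logger = logging.getLogger(__name__)

def maybe_disable_cuda_graph_for_prefill(server_args):
    """Hypothetical helper mirroring the adjusted check described above.

    The attribute names are assumptions based on this PR's description,
    not the exact server_args.py implementation.
    """
    if server_args.disaggregation_mode == "prefill" and not server_args.enable_piecewise_cuda_graph:
        # Only force-disable CUDA graph when piecewise CUDA graph was not requested.
        server_args.disable_cuda_graph = True
        logger.warning(
            "Disabling CUDA graph for the prefill server; pass "
            "--enable-piecewise-cuda-graph to use piecewise CUDA graph instead."
        )
```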

serverance.fu added 2 commits November 16, 2025 22:22
@gemini-code-assist
Contributor

Summary of Changes

Hello @fjybiocs, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a crucial change to how CUDA graphs are handled within the prefill server, specifically when operating in disaggregation mode. By modifying the conditional logic, it now permits the utilization of piecewise CUDA graphs, which were previously unconditionally disabled. This enhancement is designed to unlock potential performance benefits for prefill computations by allowing more granular control over CUDA graph enablement.

Highlights

  • Piecewise CUDA Graph Enablement: This pull request enables the use of piecewise CUDA graphs for the prefill server, which was previously always disabled in disaggregation mode.
  • Conditional CUDA Graph Disabling: The logic in server_args.py has been updated to only disable CUDA graphs for the prefill server if the enable_piecewise_cuda_graph flag is not set, allowing for performance improvements when piecewise CUDA graphs are desired.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request enables the use of piecewise CUDA graphs for the prefill server in disaggregation mode. The change correctly modifies the logic in server_args.py to conditionally disable the standard CUDA graph only when piecewise CUDA graph is not enabled. The updated warning message is also more specific and helpful. The implementation is clean and directly addresses the motivation. Good work!

Collaborator

@ishandhanani ishandhanani left a comment

Can you share an example of you setting this and getting correct inference? It would be nice to get a basic functional and accuracy test

@fjybiocs
Contributor Author

Can you share an example of you setting this and getting correct inference? It would be nice to get a basic functional and accuracy test

Thanks for pointing this out! I'll add proper tests soon.

@ShangmingCai
Collaborator

@fjybiocs We previously disabled CUDA graph by default because the input sequence length in the prefill phase varies, so the computation graph is not static, and the kernel launch overhead is not significant compared to the prefill computation cost; enabling CUDA graph therefore does not provide significant optimization benefits.

I agree that piece-wise CUDA graph has the potential to make prefill benefit from CUDA graph as well. But we should not enable it by default before this technique becomes the de facto standard. Can you run some tests to verify the effects?

@fjybiocs
Contributor Author

fjybiocs commented Nov 17, 2025

@fjybiocs We previously disabled CUDA graph by default because the input sequence length in the prefill phase varies, so the computation graph is not static, and the kernel launch overhead is not significant compared to the prefill computation cost; enabling CUDA graph therefore does not provide significant optimization benefits.

I agree that piece-wise CUDA graph has the potential to make prefill benefit from CUDA graph as well. But we should not enable it by default before this technique becomes the de facto standard. Can you run some tests to verify the effects?

I’ve done some tests on H100 with DeepSeek-MoE-16B, and the results seem to support enabling piece-wise CUDA graph for prefill.

Without piece-wise graph, when the input length is in the range of ~1–512 tokens, the prefill latency stays almost flat at around 40 ms. The latency only starts to increase when the input grows beyond this range. This suggests that for shorter inputs, the bottleneck is likely on the host side rather than the GPU kernels themselves.

After enabling piece-wise CUDA graph, the improvements are quite noticeable:

  • For input length around 32, the latency drops to ~18 ms
  • For 512 tokens, it’s around ~25 ms

These are substantial gains, and the “flat latency plateau” disappeared.


There seems to be a small misunderstanding here: this PR does not enable piece-wise graph by default. The previous behavior was that even if a user explicitly enabled piece-wise graph, as long as PD-separation was on, the P node would forcibly disable CUDA graph, making it impossible for the user to actually use piece-wise graph on the P side, even though it works correctly and provides clear benefits in certain scenarios.

This PR simply fixes that issue—when the user explicitly requests piece-wise graph, it will now be correctly enabled on the P node instead of being silently turned off.
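
To make the intended usage concrete, here is a rough sketch of how a prefill node would opt in after this change (field names are assumptions based on the discussion above, not taken verbatim from the diff):

```python
# Hypothetical usage sketch: request piecewise CUDA graph on a prefill node.
# Field names mirror the server args discussed above and may not match the
# actual ServerArgs definition exactly.
from sglang.srt.server_args import ServerArgs

prefill_args = ServerArgs(
    model_path="deepseek-ai/deepseek-moe-16b-base",  # example model, assumed
    disaggregation_mode="prefill",
    enable_piecewise_cuda_graph=True,
)
# With this PR, the prefill node no longer forces disable_cuda_graph=True when
# enable_piecewise_cuda_graph is set, so piecewise CUDA graph stays usable.
```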

@ishandhanani
Collaborator

Ah yes, I misunderstood. I see that now we simply disable CUDA graphs for P if piecewise isn't turned on. This is a safe change.

Collaborator

@ShangmingCai ShangmingCai left a comment

LGTM

@ShangmingCai
Collaborator

ShangmingCai commented Nov 29, 2025

Since this test is not in the suite, I think we don't need to waste CI resources on a simple server arg change. Verified by @ishandhanani

@ShangmingCai ShangmingCai merged commit 143b57b into sgl-project:main Nov 29, 2025
50 of 57 checks passed
harvenstar pushed a commit to harvenstar/sglang that referenced this pull request Dec 4, 2025