Support piecewise cuda graph for deepseek v3 #12996

Merged
ispobock merged 2 commits into main from ke/dsv3-piecewise
Nov 10, 2025
Conversation

@ispobock (Collaborator) commented Nov 10, 2025

Motivation

Followup of #11812.

python -m sglang.launch_server --model-path deepseek-ai/DeepSeek-R1-0528 --tp 8 --trust-remote-code --enable-piecewise-cuda-graph --piecewise-cuda-graph-max-tokens 8192

curl http://127.0.0.1:30000/flush_cache           
python3 -m sglang.bench_serving --backend sglang-oai  --dataset-name random --random-input-len 1024 --random-output-len 10 --random-range-ratio 0.98 --num-prompts 5 --max-concurrency 1 --output-file res.jsonl     
curl http://127.0.0.1:30000/flush_cache   
python3 -m sglang.bench_serving --backend sglang-oai  --dataset-name random --random-input-len 1024 --random-output-len 10 --random-range-ratio 0.98 --num-prompts 20 --max-concurrency 4 --output-file res.jsonl 
curl http://127.0.0.1:30000/flush_cache
python3 -m sglang.bench_serving --backend sglang-oai  --dataset-name random --random-input-len 1024 --random-output-len 10 --random-range-ratio 0.98 --num-prompts 80 --max-concurrency 16 --output-file res.jsonl
curl http://127.0.0.1:30000/flush_cache
python3 -m sglang.bench_serving --backend sglang-oai  --dataset-name random --random-input-len 1024 --random-output-len 10 --random-range-ratio 0.98 --num-prompts 160 --max-concurrency 32 --output-file res.jsonl

w/ piecewise cuda graph:

+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
|    |   max_concurrency |   input_throughput |   output_throughput |   mean_ttft_ms |   median_ttft_ms |   p99_ttft_ms |   mean_tpot_ms |   median_tpot_ms |   p99_tpot_ms |   per_user_throughput |
+====+===================+====================+=====================+================+==================+===============+================+==================+===============+=======================+
|  0 |             1.000 |           6525.850 |              61.905 |         69.214 |           76.525 |        79.630 |          9.669 |            9.669 |         9.698 |                61.905 |
+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
|  1 |             4.000 |          13128.573 |             123.696 |        162.667 |          152.264 |       209.300 |         16.635 |           18.330 |        23.628 |                30.924 |
+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
|  2 |            16.000 |          20058.533 |             186.603 |        455.605 |          486.626 |       644.320 |         41.247 |           37.912 |        79.012 |                11.663 |
+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
|  3 |            32.000 |          22036.173 |             206.179 |        770.717 |          809.330 |      1241.564 |         81.765 |           78.221 |       158.597 |                 6.443 |
+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+

w/o piecewise cuda graph:

+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
|    |   max_concurrency |   input_throughput |   output_throughput |   mean_ttft_ms |   median_ttft_ms |   p99_ttft_ms |   mean_tpot_ms |   median_tpot_ms |   p99_tpot_ms |   per_user_throughput |
+====+===================+====================+=====================+================+==================+===============+================+==================+===============+=======================+
|  0 |             1.000 |           4847.049 |              45.980 |        122.417 |          116.682 |       146.857 |          9.716 |            9.717 |         9.740 |                45.980 |
+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
|  1 |             4.000 |          12001.397 |             113.076 |        227.405 |          220.716 |       272.256 |         12.515 |           12.239 |        17.340 |                28.269 |
+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
|  2 |            16.000 |          14564.738 |             135.494 |        797.197 |          620.783 |      1449.093 |         37.017 |           26.062 |       152.318 |                 8.468 |
+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
|  3 |            32.000 |          21895.426 |             204.862 |        891.407 |          940.622 |      1287.927 |         68.620 |           63.036 |       139.398 |                 6.402 |
+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+

Mean TTFT is reduced by 43% at concurrency 1 (122.417 ms → 69.214 ms).
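The headline number can be checked directly against the two tables above (mean TTFT at max_concurrency=1, with vs. without piecewise CUDA graph):

```python
# Sanity-check of the reported TTFT reduction, using the mean_ttft_ms values
# from the benchmark tables above (concurrency 1).
ttft_without = 122.417  # w/o piecewise cuda graph
ttft_with = 69.214      # w/ piecewise cuda graph

reduction = (ttft_without - ttft_with) / ttft_without
print(f"{reduction:.1%}")  # 43.5%
```

The other rows can be compared the same way; the gap narrows as concurrency grows, since prefill is a smaller share of total time at higher batch sizes.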

@gemini-code-assist (Contributor) commented:
Summary of Changes

I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces comprehensive support for piecewise CUDA graphs for DeepSeek V3 models, aiming to significantly enhance inference performance. By enabling 'torch.compile' for Mixture-of-Experts layers and optimizing GPU memory allocation, the changes lead to notable reductions in Time To First Token (TTFT) and overall throughput improvements, particularly under varying concurrency levels.

Highlights

  • Piecewise CUDA Graph Support: Implemented support for piecewise CUDA graphs specifically for DeepSeek V3 models, building upon previous work to enhance inference efficiency.
  • Performance Optimization: Demonstrated significant performance improvements, including a 43% reduction in Mean Time To First Token (TTFT) for batch size 1, by leveraging piecewise CUDA graphs.
  • MoE Layer Compilation: Added fake implementations for the 'sgl_kernel::moe_fused_gate' kernel to enable 'torch.compile' support for Mixture-of-Experts (MoE) layers, crucial for CUDA graph integration.
  • Resource Management: Adjusted GPU memory reservation logic to account for the additional memory required when piecewise CUDA graphs are enabled, ensuring stable operation.
  • Model-Specific Adjustments: Integrated logic to log a message indicating the use of Multi-head Latent Attention (MLA) for prefill when piecewise CUDA graphs are active for DeepseekV3ForCausalLM models.
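To illustrate the "fake implementations" point above: torch.compile traces the graph without executing custom CUDA kernels, so each custom op needs a shape-only (meta/fake) implementation that reports output shapes and dtypes. The sketch below shows that contract in plain Python with hypothetical names; the real registration for sgl_kernel::moe_fused_gate would go through torch.library.

```python
# Conceptual sketch (hypothetical names/shapes, not SGLang's actual code) of
# why a fused MoE gating kernel needs a "fake" implementation for torch.compile.

def moe_fused_gate_real(hidden, gate_weights, top_k):
    # Placeholder for the real fused CUDA kernel, which returns
    # (topk_values, topk_expert_ids) and only runs on GPU.
    raise NotImplementedError("executes on GPU in the real system")

def moe_fused_gate_fake(hidden_shape, num_experts, top_k):
    # Shape-only contract: for input (num_tokens, hidden_dim), the kernel
    # produces (num_tokens, top_k) routing values and (num_tokens, top_k)
    # expert ids. num_experts affects the values, not the output shapes.
    num_tokens, _hidden_dim = hidden_shape
    return (num_tokens, top_k), (num_tokens, top_k)

# During graph capture, the compiler queries the fake impl for output
# metadata instead of launching the kernel.
vals_shape, ids_shape = moe_fused_gate_fake((8, 4096), num_experts=256, top_k=8)
print(vals_shape, ids_shape)  # (8, 8) (8, 8)
```

In PyTorch itself this registration is typically done with torch.library (e.g. registering a fake/meta function for the custom op) so the tracer never needs the GPU kernel at compile time.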

@gemini-code-assist (bot) left a comment
Code Review

This pull request introduces support for piecewise CUDA graphs for DeepSeek-V3 models, which, according to the benchmarks, significantly improves performance, especially for time-to-first-token. The changes include adding a fake implementation for a custom MoE kernel to enable torch.compile, reserving memory for the CUDA graphs, and adding a relevant log message. The implementation looks solid. I have one minor suggestion to improve the readability of the memory reservation heuristic.
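The reviewer's readability suggestion can be illustrated with a small sketch: name the extra reservation instead of burying a magic number in the allocation logic. The constants and function below are made up for illustration, not SGLang's actual values or API.

```python
# Hypothetical sketch of the suggested style for the memory reservation
# heuristic: named constants instead of inline magic numbers. The figures
# here are illustrative, not SGLang's real reservation sizes.
BASE_RESERVED_GB = 1.0                    # hypothetical baseline reservation
PIECEWISE_CUDA_GRAPH_RESERVED_GB = 2.0    # hypothetical extra for graph capture
GB = 1024 ** 3

def reserved_memory_bytes(enable_piecewise_cuda_graph: bool) -> int:
    """Bytes to set aside before sizing the KV cache (illustrative only)."""
    total_gb = BASE_RESERVED_GB
    if enable_piecewise_cuda_graph:
        # Piecewise CUDA graphs hold extra activation/workspace buffers alive.
        total_gb += PIECEWISE_CUDA_GRAPH_RESERVED_GB
    return int(total_gb * GB)

print(reserved_memory_bytes(True))   # 3221225472
```

The point is purely stylistic: the heuristic's inputs become self-documenting and easy to tune in one place.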

@ispobock merged commit 58b12cc into main on Nov 10, 2025
72 of 84 checks passed
@ispobock deleted the ke/dsv3-piecewise branch on November 10, 2025 at 15:18