[Performance] optimize NSA backend metadata computation for multi-step speculative decoding#14781

Merged
Fridge003 merged 5 commits into sgl-project:main from Johnsonms:nsa_backend_meta_precompution_opt
Dec 18, 2025

Conversation

@Johnsonms (Contributor) commented Dec 10, 2025

This PR optimizes metadata computation in the NSA backend for multi-step speculative decoding. The optimization implements a precompute-once, copy-many strategy:

  • Old approach: Compute metadata N times (one per backend) → ~210μs × N
  • New approach: Compute once, copy N times → ~210μs + ~70μs × N
  • Result: 1.7-2.2X speedup for metadata initialization with 4-8 speculative steps
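
The claimed speedup follows directly from that cost model. A quick sanity check of the arithmetic (the 210μs/70μs figures are from the PR description; the functions below are illustrative, not sglang code):

```python
def old_cost_us(n_backends: int, compute_us: float = 210) -> float:
    # Old approach: each of the N per-step backends recomputes the metadata.
    return compute_us * n_backends

def new_cost_us(n_backends: int, compute_us: float = 210, copy_us: float = 70) -> float:
    # New approach: one compute pass, plus a cheap copy per backend.
    return compute_us + copy_us * n_backends

for n in (4, 8):
    print(f"{n} steps: {old_cost_us(n) / new_cost_us(n):.1f}x speedup")
# 4 steps -> ~1.7x, 8 steps -> ~2.2x, matching the 1.7-2.2X claim above
```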

Key Components Added:

  1. PrecomputedMetadata dataclass - stores shared metadata
  2. _precompute_replay_metadata() - computes metadata once for all backends
  3. init_forward_metadata_replay_cuda_graph_from_precomputed() - fast copy
    method (~50μs vs ~75μs)
  4. Mode-specific precomputation for decode/target_verify/draft_extend
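
A minimal sketch of the precompute-once, copy-many shape (field names and the plain-Python buffers are illustrative assumptions; the real PrecomputedMetadata holds CUDA tensors and the fast path does device-to-device copies):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PrecomputedMetadata:
    # Metadata shared by every speculative-step backend (illustrative fields).
    cache_seqlens: List[int]
    cu_seqlens_k: List[int]

def precompute_replay_metadata(seq_lens: List[int]) -> PrecomputedMetadata:
    # The expensive part, now done once per replay instead of once per backend.
    cu = [0]
    for s in seq_lens:
        cu.append(cu[-1] + s)
    return PrecomputedMetadata(list(seq_lens), cu)

class StepBackendMetadata:
    # Each speculative step owns persistent buffers captured by its CUDA graph.
    def __init__(self, capacity: int):
        self.cache_seqlens = [0] * capacity
        self.cu_seqlens_k = [0] * (capacity + 1)

    def init_from_precomputed(self, pre: PrecomputedMetadata) -> None:
        # Fast path: copy-only, no recomputation of cumulative sequence lengths.
        n = len(pre.cache_seqlens)
        self.cache_seqlens[:n] = pre.cache_seqlens
        self.cu_seqlens_k[: n + 1] = pre.cu_seqlens_k
```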

Performance improvements:

  • Metadata computation: 31.9% faster (642μs → 437μs), 33.3% fewer kernels (60 → 40)
  • 4 concurrent requests: +5.5% to +7.5% TPS across sequences 16384-65536
  • Single request (16384): +12.4% TPS
  • Minimal TTFT impact (all within ±3%)
Optimized: 437μs per forward in the draft metadata preparation phase, launching 40 kernels.

(profiler screenshots)

@gemini-code-assist (bot) commented

Summary of Changes

Hello @Johnsonms, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant performance optimization for the NSA backend's metadata computation, particularly beneficial for multi-step speculative decoding. By refactoring the metadata generation process to precompute common data once and then efficiently copy it to multiple attention backend instances, the system avoids redundant calculations. This change aims to reduce latency and improve the overall efficiency of speculative decoding operations by centralizing and streamlining metadata preparation.

Highlights

  • Introduced PrecomputedMetadata Dataclass: A new dataclass named PrecomputedMetadata has been added to store metadata that is shared across multiple attention backend instances, specifically for multi-step speculative decoding.
  • Optimized Metadata Computation: The pull request implements a mechanism to precompute common metadata once for multi-step speculative decoding and then efficiently copy it to individual backend instances, avoiding redundant calculations.
  • Performance Improvement: This optimization is expected to yield a significant performance gain, with an estimated 3-5x speedup for 4-8 speculative steps, saving approximately 155μs per additional backend instance by reducing computation time from ~175μs to ~20μs for subsequent instances.
  • Mode-Specific Precomputation: Dedicated private functions (_precompute_decode_mode, _precompute_target_verify_mode, _precompute_draft_extend_mode) were added to handle metadata precomputation tailored to different forward modes within the NSA backend.
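
The mode-specific split described above can be pictured as a simple dispatch table. This is a hedged sketch: only the three helper names come from the PR; their bodies and the batch fields are invented for illustration.

```python
def _precompute_decode_mode(batch):
    # Stub: the real helper builds decode-mode seqlen/page-table metadata.
    return {"mode": "decode", "seq_lens": list(batch["seq_lens"])}

def _precompute_target_verify_mode(batch):
    # Stub: target-verify sequences include the draft tokens being verified.
    return {"mode": "target_verify",
            "seq_lens": [s + batch["num_draft_tokens"] for s in batch["seq_lens"]]}

def _precompute_draft_extend_mode(batch):
    return {"mode": "draft_extend", "seq_lens": list(batch["seq_lens"])}

_PRECOMPUTE_BY_MODE = {
    "decode": _precompute_decode_mode,
    "target_verify": _precompute_target_verify_mode,
    "draft_extend": _precompute_draft_extend_mode,
}

def precompute_replay_metadata(forward_mode: str, batch: dict) -> dict:
    # Dispatch to the mode-specific helper; unknown modes fail loudly.
    try:
        return _PRECOMPUTE_BY_MODE[forward_mode](batch)
    except KeyError:
        raise ValueError(f"unsupported forward mode: {forward_mode}") from None
```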

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a significant performance optimization for multi-step speculative decoding in the NSA backend. By precomputing shared metadata and reusing it across speculative steps, it avoids redundant computations. The implementation introduces a PrecomputedMetadata dataclass and helper functions to handle different forward modes, which is a clean approach. My review focuses on improving the clarity, consistency, and type correctness of the new code to enhance maintainability.

@Johnsonms force-pushed the nsa_backend_meta_precompution_opt branch from 551fbff to 8eb590d on December 10, 2025 07:32
@Fridge003 (Collaborator) commented

@Johnsonms Thanks for your code~ Do you have any e2e performance data? For example, what's the change in decode throughput (bs=1, isl=osl=1024) before and after this PR with MTP?

@Fridge003 (Collaborator) commented Dec 11, 2025

Also can you post the accuracy result of GPQA/AIME benchmark?

@Johnsonms (Contributor, Author) commented Dec 11, 2025

  1. Accuracy Test with gsm8k
python3 -m sglang.launch_server \
  --model-path deepseek-ai/DeepSeek-V3.2-Exp \
  --trust-remote-code \
  --tp-size 8 --dp-size 8 --enable-dp-attention \
  --tool-call-parser deepseekv31 \
  --reasoning-parser deepseek-v3 \
  --chat-template ./examples/chat_template/tool_chat_template_deepseekv32.jinja

python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 1319 --parallel 1319

(screenshot: gsm8k results)
  2. Accuracy Test with gpqa-diamond

Service: python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --dp 8 --enable-dp-attention --speculative-algorithm EAGLE --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4

python3 -m sglang.test.run_eval --port 30000 --eval-name gpqa --num-examples 198 --max-tokens 120000 --repeat 8 --thinking-mode deepseek-v3

(screenshot: gpqa-diamond results)
  3. Accuracy Test with aime 2025

python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --dp 8 --enable-dp-attention --speculative-algorithm EAGLE --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4 --tool-call-parser deepseekv32 --reasoning-parser deepseek-v3


#! /bin/bash
export NEMO_SKILLS_DISABLE_UNCOMMITTED_CHANGES_CHECK=1

ns prepare_data aime25

PORT=30000
BACKEND=sglang
MODEL="deepseek-ai/DeepSeek-V3.2-Exp" # Should be changed to the model name
MODEL_NAME="dsv32-fp8"

echo "Starting AIME25 evaluation with model $MODEL on port $PORT using backend $BACKEND..."
ns eval \
  --benchmarks=aime25:4 \
  --server_type=$BACKEND \
  --model=$MODEL \
  --server_address=http://localhost:${PORT}/v1 \
  --output_dir=nemo_skills_aime25_${MODEL_NAME}_output_${BACKEND}_$(date +%Y%m%d_%H%M%S) \
  ++chat_template_kwargs.thinking=true \
  ++inference.temperature=1.0 \
  ++inference.top_p=0.95 \
  ++inference.tokens_to_generate=64000
  # ++inference.tokens_to_generate=120000 for Speciale model
(screenshot: aime25 results)
  4. e2e performance with MTP: 5%-15% improvement
python -m sglang.launch_server \
    --model-path /scratch/huggingface/DeepSeek-V3.2/Exp-FP4 \
    --served-model-name xxxxx/DeepSeek-V3.2-Exp-FP4 \
    --tp 4 --ep 4 \
    --reasoning-parser deepseek-v3 \
    --kv-cache-dtype fp8_e4m3 \
    --modelopt-quant nvfp4 \
    --trust-remote-code \
    --disable-radix-cache \
    --speculative-algorithm EAGLE \
    --speculative-num-steps 3 \
    --speculative-eagle-topk 1 \
    --speculative-num-draft-tokens 4 \
    --max-prefill-tokens 8192 \
    --grammar-backend xgrammar \
    --moe-runner-backend flashinfer_trtllm \
    --quantization modelopt_fp4 \
    --speculative-moe-runner-backend flashinfer_trtllm \
    --enable-metrics \
    --port 54321 \
    --collect-tokens-histogram \
    --chat-template /sgl-workspace/sglang/examples/chat_template/tool_chat_template_deepseekv32.jinja

Baseline: (screenshot)

After Optimization: (screenshot)

  5. Another internal e2e test: 5%-12% improvement
(screenshot)

@Johnsonms (Contributor, Author) commented

Also can you post the accuracy result of GPQA/AIME benchmark?

Added the test results, thanks @Fridge003

@Fridge003 (Collaborator) commented

Do you have data on acceptance length? Will it drop after this PR?

@Johnsonms (Contributor, Author) commented Dec 16, 2025

Do you have data on acceptance length? Will it drop after this PR?

I verified this and confirmed there is no change in the acceptance rate. The optimization focuses on NativeSparseAttnMultiStepBackend::init_forward_metadata_replay_cuda_graph. In this path, the same backend metadata is initialized multiple times (at least three) inside a loop, without any changes between iterations. By precomputing the metadata once and reusing it, we replace three compute-and-copy passes with three simple copies, which saves time.

Before: (screenshot)

After: (screenshot)
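
The compute-once behavior described above can be demonstrated with a tiny counter experiment (pure-Python stand-ins, not the actual backend classes):

```python
compute_calls = {"n": 0}

def precompute(batch):
    # Stands in for the expensive metadata computation (~210us on GPU).
    compute_calls["n"] += 1
    return {"total_tokens": sum(batch)}

class StepBackend:
    def __init__(self):
        self.meta = None

    def init_full(self, batch):
        # Old path: every backend recomputes the identical metadata.
        self.meta = precompute(batch)

    def init_from_precomputed(self, pre):
        # New path: copy-only, no recomputation.
        self.meta = dict(pre)

backends = [StepBackend() for _ in range(3)]

for be in backends:          # old loop: 3 compute-and-copy passes
    be.init_full([1, 2, 3])
old_calls = compute_calls["n"]

compute_calls["n"] = 0
pre = precompute([1, 2, 3])  # new loop: 1 compute pass...
for be in backends:
    be.init_from_precomputed(pre)  # ...then 3 plain copies
new_calls = compute_calls["n"]

print(old_calls, new_calls)  # 3 1
```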

@Johnsonms requested a review from Fridge003 on December 17, 2025 07:39
@Johnsonms force-pushed the nsa_backend_meta_precompution_opt branch from eae9bc2 to 0ec0942 on December 17, 2025 07:40
@Fridge003 (Collaborator) commented

/tag-and-rerun-ci

@Johnsonms force-pushed the nsa_backend_meta_precompution_opt branch from 3c40dcf to 0f366a0 on December 18, 2025 17:56
@Fridge003 Fridge003 merged commit e0026f7 into sgl-project:main Dec 18, 2025
162 of 194 checks passed
xiaobaicxy added a commit to xiaobaicxy/sglang that referenced this pull request Dec 19, 2025
Prozac614 pushed a commit to Prozac614/sglang that referenced this pull request Dec 23, 2025
jiaming1130 pushed a commit to zhuyijie88/sglang that referenced this pull request Dec 25, 2025
YChange01 pushed a commit to YChange01/sglang that referenced this pull request Jan 13, 2026