
[OPT] DeepSeekV3.2: optimize indexer weight_proj-mma performance#17205

Merged
Fridge003 merged 2 commits into sgl-project:main from BJWang-ant:opt_weight_proj_v0 on Jan 20, 2026

Conversation

@BJWang-ant (Contributor) commented on Jan 16, 2026

Motivation

From https://github.com//pull/16637, https://github.com//issues/16861, and https://github.com//pull/13459/files, we know that weights_proj is a relatively time-consuming operation. Although weights_proj is stored in BF16 precision on Hugging Face, it is computed in FP32 precision in SGLang. The reason for converting weights_proj to FP32 is that the subsequent scaling calculation requires FP32 precision. To balance performance and accuracy, I keep the weights_proj computation in BF16 precision and convert its output back to FP32. The accuracy tests indicate that with this optimization, accuracy remains almost unchanged.

Modifications

1. The input of weights_proj is in BF16 precision, the same as wk.
2. Convert the weights_proj weights to BF16.
3. Convert the output of weights_proj back to FP32 (see the sketch below).
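
For reference, a minimal PyTorch sketch of this pattern is shown below. It is illustrative only: the layer type, names, and shapes are assumptions for the example, not the actual SGLang indexer code.

import torch
import torch.nn as nn

# Illustrative shapes; the real dimensions come from the DeepSeek-V3.2 indexer config.
hidden_size, n_heads = 7168, 64

# Keep the weights_proj parameters in BF16 so the GEMM runs in BF16 (modification 2).
weights_proj = nn.Linear(hidden_size, n_heads, bias=False, dtype=torch.bfloat16)

# BF16 input, same dtype as wk (modification 1).
x = torch.randn(8, hidden_size, dtype=torch.bfloat16)

# Run the projection in BF16, then upcast only the small output to FP32
# for the subsequent scaling math (modification 3).
weights = weights_proj(x).float()
print(weights.dtype)  # torch.float32

The upcast touches only a [tokens, n_heads] tensor, which is much cheaper than running the whole GEMM in FP32.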

Accuracy Tests

Test command:

python3 -m sglang.compile_deep_gemm --model-path /deepseek-ai/DeepSeek-V3.2 \
  --trust-remote-code --tp-size 8 --dp-size 8 --enable-dp-attention \
  --tool-call-parser deepseekv32 --reasoning-parser deepseek-v3 \
  --host 0.0.0.0 --port 8000

#!/bin/bash
export NEMO_SKILLS_DISABLE_UNCOMMITTED_CHANGES_CHECK=1

ns prepare_data aime25

PORT=8008
BACKEND=sglang
MODEL="/upfs/models/deepseek-ai/DeepSeek-V3.2" # Should be changed to the model name
MODEL_NAME="dsv32-fp8"

echo "Starting AIME25 evaluation with model $MODEL on port $PORT using backend $BACKEND..."
ns eval \
  --benchmarks=aime25:4 \
  --server_type=$BACKEND \
  --model=$MODEL \
  --server_address=http://192.168.170.100:8008/v1 \
  --output_dir=nemo_skills_aime25_${MODEL_NAME}output${BACKEND}$(date +%Y%m%d%H%M%S) \
  ++chat_template_kwargs.thinking=true \
  ++inference.temperature=1.0 \
  ++inference.top_p=0.95 \
  ++inference.tokens_to_generate=64000
  # ++inference.tokens_to_generate=120000 for the Speciale model

Before: [screenshot: AIME25 accuracy results before this PR]

After: [screenshot: AIME25 accuracy results after this PR]

Benchmarking and Profiling

Before: [screenshot: benchmark/profiling results before this PR]

After: [screenshot: benchmark/profiling results after this PR]

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist (bot) commented

Summary of Changes

Hello @BJWang-ant, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces an optimization to the weights_proj operation within the nsa_indexer by leveraging bfloat16 precision for its core computation. This change aims to improve performance while preserving numerical accuracy by converting the output back to float32 for downstream operations, addressing a previously identified bottleneck.

Highlights

  • Precision change for weights_proj initialization: The params_dtype for the weights_proj layer is now initialized with torch.bfloat16 instead of torch.float32 to enable BF16 computation.
  • Optimized weights_proj computation: The _project_and_scale_head_gates and _get_logits_head_gate methods now pass the input x directly to weights_proj without explicit float() casting, allowing the operation to leverage bfloat16 precision for performance.
  • Output precision conversion: Immediately after the weights_proj operation, its output is converted back to torch.float32 to ensure accuracy for subsequent scaling calculations, balancing performance and precision.
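
As a rough standalone way to gauge the numeric impact of this kind of change, one can compare the FP32 GEMM against the BF16-GEMM-plus-upcast path directly. This is not part of the PR; the shapes and the error metric are illustrative assumptions.

import torch

torch.manual_seed(0)
hidden_size, n_heads = 7168, 64

w = torch.randn(n_heads, hidden_size)
x = torch.randn(32, hidden_size)

ref = x @ w.t()                                  # old path: FP32 GEMM
opt = (x.bfloat16() @ w.bfloat16().t()).float()  # new path: BF16 GEMM, FP32 output

# Frobenius-norm relative error; prints a small value, i.e. the BF16 path stays close to FP32.
rel_err = ((ref - opt).norm() / ref.norm()).item()
print(f"relative error: {rel_err:.3e}")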


@gemini-code-assist (bot) left a comment


Code Review

This pull request optimizes the weights_proj operation in the NSA indexer by changing its data type from float32 to bfloat16. This is a sensible performance optimization, as weights_proj is identified as a time-consuming operation. The changes involve updating the ReplicatedLinear layer for weights_proj to use bfloat16 and adjusting the data flow to perform the matrix multiplication in bfloat16 before converting the result back to float32 for subsequent scaling calculations. The provided accuracy and profiling results support this change, showing performance gains with minimal impact on accuracy. My main feedback is to refactor some duplicated code to improve maintainability.

@BJWang-ant changed the title from "[OPT] DSv32: optimize indexer weight_proj-mma performance" to "[OPT] DeepSeekV3.2: optimize indexer weight_proj-mma performance" on Jan 16, 2026
@BJWang-ant (Contributor, Author) commented

@Fridge003 Please help review this code.

@Fridge003 (Collaborator) commented

Please attach a screenshot of the trace file showing how the kernel time changes before and after this PR.

@BJWang-ant (Contributor, Author) commented

> Please attach a screenshot of the trace file showing how the kernel time changes before and after this PR.

OK, I will upload it later.

@BJWang-ant (Contributor, Author) commented

Before (weights_proj in FP32): [screenshot: kernel trace before this PR]

After (weights_proj in BF16): [screenshot: kernel trace after this PR]

@Fridge003 Hi, here are the screenshots of the trace files.

@Fridge003 (Collaborator) commented

/tag-and-rerun-ci

@Fridge003 (Collaborator) commented

[Screenshot 2026-01-20 23:08:09] All DeepSeek V3.2 tests passed.

@Fridge003 merged commit 612026a into sgl-project:main on Jan 20, 2026 (198 of 224 checks passed).
@zianglih (Contributor) commented

Hi, thank you for the optimization! Have you tried TF32? If the BF16 GEMM + upcast works, TF32 should also work, with even better precision.
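
For context on the TF32 suggestion: TF32 is a generic PyTorch/CUDA toggle for FP32 GEMMs on Ampere-or-newer GPUs, not something this PR changes. A minimal sketch of how it would be enabled:

import torch

# Allow FP32 matmuls to use TF32 tensor cores (effective on Ampere or newer GPUs).
torch.backends.cuda.matmul.allow_tf32 = True
# A related, higher-level switch in recent PyTorch versions
# (enables TF32 or similar reduced-precision paths for FP32 matmuls):
torch.set_float32_matmul_precision("high")

if torch.cuda.is_available():
    a = torch.randn(1024, 1024, device="cuda")
    b = torch.randn(1024, 1024, device="cuda")
    c = a @ b  # FP32 inputs and outputs, TF32 (10-bit mantissa) used inside the GEMM

With TF32, the original FP32 weights_proj path would keep FP32 storage and I/O while still getting tensor-core throughput, which is the trade-off @zianglih is pointing at.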
