[OPT] DeepSeekV3.2: optimize indexer weight_proj-mma performance #17205

Fridge003 merged 2 commits into sgl-project:main
Conversation
Code Review
This pull request optimizes the weights_proj operation in the NSA indexer by changing its data type from float32 to bfloat16. This is a sensible performance optimization, as weights_proj is identified as a time-consuming operation. The changes involve updating the ReplicatedLinear layer for weights_proj to use bfloat16 and adjusting the data flow to perform the matrix multiplication in bfloat16 before converting the result back to float32 for subsequent scaling calculations. The provided accuracy and profiling results support this change, showing performance gains with minimal impact on accuracy. My main feedback is to refactor some duplicated code to improve maintainability.
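To make the described dtype change concrete, here is a rough sketch rather than the actual diff. The `params_dtype` keyword, the import path, and the tuple-returning `forward` are assumptions based on the vLLM-style linear layers SGLang uses; the dimensions are made up.

```python
import torch
from sglang.srt.layers.linear import ReplicatedLinear  # assumed import path

hidden_size, n_heads = 7168, 64  # illustrative dimensions

# Before (conceptually): FP32 parameters forced an FP32 GEMM.
# weights_proj = ReplicatedLinear(hidden_size, n_heads, bias=False,
#                                 params_dtype=torch.float32)

# After: BF16 parameters, matching the checkpoint dtype and the BF16 input.
weights_proj = ReplicatedLinear(
    hidden_size, n_heads, bias=False, params_dtype=torch.bfloat16
)

x = torch.randn(8, hidden_size, dtype=torch.bfloat16)
out, _ = weights_proj(x)   # BF16 matrix multiply; layer returns (output, bias)
out = out.float()          # upcast only the output for the FP32 scaling step
```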
@Fridge003 Please help me review the code.

Please attach a screenshot of the trace file showing how the kernel time changes before and after this PR.

OK, I will upload it later.

@Fridge003 Hi bro, here are screenshots of the trace files.
Force-pushed from adcb444 to 823d243.
/tag-and-rerun-ci
Hi, thank you for the optimization! Have you tried TF32? If the BF16 GEMM + upcast works, TF32 should also work, with even better precision.



Motivation
From these PRs and issues (https://github.com//pull/16637, https://github.com//issues/16861, and https://github.com//pull/13459/files), we know that weights_proj is a relatively time-consuming operation. Although weights_proj is stored in BF16 precision on Hugging Face, it is computed in FP32 precision in SGLang, because the subsequent scaling calculation needs FP32 precision. To balance performance and accuracy, I keep the weights_proj matmul in BF16 precision and convert its output back to FP32. The accuracy tests indicate that with this optimization, accuracy remains almost unchanged.
Modifications
1. The input of weights_proj is in BF16 precision, the same as wk.
2. Convert the weights_proj weights to BF16.
3. Convert the output of weights_proj back to FP32 (see the sketch below).
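A minimal standalone sketch of this flow, using a plain `torch.nn.Linear` as a stand-in for the indexer's `weights_proj` layer (shapes are hypothetical):

```python
import torch

hidden = torch.randn(8, 7168, dtype=torch.bfloat16)  # BF16 input, like wk
proj = torch.nn.Linear(7168, 64, bias=False)         # FP32 params by default

# Old flow: upcast the activations and run the whole GEMM in FP32.
out_old = proj(hidden.float())

# New flow: (2) convert the weights to BF16 once, (1) feed the BF16 input
# directly, (3) upcast only the small output for the FP32 scaling math.
proj = proj.to(torch.bfloat16)
out_new = proj(hidden).float()
```

The GEMM itself now runs in BF16; only the small per-head output is upcast, so the downstream FP32 scaling computation is untouched.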
Accuracy Tests
Test command:

```bash
python3 -m sglang.compile_deep_gemm --model-path /deepseek-ai/DeepSeek-V3.2 \
    --trust-remote-code --tp-size 8 --dp-size 8 --enable-dp-attention \
    --tool-call-parser deepseekv32 --reasoning-parser deepseek-v3 \
    --host 0.0.0.0 --port 8000 \
```
Evaluation script:

```bash
#!/bin/bash
export NEMO_SKILLS_DISABLE_UNCOMMITTED_CHANGES_CHECK=1
ns prepare_data aime25

PORT=8008
BACKEND=sglang
MODEL="/upfs/models/deepseek-ai/DeepSeek-V3.2"  # should be changed to the model name
MODEL_NAME="dsv32-fp8"

echo "Starting AIME25 evaluation with model $MODEL on port $PORT using backend $BACKEND..."

ns eval \
    --benchmarks=aime25:4 \
    --server_type=$BACKEND \
    --model=$MODEL \
    --server_address=http://192.168.170.100:8008/v1 \
    --output_dir=nemo_skills_aime25_${MODEL_NAME}_output_${BACKEND}_$(date +%Y%m%d%H%M%S) \
    ++chat_template_kwargs.thinking=true \
    ++inference.temperature=1.0 \
    ++inference.top_p=0.95 \
    ++inference.tokens_to_generate=64000
    # ++inference.tokens_to_generate=120000 for the Speciale model
```
before:

after:

Benchmarking and Profiling
before:

after:
