[Feature] Optimize DeepSeek's DeepEP on Ascend NPU #8355
zhyncs merged 35 commits into sgl-project:main
Conversation
Summary of Changes
Hello @iforgetmyname, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces significant optimizations for DeepSeek's DeepEP (Deep Expert Parallel) Mixture of Experts (MoE) implementation, specifically targeting Huawei Ascend NPUs. It integrates NPU-specific kernels and quantization methods for key operations like MoE gating, rotary embeddings, and W8A8 quantization, aiming to enhance performance and efficiency on Ascend hardware.
Highlights
- **NPU-Specific DeepEP MoE Implementation:** Introduced `AscendDeepEPMoE` to leverage Ascend NPU capabilities for DeepEP MoE layers. This includes specialized weight loading for W8A8 scales and an NPU-optimized `forward` pass utilizing `torch_npu` operations like `npu_grouped_matmul` and `npu_swiglu`. The `get_moe_impl_class` function now conditionally returns this NPU-optimized class (see the dispatch sketch after this list).
- **W8A8 Quantization for NPU:** Added `W8A8EPMoEMethod` to support W8A8 integer quantization specifically for EPMoE on Ascend NPUs, enabling more efficient memory usage and computation. The `EPMoE` initialization logic was updated to integrate this new quantization method.
- **Optimized NPU Kernels for Core Operations:** Integrated `torch_npu` kernels for critical operations: MoE top-k gating (`npu_moe_gating_top_k`) and rotary embedding (`npu_mrope`). This provides significant performance improvements for these components when running on Ascend NPUs, with fallbacks to native implementations for unsupported configurations.
- **Refined W8A8 INT8 Quantization Methods:** Enhanced existing W8A8 INT8 quantization methods for NPU by optimizing RMSNorm with `npu_add_rms_norm` and `npu_rms_norm`. Additionally, the bias-fusion logic in linear layers was refined to ensure correct bias application and prevent double addition in tensor-parallelism scenarios.
- **NPU-Aware Conditional Logic and Global Arguments:** Adjusted conditional imports for certain quantization kernels to be NPU-aware, ensuring that only relevant kernels are loaded for the hardware in use. The `quantization` argument was also added to the global server arguments, allowing dynamic configuration of quantization schemes.
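As a concrete illustration of the dispatch described in the first highlight, here is a minimal, hypothetical sketch; `_is_npu` and the placeholder class bodies are illustrative stand-ins, not sglang's actual implementation:

```python
import torch

def _is_npu() -> bool:
    # Illustrative platform check; sglang uses its own hardware helpers.
    return hasattr(torch, "npu") and torch.npu.is_available()

class DeepEPMoE:
    """Placeholder for the default DeepEP MoE layer."""

class AscendDeepEPMoE(DeepEPMoE):
    """Placeholder for the NPU-optimized layer added in this PR; its
    forward pass would call torch_npu ops such as npu_grouped_matmul
    and npu_swiglu."""

def get_moe_impl_class():
    # Return the NPU-optimized class on Ascend hardware, else the default.
    return AscendDeepEPMoE if _is_npu() else DeepEPMoE
```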
Code Review
This pull request introduces optimizations for DeepSeek's DeepEP on Ascend NPU. The changes are extensive, adding NPU-specific kernels and logic for MoE layers, quantization, and rotary embeddings.
My review has identified a few critical issues that need to be addressed:
- There's a method signature mismatch in `AscendDeepEPMoE.forward` that will cause runtime errors.
- The NPU-specific path in `select_experts` returns an incorrect type, which will lead to unpacking errors (see the illustration below).
- There are also some bugs in the weight loading logic for NPU that need to be fixed.
Additionally, I've pointed out some areas for code improvement, such as moving local imports to the top level and removing dead code, to enhance maintainability.
Overall, this is a significant feature addition. Please address the critical issues to ensure correctness.
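To make the second issue concrete: call sites unpack two values from `select_experts`, so every backend path must return a tuple. A minimal, self-contained illustration (simplified names and shapes, not sglang's actual signature):

```python
import torch

def select_experts(router_logits: torch.Tensor, top_k: int):
    # Every path must return (weights, ids); returning a single tensor
    # from the NPU branch would break tuple unpacking at the call sites.
    probs = torch.softmax(router_logits, dim=-1)
    topk_weights, topk_ids = torch.topk(probs, top_k, dim=-1)
    return topk_weights, topk_ids

topk_weights, topk_ids = select_experts(torch.randn(4, 8), top_k=2)
```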
Force-pushed from 3545556 to 74e3d91.
I cannot see any npu deepep calls in this PR. Are they hidden behind deepep-compatible interfaces? And VLLM test failed |
Seems like #8837 is the cause.
See #8886.
Co-authored-by: ronnie_zheng <zl19940307@163.com> Co-authored-by: Hexq0210 <hexq0809521@gmail.com>
Motivation
Following our roadmap #8004, this is the first PR to support DeepEP expert parallelism and speed it up with fusion kernels. It is now possible to enable `--enable-deepep-moe` with `--deepep-mode low_latency`, currently only on decode nodes in the PD-disaggregation scenario (see the example launch command below). For now, only the W8A8 INT8 quantization method is supported. For more info about our DeepEP-compatible kernels, check out our roadmap in sgl-kernel-npu.
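For example, a decode-node launch under PD disaggregation might look like the following; the DeepEP and quantization flags are the ones this PR targets, while the model path is a placeholder and further required PD-disaggregation options (ports, bootstrap addresses, and so on) are omitted:

```bash
# Illustrative decode-node launch; not a complete PD-disaggregation setup.
python3 -m sglang.launch_server \
    --model-path <model> \
    --disaggregation-mode decode \
    --enable-deepep-moe \
    --deepep-mode low_latency \
    --quantization w8a8_int8
```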
Modifications
Checklist
Accuracy & Performance
Accuracy with `python3 -m sglang.test.few_shot_gsm8k --num-questions 1318`.

Performance on 2xA3 PD disaggregation w/o DP-ATTN.
Code Format