use flashinfer_trtllm moe runner backend to gain around 10% perf on b200 fp8 dpsk #11816
Conversation
Summary of Changes
Hello @b8zhong, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request primarily focuses on enhancing the performance of Mixture-of-Experts (MoE) models, particularly for FP8 quantization on NVIDIA Blackwell (SM100/B200) GPUs. It achieves this by making the flashinfer_trtllm MoE runner backend the default for that configuration.
Code Review
This pull request introduces performance improvements for MoE models on B200 GPUs by defaulting to the flashinfer_trtllm backend for FP8 quantization. It also refactors the MoE backend selection by removing the SGLANG_CUTLASS_MOE environment variable in favor of a server argument, which is a good cleanup. Additionally, a new tuned Triton config for B200 is included. I've found a critical issue in the logic for setting the default MoE runner backend due to operator precedence, which could lead to incorrect behavior. Please see the detailed comment.
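The operator-precedence issue the review points at is the classic `and`/`or` binding problem. The sketch below is illustrative only; the function and value names (`quant`, `fp8_block`, etc.) are assumptions, not the PR's actual code.

```python
# Illustrative sketch of the precedence pitfall: `and` binds tighter than `or`,
# so the unparenthesized condition can override a backend the user chose explicitly.
def default_backend_buggy(backend: str, quant: str, sm100: bool) -> str:
    # Parsed as: (backend == "auto" and quant == "fp8") or (quant == "fp8_block" and sm100)
    if backend == "auto" and quant == "fp8" or quant == "fp8_block" and sm100:
        return "flashinfer_trtllm"
    return backend

def default_backend_fixed(backend: str, quant: str, sm100: bool) -> str:
    # Intended: only override when the backend is "auto", quant is an FP8 variant,
    # and we are on SM100.
    if backend == "auto" and (quant == "fp8" or quant == "fp8_block") and sm100:
        return "flashinfer_trtllm"
    return backend

# Example where the two diverge: the user explicitly asked for "triton",
# but the buggy version still silently switches to "flashinfer_trtllm".
assert default_backend_buggy("triton", "fp8_block", True) == "flashinfer_trtllm"
assert default_backend_fixed("triton", "fp8_block", True) == "triton"
```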
Qiaolin-Yu
left a comment
Could you share the MoE kernel profiling results before and after tuning?
@Qiaolin-Yu I didn't use the torch profiler, but I generally find the benchmark provided by this tuning script to be accurate.
Before: [tuning benchmark screenshot]
After: [tuning benchmark screenshot]
On average the Triton config is about a 2% improvement.
I think it would be better to have a profiling result.
@Qiaolin-Yu Sure. Btw, it seems like bench_one_batch has some issues... so I profiled the server instead.
.../triton_3_4_0/E=257,N=256,device_name=NVIDIA_B200,dtype=fp8_w8a8,block_shape=[128, 128].json
    return (
        (backend.is_cutlass() or backend.is_flashinfer_cutlass())
        and (is_sm100_supported() or is_sm90_supported())
    )
I think cutlass and flashinfer_cutlass are different kernels?
And could you refine the if/else logic here? It seems a little bit messy.
Sg
-
Ah, you are right, I was not aware of the pure cutlass impl... I kept it as flashinfer_cutlass only.
-
Yes, I will assume the user knows the compatibility (and just return true immediately), else use the check; see the sketch below.
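A minimal sketch of that direction, assuming hypothetical stand-ins for SGLang's backend object and architecture helpers (this is one plausible reading of the discussion, not the merged code):

```python
from dataclasses import dataclass

# Hypothetical stand-in for SGLang's MoE runner backend object.
@dataclass
class MoeRunnerBackend:
    name: str

    def is_flashinfer_cutlass(self) -> bool:
        return self.name == "flashinfer_cutlass"


def should_use_cutlass_fused_experts(
    backend: MoeRunnerBackend, sm90_supported: bool, sm100_supported: bool
) -> bool:
    if backend.is_flashinfer_cutlass():
        # Explicitly requested by the user: assume they know the compatibility
        # and return immediately.
        return True
    # Otherwise fall back to the architecture check from the original condition
    # (is_sm90_supported() / is_sm100_supported() in the diff).
    return sm90_supported or sm100_supported
```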
        return StandardCombineInput(hidden_states=ret)

-       if self.use_cutlass_fused_experts_fp8:
+       if self._should_use_cutlass_fused_experts():
Why do we move this out of if self.block_quant:?
Something weird I did during the refactor... changed it back.
Motivation
This PR does 3 things:
The performance improvements are described:
https://sgl-fru7574.slack.com/archives/C0999LZPKQX/p1760830685045249
Add the tuned Triton config. There was around a 5% improvement in the kernel itself, but the E2E results above (which already include the tuned Triton config) are still worse than flashinfer_trtllm.
Remove SGLANG_CUTLASS_MOE and move it to server args (like all other MoE runner backends); see the sketch after this list.
Launches with the FlashInfer TRT-LLM (flashinfer_trtllm) MoE runner now.
Also works now.
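A minimal sketch of the direction in item 3, with a hypothetical argument name and choice list (the real SGLang flag and options may differ): the backend selection moves from the SGLANG_CUTLASS_MOE environment variable into the server arguments, alongside the other MoE runner backend choices.

```python
# Illustrative only; argument and choice names are assumptions, not the exact SGLang API.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--moe-runner-backend",
    choices=["auto", "triton", "flashinfer_trtllm", "flashinfer_cutlass"],
    default="auto",
    help="MoE runner backend (replaces the removed SGLANG_CUTLASS_MOE env var).",
)
args = parser.parse_args()

# Previously the cutlass path was toggled via an environment variable check like
# os.environ.get("SGLANG_CUTLASS_MOE"); now the choice travels with the server args.
print(f"Selected MoE runner backend: {args.moe_runner_backend}")
```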
Accuracy Tests