[AMD] Add AITER Custom All-Reduce #13102
Conversation
Summary of Changes

Hello @hubertlu-tw, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances SGLang's distributed capabilities by integrating AITER's custom all-reduce implementation for AMD GPUs. The primary goal is to leverage AITER's optimized kernels to improve the performance of all-reduce operations on ROCm-enabled systems. It provides a configurable way to enable or disable this new functionality and includes a dedicated benchmark to assess its impact.
Code Review
This pull request introduces support for AITER's custom all-reduce on AMD platforms, enabled via the SGLANG_AITER_AR environment variable. The changes are well-structured, including a dispatch mechanism to select the appropriate all-reduce implementation and a comprehensive benchmark script to compare performance. The tests have also been updated accordingly. My main feedback concerns a reduction in type safety in parallel_state.py where Optional[Any] is used. I've suggested an improvement to enhance maintainability.
```diff
-        self.ca_comm: Optional[CustomAllreduce] = None
+        self.ca_comm: Optional[Any] = None
```
Changing the type hint for ca_comm from Optional[CustomAllreduce] to Optional[Any] reduces type safety and maintainability. While this works because both sglang.CustomAllreduce and aiter.CustomAllreduce are expected to have a compatible interface, it would be better to define a Protocol or an abstract base class that both communicators implement. This would make the code more robust and easier to understand.
For example, you could define a protocol:

```python
from typing import Optional, Protocol, Union

import torch
from torch.distributed import ProcessGroup


class AllReduceCommunicator(Protocol):
    def __init__(
        self,
        group: ProcessGroup,
        device: Union[int, str, torch.device],
        **kwargs,
    ): ...

    def custom_all_reduce(self, input: torch.Tensor) -> Optional[torch.Tensor]: ...

    def should_custom_ar(self, inp: torch.Tensor) -> bool: ...

    def close(self) -> None: ...

    # Add other common methods like capture if needed
```

Then use it as:

```python
self.ca_comm: Optional[AllReduceCommunicator] = None
```

This would enforce that any class assigned to `ca_comm` has the required methods, improving static analysis and preventing potential runtime errors.
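As a quick illustration of the structural check, here is a hypothetical sketch (not part of this PR), assuming the `AllReduceCommunicator` protocol above is in scope: `DummyComm` is an invented class whose methods match the protocol, so the assignment type-checks without any inheritance (type checkers exclude `__init__` from protocol member checks):

```python
# Hypothetical illustration only; DummyComm is not part of sglang or aiter.
# It satisfies AllReduceCommunicator structurally, so a static type checker
# accepts the assignment below even though DummyComm never inherits from it.
from typing import Optional

import torch


class DummyComm:
    def custom_all_reduce(self, input: torch.Tensor) -> Optional[torch.Tensor]:
        return input.clone()

    def should_custom_ar(self, inp: torch.Tensor) -> bool:
        return inp.is_contiguous()

    def close(self) -> None:
        pass


ca_comm: Optional[AllReduceCommunicator] = DummyComm()  # accepted by mypy/pyright
```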
Co-authors: @b8zhong, @kkHuang-amd, Alan Kao.
Motivation
Continue the work from #11484
Modifications
Added an environment variable `SGLANG_AITER_AR`, which defaults to true. To use the `CustomAllReduce` kernels in `sgl-kernel` instead, set `SGLANG_AITER_AR=0`. A sketch of this gating logic follows below.
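A minimal sketch of how such an environment-variable gate can select between the two communicators; the import paths below are assumptions for illustration, and the PR's actual dispatch in `parallel_state.py` may differ:

```python
import os

# Hypothetical sketch of the dispatch; the real logic lives in sglang's
# parallel_state.py. SGLANG_AITER_AR defaults to enabled per this PR.
_AITER_AR_ENABLED = os.environ.get("SGLANG_AITER_AR", "1") == "1"

if _AITER_AR_ENABLED:
    # Assumed import location for AITER's communicator (illustrative only).
    from aiter import CustomAllreduce
else:
    # sgl-kernel-backed communicator (import path assumed for illustration).
    from sglang.srt.distributed.device_communicators.custom_all_reduce import (
        CustomAllreduce,
    )
```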
Accuracy Tests

Benchmarking and Profiling
TP=8 results from `torchrun --nproc_per_node=8 benchmark/kernels/all_reduce/benchmark_aiter.py`:
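For context, a minimal sketch of the kind of latency measurement such a benchmark performs; this is not the PR's `benchmark_aiter.py`, and the message sizes, dtype, and iteration counts are illustrative assumptions:

```python
# Minimal all-reduce latency benchmark sketch (illustrative only).
# Launch with: torchrun --nproc_per_node=8 this_script.py
import torch
import torch.distributed as dist


def main() -> None:
    dist.init_process_group(backend="nccl")  # RCCL on ROCm builds of PyTorch
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())

    for numel in (2**12, 2**16, 2**20, 2**24):  # sweep message sizes
        x = torch.randn(numel, dtype=torch.bfloat16, device="cuda")
        for _ in range(5):  # warmup
            dist.all_reduce(x)
        torch.cuda.synchronize()

        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        iters = 100
        start.record()
        for _ in range(iters):
            dist.all_reduce(x)
        end.record()
        torch.cuda.synchronize()
        if rank == 0:
            # elapsed_time returns milliseconds; report microseconds per call.
            print(f"{numel:>10} elems: {start.elapsed_time(end) / iters * 1e3:.1f} us")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```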
Checklist