
Conversation

@ShawnY112358
Contributor

@ShawnY112358 ShawnY112358 commented Nov 24, 2025

Motivation

In the training-inference disaggregation RL scenario, we use update_weights_from_distributed to update SGLang's weights. However, update_weights_from_distributed currently does not support flattened tensors, resulting in significant time overhead during parameter synchronization.
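The gist of the optimization: broadcasting each parameter separately pays the collective-launch overhead thousands of times, while packing everything into one contiguous buffer pays it once. A minimal illustration (not this PR's code; `params`, the source rank, and an initialized process group are assumed):

```python
import torch
import torch.distributed as dist

def broadcast_per_tensor(params, src=0):
    # One collective per parameter: launch/sync overhead dominates for
    # models with thousands of (often small) weight tensors.
    for p in params.values():
        dist.broadcast(p, src=src)

def broadcast_flattened(params, src=0):
    # A single collective over one contiguous buffer amortizes the
    # per-call overhead. Assumes all tensors share a dtype; real
    # bucketing logic would group tensors by dtype first.
    flat = torch.cat([p.detach().reshape(-1) for p in params.values()])
    dist.broadcast(flat, src=src)
    offset = 0
    for p in params.values():
        n = p.numel()
        p.data.copy_(flat[offset : offset + n].view_as(p))
        offset += n
```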

Modifications

Support accelerating update_weights_from_distributed by transferring weights as flattened tensors.

Accuracy Tests

Benchmarking and Profiling

| Model | Before | After |
| --- | --- | --- |
| Qwen3 turbopp (30B) | 5.5 s | 2.5 s |
| Qwen3 Next (80B) | 130 s | 7 s |

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @ShawnY112358, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request optimizes the distributed weight update mechanism within SGLang, specifically targeting scenarios where training and inference are disaggregated. By enabling the use of flattened tensors for weight synchronization, it significantly reduces the performance overhead previously encountered, leading to more efficient and faster parameter updates across distributed systems.

Highlights

  • Performance Improvement for Weight Synchronization: The pull request introduces support for flattened tensors in distributed weight updates, directly addressing a significant time overhead encountered in training-inference disaggregation scenarios.
  • New Input Field for Flattened Tensor Metadata: An optional field, flattened_bucket_meta, has been added to the UpdateWeightsFromDistributedReqInput data structure to carry necessary metadata for handling flattened tensors during weight updates.
  • Conditional Weight Update Logic: The tp_worker now includes logic to conditionally call a new update_bucketed_weights_from_distributed method when flattened_bucket_meta is provided in the request, otherwise falling back to the existing update mechanism.
  • Implementation of Bucketed Weight Update: A new method, update_bucketed_weights_from_distributed, has been added to ModelRunner. This method is responsible for deserializing metadata, broadcasting the flattened tensor, reconstructing individual tensors using FlattenedTensorBucket, and loading them into the model (a sketch of this flow follows below).
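A minimal sketch of that flow, assuming rather than reproducing the PR's APIs (`deserialize_meta` and `get_group` are hypothetical helpers; `FlattenedTensorBucket` is named in the PR, but its constructor and `reconstruct_tensors` call are assumptions):

```python
import torch
import torch.distributed as dist

def update_bucketed_weights_from_distributed(
    model_runner, flattened_bucket_meta, group_name, weight_version
):
    # 1. Deserialize per-tensor metadata: names, shapes, dtypes, and
    #    offsets into the flat buffer. `deserialize_meta` is hypothetical.
    meta = deserialize_meta(flattened_bucket_meta)

    # 2. Allocate the receive buffer with the size/dtype recorded in the
    #    metadata, then broadcast it once from the training ranks.
    flat = torch.empty(meta.total_numel, dtype=meta.dtype,
                       device=model_runner.device)
    dist.broadcast(flat, src=0, group=get_group(group_name))  # hypothetical lookup

    # 3. Rebuild (name, tensor) pairs from the flat buffer and load them.
    bucket = FlattenedTensorBucket(flattened_tensor=flat, metadata=meta)
    model_runner.model.load_weights(bucket.reconstruct_tensors())

    model_runner.weight_version = weight_version
    return True, "weights updated"
```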

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces an optimized path for updating model weights from a distributed source by using flattened tensors, which should reduce parameter synchronization overhead. The changes involve adding a new field to UpdateWeightsFromDistributedReqInput, and implementing the corresponding logic in TpModelWorker and ModelRunner.

The implementation is mostly correct, but I've found a couple of critical issues in the new update_bucketed_weights_from_distributed method in ModelRunner:

  1. An incorrect dtype is used when creating the tensor to receive broadcasted weights.
  2. The weight_version is used without being defined, which will lead to a NameError.

I've provided detailed comments and suggestions to fix these issues. Once these are addressed, the changes should work as intended.
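On the first issue, the review doesn't quote the offending line, but a plausible shape of the bug (names below are illustrative, not the PR's code) is allocating the receive buffer with the model's default dtype instead of the dtype recorded in the bucket metadata:

```python
# Illustrative only -- `meta`, `model_config`, and `device` are assumed names.
# Buggy: buffer dtype taken from the model config; the broadcast payload is
# misinterpreted whenever the sender flattened with a different dtype.
flat = torch.empty(meta.total_numel, dtype=model_config.dtype, device=device)

# Fixed: use the dtype the sender recorded in the flattened-bucket metadata.
flat = torch.empty(meta.total_numel, dtype=meta.dtype, device=device)
```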

Comment on lines 155 to 157
```python
success, message = self.model_runner.update_bucketed_weights_from_distributed(
    recv_req.flattened_bucket_meta, recv_req.group_name,
)
```

critical

The weight_version from recv_req is not being passed to update_bucketed_weights_from_distributed. The called method in model_runner.py attempts to use weight_version, which will cause a NameError. Please pass recv_req.weight_version to the method call.

Suggested change

```diff
 success, message = self.model_runner.update_bucketed_weights_from_distributed(
-    recv_req.flattened_bucket_meta, recv_req.group_name,
+    recv_req.flattened_bucket_meta, recv_req.group_name, recv_req.weight_version,
 )
```
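For completeness, the receiving side in model_runner.py then needs weight_version as a parameter; an assumed shape, based on the review comment rather than the PR's code:

```python
# model_runner.py -- assumed shape implied by the review.
def update_bucketed_weights_from_distributed(
    self, flattened_bucket_meta, group_name, weight_version
):
    # ... broadcast the flat buffer and load the reconstructed weights ...
    self.weight_version = weight_version  # now defined: it is a parameter
    return True, "weights updated"
```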

@ShawnY112358
Contributor Author

@zhaochenyang20 @JD-ETH

@ShawnY112358 ShawnY112358 changed the title [feat]update bucketed weights from distributed [feat] update bucketed weights from distributed Nov 25, 2025
@hebiao064
Collaborator

Could you please share the speedup before and after the change?

Collaborator

@hebiao064 hebiao064 left a comment


LGTM

@ShawnY112358
Contributor Author

ShawnY112358 commented Nov 25, 2025

> Could you please share the speedup before and after the change?

Posted in the PR description.

@JD-ETH

JD-ETH commented Nov 25, 2025

> > Could you please share the speedup before and after the change?
>
> Posted in the PR description.

Wow, that's a lot. Good job!

@Kangyan-Zhou
Collaborator

/tag-and-rerun-ci

@mickqian
Collaborator

please rebase

@Kangyan-Zhou Kangyan-Zhou merged commit 5155016 into sgl-project:main Nov 26, 2025
64 of 77 checks passed
@zhaochenyang20
Collaborator

Thanks! Nicely done.

harvenstar pushed a commit to harvenstar/sglang that referenced this pull request Dec 4, 2025