
Support LoRA in bench_serving oai interface #11318

Merged

zhyncs merged 2 commits into main from lifu/bench-lora on Oct 8, 2025
Conversation

@lifuhuang (Collaborator)

Motivation

Enable bench_serving to benchmark LoRA inference in SGLang and vLLM.

Modifications

Modified async_request_openai_completions and async_request_openai_chat_completions to pass the LoRA name in the request payload, accommodating the different LoRA conventions of SGLang and vLLM (see the review diff below).

Accuracy Tests

Benchmarking and Profiling

Checklist

@gemini-code-assist (Contributor)

Summary of Changes

Hello @lifuhuang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the bench_serving tool by integrating support for benchmarking LoRA inference. The primary goal is to enable performance evaluation of LoRA models across both SGLang and vLLM environments. This is achieved by standardizing how LoRA model identifiers are included in API requests, ensuring seamless compatibility and accurate benchmarking results regardless of the serving backend used.

Highlights

  • LoRA Benchmarking Support: Introduced the capability to benchmark LoRA (Low-Rank Adaptation) inference within the bench_serving utility.
  • Cross-Backend LoRA Compatibility: Implemented a compatibility layer to handle differing LoRA naming conventions between SGLang and vLLM serving backends.
  • API Request Modification: Modified the async_request_openai_completions and async_request_openai_chat_completions functions to correctly pass LoRA model names in the request payload (see the sketch after this list).
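To ground these highlights, here is a minimal sketch of the request body that results when a LoRA name is set. It assumes vLLM's OpenAI-compatible server selects the adapter via the model field while SGLang reads a lora_path field (the convention difference this PR works around); the endpoint URL and adapter name are hypothetical placeholders.

import json
import urllib.request

# Hypothetical endpoint and adapter name; substitute your own.
BASE_URL = "http://localhost:30000/v1/completions"
LORA_NAME = "my-lora-adapter"

payload = {
    "model": "base-model",
    "prompt": "The capital of France is",
    "max_tokens": 16,
}

# Mirror bench_serving's handling: vLLM reads the adapter from "model",
# SGLang reads it from "lora_path", so setting both works on either backend.
if LORA_NAME:
    payload["model"] = LORA_NAME
    payload["lora_path"] = LORA_NAME

request = urllib.request.Request(
    BASE_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(request)  # uncomment to send against a live server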

@lifuhuang lifuhuang requested a review from Fridge003 October 8, 2025 04:08
@lifuhuang lifuhuang changed the title from "Support benchmarking LoRA in bench_serving." to "Support LoRA in bench_serving oai interface" on Oct 8, 2025
@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request adds support for benchmarking LoRA inference in bench_serving.py for both SGLang and vLLM. The changes introduce logic to handle the different LoRA conventions between the two frameworks by setting both model and lora_path in the request payload when a LoRA name is provided.

My main feedback is to address the code duplication for this new LoRA handling logic, which appears in both async_request_openai_completions and async_request_openai_chat_completions. Extracting this logic into a shared helper function would improve maintainability.

Comment on lines +212 to +215
# hack to accommodate different LoRA conventions between SGLang and vLLM.
if request_func_input.lora_name:
    payload["model"] = request_func_input.lora_name
    payload["lora_path"] = request_func_input.lora_name
Severity: medium

This logic for handling LoRA parameters is duplicated in async_request_openai_chat_completions at lines 335-338. To improve maintainability and avoid potential inconsistencies, consider extracting this into a helper function.

For example:

from typing import Any, Dict, Optional

def _add_lora_to_payload(payload: Dict[str, Any], lora_name: Optional[str]):
    """Adds LoRA parameters to the payload for SGLang and vLLM compatibility."""
    if lora_name:
        # Accommodate different LoRA conventions between SGLang and vLLM:
        # vLLM selects the adapter via "model", SGLang via "lora_path".
        payload["model"] = lora_name
        payload["lora_path"] = lora_name

You could then replace this block with _add_lora_to_payload(payload, request_func_input.lora_name) in both places.
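For completeness, a self-contained sketch of how the helper might be wired in. RequestFuncInput below is a trimmed stand-in for bench_serving's dataclass, and build_completions_payload is a hypothetical wrapper for illustration, not code from this PR:

from dataclasses import dataclass
from typing import Any, Dict, Optional

@dataclass
class RequestFuncInput:
    # Trimmed stand-in for bench_serving's RequestFuncInput; only the
    # fields used in this sketch.
    model: str
    prompt: str
    lora_name: Optional[str] = None

def _add_lora_to_payload(payload: Dict[str, Any], lora_name: Optional[str]):
    """Adds LoRA parameters to the payload for SGLang and vLLM compatibility."""
    if lora_name:
        payload["model"] = lora_name
        payload["lora_path"] = lora_name

def build_completions_payload(req: RequestFuncInput) -> Dict[str, Any]:
    # Mirrors the payload construction in async_request_openai_completions.
    payload = {"model": req.model, "prompt": req.prompt}
    _add_lora_to_payload(payload, req.lora_name)
    return payload

print(build_completions_payload(
    RequestFuncInput(model="base-model", prompt="hi", lora_name="my-adapter")
))
# -> {'model': 'my-adapter', 'prompt': 'hi', 'lora_path': 'my-adapter'}

The same one-line call would replace the duplicated block in async_request_openai_chat_completions.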

@Fridge003 Fridge003 added the lora label Oct 8, 2025
@Fridge003 Fridge003 (Collaborator) left a comment

LGTM

@zhyncs zhyncs merged commit 92473e2 into main Oct 8, 2025
137 of 159 checks passed
@zhyncs zhyncs deleted the lifu/bench-lora branch October 8, 2025 08:28
ch-tiger1 pushed a commit to ch-tiger1/sglang that referenced this pull request Oct 9, 2025
lpc0220 pushed a commit to lpc0220/sglang that referenced this pull request Oct 29, 2025
