[TRTLLM-10076][feat] Serve CLI improvements: renames, new flags, and mm_embedding_serve enhancements#12105

Open
JunyiXu-nv wants to merge 1 commit into NVIDIA:main from JunyiXu-nv:dev-junyix-feat-serve-cli-misc-improvements

Conversation

@JunyiXu-nv (Collaborator) commented Mar 11, 2026

  • TRTLLM-10076: Update --tokenizer description for PyTorch backend, add --hf_revision alias for --revision with deprecation warning, support hf_revision key in YAML config, add --enable_attention_dp flag
  • TRTLLM-10079: mm_embedding_serve: add --config alias for --extra_encoder_options, expose --hf_revision, --free_gpu_memory_fraction, --tensor_parallel_size
  • TRTLLM-10229: Add --config alias for --config_file in disaggregated and disaggregated_mpi_worker commands
  • TRTLLM-10078: Improve --server_role help message with role descriptions
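The alias-plus-deprecation pattern described in the bullets above can be sketched as follows. This is a minimal illustration using argparse so it is self-contained; the actual trtllm-serve CLI is Click-based, and the function names here are hypothetical, not the real serve.py code:

```python
import argparse
import sys
import warnings


def build_parser():
    # Illustrative parser: --hf_revision is the canonical flag, --revision
    # is kept as a legacy alias writing to the same destination.
    parser = argparse.ArgumentParser(prog="trtllm-serve")
    parser.add_argument(
        "--hf_revision", "--revision", dest="hf_revision", default=None,
        help="Hugging Face model revision (branch, tag, or commit id).")
    return parser


def parse_args(argv=None):
    argv = sys.argv[1:] if argv is None else argv
    # Warn only when the legacy spelling was actually typed; both flags
    # feed the same hf_revision value.
    if "--revision" in argv:
        warnings.warn(
            "--revision is deprecated; use --hf_revision instead.",
            DeprecationWarning, stacklevel=2)
    return build_parser().parse_args(argv)
```

Because both option strings share one `dest`, downstream code only ever reads `args.hf_revision`, regardless of which spelling the user passed.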

Made-with: Cursor

Summary by CodeRabbit

Release Notes

  • New Features

    • Added --enable_attention_dp flag to the serve command to enable attention data parallelism.
  • Deprecations

    • Use --hf_revision instead of --revision.
    • Use --config instead of --extra_encoder_options and --config_file.
    • Deprecation warnings will guide migration to new parameter names.
  • Improvements

    • Enhanced help documentation for tokenizer and server role configuration.
    • Extended serve_encoder with additional parameters: revision, free_gpu_memory_fraction, and tensor_parallel_size.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

@JunyiXu-nv JunyiXu-nv requested a review from a team as a code owner March 11, 2026 07:45
@JunyiXu-nv JunyiXu-nv requested a review from Superjomn March 11, 2026 07:45
@JunyiXu-nv JunyiXu-nv requested review from QiJune and arysef March 11, 2026 07:45

coderabbitai bot commented Mar 11, 2026

📝 Walkthrough

The changes add support for a new enable_attention_dp feature flag through CLI argument plumbing in the serve command and related functions. Additionally, deprecation warnings are introduced for legacy options (--revision, --extra_encoder_options, --config_file), and new parameters are added to serve_encoder. A preprocessing step maps hf_revision to revision in llm_args handling.

Changes

  • tensorrt_llm/commands/serve.py (CLI and argument plumbing): Added the --enable_attention_dp option to the serve command; extended the serve_encoder signature with revision, free_gpu_memory_fraction, and tensor_parallel_size parameters; introduced deprecation warnings for the legacy flags (--revision → --hf_revision, --extra_encoder_options → --config, --config_file → --config); improved the help text for the tokenizer and server-role options; propagated enable_attention_dp through the _serve_llm and launch_server paths (+84/-24).
  • tensorrt_llm/llmapi/llm_args.py (argument preprocessing): Added a preprocessing step that maps hf_revision to revision in llm_args_dict before downstream processing, using pop/setdefault (+3/-0).
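The hf_revision mapping described above can be sketched as below. This is a minimal sketch of the pop/setdefault idiom; the function name and dict shape are assumptions for illustration, not the actual tensorrt_llm/llmapi/llm_args.py code:

```python
def preprocess_llm_args(llm_args_dict):
    # If a YAML config (or CLI layer) supplied `hf_revision`, move it onto
    # the canonical `revision` key. setdefault ensures an explicitly set
    # `revision` is never clobbered by the alias.
    if "hf_revision" in llm_args_dict:
        hf_revision = llm_args_dict.pop("hf_revision")
        llm_args_dict.setdefault("revision", hf_revision)
    return llm_args_dict
```

With this ordering, `{"hf_revision": "r1"}` becomes `{"revision": "r1"}`, while `{"hf_revision": "r1", "revision": "r0"}` keeps the explicit `"r0"`.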

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: add docstrings to the functions missing them.
  • Description check ⚠️ Warning — The PR description lists the JIRA tickets and changes but leaves the Description and Test Coverage sections empty and the PR Checklist unverified. Resolution: fill in the Description and Test Coverage sections and check off the applicable checklist items.

✅ Passed checks (1 passed)

  • Title check ✅ Passed — The title accurately describes the main changes: new CLI flags (--enable_attention_dp, --hf_revision), rename/alias support, and mm_embedding_serve enhancements.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tensorrt_llm/commands/serve.py`:
- Around line 1122-1127: The disaggregated_mpi_worker entrypoint should mirror the deprecation behavior of the disaggregated command. Detect whether "--config_file" was passed by checking sys.argv, and call warnings.warn(..., DeprecationWarning, stacklevel=2) immediately after the disaggregated_mpi_worker docstring/entry log, so users see the same deprecation message as the disaggregated command.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: c1f1f5fa-f1ab-4275-9a8a-9c393cfda74f

📥 Commits

Reviewing files that changed from the base of the PR and between f7255e0 and b398e02.

📒 Files selected for processing (2)
  • tensorrt_llm/commands/serve.py
  • tensorrt_llm/llmapi/llm_args.py

@JunyiXu-nv JunyiXu-nv force-pushed the dev-junyix-feat-serve-cli-misc-improvements branch from b398e02 to 57a6655 Compare March 11, 2026 07:51
… improvements: renames, new flags, and mm_embedding_serve enhancements

- TRTLLM-10076: Update --tokenizer description for PyTorch backend,
  add --hf_revision alias for --revision with deprecation warning,
  support hf_revision key in YAML config, add --enable_attention_dp flag
- TRTLLM-10079: mm_embedding_serve: add --config alias for
  --extra_encoder_options, expose --hf_revision, --free_gpu_memory_fraction,
  --tensor_parallel_size
- TRTLLM-10229: Add --config alias for --config_file in disaggregated
  and disaggregated_mpi_worker commands
- TRTLLM-10078: Improve --server_role help message with role descriptions

Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
Made-with: Cursor
@JunyiXu-nv JunyiXu-nv force-pushed the dev-junyix-feat-serve-cli-misc-improvements branch from 57a6655 to 1386e53 Compare March 11, 2026 08:14
@JunyiXu-nv
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #38560 [ run ] triggered by Bot. Commit: 1386e53 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #38560 [ run ] completed with state SUCCESS. Commit: 1386e53
/LLM/main/L0_MergeRequest_PR pipeline #29902 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@JunyiXu-nv
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #38585 [ run ] triggered by Bot. Commit: 1386e53 Link to invocation
