[router] configure workflow retries and timeout based on routerConfig #12252

Merged
slin1237 merged 1 commit into main from worker-init-blocking on Oct 28, 2025
Conversation

@slin1237 Collaborator

  1. Readiness is already context-aware.
  2. Pass routerConfig into the workflow to control timeouts, retries, etc.


@gemini-code-assist Contributor

Summary of Changes

Hello @slin1237, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the router's worker registration workflow by integrating routerConfig for dynamic control over workflow timeouts and retry policies. This change allows the system to adapt its operational bounds based on configured health check parameters, improving the flexibility and robustness of worker management. The modifications include passing the router configuration to the workflow creation function and implementing logic to calculate timeouts and retry attempts dynamically.

Highlights

  • Dynamic Workflow Configuration: The worker registration workflow now dynamically configures its retry attempts and timeouts based on the routerConfig's health check settings, moving away from hardcoded values.
  • Configurable Timeout Calculation: The workflow-level timeout for worker registration is calculated using the router_config.health_check.timeout_secs plus a 30-second buffer, providing a flexible upper bound.
  • Configurable Retry Attempts: The maximum retry attempts for the DetectConnectionModeStep are now dynamically computed to align with the calculated effective timeout, ensuring more adaptive retry policies (see the sketch after this list).
  • Router Configuration Propagation: The create_worker_registration_workflow function now accepts the routerConfig as an argument, and its calls in sgl-router/src/server.rs and test utilities have been updated accordingly.
  • Improved Logging: An informational log message has been added during server startup to display the health check timeout being used and confirm the submission of the worker initialization job.
  • Test Context Updates: Test utility functions in sgl-router/tests/common/mod.rs now clone the RouterConfig when building the AppContext to ensure proper configuration handling.
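A minimal Rust sketch of the calculation described in the highlights above. The health-check timeout field and the 30-second buffer come from this PR's description; the 2-second retry interval, the helper name `workflow_bounds`, and the exact rounding are assumptions for illustration, not the actual implementation in worker_registration.rs.

```rust
use std::time::Duration;

/// Illustrative only: derive the workflow timeout and retry budget from the
/// router's health-check settings instead of using hardcoded values.
fn workflow_bounds(health_check_timeout_secs: u64) -> (Duration, u32) {
    // Workflow-level timeout: the configured health-check timeout plus a 30s buffer.
    let effective_timeout = Duration::from_secs(health_check_timeout_secs + 30);

    // Assumed per-attempt retry interval for DetectConnectionModeStep.
    const RETRY_INTERVAL_SECS: u64 = 2;

    // Size the retry budget so attempts * interval stays within the effective timeout.
    let max_attempts = (effective_timeout.as_secs() / RETRY_INTERVAL_SECS).max(1) as u32;

    (effective_timeout, max_attempts)
}

fn main() {
    let (timeout, attempts) = workflow_bounds(60);
    println!("effective timeout: {}s, max attempts: {}", timeout.as_secs(), attempts);
}
```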

@gemini-code-assist bot left a comment

Code Review

This pull request makes the worker registration workflow's timeout and retry behavior configurable based on the routerConfig. This is a good improvement for flexibility. The changes in server.rs and test files to pass the configuration are correct. However, I've found a logical issue in the new calculation for max_attempts in worker_registration.rs. The current formula can result in a very high number of attempts that is not consistent with the configured timeout, potentially causing the workflow step to time out prematurely. I've left a detailed comment with an example and a suggestion for revision. I also pointed out a minor inaccuracy in a comment to improve clarity.

@slin1237 force-pushed the worker-init-blocking branch 2 times, most recently from e3e09da to 0dd01cd on October 28, 2025 06:16
@slin1237 force-pushed the worker-init-blocking branch from 0dd01cd to b863aed on October 28, 2025 07:05
@slin1237 merged commit d85d6db into main on Oct 28, 2025
36 of 40 checks passed
@slin1237 deleted the worker-init-blocking branch on October 28, 2025 07:41
@brunamagrinidacruz commented Oct 28, 2025

Hello. I need the feature implemented in this commit, where max_retries can be configured via a parameter. How long will it take for your changes to become available on pip (https://pypi.org/project/sglang-router/)?

Currently, my model takes more than 10 minutes to load, so while the router is trying to discover it via Kubernetes, it hits max_retries = 100 and returns an error.

Note: Sorry if this is not the right channel to ask; I wasn't able to find the pip release cadence. I did look at the tags, but the one on pip is so far behind that I am afraid I am not looking in the right place.

@nathanielherman

@slin1237 I'm confused by the logic of this PR — I believe it's basically capping the timeout for worker registration to health_check_timeout + 30 instead of the previous hardcoded 7200s. But health_check_timeout is a request-level timeout value (i.e., usually very small), which means that this now spends very little time waiting for a worker to be available.

I hit this issue because the router started running and registering workers before a decode worker was ready and only waited ~30s for the decode worker to respond before giving up on it. The only way I can see to resolve that is to increase health_check_timeout to a very large value (which then means normal health checks also have a really long request-level timeout), or manually check worker-readiness on my end before starting the router.
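To make the concern concrete, here is a small sketch with an assumed request-level health-check timeout of 5 seconds; the 7200s figure is the previous hardcoded bound mentioned above, and the numbers are illustrative only.

```rust
fn main() {
    let old_bound_secs = 7200; // previous hardcoded workflow timeout
    let health_check_timeout_secs = 5; // assumed typical request-level timeout
    let new_bound_secs = health_check_timeout_secs + 30; // cap introduced by this PR

    // With a small request-level timeout, worker registration now waits ~35s
    // instead of up to two hours for a slow-starting worker.
    println!("old bound: {old_bound_secs}s, new bound: {new_bound_secs}s");
}
```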

@slin1237 Collaborator Author

Quoting @nathanielherman: "I believe it's basically capping the timeout for worker registration to health_check_timeout + 30 instead of the previous hardcoded 7200s. […]"

Thanks for reporting this. I missed that blind spot. The original issue that prompted this PR was that the hardcoded 7200s wasn't long enough.

I agree with you: the health-check timeout should not be used like this. We should have a proper startup delay (or similar) to handle this correctly. I will get a PR out to address this ASAP.
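A hypothetical Rust sketch of the direction described here: splitting the request-level health-check timeout from a dedicated worker-startup bound. The struct shape and the field name `worker_startup_timeout_secs` are invented for illustration and may not match the actual follow-up change.

```rust
use std::time::Duration;

// Hypothetical split: request-level health checks and worker startup get
// independent bounds, so registration can wait for slow-loading models
// without inflating the per-request health-check timeout.
struct HealthCheckConfig {
    timeout_secs: u64,                // per-request health-check timeout
    worker_startup_timeout_secs: u64, // how long to wait for a worker to become ready
}

fn registration_timeout(cfg: &HealthCheckConfig) -> Duration {
    Duration::from_secs(cfg.worker_startup_timeout_secs)
}

fn main() {
    let cfg = HealthCheckConfig { timeout_secs: 5, worker_startup_timeout_secs: 900 };
    println!(
        "health check: {}s, worker registration: {}s",
        cfg.timeout_secs,
        registration_timeout(&cfg).as_secs()
    );
}
```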

@slin1237 Collaborator Author

Follow-up PR to address this: #13473
