[fully_async, reward] feat: enable GenRM/DisRM support in fully async training#6044
xiefan46 wants to merge 12 commits into `verl-project:main`
Conversation
- Add GenRM (generative reward model) and DisRM (discriminative reward model) support for fully async GRPO training
- Use a standalone RewardLoopManager for the RM to avoid actor/placement-group conflicts
- Add a CPU unit test for the async GenRM config and an E2E test script
- Update CLAUDE.md with project guidance
- Tune the 0.5B training config for a single-GPU setup
Code Review
This pull request introduces support for Generative Reward Models (GenRM) in fully async training mode. Key changes include updating the resource pool manager to support standalone reward model pools, modifying the FullyAsyncRollouter and FullyAsyncTrainer to initialize the RewardLoopManager asynchronously, and adding comprehensive unit and E2E tests. Review feedback highlights critical issues regarding hardcoded resource pool parameters that bypass allocation logic and potential Ray actor name collisions when both the trainer and rollouter attempt to manage the reward loop.
```python
        The GenRM server and reward_loop_workers are owned by the rollouter.
        The trainer only needs its own RewardLoopManager when use_trainer_do_validate=True.
        """
        if self.config.async_training.use_trainer_do_validate:
```
When use_trainer_do_validate is enabled, both the FullyAsyncTrainer and the FullyAsyncRollouter will instantiate a RewardLoopManager. Since RewardLoopManager creates Ray actors with fixed names (e.g., reward_loop_worker_{i}), this will lead to a name collision and initialization failure in Ray. This feature currently only works if the Rollouter and Trainer do not both attempt to manage the reward loop actors simultaneously.
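One way to avoid such a collision (a sketch only, not the PR's actual fix) is to include the owning component in each actor name so the trainer's and the rollouter's workers request distinct names. The `owner` prefix and the helper below are hypothetical:

```python
# Sketch: derive per-owner Ray actor names so the trainer's and the rollouter's
# RewardLoopManager instances never request the same actor name.
# The naming scheme below is hypothetical, not the PR's implementation.

def reward_worker_actor_name(owner: str, index: int) -> str:
    """Build a unique actor name, e.g. 'rollouter/reward_loop_worker_0'."""
    return f"{owner}/reward_loop_worker_{index}"

trainer_names = {reward_worker_actor_name("trainer", i) for i in range(4)}
rollouter_names = {reward_worker_actor_name("rollouter", i) for i in range(4)}

# Disjoint name sets -> no "actor name already taken" failure in Ray.
assert trainer_names.isdisjoint(rollouter_names)
```

With fixed names like `reward_loop_worker_{i}`, the second component to initialize would fail; a per-owner prefix (or a Ray namespace) sidesteps that.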
```python
loop = asyncio.get_running_loop()
self.reward_loop_manager = await loop.run_in_executor(
    None,
    lambda: RewardLoopManager(config=self.config, rm_resource_pool=None),
)
```
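The `run_in_executor` call here keeps the event loop responsive while a blocking constructor runs in a worker thread. A minimal standalone sketch of the same pattern, with a stand-in class instead of `RewardLoopManager`:

```python
import asyncio
import time

class SlowManager:
    """Stand-in for a manager whose __init__ blocks (e.g. spawning workers)."""
    def __init__(self):
        time.sleep(0.1)  # simulate blocking setup work
        self.ready = True

async def init_manager():
    loop = asyncio.get_running_loop()
    # Run the blocking constructor in the default thread pool so other
    # coroutines on this event loop keep making progress meanwhile.
    manager = await loop.run_in_executor(None, SlowManager)
    return manager

manager = asyncio.run(init_manager())
assert manager.ready
```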
The rm_resource_pool is hardcoded to None, which bypasses the resource management logic. If the trainer is performing validation with a reward model, it should use the resource pool allocated for Role.RewardModel to ensure proper GPU isolation.
Suggested change:

```diff
+rm_resource_pool = self.resource_pool_manager.get_resource_pool(Role.RewardModel) if self.use_rm else None
 loop = asyncio.get_running_loop()
 self.reward_loop_manager = await loop.run_in_executor(
     None,
-    lambda: RewardLoopManager(config=self.config, rm_resource_pool=None),
+    lambda: RewardLoopManager(config=self.config, rm_resource_pool=rm_resource_pool),
 )
```
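The pattern in that suggestion boils down to: resolve the pool from the resource-pool manager only when a reward model is actually in use, and pass `None` otherwise. A self-contained sketch with stub classes (everything except the `Role.RewardModel` lookup is a stand-in):

```python
from enum import Enum, auto

class Role(Enum):
    RewardModel = auto()

class ResourcePoolManager:
    """Stub: maps roles to (fake) resource pools."""
    def __init__(self, pools):
        self._pools = pools

    def get_resource_pool(self, role):
        return self._pools[role]

def resolve_rm_pool(manager, use_rm: bool):
    # Only look up the RewardModel pool when a GPU reward model is enabled;
    # None otherwise mirrors the rule-based (no-GPU) reward path.
    return manager.get_resource_pool(Role.RewardModel) if use_rm else None

mgr = ResourcePoolManager({Role.RewardModel: "rm_pool_0"})
assert resolve_rm_pool(mgr, use_rm=True) == "rm_pool_0"
assert resolve_rm_pool(mgr, use_rm=False) is None
```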
…avoid overriding base class; add TODO for upstream async fix
- Shorten sequences (256+512) to match the OPD test
- Increase rollout steps to 150
- Enable wandb logging
- Reduce the checkpoint bucket to 256MB to avoid OOM
- Add max_model_len/max_num_batched_tokens
- Disable val_before_train for faster startup
What does this PR do?
Enable GenRM/DisRM (generative/discriminative reward model) support in fully async training mode.
Previously, fully async mode hardcoded `self.use_rm = False`, making it impossible to use GPU-based reward models. This PR lets users deploy a standalone reward model alongside async rollout and training.
Resolves #5949
Checklist Before Starting
- [ ] Title follows `[{modules}] {type}: {description}` (this will be checked by the CI). `{modules}` include `fsdp`, `megatron`, `veomni`, `sglang`, `vllm`, `vllm_omni`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`, `fully_async`, `one_step_off`, like `[megatron, fsdp, doc]`. `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`. For breaking changes, prepend `[BREAKING]` to the beginning of the title, e.g. `[BREAKING][fsdp, megatron] feat: dynamic batching`.

Test
E2E test on 3× H100 GPUs. Run `nvidia-smi` to make sure there are 3 processes running on 3 different GPUs.
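That check can be scripted: `nvidia-smi --query-compute-apps=pid,gpu_uuid --format=csv,noheader` lists one `pid, gpu_uuid` row per compute process. A small sketch that counts distinct GPUs from such output (the sample rows are illustrative, not captured from the actual run):

```python
# Sketch: count how many distinct GPUs host at least one compute process,
# given CSV output from
#   nvidia-smi --query-compute-apps=pid,gpu_uuid --format=csv,noheader
# The sample below is made up for illustration.

def distinct_gpus(csv_output: str) -> int:
    """Return the number of distinct GPU UUIDs with a running process."""
    gpus = set()
    for line in csv_output.strip().splitlines():
        _pid, gpu_uuid = (field.strip() for field in line.split(","))
        gpus.add(gpu_uuid)
    return len(gpus)

sample = """\
12345, GPU-aaaa
12346, GPU-bbbb
12347, GPU-cccc
"""
# Expect rollout, training, and GenRM each on their own GPU.
assert distinct_gpus(sample) == 3
```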
full wandb result: https://wandb.ai/models-xx/verl-test-fully-async-genrm?nw=nwuserfxie46
Actor: Qwen2.5-0.5B-Instruct; GenRM judge: Qwen2.5-3B-Instruct; dataset: GSM8K; algorithm: GRPO. One GPU each for rollout, training, and GenRM. 3200 rollout steps (~200 sync rounds, ~750 global steps), ~9 min total.
Results: the GenRM reward (critic/score/mean) rises from ~5.5 at the start to ~9.8-10.0 by step 175+, confirming the model learns to produce responses that satisfy the GenRM judge. grad_norm decreases from ~3.0 to ~0.5-1.3, indicating convergence. The async pipeline runs stably with trainer idle at ~7-12% and rollouter idle at ~34-50%. Throughput is steady at ~3600-5000 tokens/s. Zero dropped samples, no errors.
reward

grad_norm

throughput

API and Usage Example
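The PR leaves the usage snippet unfilled; as a hedged illustration, overrides of roughly this shape are involved. Only `async_training.use_trainer_do_validate` appears in this PR's discussion; the other key names are assumptions and may not match verl's actual config schema.

```yaml
# Illustrative only -- key names other than async_training.use_trainer_do_validate
# are assumptions and may differ from verl's actual config schema.
reward_model:
  enable: true                      # deploy a standalone GenRM/DisRM
async_training:
  use_trainer_do_validate: true     # trainer validates with its own RewardLoopManager
```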
Design & Code Changes
Checklist Before Submitting
Important
Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.
- Run `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`.
- Request CI via the `ci-request` channel in the `verl` Slack workspace. (If not accessible, please try the Feishu group (飞书群).)
- If the PR changes the `recipe` submodule, please also update the reference to the submodule commit via `git submodule update --remote` or `cd recipe && git pull origin main`.