
[fully_async, reward] feat: enable GenRM/DisRM support in fully async training #6044

Open
xiefan46 wants to merge 12 commits into verl-project:main from xiefan46:async-genrm

Conversation


@xiefan46 xiefan46 commented Apr 17, 2026

What does this PR do?

Enable GenRM/DisRM (generative/discriminative reward model) support in fully async training mode.

Previously, fully async mode hardcoded self.use_rm = False, making it impossible to use GPU-based reward models. This PR allows users to deploy a standalone reward model alongside async rollout and
training.

Resolves #5949

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, vllm_omni, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward, fully_async, one_step_off
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

Run nvidia-smi to confirm there are 3 processes running on 3 different GPUs:

(base) root@a5d0d2f2c6d5:~# nvidia-smi
Fri Apr 17 16:16:42 2026
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.126.09             Driver Version: 580.126.09     CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA H100 80GB HBM3          On  |   00000000:2A:00.0 Off |                    0 |
| N/A   33C    P0            204W /  700W |   33800MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA H100 80GB HBM3          On  |   00000000:AB:00.0 Off |                    0 |
| N/A   31C    P0            136W /  700W |   41466MiB /  81559MiB |     31%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   2  NVIDIA H100 80GB HBM3          On  |   00000000:DB:00.0 Off |                    0 |
| N/A   31C    P0            220W /  700W |   40853MiB /  81559MiB |     62%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A           21738      C   ray::WorkerDict                       33792MiB |
|    1   N/A  N/A           22454      C   VLLM::Worker                          41458MiB |
|    2   N/A  N/A           22820      C   ray::CheckpointEngineWorker            1408MiB |
|    2   N/A  N/A           23729      C   VLLM::Worker                          39432MiB |
+-----------------------------------------------------------------------------------------+

E2E test on 3× H100 GPUs

full wandb result: https://wandb.ai/models-xx/verl-test-fully-async-genrm?nw=nwuserfxie46

Actor: Qwen2.5-0.5B-Instruct, GenRM judge: Qwen2.5-3B-Instruct, dataset: GSM8K, algorithm: GRPO. 1 GPU each for rollout, training, and GenRM. 3200 rollout steps (~200 sync rounds, ~750 global steps), ~9 min
total.

Results: GenRM reward (critic/score/mean) rises from ~5.5 at the start to ~9.8-10.0 by step 175+, confirming the model learns to produce responses that satisfy the GenRM judge. grad_norm decreases from ~3.0 to
~0.5-1.3, indicating convergence. The async pipeline runs stably, with trainer idle at ~7-12% and rollouter idle at ~34-50%. Throughput is steady at ~3600-5000 tokens/s. Zero dropped samples, no errors.

reward: [wandb chart]

grad_norm: [wandb chart]

throughput: [wandb chart]

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this
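Since the snippet above is still a template placeholder, here is a hedged sketch of what enabling the standalone reward model might look like from the command line. The entry point, override keys, and values below are illustrative assumptions, not verified against this PR's code; consult the repo's fully-async example scripts for the real names.

```shell
# Hypothetical launch: 1 GPU each for rollout, training, and the GenRM judge,
# mirroring the E2E test setup described in this PR. Key names are illustrative.
python -m verl.experimental.fully_async_policy.main \
    actor_rollout_ref.model.path=Qwen/Qwen2.5-0.5B-Instruct \
    reward_model.enable=True \
    reward_model.model.path=Qwen/Qwen2.5-3B-Instruct \
    trainer.n_gpus_per_node=3
```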

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

- Add GenRM (generative reward model) and DisRM (discriminative reward model)
  support for fully async GRPO training
- Use standalone RewardLoopManager for RM to avoid actor/placement group conflicts
- Add CPU unit test for async GenRM config and E2E test script
- Update CLAUDE.md with project guidance
- Tune 0.5B training config for single-GPU setup
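The second bullet above gives the reward model its own resource pool so it cannot conflict with actor placement groups. A minimal sketch of that role-to-pool mapping under the 3-GPU setup from the test section (the `Role` enum and pool names here are illustrative, not copied from verl):

```python
# Sketch: map each role to a dedicated single-GPU pool so the standalone
# RM never shares a placement group with the actor or rollout workers.
from enum import Enum, auto

class Role(Enum):
    ActorRollout = auto()
    Trainer = auto()
    RewardModel = auto()

# One GPU per pool, matching the 3x H100 E2E test in this PR.
resource_pool_spec = {
    "trainer_pool": [1],   # 1 GPU for training
    "rollout_pool": [1],   # 1 GPU for async rollout
    "rm_pool": [1],        # 1 GPU for the GenRM judge
}
mapping = {
    Role.Trainer: "trainer_pool",
    Role.ActorRollout: "rollout_pool",
    Role.RewardModel: "rm_pool",
}

assert mapping[Role.RewardModel] == "rm_pool"
```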

CLAassistant commented Apr 17, 2026

CLA assistant check
All committers have signed the CLA.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for Generative Reward Models (GenRM) in fully async training mode. Key changes include updating the resource pool manager to support standalone reward model pools, modifying the FullyAsyncRollouter and FullyAsyncTrainer to initialize the RewardLoopManager asynchronously, and adding comprehensive unit and E2E tests. Review feedback highlights critical issues regarding hardcoded resource pool parameters that bypass allocation logic and potential Ray actor name collisions when both the trainer and rollouter attempt to manage the reward loop.

Comment thread verl/experimental/fully_async_policy/fully_async_rollouter.py
The GenRM server and reward_loop_workers are owned by the rollouter.
The trainer only needs its own RewardLoopManager when use_trainer_do_validate=True.
"""
if self.config.async_training.use_trainer_do_validate:

high

When use_trainer_do_validate is enabled, both the FullyAsyncTrainer and the FullyAsyncRollouter will instantiate a RewardLoopManager. Since RewardLoopManager creates Ray actors with fixed names (e.g., reward_loop_worker_{i}), this will lead to a name collision and initialization failure in Ray. This feature currently only works if the Rollouter and Trainer do not both attempt to manage the reward loop actors simultaneously.
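One way to avoid the collision the reviewer describes is to namespace each actor name by its owner, so the trainer and rollouter never both register `reward_loop_worker_{i}`. This is a sketch of the naming idea only, not code from this PR; the helper below is hypothetical:

```python
# Sketch: derive Ray actor names from an owner tag so two managers
# (trainer and rollouter) cannot register the same fixed name.
def reward_loop_actor_names(owner: str, num_workers: int) -> list[str]:
    """Build per-owner names, e.g. 'trainer:reward_loop_worker_0'."""
    return [f"{owner}:reward_loop_worker_{i}" for i in range(num_workers)]

trainer_names = reward_loop_actor_names("trainer", 2)
rollouter_names = reward_loop_actor_names("rollouter", 2)

# The two sets are disjoint, so ray.remote(...).options(name=...) with
# these names cannot collide even when both components spawn workers.
assert set(trainer_names).isdisjoint(rollouter_names)
```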

Comment on lines +349 to +353
loop = asyncio.get_running_loop()
self.reward_loop_manager = await loop.run_in_executor(
    None,
    lambda: RewardLoopManager(config=self.config, rm_resource_pool=None),
)

high

The rm_resource_pool is hardcoded to None, which bypasses the resource management logic. If the trainer is performing validation with a reward model, it should use the resource pool allocated for Role.RewardModel to ensure proper GPU isolation.

Suggested change
-  loop = asyncio.get_running_loop()
-  self.reward_loop_manager = await loop.run_in_executor(
-      None,
-      lambda: RewardLoopManager(config=self.config, rm_resource_pool=None),
-  )
+  rm_resource_pool = self.resource_pool_manager.get_resource_pool(Role.RewardModel) if self.use_rm else None
+  loop = asyncio.get_running_loop()
+  self.reward_loop_manager = await loop.run_in_executor(
+      None,
+      lambda: RewardLoopManager(config=self.config, rm_resource_pool=rm_resource_pool),
+  )

@xiefan46 xiefan46 changed the title from "Async genrm" to "[fully_async, reward] feat: enable GenRM/DisRM support in fully async training" Apr 17, 2026
@xiefan46 xiefan46 marked this pull request as ready for review April 17, 2026 17:13
- Shorten sequences (256+512) to match OPD test
- Increase rollout steps to 150
- Enable wandb logging
- Reduce checkpoint bucket to 256MB to avoid OOM
- Add max_model_len/max_num_batched_tokens
- Disable val_before_train for faster startup
@yyDing1 yyDing1 self-requested a review April 20, 2026 08:09


Development

Successfully merging this pull request may close these issues.

Feature Request: Adding GenRM to Fully Async

2 participants