
[trainer] refactor: code refactor for diffusion training#6042

Open
zhtmike wants to merge 8 commits into verl-project:main from zhtmike:trainer-pr-clean

Conversation

@zhtmike
Contributor

@zhtmike zhtmike commented Apr 17, 2026

What does this PR do?

Major change:

  • Decouple LLM and diffusion configs, losses, etc., to make it easier to integrate new diffusion models/algorithms/engines in the future. (resolves a comment from [5/n][trainer] feat: flowgrpo trainer #5951)
    • Diffusion configs are now all placed in verl/trainer/config/diffusion, including diffusion_actor, dp_diffusion_actor, diffusion_fsdp, diffusion_rollout, etc.
    • Diffusion config classes are now all placed in verl/workers/config/diffusion, including actor, model, rollout.
    • Remove LLM-specific configs such as use_remove_padding, along with unused configs. The content of _generated_diffusion_trainer.yaml is now ~60% smaller.
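The decoupled layout described above can be sketched with plain dataclasses. All class names, field names, and defaults below are illustrative placeholders, not the actual verl definitions:

```python
# Hypothetical sketch of a decoupled diffusion config hierarchy.
# Every name here is an assumption for illustration only.
from dataclasses import dataclass, field


@dataclass
class DiffusionModelConfig:
    path: str = ""
    # LLM-only flags such as use_remove_padding are intentionally absent.


@dataclass
class DiffusionRolloutConfig:
    name: str = "vllm_omni"
    num_inference_steps: int = 10


@dataclass
class DiffusionActorConfig:
    model: DiffusionModelConfig = field(default_factory=DiffusionModelConfig)
    rollout: DiffusionRolloutConfig = field(default_factory=DiffusionRolloutConfig)


cfg = DiffusionActorConfig()
print(cfg.rollout.name)  # → vllm_omni
```

Keeping diffusion-specific dataclasses in their own module means new diffusion engines only touch this hierarchy, never the LLM config tree.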

Other changes:

  • Clean up the diffusion agent loop output; remove dead code originally borrowed from the LLM agent loop (e.g., input_id, attention_mask) that is not currently used in diffusion training.
  • Refactor away extra_configs in the diffusion configs; it was too loosely typed and could confuse users. (resolves a comment from [5/n][trainer] feat: flowgrpo trainer #5951)
  • Decouple LLM and diffusion losses to make diffusion losses clearer.
  • Code cleanup for diffusion trainer, diffusion agent loop, etc.
  • Change the vllm-omni rollout import to registration, following the diffusion trainer pattern. (resolve comment from [5/n][trainer] feat: flowgrpo trainer #5951)

Add concise overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review.

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, vllm_omni, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward, fully_async, one_step_off
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the diffusion training pipeline by introducing a modular configuration structure and dedicated dataclasses for diffusion-specific components. Key changes include the implementation of registries for diffusion loss functions, advantage estimators, and vLLM-Omni pipelines, alongside the removal of the response_mask requirement and legacy reward migration logic. The review feedback highlights a runtime error in configuration dictionary conversion for dataclasses, potential dynamic attribute assignment failures in Pydantic models for metrics, and a type mismatch for the loss_scale_factor parameter.
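As an aside on the dictionary-to-dataclass conversion issue the review flags: below is a generic sketch (not the verl code; all names are made up) of how passing a nested config dict straight to a dataclass constructor leaves inner fields as plain dicts, and how a recursive builder avoids that:

```python
# Illustrative sketch of a dict-to-dataclass conversion pitfall.
# OptimConfig/ActorConfig are hypothetical stand-ins, not verl classes.
from dataclasses import dataclass, fields, is_dataclass


@dataclass
class OptimConfig:
    lr: float = 1e-4


@dataclass
class ActorConfig:
    optim: OptimConfig = None


raw = {"optim": {"lr": 3e-4}}

# Naive construction: the nested field stays a dict, so later attribute
# access like cfg.optim.lr raises AttributeError at runtime.
naive = ActorConfig(**raw)
print(type(naive.optim))  # <class 'dict'>


def from_dict(cls, data: dict):
    """Recursively build dataclasses so nested fields are typed."""
    kwargs = {}
    for f in fields(cls):
        value = data.get(f.name)
        if is_dataclass(f.type) and isinstance(value, dict):
            value = from_dict(f.type, value)
        kwargs[f.name] = value
    return cls(**kwargs)


fixed = from_dict(ActorConfig, raw)
print(fixed.optim.lr)  # 0.0003
```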

Comment thread verl/experimental/agent_loop/diffusion_agent_loop.py
Comment thread verl/experimental/agent_loop/diffusion_agent_loop.py
Comment thread verl/workers/config/diffusion/actor.py Outdated
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@zhtmike
Contributor Author

zhtmike commented Apr 17, 2026

@knlnguyen1802 please take a look at the vllm-omni change, thanks. I think it would be better to load the custom class directly on the vllm-omni side?

