
Add diffusion reward loop #3

Merged

zhtmike merged 8 commits into zhtmike:verl-omni from chenyingshu:verl-omni-reward on Jan 8, 2026
Conversation

@chenyingshu

What does this PR do?

Initialize diffusion reward loop pipeline.
Add & pass unit test for DiffusionRewardLoopManager

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementations, new model support), validate with experiment(s) and show results such as training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this
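The usage placeholder above is left unfilled in the PR. As a purely illustrative sketch of the async reward-scoring pattern this PR introduces for image outputs (every name below is hypothetical and self-contained, not the PR's actual API), scoring a batch concurrently might look like:

```python
import asyncio


async def score_image(image_id: int) -> float:
    """Stand-in for one asynchronous reward-model call on a generated image."""
    await asyncio.sleep(0)  # placeholder for a real async RM/OCR request
    return 1.0 / (1 + image_id)


async def score_batch(image_ids: list[int]) -> list[float]:
    """Score all images of a rollout batch concurrently, as a reward loop
    manager for diffusion outputs would fan out its reward calls."""
    return await asyncio.gather(*(score_image(i) for i in image_ids))


rewards = asyncio.run(score_batch([0, 1, 3]))
```

The point of the sketch is only the fan-out: each image's reward call is awaited concurrently rather than sequentially, which is what makes an async reward loop worthwhile when the reward model is a remote service.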

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all of the following items before requesting a review; otherwise, the reviewer may deprioritize this PR.


Copilot AI left a comment


Pull request overview

This PR adds a diffusion reward loop pipeline to support image-based reward computation in reinforcement learning workflows. It introduces a specialized reward loop manager (DiffusionRewardLoopManager) and reward manager (DiffusionRewardManager) designed to handle image outputs from diffusion models, along with tests and reward computation functions for OCR-based evaluation.

  • Adds DiffusionRewardLoopManager and DiffusionRewardLoopWorker for distributed reward computation on image data
  • Implements DiffusionRewardManager with support for async reward scoring of image outputs
  • Provides OCR reward computation function with generative reward model (GRM) support using image-to-text extraction

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 11 comments.

Show a summary per file:

  • verl/experimental/reward_loop/reward_manager/diffusion.py: Implements DiffusionRewardManager, which extends RewardManagerBase for image-based reward computation
  • verl/experimental/reward_loop/diffusion_reward_loop.py: Core implementation of DiffusionRewardLoopWorker and DiffusionRewardLoopManager for distributed image reward processing
  • verl/experimental/reward_loop/reward_manager/__init__.py: Registers DiffusionRewardManager in the module exports
  • verl/experimental/reward_loop/__init__.py: Exports DiffusionRewardLoopManager for external use
  • tests/experimental/reward_loop/test_diffusion_reward_model_genrm.py: Adds a unit test for DiffusionRewardLoopManager with OCR-based image evaluation
  • tests/experimental/reward_loop/reward_fn.py: Adds a compute_score_ocr function for OCR reward computation using a GRM with Levenshtein-distance scoring
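The file list mentions Levenshtein-distance scoring for the OCR reward. As a minimal self-contained sketch of that idea (the actual compute_score_ocr in the PR may normalize or weight differently), a reward in [0, 1] can be derived from edit distance between the GRM's extracted text and the reference text:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance
    (insertion/deletion/substitution, cost 1 each)."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,                # deletion
                curr[j - 1] + 1,            # insertion
                prev[j - 1] + (ca != cb),   # substitution
            ))
        prev = curr
    return prev[-1]


def ocr_score(extracted: str, reference: str) -> float:
    """Reward in [0, 1]: 1.0 for an exact match, decaying with edit distance
    normalized by the longer string's length."""
    if not reference:
        return 1.0 if not extracted else 0.0
    dist = levenshtein(extracted, reference)
    return max(0.0, 1.0 - dist / max(len(extracted), len(reference)))
```

For example, `ocr_score("helo", "hello")` is 0.8: one edit against a maximum length of five.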


Comment on lines 235 to 242
def prepare_query(self, chat, prompt, image_base64: str) -> list:
    query = [
        {
            "type": "image_url",
            "image_url": {"url": image_base64},
        },
    ]
    return query

Copilot AI Jan 7, 2026


The parameters chat and prompt are not used in the prepare_query method body. Consider removing them if they are not needed, or include them in the query if they were intended to be used.
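A self-contained sketch of what the query construction could look like once the unused parameters are dropped (the to_data_url helper below is my illustration of how the base64 image URL is typically produced, not part of the PR):

```python
import base64


def to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a data URL suitable for an image_url field."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"


def prepare_query(image_base64: str) -> list:
    """Build the OpenAI-style chat content list, as in the reviewed snippet,
    with the unused chat/prompt parameters removed."""
    return [{"type": "image_url", "image_url": {"url": image_base64}}]


query = prepare_query(to_data_url(b"\x89PNG..."))
```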

Author
fixed

@zhtmike
Owner

zhtmike commented Jan 7, 2026

let me know if it is ok for merge

@@ -0,0 +1,110 @@
# Copyright 2024 Bytedance Ltd. and/or its affiliates
# Copyright 2026 Huawei Technologies Co., Ltd
Owner

@zhtmike Jan 7, 2026


# Copyright (c) 2025 Huawei Technologies Co., Ltd. All Rights Reserved. 

Author

fixed

Owner

The full line Copyright (c) 2025 Huawei Technologies Co., Ltd. All Rights Reserved. is needed; CI checks the whole sentence.

Author

fixed

@zhtmike zhtmike merged commit 03b166e into zhtmike:verl-omni Jan 8, 2026
zhtmike pushed a commit that referenced this pull request Jan 9, 2026
* init reward; add ocr reward

* update disrm input

* add unit test

* pass ut

* fix typos/bugs

* update copyright
zhtmike added a commit that referenced this pull request Jan 9, 2026
* add entroypoint (#1)

* add training engine (#2)

* add training engine

* fix init

* fix typs

* move folders & make for two-forward pass in training loop (#4)

* Add diffusion reward loop (#3)

* init reward; add ocr reward

* update disrm input

* add unit test

* pass ut

* fix typos/bugs

* update copyright

* [fix] update customized reward func in UT (#5)

* init reward; add ocr reward

* update disrm input

* add unit test

* pass ut

* fix typos/bugs

* update copyright

* update customized reward_fn

* init dataset for Qwen-Image

* pass UT

* update return, update UT

* pass UT

* align with rl_dataset

* pass UT

* update filter long prompts

* debug

* clean code

---------

Co-authored-by: Cheung Ka Wai <zhtmike@gmail.com>
zhtmike pushed a commit that referenced this pull request Jan 26, 2026
zhtmike added a commit that referenced this pull request Jan 26, 2026
zhtmike added a commit that referenced this pull request Jan 27, 2026
* add entroypoint (#1)

* add training engine (#2)

* add training engine

* fix init

* fix typs

* move folders & make for two-forward pass in training loop (#4)

* Add diffusion reward loop (#3)

* init reward; add ocr reward

* update disrm input

* add unit test

* pass ut

* fix typos/bugs

* update copyright

* [fix] update customized reward func in UT (#5)

* init reward; add ocr reward

* update disrm input

* add unit test

* pass ut

* fix typos/bugs

* update copyright

* update customized reward_fn

* Update 20260109 (#8)

* Update 20260109

* update

* fix CI

* [data] feat: Add dataset for Qwen-Image (#6)

* add entroypoint (#1)

* add training engine (#2)

* add training engine

* fix init

* fix typs

* move folders & make for two-forward pass in training loop (#4)

* Add diffusion reward loop (#3)

* init reward; add ocr reward

* update disrm input

* add unit test

* pass ut

* fix typos/bugs

* update copyright

* [fix] update customized reward func in UT (#5)

* init reward; add ocr reward

* update disrm input

* add unit test

* pass ut

* fix typos/bugs

* update copyright

* update customized reward_fn

* init dataset for Qwen-Image

* pass UT

* update return, update UT

* pass UT

* align with rl_dataset

* pass UT

* update filter long prompts

* debug

* clean code

---------

Co-authored-by: Cheung Ka Wai <zhtmike@gmail.com>

* add new config; debug actor

* debug; add reward config; add adv, policy loss

* debug reward loop

* init diffusers engine UT

* debug

* debug

* deubg actor forward

* debug

* merge

* add UT for adv and loss

* pass adv&loss UTs; pass engine backward UT

* clean debug code

---------

Co-authored-by: Cheung Ka Wai <zhtmike@gmail.com>
zhtmike added a commit that referenced this pull request Jan 29, 2026

* update to align verl data format

* debug

---------

Co-authored-by: Cheung Ka Wai <zhtmike@gmail.com>