feat(tasks): add reasoning collection for LLaVA-OV 1.5 RL #1208
Conversation
I'd like to put its target to

Since I am planning some designs for supporting instruct/reasoning model evaluation, I think this should mostly be handled on the models side.
refactor(models/chat): improve async_openai code structure and readability (#1102)
* refactor(models/chat): extract prepare_messages method
* refactor(models/chat): refactor async concurrency control and add docstrings
  - Extract _AdaptiveConcurrencyTracker for cleaner state management
  - Split generate_until's run() into focused helper methods
  - Add comprehensive docstrings to all new methods
  - Simplify run() from 130 lines to 8 lines
  - Update async_openai_qwen3_vl.py with class docstring
* style: auto-fix lint (black + isort)
* refactor: replace async_openai_qwen3_vl class with message_format parameter
  - Add message_format param to AsyncOpenAIChat (default='openai', supports 'qwen3_vl')
  - Extract _build_video_kwargs() to eliminate DRY violation
  - Remove separate async_openai_qwen3_vl.py and its registry entry
  - Fix missing f-string prefix in tool response formatting
  - Fix duplicate .gitignore entry
* refactor(models/chat): add message_format parameter to support qwen3_vl
  - Add message_format parameter to AsyncOpenAIChat
  - Support both 'default' and 'qwen3_vl' message formats
  - Remove async_openai_qwen3_vl.py (no longer needed)
  - Unregister async_openai_qwen3_vl from model registry
  - Fix string formatting for tool call tags
* fix tool response tag format

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Bo Li <drluodian@gmail.com>
refactor(tasks): merge cn and en reasoning into unified structure
- Combine cn_reasoning and en_reasoning into single reasoning directory
- Share common template yaml across both cn and en reasoning tasks
- Unified utils.py handles cn/en via DATASET_NAME environment variable
- Keep separate group files for mmbench_cn_reasoning and mmbench_en_reasoning

refactor(tasks): unify cn and en reasoning with single group
- Remove environment variable dependency
- Add separate doc_to_text/doc_to_messages for cn and en in utils.py
- Template yaml shared, specific functions defined in task yaml
- Single mmbench_reasoning group containing both cn and en dev tasks
- Unified process_results without data_source distinction

feat(tasks): add test split for mmbench reasoning tasks
- Add mmbench_cn_test_reasoning and mmbench_en_test_reasoning
- Add test_split to dev reasoning configs
- Update mmbench_reasoning group to include all four tasks

feat(tasks): add MME-RealWorld reasoning tasks
- Add mme_realworld_reasoning (en) and mme_realworld_cn_reasoning (cn)
- Include doc_to_messages for both languages with reasoning prompts
- Support accuracy and format scoring metrics

feat(tasks): add SEED-Bench reasoning tasks
- Add seedbench_reasoning with doc_to_messages for reasoning format
- Add seedbench_2_plus_reasoning with doc_to_messages for reasoning format
- Support accuracy and format scoring metrics for both benchmarks

feat(tasks): add CV-Bench reasoning tasks
- Add cv_bench_reasoning, cv_bench_2d_reasoning, cv_bench_3d_reasoning
- Include doc_to_messages for reasoning format
- Support accuracy and format scoring metrics
fix(reasoning): improve mcq matching with normalized comparison
- Apply parse_mcq to ground_truth for consistency
- Use case-insensitive comparison for MCQ answers
- Strip whitespace for more robust matching
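The normalization steps above can be sketched as follows. This is a minimal illustration, not the repository's actual implementation: `parse_mcq` exists in the PR, but its body and signature here are assumptions, and the regex is a simplified stand-in for the real option-letter extraction.

```python
import re

def parse_mcq(text: str) -> str:
    """Extract a trailing MCQ option letter (A-E); fall back to the raw text.

    Hypothetical body: a sketch of the normalization described above, applied
    to both the prediction and the ground truth for consistency.
    """
    text = text.strip()
    # Match patterns like "A", "A.", "(A)", or "Answer: C." at the end of the text.
    m = re.search(r"\(?\b([A-Ea-e])\b\)?\.?\s*$", text)
    return m.group(1).upper() if m else text.upper()

def mcq_match(prediction: str, ground_truth: str) -> bool:
    # Normalize both sides identically: parse_mcq, strip whitespace, case-fold.
    return parse_mcq(prediction).strip() == parse_mcq(ground_truth).strip()
```

Applying the same parser to both sides is what makes the comparison robust: `mcq_match("The correct option is (D)", "d")` holds even though the raw strings differ in case and formatting.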
feat(tasks): add OCR-Bench reasoning task
- Add ocrbench_reasoning with doc_to_messages for reasoning format
- Support accuracy and format scoring metrics

feat(tasks): add ChartQA reasoning task
- Add chartqa_reasoning with doc_to_messages for reasoning format
- Support accuracy and format scoring metrics

feat(tasks): add InfoVQA reasoning task
- Add infovqa_val_reasoning with doc_to_messages for reasoning format
- Support accuracy and format scoring metrics

feat(tasks): add CountBenchQA reasoning task
- Add countbenchqa_reasoning with doc_to_messages for reasoning format
- Support accuracy and format scoring metrics

feat(tasks): add CountBenchQA benchmark
- Add countbenchqa task config and utils
- Add countbenchqa_reasoning with doc_to_messages for reasoning format
- Support accuracy and format scoring metrics

feat(tasks): add VStar-Bench reasoning tasks
- Add vstar_bench_reasoning with doc_to_messages for reasoning format
- Add vstar_bench_direct_attributes_reasoning
- Add vstar_bench_relative_position_reasoning
- Support accuracy and format scoring metrics

feat(tasks): add PixMo-Count benchmark
- Add pixmo_count task config and utils
- Add pixmo_count_reasoning with doc_to_messages for reasoning format
- Support accuracy and format scoring metrics
feat(models): add system_prompt_file support to AsyncOpenAIChat
- Allow loading system prompt from file via system_prompt_file parameter
- Add _apply_system_prompt method to inject system prompt into messages
- Apply system prompt before generation in generate_until
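The file-or-literal resolution and message injection described above could look roughly like this. It is a sketch only: the PR's real methods are `_resolve_system_prompt`/`_apply_system_prompt` on the base lmms class, and their exact signatures and edge-case handling are assumptions here.

```python
import os
from typing import Optional

def resolve_system_prompt(system_prompt: Optional[str]) -> Optional[str]:
    """Resolve a system prompt that may be either a file path or a literal string.

    Assumed behavior: if the string names an existing file, read the prompt
    from it; otherwise treat the string itself as the prompt.
    """
    if not system_prompt:
        return None
    if os.path.isfile(system_prompt):
        with open(system_prompt, encoding="utf-8") as f:
            return f.read().strip()
    return system_prompt

def apply_system_prompt(messages: list, system_prompt: Optional[str]) -> list:
    """Prepend a system message, unless the task already supplied one."""
    resolved = resolve_system_prompt(system_prompt)
    if resolved is None or any(m.get("role") == "system" for m in messages):
        # Opt-in only: never overwrite a task-level system prompt.
        return messages
    return [{"role": "system", "content": resolved}] + messages
```

The early return when a system message already exists mirrors the PR's stated design choice of defaulting to `None` so model-side injection never clobbers task-level prompts.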
Force-pushed from 43d7c76 to 1a42b55.
refactor(reasoning): extract acc_score computation to separate function
Extracted accuracy reward logic from compute_score into an acc_reward function for better separation of concerns.
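The separation of concerns described above can be sketched like this. All three function bodies are hypothetical stand-ins: the PR defines `compute_score`, `acc_reward`, and `format_reward`, but the actual matching and reward logic is benchmark-specific and more involved than shown here.

```python
def acc_reward(prediction: str, ground_truth: str) -> float:
    """Accuracy component: 1.0 on a normalized match, else 0.0 (assumed stub)."""
    return 1.0 if prediction.strip().lower() == ground_truth.strip().lower() else 0.0

def format_reward(response: str) -> float:
    """Format component: reward well-formed reasoning output (assumed stub)."""
    return 1.0 if "<analysis>" in response and "</analysis>" in response else 0.0

def compute_score(response: str, extracted_answer: str, ground_truth: str) -> dict:
    # After the refactor, compute_score only orchestrates; each reward
    # lives in its own function and can be tested or reused independently.
    return {
        "accuracy": acc_reward(extracted_answer, ground_truth),
        "format": format_reward(response),
    }
```

Keeping the accuracy and format rewards in separate functions also makes it easy for RL training code to weight or ablate them independently.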
While trying to use the reasoning tag in the task yaml, I observed that with the original task yaml there can be a conflict between the reasoning system prompt and the post prompt (the model may answer in a misaligned way). From the alignment perspective, I recommend creating a separate task yaml so that the two settings don't affect each other, while of course we can still use the reasoning tag to filter the answer.

Can we just add some reasoning-specific fields to the original yaml?
refactor(reasoning): add model-side system_prompt support and deduplicate reasoning task utils
- Add _resolve_system_prompt() and _apply_system_prompt() to base lmms class for model-side system prompt injection (supports file paths and literal strings)
- Add factory functions make_reasoning_doc_to_messages() and make_reasoning_process_results() to reasoning_utils.py, eliminating ~400 lines of copy-paste across 12 reasoning task modules
- Update AsyncOpenAIChat: replace system_prompt_file with system_prompt using base class utilities, remove duplicate _apply_system_prompt method
- Wire up HuggingFace chat model to inject system_prompt into messages during generation (opt-in only, default None to avoid overwriting task-level prompts)
- Fix infovqa reasoning: anls(ground_truth, results) -> anls(ground_truth, [extracted])
- Fix mmbench reasoning: cache YAML parsing with @lru_cache instead of per-sample I/O
- Fix format_reward() to also match <analysis>...</analysis> tag pattern
- Expand --reasoning_tags default to include <analysis> tags
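The factory pattern that deduplicates the per-task utils could be sketched as below. The factory name matches the PR, but its parameters, the message schema, and the `chartqa_doc_to_messages` usage are illustrative assumptions, not the repository's actual code.

```python
from typing import Callable, Optional

def make_reasoning_doc_to_messages(
    build_question: Callable[[dict], str],
    system_prompt: Optional[str] = None,
) -> Callable[[dict], list]:
    """Return a doc_to_messages function specialized for one task.

    Each reasoning task module calls this factory with its own question
    builder instead of copy-pasting the message-assembly boilerplate.
    """
    def doc_to_messages(doc: dict) -> list:
        messages = []
        if system_prompt:
            messages.append({"role": "system", "content": system_prompt})
        content = [
            {"type": "image", "url": doc.get("image")},
            {"type": "text", "text": build_question(doc)},
        ]
        messages.append({"role": "user", "content": content})
        return messages

    return doc_to_messages

# Hypothetical per-task usage: only the question template differs between tasks.
chartqa_doc_to_messages = make_reasoning_doc_to_messages(
    lambda doc: f"{doc['question']}\nThink first, then give the final answer."
)
```

Because the closure captures only the task-specific pieces, the twelve task modules each shrink to a one-line factory call, which is how the PR removes roughly 400 lines of duplication.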
The commit message for 4cada1c is wrong; the actual reason is that there was a duplicated reasoning-tags argument, so I removed it.
Force-pushed from e3b6691 to d991192.
Force-pushed from c9105dd to 0846d27.
Commits in this PR:
* refactor(models/chat): improve async_openai code structure and readability (#1102)
* docs: add MMMU eval discrepancy report and TLDR FP definitions
* fix(ci): make lint workflow fork-PR safe
* feat(tasks): add MMStar reasoning task
* refactor(tasks): merge cn and en reasoning into unified structure
* refactor(tasks): unify cn and en reasoning with single group
* fix(tasks): add dataset_name to reasoning task configs
* feat(tasks): add test split for mmbench reasoning tasks
* feat(tasks): add MME-RealWorld reasoning tasks
* feat(tasks): add SEED-Bench reasoning tasks
* feat(tasks): add CV-Bench reasoning tasks
* fix(reasoning): improve mcq matching with normalized comparison
* feat(tasks): add OCR-Bench reasoning task
* feat(tasks): add ChartQA reasoning task
* feat(tasks): add InfoVQA reasoning task
* feat(tasks): add CountBenchQA reasoning task
* feat(tasks): add CountBenchQA benchmark
* feat(tasks): add VStar-Bench reasoning tasks
* feat(tasks): add PixMo-Count benchmark
* feat(models): add system_prompt_file support to AsyncOpenAIChat
* style: auto-fix lint (black + isort)
* refactor(reasoning): extract acc_score computation to separate function
* Fix async oai rebase error
* Lint
* refactor(reasoning): add model-side system_prompt support and deduplicate reasoning task utils
* fix(ci): restore task_input_specs/redundancy_refactor.yaml deleted by 418bfe6
* fix: remove duplicate --reasoning_tags CLI argument
* docs: restore docs/README.md from dev-v0d7

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Bo Li <drluodian@gmail.com>
Summary
Add reasoning tasks for multiple benchmarks, each with a doc_to_messages that emits the reasoning prompt format, plus accuracy and format scoring metrics.
Benchmarks Added
- MMStar
- MMBench (cn/en, dev and test splits)
- MME-RealWorld (en/cn)
- SEED-Bench and SEED-Bench-2-Plus
- CV-Bench (2D/3D)
- OCR-Bench
- ChartQA
- InfoVQA (val)
- CountBenchQA
- VStar-Bench (direct attributes / relative position)
- PixMo-Count

This is an implementation of the LLaVA-OV 1.5 RL reasoning collection.
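Since accuracy scoring needs the final answer with the reasoning stripped out, the reasoning-tag filtering mentioned in the conversation could be sketched as follows. The tag names mirror the PR's `--reasoning_tags` default (which was expanded to include `<analysis>`), but the exact default list and the extraction behavior here are assumptions.

```python
import re

# Assumed tag names; the PR's --reasoning_tags flag configures these.
REASONING_TAGS = ("analysis", "think")

def extract_answer(response: str) -> str:
    """Remove <analysis>...</analysis>-style reasoning blocks and return
    whatever remains as the final answer, whitespace-stripped."""
    for tag in REASONING_TAGS:
        response = re.sub(rf"<{tag}>.*?</{tag}>", "", response, flags=re.DOTALL)
    return response.strip()
```

A response like `"<analysis>the chart rises...</analysis>\nB"` would then reduce to `"B"` before being handed to the accuracy metric, while the format metric can still inspect the original tagged response.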