
[data] Filter Overlonged Prompts in tool_agent_loop #6079

Open

HwCARI wants to merge 1 commit into verl-project:main from HwCARI:filter_overlong_prompt
Conversation

HwCARI (Contributor) commented Apr 20, 2026

What does this PR do?

Fixes #2069.

filter_overlong_prompts was silently skipped in the multiturn setting in verl/verl/experimental/agent_loop/tool_agent_loop.py, causing prompts exceeding data.max_prompt_length to pass through unfiltered. This resulted in tensor size mismatches downstream when batching sequences:

RuntimeError: Sizes of tensors must match except in dimension 0.
Expected size 2048 but got size 2106 for tensor number 2 in the list.

This PR ensures the overlong prompt filter is applied consistently regardless of whether multiturn tool use is enabled.

Fix

Apply filter_overlong_prompts in verl/verl/experimental/agent_loop/tool_agent_loop.py.
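The fix amounts to an early-exit length check in the agent loop. A minimal, self-contained sketch of that logic (the standalone `AgentState` enum and `check_prompt_length` helper here are illustrative stand-ins, not the actual patch, which lives inside `ToolAgentLoop`):

```python
import logging
from enum import Enum

logger = logging.getLogger(__name__)


class AgentState(Enum):
    RUNNING = "running"
    TERMINATED = "terminated"


def check_prompt_length(prompt_ids: list[int], prompt_length: int) -> AgentState:
    """Terminate early when the tokenized prompt exceeds the configured limit,
    so the overlong sample never reaches batching and triggers a size mismatch."""
    if len(prompt_ids) > prompt_length:
        logger.warning(
            "Prompt length %d exceeds prompt_length %d. Filtering out this sample.",
            len(prompt_ids),
            prompt_length,
        )
        return AgentState.TERMINATED
    return AgentState.RUNNING
```

With `data.max_prompt_length=2048`, a 2106-token prompt (as in the error above) is terminated instead of being padded and concatenated into the batch.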

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward, fully_async, one_step_off
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

gemini-code-assist bot left a comment


Code Review

This pull request introduces a length check for prompt IDs in the agent loop to prevent runtime errors from overlong prompts by terminating the state early. Feedback suggests that this approach may result in blank samples being passed to the trainer, recommending the use of specific metric flags or default rewards for filtered samples and the inclusion of request IDs in logs for improved observability.

Comment on lines +192 to +197

if len(prompt_ids) > self.prompt_length:
    logger.warning(
        f"Prompt length {len(prompt_ids)} exceeds prompt_length {self.prompt_length}. "
        "Filtering out this sample."
    )
    return AgentState.TERMINATED

Severity: high

While this check correctly prevents the RuntimeError during batching by terminating early for overlong prompts, returning AgentState.TERMINATED at this stage leaves agent_data.prompt_ids as an empty list. This results in a "blank" sample (all padding) being passed to the trainer.

To improve observability and ensure the trainer handles the filtered sample correctly, consider setting a default reward or a specific metric flag indicating the sample was filtered due to length constraints. Additionally, including the request_id in the warning log would significantly aid in debugging specific failed samples in large-scale runs.
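The suggestion above could be sketched roughly as follows. Note that the `request_id` parameter, the `metrics` dict, and the `filtered_overlong_prompt` flag name are hypothetical illustrations of the reviewer's idea, not identifiers from the verl codebase:

```python
import logging
from enum import Enum

logger = logging.getLogger(__name__)


class AgentState(Enum):
    RUNNING = "running"
    TERMINATED = "terminated"


def filter_overlong_prompt(
    prompt_ids: list[int],
    prompt_length: int,
    request_id: str,
    metrics: dict,
) -> AgentState:
    """Terminate overlong prompts, but also tag the sample and log the request
    id so filtered samples are observable rather than silent blanks."""
    if len(prompt_ids) > prompt_length:
        logger.warning(
            "[request_id=%s] Prompt length %d exceeds prompt_length %d. "
            "Filtering out this sample.",
            request_id,
            len(prompt_ids),
            prompt_length,
        )
        # Mark the sample so the trainer can exclude it explicitly instead of
        # receiving an all-padding "blank" sample with no reward signal.
        metrics["filtered_overlong_prompt"] = True
        metrics["reward"] = 0.0  # explicit default reward for filtered samples
        return AgentState.TERMINATED
    return AgentState.RUNNING
```

This keeps the early-termination behavior of the PR while making filtered samples traceable in large-scale runs.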

@HwCARI HwCARI changed the title Filter Overlonged Prompts in tool_agent_loop [data] Filter Overlonged Prompts in tool_agent_loop Apr 20, 2026


Development

Successfully merging this pull request may close these issues.

filter_overlong_prompts not working in multiturn setting — leads to tensor size mismatch
