## Summary

The current `max_tool_iterations: 20` hard limit is too restrictive for complex tasks. It causes legitimate workflows to fail with "I've completed processing but have no response to give" before reaching their deliverable.
## Current Behavior

```jsonc
// ~/.picoclaw/config.json
"agents": {
  "defaults": {
    "max_tool_iterations": 20
  }
}
```
After 20 tool calls, the agent stops regardless of task completion state.
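The failure mode can be sketched as a loop with a fixed cap. This is a minimal self-contained simulation; nothing here is picoclaw's actual code, and all names and logic are assumptions:

```python
# Minimal simulation of the hard-cap failure mode (illustrative only;
# names and logic are assumptions, not picoclaw internals).
def run_agent(tool_calls_needed, max_tool_iterations=20):
    """Return the deliverable only if the task fits under the cap."""
    for iteration in range(1, max_tool_iterations + 1):
        if iteration >= tool_calls_needed:
            return "deliverable"  # task finished within the budget
    # Cap exhausted mid-task: there is nothing to hand back.
    return "I've completed processing but have no response to give."

print(run_agent(tool_calls_needed=12))  # fits under the cap
print(run_agent(tool_calls_needed=22))  # literature-review-sized task fails
```

A task needing 22 tool calls, like the literature review below, can never produce its deliverable under the default cap, no matter how legitimate each call is.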
## Problem

A typical literature review task requires:

- 1-2 iterations: project setup
- 2-4 iterations: `pubmed search`
- 3-6 iterations: `pubmed fetch` (multiple batches)
- 2-4 iterations: data processing/analysis
- 2-4 iterations: document creation (`docx-review`)
- 1-2 iterations: verification

**Total: 11-22 iterations minimum.**
The agent consistently exhausts 20 iterations on data gathering and never reaches the document creation step.
## Evidence from Logs

```
iteration=19: pubmed fetch 41635784 41625329... --json > pubmed_fetch_more.json
iteration=20: python - <<'PY' ... (processing)
Response: I've completed processing but have no response to give.
```

The agent was still gathering data at iteration 20 - it never got to `docx-review create`.
## Proposed Solution: OpenClaw's Approach

OpenClaw has no hard iteration limit. Instead it uses:

### 1. Context Window Bounding

```yaml
context_window: 150000  # tokens
```

The agent runs until it hits token limits, then compacts. This naturally limits iterations based on actual resource usage.
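What token-bounding means in practice can be sketched as follows. `count_tokens` is a crude stand-in for a real tokenizer, and the 150000 figure is the config value above:

```python
# Token-bounded budgeting sketch: the unit of budget is context size,
# not iteration count. count_tokens is a rough ~4-chars-per-token estimate.
CONTEXT_WINDOW = 150_000

def count_tokens(messages):
    return sum(len(m) for m in messages) // 4

def over_budget(messages):
    """True when the transcript no longer fits the context window."""
    return count_tokens(messages) > CONTEXT_WINDOW
```

The check is resource-based: a task that makes 30 small tool calls stays comfortably inside the budget, while one that ingests huge payloads trips it early, regardless of iteration count.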
### 2. Loop Detection (Optional)

```yaml
loop_detection:
  enabled: true
  repeat_threshold: 3    # warn after 3 identical calls
  critical_threshold: 6  # hard stop after 6 identical calls
```

This catches infinite loops without penalizing legitimate complex workflows.
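Repeat-call detection could look like this sketch. The class and method names are assumptions, not OpenClaw's actual API; the thresholds match the config above:

```python
# Sketch of repeat-call loop detection. Names are illustrative,
# not OpenClaw's real implementation.
from collections import Counter

class LoopDetector:
    """Warn after N identical tool calls, hard-stop after M."""
    def __init__(self, repeat_threshold=3, critical_threshold=6):
        self.repeat_threshold = repeat_threshold
        self.critical_threshold = critical_threshold
        self.seen = Counter()

    def record(self, tool_name, args):
        """Return 'ok', 'warn', or 'stop' for this call."""
        key = (tool_name, repr(args))  # identical means same name AND args
        self.seen[key] += 1
        if self.seen[key] >= self.critical_threshold:
            return "stop"
        if self.seen[key] >= self.repeat_threshold:
            return "warn"
        return "ok"
```

Only exact repeats trip the detector, so a long but varied workflow - like the 22-call literature review - is never penalized.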
### 3. Compaction on Overflow

```yaml
compaction_mode: safeguard
```

When context fills, compact the conversation rather than failing.
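One way compaction could work, sketched here as simple truncation with a placeholder. A real implementation would summarize older turns with the model rather than drop them:

```python
# Safeguard-style compaction sketch: keep the original task and the
# most recent turns, fold everything else into a stub. Illustrative only.
def compact(messages, keep_recent=4):
    if len(messages) <= keep_recent + 1:
        return messages
    dropped = len(messages) - keep_recent - 1
    summary = f"[compacted: {dropped} earlier messages]"
    return [messages[0], summary] + messages[-keep_recent:]
```

The key property is that the agent keeps running after compaction instead of failing, which is exactly what the hard iteration limit prevents today.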
## Why This Is Better

| Scenario | Hard Iteration Limit | Context + Loop Detection |
| --- | --- | --- |
| Complex tasks | ❌ Fails at 20 | ✅ Runs to completion |
| Infinite loops | ❌ Wastes 20 iterations | ✅ Caught at 3-6 repeats |
| Resource efficiency | ❌ Arbitrary cutoff | ✅ Based on actual usage |
| User experience | ❌ "No response to give" | ✅ Completes or explains why |
## Suggested Implementation

- Deprecate `max_tool_iterations` or set the default to 200+
- Add a `loop_detection` config with repeat/critical thresholds
- Add context window tracking with compaction support
- Environment variable: `PICOCLAW_LOOP_DETECTION_ENABLED=true`
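The environment variable could be parsed along these lines. The variable name comes from the list above; the accepted values and the default are assumptions:

```python
# Sketch of env-var gating for the proposed flag. Accepted values
# and the default are assumptions, not existing picoclaw behavior.
import os

def loop_detection_enabled(default=True):
    """Read the proposed flag; unset falls back to the built-in default."""
    raw = os.environ.get("PICOCLAW_LOOP_DETECTION_ENABLED")
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")
```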
## Acceptance Criteria
- `max_tool_iterations` config still works (for users who want hard limits)