fix: let pi handle context overflow instead of stopping the loop#45
Open
elecnix wants to merge 1 commit into
Conversation
When context is nearly full, the extension was proactively calling `ctx.abort()` and disabling `autoresearchMode`, which stopped the loop instead of letting it continue. pi already has robust built-in context overflow handling:

- `_checkCompaction` detects context overflow errors from the LLM
- It automatically compacts the session (summarizes old history)
- Then retries the turn

The proactive token-history check (`isContextExhausted`) was redundant and actively harmful: it would fire too early based on predictions, before the LLM even responded, interrupting running experiments unnecessarily.

Changes:

- Removed `isContextExhausted()` and related helpers (`estimateTokensPerIteration`, `hasRoomForNextIteration`, `CONTEXT_SAFETY_MARGIN`)
- Removed the proactive abort in `run_experiment` that called `ctx.abort()` when context was predicted to be nearly full
- The token tracking (`recordIterationTokens`, `iterationTokenHistory`) stays; it logs useful diagnostics in ASI for experiment analysis
- Removed the `needsCompaction` flag (was unnecessary complexity)

Now when context overflows, pi auto-compacts and retries automatically. When it overflows mid-experiment, the experiment still completes and gets logged, and the loop auto-resumes in the next turn.
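The reactive detect-compact-retry flow described above can be sketched as follows. This is an illustrative sketch only: names like `isOverflowError`, `compactHistory`, and `runTurn` are hypothetical stand-ins, not pi's actual `_checkCompaction` internals.

```typescript
// Hypothetical sketch of reactive context-overflow handling: wait for the
// LLM to actually reject the context, then compact and retry the same turn.
type Message = { role: "user" | "assistant" | "system"; content: string };

function isOverflowError(err: Error): boolean {
  // Providers signal overflow in the error message; match common phrasings.
  return /context.*(length|window|overflow)|too many tokens/i.test(err.message);
}

function compactHistory(history: Message[], keepRecent: number): Message[] {
  // Replace everything but the most recent messages with one summary note.
  const old = history.slice(0, -keepRecent);
  const recent = history.slice(-keepRecent);
  const summary: Message = {
    role: "system",
    content: `[compacted ${old.length} earlier messages]`,
  };
  return old.length > 0 ? [summary, ...recent] : recent;
}

async function runTurn(
  history: Message[],
  callLLM: (h: Message[]) => Promise<string>,
): Promise<{ reply: string; history: Message[] }> {
  try {
    return { reply: await callLLM(history), history };
  } catch (err) {
    if (!(err instanceof Error) || !isOverflowError(err)) throw err;
    // Reactive path: compact only after a real overflow error, retry once.
    const compacted = compactHistory(history, 4);
    return { reply: await callLLM(compacted), history: compacted };
  }
}
```

The key design point argued in this PR is that the compaction trigger is the provider's actual error, not a prediction, so a large tool output alone never interrupts a running experiment.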
dddgogogo added a commit to dddgogogo/pi-autoresearch that referenced this pull request on Apr 18, 2026:
fix: let pi handle context overflow instead of stopping the loop
Owner

I want to rework how we handle context in pi-autoresearch. The idea I have is that every experiment is called programmatically. This is not yet possible with the current pi API; we'll need some implementation on the pi side for that to work. That will make compaction unnecessary, so I don't want to add more complexity, as this problem will go away soon.
Author

@davebcn87 I think I understand what you're saying. If every experiment starts at a checkpoint, the context essentially starts from scratch. In that case, the solution to context overflow would be to throw away the experiment instead of stopping the loop.
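The checkpoint idea discussed above could look roughly like the sketch below. Every name here is hypothetical (the thread notes the required pi API support does not exist yet): each experiment gets a fresh copy of the checkpoint context, and an overflow discards only that experiment while the loop continues.

```typescript
// Illustrative sketch of checkpoint-scoped experiments, assuming a future pi
// API. `Checkpoint` and `ExperimentLoop` are invented names for this example.
type Checkpoint = { history: string[] };

class ExperimentLoop {
  constructor(private base: Checkpoint) {}

  // Run one experiment against a fresh copy of the checkpoint context.
  runExperiment(run: (ctx: string[]) => string): string | null {
    const ctx = [...this.base.history]; // context starts from the checkpoint
    try {
      return run(ctx);
    } catch (err) {
      if (err instanceof Error && /context overflow/i.test(err.message)) {
        // Overflow throws away only this experiment; the loop continues
        // from the same checkpoint instead of stopping.
        return null;
      }
      throw err;
    }
  }
}
```

Under this model, compaction becomes unnecessary because no experiment's context ever accumulates across iterations.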
Problem
When context becomes nearly full during an autoresearch run, the extension was proactively calling `ctx.abort()` and disabling `autoresearchMode`, stopping the experiment loop instead of letting it continue.

Comparison with PR #41
PR #41 takes a different approach: it keeps the proactive `isContextExhausted` check but adds an `autoCompactResume: true` config option to compact and resume instead of stopping.

Why this PR removes the proactive check:
- The `chars/4` heuristic is conservative; large experiment output doesn't mean the LLM will overflow
- `_checkCompaction` detects actual context overflow errors from the LLM, auto-compacts, and retries automatically

What This PR Does
Remove the proactive `isContextExhausted` check entirely. On overflow, `_checkCompaction` detects the overflow error, compacts, and retries. The experiment subprocess keeps running (detached) and its result gets logged normally. State persists in `autoresearch.jsonl`, so the `agent_end` → `agent_start` cycle resumes cleanly.

Changes
- Removed `isContextExhausted()`, `estimateTokensPerIteration()`, `hasRoomForNextIteration()`, and `CONTEXT_SAFETY_MARGIN` (dead code)
- Removed the `needsCompaction` flag from `AutoresearchRuntime` (never wired up)
- Removed the proactive abort in `run_experiment` (replaced with a comment)
- Kept `recordIterationTokens()` and `iterationTokenHistory` (useful ASI diagnostics)

Why not PR #41's approach?
PR #41's `autoCompactResume` config is a reasonable alternative if you want to preserve the predictive check for early warning. But pi's reactive handling already covers actual overflow, so keeping the predictive check and an extra config option adds complexity without clear benefit.

Testing
- `node --check extensions/pi-autoresearch/index.ts` passes
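For context, `node --check` parses a file without executing it, so it only verifies syntax (no type checking). A quick self-contained demo against a temp file, since the PR's `extensions/pi-autoresearch/index.ts` path is repo-specific:

```shell
# node --check parses the file and exits 0 on valid syntax, without running it.
tmp=$(mktemp /tmp/check-demo.XXXXXX.js)
echo 'const x = 1;' > "$tmp"
node --check "$tmp" && echo "syntax ok"
rm -f "$tmp"
```

Note this check accepts the `.ts` file only because its contents happen to be syntactically valid JavaScript; it is not a TypeScript type check.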