OpenCode plugin: bootstrap content read from disk every agent step (no caching), duplication guard is ineffective #1202

@gzb1128

Description

Summary

The OpenCode plugin (.opencode/plugins/superpowers.js) registers an experimental.chat.messages.transform hook that injects ~5.4KB of bootstrap content into the first user message. Three issues (two performance, one duplication risk) were found after tracing the actual code paths in both the plugin and opencode core.

Problem 1: fs.readFileSync called every agent step without caching

getBootstrapContent() in superpowers.js:56-82 does fs.existsSync + fs.readFileSync + regex frontmatter parsing + string concatenation on every call with zero caching.

The hook fires more often than "every turn" — it fires every agent step. In opencode's prompt.ts, the agent loop looks like:

// prompt.ts:1346-1350
while (true) {
  yield* status.set(sessionID, { type: "busy" })
  let msgs = yield* MessageV2.filterCompactedEffect(sessionID)  // loads fresh from DB every step
  // ...
  yield* plugin.trigger("experimental.chat.messages.transform", {}, { messages: msgs })  // line 1501
  // ...call LLM, handle tools, increment step...
}

A 10-step agent turn triggers 10 file reads + 10 regex parses + 10 string concatenations.

Problem 2: The duplication guard is ineffective (but no actual duplication)

The guard at superpowers.js:107:

if (firstUser.parts.some(p => p.type === 'text' && p.text.includes('EXTREMELY_IMPORTANT'))) return;

This guard never triggers because of how opencode's message pipeline operates:

  1. Messages are loaded fresh from the DB every agent step. MessageV2.filterCompactedEffect(sessionID) (defined in message-v2.ts:921-923) reads from the database via stream(sessionID), applies filterCompacted() to handle compaction boundaries, and returns a new array of message objects.

  2. The plugin's injection is purely in-memory. superpowers.js:109 calls firstUser.parts.unshift(...), which mutates the in-memory object returned by filterCompactedEffect. This mutated object is never written back to the database — there is no updatePart or save call for the injected bootstrap anywhere in the code path.

  3. Therefore the guard's condition is never true — on the next agent step, filterCompactedEffect returns fresh DB data that never contains EXTREMELY_IMPORTANT, so the injection always proceeds.

This does NOT cause technical duplication (each injection is on a new in-memory object, the previous one is discarded), but it means the guard is dead code and every step pays the cost of file I/O + parsing unnecessarily.
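The dead-guard behavior can be reproduced with a toy stand-in. The in-memory "database" and message shapes below are assumptions for illustration, not opencode's real storage:

```javascript
// Toy "database": one stored user message, never mutated by the plugin.
const db = [{ role: 'user', parts: [{ type: 'text', text: 'hello' }] }];

// Mirrors filterCompactedEffect: fresh objects from storage every step.
const loadFreshFromDb = () => structuredClone(db);

const BOOTSTRAP = { type: 'text', text: 'EXTREMELY_IMPORTANT: ...bootstrap...' };

let injections = 0;
for (let step = 0; step < 10; step++) {
  const msgs = loadFreshFromDb();
  const firstUser = msgs.find(m => m.role === 'user');
  // The guard from superpowers.js:107 — checks content that is never persisted.
  if (firstUser.parts.some(p => p.type === 'text' && p.text.includes('EXTREMELY_IMPORTANT'))) continue;
  firstUser.parts.unshift(BOOTSTRAP); // in-memory only, never written back
  injections++;
}

console.log('injections over 10 steps:', injections); // 10: the guard never skipped one
console.log('stored parts in db:', db[0].parts.length); // 1: no duplication in storage
```

The counter confirms both halves of the claim: the guard is dead code (injection happens every step), yet storage never accumulates duplicate bootstrap parts.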

Problem 3: Semantic duplication risk via compaction

During compaction in compaction.ts:219-222:

const msgs = structuredClone(messages)
yield* plugin.trigger("experimental.chat.messages.transform", {}, { messages: msgs })
const modelMessages = yield* MessageV2.toModelMessagesEffect(msgs, model, { stripMedia: true })

The transform hook fires on cloned messages with bootstrap injected before they're sent to the LLM for summarization. The compaction summary may therefore reference or paraphrase the bootstrap content. In subsequent turns, the full bootstrap is injected again — the LLM could see both the summary referencing it AND the full content, causing minor semantic duplication.
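A minimal sketch of that interaction (toy message shapes, not opencode's real types): the transform mutates only the clone, so the bootstrap text reaches the summarizer while the stored messages stay clean:

```javascript
// Toy reproduction of the compaction path (compaction.ts:219-222).
const stored = [{ role: 'user', parts: [{ type: 'text', text: 'hi' }] }];

// Compaction clones before triggering the transform hook.
const msgs = structuredClone(stored);

// The transform hook injects bootstrap into the clone only.
msgs[0].parts.unshift({ type: 'text', text: 'EXTREMELY_IMPORTANT bootstrap ...' });

// The clone is what gets flattened and sent to the LLM for summarization.
const summarizerInput = msgs.flatMap(m => m.parts.map(p => p.text)).join('\n');

console.log(summarizerInput.includes('EXTREMELY_IMPORTANT')); // true: summarizer sees it
console.log(stored[0].parts.length); // 1: stored messages untouched
```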

Suggested fix

Cache the bootstrap content at module level — read and parse the file once, not every step:

let _cached = null;
const getBootstrapContent = () => {
  if (_cached !== null) return _cached;  // strict check so an empty result is also cached
  // ... existing read + parse logic ...
  _cached = result;
  return result;
};

This alone would eliminate the repeated I/O and parsing overhead. A more thorough fix could also reconsider whether the guard should work differently (e.g., a session-level flag instead of content-based detection).
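One sketch of such a session-level flag follows; injectedSessions and the maybeInject signature are hypothetical, not opencode's actual API. Note the trade-off: because the injection is never persisted and messages are reloaded fresh each step, skipping the injection on later steps would drop the bootstrap from those LLM calls, so a session flag only makes sense if the injected part is also written back to storage.

```javascript
// Hypothetical session-scoped guard (names and shapes are assumptions).
const injectedSessions = new Set();

function maybeInject(sessionID, messages, bootstrapPart) {
  // Inject at most once per session; a real implementation would also
  // persist the part, since later steps reload messages from the DB.
  if (injectedSessions.has(sessionID)) return false;
  const firstUser = messages.find(m => m.role === 'user');
  if (!firstUser) return false;
  firstUser.parts.unshift(bootstrapPart);
  injectedSessions.add(sessionID);
  return true;
}

const msgs = [{ role: 'user', parts: [{ type: 'text', text: 'hi' }] }];
const part = { type: 'text', text: 'EXTREMELY_IMPORTANT ...' };
console.log(maybeInject('s1', msgs, part)); // true: first injection for this session
console.log(maybeInject('s1', msgs, part)); // false: already injected for s1
```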

Environment
