[FEATURE]: Plugin hooks should be able to inject AI-visible messages into conversation context #17412

@doomsday616

Description


Feature hasn't been suggested before.

  • I have verified this feature I'm about to request hasn't been suggested before.

Describe the enhancement you want to request

Problem

OpenCode's plugin hooks (tool.execute.before, tool.execute.after, session.idle) can intercept and modify tool calls, but they cannot inject messages into the AI's conversation context. This makes it impossible to build skills that require ongoing behavioral enforcement: the AI simply forgets after a few tool calls, and almost certainly forgets after compaction.
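For context, here is a minimal sketch of what today's hook surface allows. The types are simplified illustrations, not the real opencode plugin definitions: a hook can rewrite what the tool returned, but there is no field that reaches the model's context.

```typescript
// Illustrative types only; the actual opencode plugin type definitions differ.
type ToolInput = { tool: string; args: Record<string, unknown> };
type ToolOutput = { result: string };
type AfterHook = (input: ToolInput, output: ToolOutput) => Promise<void>;

// A tool.execute.after hook can mutate what the *tool* returned...
const redactSecrets: AfterHook = async (_input, output) => {
  output.result = output.result.replace(/sk-[A-Za-z0-9]+/g, "[redacted]");
};
// ...but there is no sanctioned way to append a message the AI will see,
// so behavioral reminders cannot be re-asserted after each tool call.
```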

Evidence: planning-with-files skill benchmark

The planning-with-files skill enforces a structured 3-file planning pattern (task_plan.md, findings.md, progress.md). On Claude Code, it uses PreToolUse/PostToolUse/Stop hooks to continuously remind the AI. The hooks inject messages the AI sees in conversation — not just intercept tool calls.

Benchmark results (10 parallel subagents, 5 task types, 30 objectively verifiable assertions):

| Metric | With skill + hooks (Claude Code) | Without skill |
| --- | --- | --- |
| Pass rate (30 assertions) | 96.7% (29/30) | 6.7% (2/30) |
| 3-file pattern followed | 5/5 evals | 0/5 evals |
| Blind A/B wins | 3/3 (100%) | 0/3 |
| Avg blind rubric score | 10.0/10 | 6.8/10 |

What happens on OpenCode today

We ran this same skill on OpenCode. The SKILL.md loads and the AI starts strong, but then:

  1. After ~5-10 tool calls, the AI "forgets" to update progress.md and findings.md; there is no PostToolUse-equivalent hook to remind it.
  2. After automatic compaction, the planning workflow is forgotten ~100% of the time. The compacted summary doesn't preserve behavioral rules, so the AI continues the task but abandons the 3-file pattern entirely.
  3. When the AI finishes, there is no Stop-equivalent hook to check "did you update all planning files?"; it just stops.

We've tried mitigations: AGENTS.md rules, agent.prompt injection for subagents, TodoWrite tracking. None survive compaction reliably.

The gap

| Capability | Claude Code | OpenCode |
| --- | --- | --- |
| Block/modify tool calls | ✅ hooks | ✅ tool.execute.before |
| Inject AI-visible message after tool call | ✅ (stderr → system message) | ❌ |
| Re-activate agent before stopping | ✅ Stop hook (exit 2 → continue) | ❌ (session.idle is fire-and-forget) |

Proposed API

For tool.execute.after — allow injecting a message the AI sees:

"tool.execute.after": async (input, output) => {
  if (input.tool === "edit") {
    output.inject = [
      { role: "user", text: "Remember to update progress.md with what you just changed." },
    ]
  }
}

For session.stopping (per #16598) — allow continuation with injected message:

"session.stopping": async (input, output) => {
  output.inject("You haven't updated progress.md yet.");
  output.continue();
}
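To make the proposed semantics concrete, here is a toy model of the session.stopping contract. This is entirely hypothetical scaffolding, not existing opencode code: the hook gets an inject callback that appends a user message to history, and a continue callback that flags the agent to keep going.

```typescript
// Toy model of the proposed session.stopping contract (hypothetical).
type Msg = { role: "user" | "assistant"; text: string };

function runStoppingHook(
  history: Msg[],
  hook: (ctx: { inject: (text: string) => void; continue: () => void }) => void,
): { history: Msg[]; shouldContinue: boolean } {
  let shouldContinue = false;
  hook({
    // Injected text lands in the conversation as a user-visible message.
    inject: (text) => history.push({ role: "user", text }),
    // Continuation means the agent re-activates instead of stopping.
    continue: () => { shouldContinue = true; },
  });
  return { history, shouldContinue };
}
```

A hook that injects a reminder and calls continue() would leave the session with one extra user message and the continuation flag set, which is exactly the behavior the Stop hook provides on Claude Code today.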

Implementation

We've submitted a proof-of-concept PR that implements the tool.execute.after injection side:

#19519 (feat: allow tool.execute.after hooks to inject AI-visible messages)

The implementation is ~57 lines across 2 files:

  • Adds an optional inject field to the tool.execute.after output type
  • A flushInjectedMessages() helper persists injected entries as synthetic user messages (same pattern as existing subtask summary messages)
  • Handles all three hook call sites: registry tools, MCP tools, and subtask tools
  • Fully backward compatible — inject is optional, existing plugins are unaffected
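The flush step described above might be pictured roughly like this. The helper name follows the PR description, but the message shapes and storage are simplified assumptions, not the actual opencode internals:

```typescript
// Illustrative sketch of the injection flush; real opencode types differ.
type InjectedMessage = { role: "user"; text: string };
type StoredMessage = { role: string; text: string; synthetic: boolean };

const sessionLog: StoredMessage[] = [];

// Persist hook-injected entries as synthetic user messages, mirroring the
// pattern used for existing subtask summary messages. Returns the number
// of entries flushed; an absent or empty inject array is a no-op.
function flushInjectedMessages(inject: InjectedMessage[] | undefined): number {
  if (!inject?.length) return 0;
  for (const m of inject) {
    sessionLog.push({ role: m.role, text: m.text, synthetic: true });
  }
  return inject.length;
}
```

Because an absent inject field flushes nothing, existing plugins that never set it see no behavior change, which is what makes the approach backward compatible.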

Why this matters

Skills are OpenCode's strongest differentiator, but any skill requiring behavioral enforcement beyond initial instructions (planning, linting, security guardrails, SOP compliance) degrades severely without message injection. The benchmark data suggests a drop from 96.7% to an estimated ~30%, making an entire category of skills non-viable.

Related

Labels

  • core: Anything pertaining to core functionality of the application (opencode server stuff)