Feature hasn't been suggested before.
Describe the enhancement you want to request
### Problem

OpenCode's plugin hooks (`tool.execute.before`, `tool.execute.after`, `session.idle`) can intercept and modify tool calls, but they cannot inject messages into the AI's conversation context. This makes it impossible to build skills that require ongoing behavioral enforcement — the AI simply forgets after a few tool calls, and is almost guaranteed to forget after compaction.
### Evidence: planning-with-files skill benchmark

The planning-with-files skill enforces a structured 3-file planning pattern (`task_plan.md`, `findings.md`, `progress.md`). On Claude Code, it uses `PreToolUse`/`PostToolUse`/`Stop` hooks to continuously remind the AI. The hooks inject messages the AI sees in conversation — not just intercept tool calls.
Benchmark results (10 parallel subagents, 5 task types, 30 objectively verifiable assertions):

| Metric | With skill + hooks (Claude Code) | Without skill |
| --- | --- | --- |
| Pass rate (30 assertions) | 96.7% (29/30) | 6.7% (2/30) |
| 3-file pattern followed | 5/5 evals | 0/5 evals |
| Blind A/B wins | 3/3 (100%) | 0/3 |
| Avg blind rubric score | 10.0/10 | 6.8/10 |
### What happens on OpenCode today

We run this same skill on OpenCode. The `SKILL.md` loads, the AI starts strong, then:

- After ~5-10 tool calls — the AI "forgets" to update `progress.md` and `findings.md`. There is no `PostToolUse`-style hook to remind it.
- After automatic compaction — the planning workflow is ~100% forgotten. The compacted summary doesn't preserve behavioral rules. The AI continues the task but abandons the 3-file pattern entirely.
- When the AI finishes — there is no `Stop` hook to check "did you update all planning files?" — it just stops.

We've tried mitigations: AGENTS.md rules, `agent.prompt` injection for subagents, TodoWrite tracking. None survive compaction reliably.
### The gap

| Capability | Claude Code | OpenCode |
| --- | --- | --- |
| Block/modify tool calls | ✅ hooks | ✅ `tool.execute.before` |
| Inject AI-visible message after tool call | ✅ (stderr → system message) | ❌ |
| Re-activate agent before stopping | ✅ `Stop` hook (exit 2 → continue) | ❌ (`session.idle` is fire-and-forget) |
### Proposed API
For `tool.execute.after` — allow injecting a message the AI sees:

```ts
"tool.execute.after": async (input, output) => {
  if (input.tool === "edit") {
    output.inject = [
      { role: "user", text: "Remember to update progress.md with what you just changed." },
    ]
  }
}
```
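To illustrate how a skill would consume this, here is a sketch of a planning-enforcement plugin built on the proposed `inject` field. The type shapes are local stand-ins (not OpenCode's real plugin types), and the every-third-edit throttle is just an illustrative choice:

```typescript
// Local stand-ins for the plugin types (assumed shapes, not OpenCode's real ones).
type InjectedMessage = { role: "user"; text: string };
interface ToolAfterInput { tool: string; sessionID: string }
interface ToolAfterOutput { inject?: InjectedMessage[] }

// Track edits per session so the reminder fires periodically, not on every call.
const editCounts = new Map<string, number>();

export const hooks = {
  "tool.execute.after": async (input: ToolAfterInput, output: ToolAfterOutput) => {
    if (input.tool !== "edit") return;
    const n = (editCounts.get(input.sessionID) ?? 0) + 1;
    editCounts.set(input.sessionID, n);
    if (n % 3 === 0) {
      // Injected as a synthetic user message the AI actually sees.
      output.inject = [
        { role: "user", text: "Reminder: update progress.md and findings.md with what you just changed." },
      ];
    }
  },
};
```

Throttling the reminder keeps injected messages from crowding the context while still surviving long tool-call runs.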
For `session.stopping` (per #16598) — allow continuation with an injected message:

```ts
"session.stopping": async (input, output) => {
  output.inject("You haven't updated progress.md yet.");
  output.continue();
}
```
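On the host side, one way the stop path could honor these calls — purely a sketch; `resolveStopping` and the surrounding shapes are hypothetical, not OpenCode internals:

```typescript
// Hypothetical host-side handling of the proposed session.stopping hook.
type StoppingOutput = { inject: (text: string) => void; continue: () => void };
type StoppingHook = (input: { sessionID: string }, output: StoppingOutput) => Promise<void>;

interface StopDecision {
  injected: string[];      // texts to append as synthetic user messages
  shouldContinue: boolean; // whether to run another agent turn instead of stopping
}

async function resolveStopping(sessionID: string, hooks: StoppingHook[]): Promise<StopDecision> {
  const decision: StopDecision = { injected: [], shouldContinue: false };
  const output: StoppingOutput = {
    inject: (text) => { decision.injected.push(text); },
    continue: () => { decision.shouldContinue = true; },
  };
  for (const hook of hooks) await hook({ sessionID }, output);
  return decision;
}
```

The session loop would then append `decision.injected` as user messages and, if `shouldContinue` is set, run one more turn before stopping for real — mirroring Claude Code's `exit 2 → continue` behavior.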
### Implementation

We've submitted a proof-of-concept PR that implements the `tool.execute.after` injection side:

#19519 — feat: allow tool.execute.after hooks to inject AI-visible messages

The implementation is ~57 lines across 2 files:

- Adds an optional `inject` field to the `tool.execute.after` output type
- A `flushInjectedMessages()` helper persists injected entries as synthetic user messages (same pattern as existing subtask summary messages)
- Handles all three hook call sites: registry tools, MCP tools, and subtask tools
- Fully backward compatible — `inject` is optional, existing plugins are unaffected
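For reviewers, the flush step described above can be sketched roughly like this. Only the name `flushInjectedMessages` comes from the PR description; the stored-message shape and the `persist` callback are stand-ins:

```typescript
type InjectedMessage = { role: "user"; text: string };
// Stand-in for however OpenCode stores messages; `synthetic` marks the entry
// as host-generated, the same way subtask summary messages are stored.
type StoredMessage = { role: "user"; synthetic: true; text: string };

function flushInjectedMessages(
  injected: InjectedMessage[] | undefined,
  persist: (msg: StoredMessage) => void,
): number {
  if (!injected?.length) return 0;
  for (const msg of injected) {
    // Persist each injected entry as a synthetic user message.
    persist({ role: "user", synthetic: true, text: msg.text });
  }
  return injected.length; // number of messages flushed, handy for logging
}
```

Each of the three hook call sites (registry, MCP, subtask) would call this once after running the plugin's `tool.execute.after`, so injection behaves identically regardless of where the tool came from.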
### Why this matters

Skills are OpenCode's strongest differentiator. But any skill requiring behavioral enforcement beyond its initial instructions (planning, linting, security guardrails, SOP compliance) degrades severely without message injection. The benchmark data suggests a drop from a 96.7% pass rate with hook enforcement to an estimated ~30% without it — making an entire category of skills non-viable.
### Related
- #16598 — `session.stopping` lifecycle event (prerequisite for Stop hook)