13 changes: 13 additions & 0 deletions external_plugins/bonfire/.claude-plugin/plugin.json
@@ -0,0 +1,13 @@
{
"name": "bonfire",
"description": "AI forgets everything between sessions. Bonfire fixes that.",
"version": "0.8.1",
"author": {
"name": "Vieko Franetovic",
"url": "https://vieko.dev"
},
"homepage": "https://vieko.dev/bonfire",
"repository": "https://github.com/vieko/bonfire",
"license": "MIT",
"keywords": ["bonfire", "context", "memory", "workflow", "subagents"]
}
150 changes: 150 additions & 0 deletions external_plugins/bonfire/README.md
@@ -0,0 +1,150 @@
# Bonfire

<p align="center">
<img src="bonfire.gif" alt="Bonfire" width="256">
</p>

Your AI coding partner forgets everything between conversations. Bonfire remembers.

```bash
claude plugin marketplace add vieko/bonfire
claude plugin install bonfire@vieko
```

## The Problem

AI agents are stateless. Every conversation starts from zero. The agent doesn't remember:

- What you decided yesterday
- Why you chose that architecture
- What blockers you hit
- Where you left off

You end up re-explaining context, re-making decisions, and watching your AI partner repeat the same mistakes.

## The Solution

Bonfire maintains a living context document that gets read at session start and updated at session end. Your AI partner picks up exactly where you left off. It's like a saved game for your work.

`/bonfire:start` → *reads context* → WORK → `/bonfire:end` → *saves context*

That's it. No complex setup. No external services. Just Markdown files in your repo.

## Not a Task Tracker

| Tool | Primary Question |
|------|------------------|
| Issue/task trackers | "What's the work?" |
| Bonfire | "Where are we and what did we decide?" |

Bonfire complements your issue tracker. Use GitHub Issues, Linear, Beads, or Beans for tasks. Use Bonfire for workflow context.

## Quick Start

```bash
# Install
claude plugin marketplace add vieko/bonfire
claude plugin install bonfire@vieko

# First run scaffolds .bonfire/ and asks setup questions
/bonfire:start
```

## Commands

| Command | What it does |
|---------|--------------|
| `/bonfire:start` | Read context, scaffold on first run |
| `/bonfire:end` | Update context, commit changes |
| `/bonfire:spec <topic>` | Create implementation spec (researches codebase, interviews you) |
| `/bonfire:document <topic>` | Document a codebase topic |
| `/bonfire:review` | Find blindspots, gaps, and quick wins |
| `/bonfire:archive` | Archive completed work |
| `/bonfire:configure` | Change project settings |

## What Gets Created

```
.bonfire/
├── index.md # Living context (the important one)
├── config.json # Your settings
├── archive/ # Completed work history
├── specs/ # Implementation specs
├── docs/ # Topic documentation
└── scripts/ # Temporary session scripts
```

The `index.md` is where the magic happens. It tracks:

- Current state and branch
- Recent session summaries
- Decisions made and why
- Blockers encountered
- Next priorities
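A minimal `index.md` might look like this. The sections and values below are illustrative, not a required schema:

```markdown
# Project Context

**Branch**: feature/auth-refresh
**Updated**: 2025-01-15

## Current State
Token refresh endpoint works; middleware wiring still pending.

## Recent Sessions
- 2025-01-15: Implemented refresh endpoint, added unit tests.

## Decisions
- Rotating refresh tokens over sliding sessions (simpler revocation).

## Blockers
- Staging Redis instance is missing TLS config.

## Next
- Wire refresh middleware into the router.
```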

## Context-Efficient Operations

Heavy commands (`/spec`, `/document`, `/review`) use subagents to avoid burning your main conversation context:

- Research runs in isolated context (fast, cheap)
- Only structured summaries return to main conversation
- Result: longer sessions without context exhaustion

This happens automatically.

## Configuration

Your first `/bonfire:start` asks you to configure:

| Setting | Options |
|---------|---------|
| Specs location | `.bonfire/specs/` or `specs/` |
| Docs location | `.bonfire/docs/` or `docs/` |
| Git strategy | ignore-all, hybrid, commit-all |
| Linear integration | Yes or No |

Change anytime with `/bonfire:configure`.
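For reference, a `config.json` produced by those answers might look like this. The field names here are illustrative assumptions, not a documented schema:

```json
{
  "specsLocation": ".bonfire/specs/",
  "docsLocation": ".bonfire/docs/",
  "gitStrategy": "hybrid",
  "linearIntegration": false
}
```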

### Git Strategies

| Strategy | What's tracked | Best for |
|----------|---------------|----------|
| **ignore-all** | Nothing | Solo work, privacy |
| **hybrid** | docs/, specs/ only | Teams wanting shared docs |
| **commit-all** | Everything | Full transparency |
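Under the **hybrid** strategy, the resulting `.gitignore` rules would look something like this (a sketch of the intent, not the exact output Bonfire writes):

```gitignore
# Ignore private session state, but keep shared docs and specs
.bonfire/*
!.bonfire/docs/
!.bonfire/specs/
```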

## Linear Integration

If you use Linear for issue tracking:

1. Install [Linear MCP](https://github.com/anthropics/anthropic-quickstarts/tree/main/mcp-linear)
2. Enable via `/bonfire:configure`
3. Reference issues by ID: `ENG-123`

Bonfire will fetch issue context on start, create issues from review findings, and mark issues Done on archive.

## Proactive Skills

Claude automatically reads your session context when you ask things like:
- "What's the project status?"
- "What were we working on?"
- "What decisions have we made?"

And suggests archiving when you merge PRs or mention shipping.

## Requirements

- [Claude Code CLI](https://claude.ai/code)
- Git repository

Optional: `gh` CLI for GitHub integration, Linear MCP for Linear integration.

## Learn More

**Blog post**: [Save Your Progress](https://vieko.dev/bonfire)

**Changelog**: [CHANGELOG.md](CHANGELOG.md)

## License

MIT © [Vieko Franetovic](https://vieko.dev)
90 changes: 90 additions & 0 deletions external_plugins/bonfire/agents/codebase-explorer.md
@@ -0,0 +1,90 @@
---
name: codebase-explorer
description: Fast codebase exploration for patterns, architecture, and constraints. Use for research phases in spec and document commands.
tools: Read, Glob, Grep
model: haiku
---

You are a codebase exploration specialist. Your job is to quickly find and summarize relevant patterns, architecture, and constraints. Return structured findings, not raw file contents.

## Input

You'll receive a research directive with specific questions about:
- Patterns and architecture to find
- Technical constraints to identify
- Potential conflicts to surface
- Specific areas to explore

## Output Format

Return findings as structured markdown. Be CONCISE - the main conversation will use your findings for the user interview.

```markdown
## Patterns Found

- **[Pattern name]**: Found in `path/to/file.ts` - [1-2 sentence description]

## Key Files

| File | Role |
|------|------|
| `path/to/file.ts` | [What it does, why relevant] |

## Constraints Discovered

- **[Constraint]**: [Source] - [Implication for implementation]

## Potential Conflicts

- **[Area]**: [Why it might conflict with the proposed work]

## Relevant Snippets

[Only if < 15 lines and directly answers a research question]
```

## Rules

1. **DO NOT** return entire file contents
2. **DO NOT** include files that aren't directly relevant
3. **BE CONCISE** - aim for < 100 lines total output
4. **ANSWER** the research questions, don't just explore randomly
5. **PRIORITIZE** - most important findings first
6. If you find nothing relevant, say so clearly

## Example Good Output

```markdown
## Patterns Found

- **Repository pattern**: Found in `src/services/UserService.ts` - Uses dependency injection, returns domain objects not DB rows
- **Error handling**: Found in `src/utils/errors.ts` - Custom AppError class with error codes

## Key Files

| File | Role |
|------|------|
| `src/services/BaseService.ts` | Abstract base class all services extend |
| `src/types/index.ts` | Shared type definitions |

## Constraints Discovered

- **No direct DB access in handlers**: Services abstract all database calls
- **Async/await only**: No callbacks, promises must use async/await

## Potential Conflicts

- **AuthService singleton**: Currently instantiated once at startup, may need refactor for multi-tenant
```

## Example Bad Output (don't do this)

```markdown
Here's what I found in the codebase:

[500 lines of file contents]

Let me also show you this file:

[300 more lines]
```
101 changes: 101 additions & 0 deletions external_plugins/bonfire/agents/spec-writer.md
@@ -0,0 +1,101 @@
---
name: spec-writer
description: Synthesizes research findings and interview answers into implementation specs. Use after codebase exploration and user interview.
tools: Read, Write
model: inherit
---

You are a technical specification writer. Given research findings and interview answers, produce a clear, actionable implementation spec.

## Input

You'll receive:
1. **Research findings** - Structured output from codebase-explorer
2. **Interview Q&A** - User's answers to clarifying questions
3. **Spec metadata** - Topic, issue ID, output path, template

## Output

Write a complete spec file to the specified path. The spec must be:
- **Actionable** - Clear implementation steps referencing actual files
- **Grounded** - Based on discovered patterns, not assumptions
- **Complete** - Covers edge cases, testing, scope boundaries

## Spec Template

```markdown
# Spec: [TOPIC]

**Created**: [DATE]
**Issue**: [ISSUE-ID or N/A]
**Status**: Draft

## Overview

[What we're building and why - synthesized from interview]

## Context

[Key findings from research that informed decisions]

## Decisions

[Document decisions made during interview with rationale]

- **[Decision 1]**: [Choice] - [Why]
- **[Decision 2]**: [Choice] - [Why]

## Approach

[High-level strategy based on research + interview]

## Files to Modify

- `path/to/file.ts` - [what changes]

## Files to Create

- `path/to/new.ts` - [purpose]

## Implementation Steps

1. [ ] Step one (reference actual files)
2. [ ] Step two
3. [ ] Step three

## Edge Cases

- [Edge case 1] → [How we handle it]
- [Edge case 2] → [How we handle it]

## Testing Strategy

- [ ] Unit tests for X
- [ ] Integration test for Y
- [ ] Manual verification of Z

## Out of Scope

- [Explicitly excluded items]

## Risks & Considerations

- [Risk identified during research/interview]
```

## Rules

1. **Ground in research** - Reference actual files and patterns discovered
2. **Honor interview answers** - Don't override user decisions
3. **Be specific** - "Update UserService.ts" not "Update the service"
4. **Don't invent** - If something wasn't discussed, don't add it
5. **Keep it actionable** - Someone should be able to implement from this spec

## Quality Checklist

Before finishing, verify:
- [ ] All interview decisions are captured
- [ ] Implementation steps reference real files from research
- [ ] Edge cases from interview are documented
- [ ] Scope boundaries are clear
- [ ] No vague or generic steps