opencode-bonfire 1.3.0 → 1.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -10,9 +10,47 @@ Always runs interactively - asks all configuration questions regardless of argum

  Run `git rev-parse --show-toplevel` to locate the repository root.

- ## Step 2: Check for Bonfire Directory
+ ## Step 2: Ensure Bonfire Directory Exists

- If `<git-root>/.bonfire/` does not exist, tell the user to run `/bonfire-start` first.
+ If `<git-root>/.bonfire/` does not exist, create it.
+
+ If `<git-root>/.bonfire/index.md` does not exist, create a minimal version:
+
+ ```markdown
+ # Session Context: [PROJECT_NAME]
+
+ **Date**: [CURRENT_DATE]
+ **Status**: Active
+ **Branch**: [CURRENT_BRANCH]
+
+ ---
+
+ ## Current State
+
+ [Created via /bonfire-configure - run /bonfire-start for full setup]
+
+ ---
+
+ ## Recent Sessions
+
+ _No sessions recorded yet._
+
+ ---
+
+ ## Next Session Priorities
+
+ 1. [Define your priorities]
+
+ ---
+
+ ## Notes
+
+ [Add notes here]
+ ```
+
+ Detect project name from: package.json name → git remote → directory name.
+
+ This ensures configure can be run as the first entry point without leaving the project in an incomplete state.
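The fallback chain above (package.json name → git remote → directory name) could be sketched in shell. This is a minimal sketch, not part of the package: the helper name `detect_project_name` and the use of `jq` are assumptions.

```shell
#!/bin/sh
# Sketch of the fallback chain: package.json name -> git remote -> directory name.
# `detect_project_name` and the jq dependency are illustrative assumptions.
detect_project_name() {
  # 1. "name" field of package.json, when the file exists and jq is installed
  if [ -f package.json ] && command -v jq >/dev/null 2>&1; then
    name=$(jq -r '.name // empty' package.json 2>/dev/null)
    if [ -n "$name" ]; then
      echo "$name"
      return 0
    fi
  fi
  # 2. last path segment of the origin remote URL, minus a .git suffix
  remote=$(git remote get-url origin 2>/dev/null || true)
  if [ -n "$remote" ]; then
    basename "$remote" .git
    return 0
  fi
  # 3. fall back to the current directory name
  basename "$(pwd)"
}
```

Under these assumptions, running the helper in a directory with no `package.json` and no git remote would simply print the directory name.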

  ## Step 3: Read Current Config

@@ -4,175 +4,102 @@ description: Create documentation about a topic in the codebase

  # Document Topic

- Create reference documentation using subagent for research, preserving main context.
+ Create reference documentation for **$ARGUMENTS**.

- ## Step 1: Find Git Root
-
- Run `git rev-parse --show-toplevel` to locate the repository root.
-
- ## Step 2: Check Config
-
- Read `<git-root>/.bonfire/config.json` if it exists.
-
- **Docs location**: Read `docsLocation` from config. Default to `.bonfire/docs/` if not set.
-
- ## Step 3: Understand the Topic
-
- The topic to document is: $ARGUMENTS
-
- If no topic provided, ask the user what they want documented.
-
- ## Step 4: Explore the Codebase (Subagent)
-
- **Progress**: Tell the user "Exploring codebase for [TOPIC]..."
-
- Use the task tool to invoke the **codebase-explorer** subagent for research.
-
- Provide a research directive:
-
- ```
- Research the codebase to document: [TOPIC]
-
- Find:
- 1. **Architecture**: How this system/feature is structured, key components
- 2. **Key Files**: Important files and their roles
- 3. **Flow**: How data/control flows through the system
- 4. **Patterns**: Design patterns and conventions used
- 5. **Gotchas**: Important details, edge cases, things to watch out for
-
- Return structured findings with file paths and brief descriptions.
- ```
-
- **Wait for the subagent to return findings** before proceeding.
-
- The subagent runs in isolated context (haiku model, fast), preserving main context for writing.
+ ---

- ### Exploration Validation
+ ## Outcome

- After the subagent returns, validate the response:
+ Complete reference documentation that helps developers understand a system, feature, or pattern in the codebase. The doc should enable someone unfamiliar with the code to:
+ - Understand what it does and why it exists
+ - Find the relevant files quickly
+ - Understand how it works at a conceptual level
+ - Avoid common pitfalls

- **Valid response contains at least one of:**
- - `## Architecture` or `## Patterns Found` with content
- - `## Key Files` with entries
- - `## Flow` or `## Gotchas` with items
+ ---

- **On valid response**: Proceed to Step 5.
+ ## Acceptance Criteria

- **On invalid/empty response**:
- 1. Warn user: "Codebase exploration returned limited results. I'll research directly."
- 2. Fall back to in-context research:
- - `glob("**/*[topic-related]*")` to find relevant files
- - `grep("topic-keywords")` to find implementations
- - Read identified files
- 3. Continue to Step 5 with in-context findings.
+ The doc file must contain these sections:

- **On subagent failure** (timeout, error):
- 1. Warn user: "Subagent exploration failed. Continuing with direct research."
- 2. Perform in-context research as above.
- 3. Continue to Step 5.
+ | Section | Purpose |
+ |---------|---------|
+ | `## Overview` | What this is and why it matters |
+ | `## Key Files` | Important files with their roles |
+ | `## How It Works` | Conceptual explanation of flow/behavior |
+ | `## Gotchas` | Edge cases, pitfalls, things to watch out for |

- ### Resumable Exploration (Large Codebases)
+ Additional sections are welcome (Architecture, Examples, Related Topics) but these four are required.

- For very large codebases, exploration may need multiple passes. The task tool returns a `session_id` you can use to resume.
+ **Quality signals:**
+ - File paths are accurate and exist in the codebase
+ - Explanations match actual code behavior
+ - Gotchas reflect real issues (not hypothetical concerns)

- **When to offer resume:**
- - Subagent returns with "X additional items omitted" notes
- - Findings cover only part of the topic (e.g., found architecture but not flows)
- - User asks for deeper exploration of a specific aspect
+ ---

- **To resume exploration:**
- 1. Tell user: "Exploration found [X] but there's more to document. Continue exploring [specific aspect]?"
- 2. If yes, re-invoke codebase-explorer with the `session_id` parameter:
- - Pass the session_id from the previous invocation
- - Provide a refined directive: "Continue exploring: [specific aspect]. Focus on [what to find]."
- 3. Merge findings from resumed exploration with previous findings.
- 4. Repeat if needed, up to 3 passes maximum.
+ ## Constraints

- **Example multi-pass scenario:**
- - Pass 1: "Document payment system" → finds payment service, stripe integration
- - Pass 2 (resume): "Continue exploring: refund handling" → finds refund logic, webhooks
- - Merge: Combined findings produce more complete documentation
+ ### Context Isolation

- ## Step 5: Write Documentation (Subagent)
+ Research happens in an isolated subagent context to preserve main context.

- **Progress**: Tell the user "Writing documentation..."
+ | Phase | Agent | Model | Why |
+ |-------|-------|-------|-----|
+ | Research | `codebase-explorer` | haiku | Fast exploration without polluting main context |
+ | Writing | `doc-writer` | inherit | Synthesis in isolation with full research context |

- **Naming convention**: `<topic>.md` (kebab-case)
+ ### No Interview Required

- Examples:
- - `inbound-agent-architecture.md`
- - `sampling-strategies.md`
- - `authentication-flow.md`
+ Unlike specs, documentation is based purely on codebase research. The code is the source of truth.

- Use the task tool to invoke the **doc-writer** subagent.
+ ### File Locations

- Provide the prompt in this exact format:
+ - **Config**: `<git-root>/.bonfire/config.json` contains `docsLocation`
+ - **Default**: `.bonfire/docs/` if not configured
+ - **Naming**: `<topic>.md` (kebab-case, e.g., `authentication-flow.md`)

- ```
- ## Research Findings
+ ### Verification

- <paste structured findings from Step 4>
+ After writing, verify the doc contains all 4 required sections. If incomplete:
+ - Warn user what's missing
+ - Offer: proceed / retry / abort

- ## Doc Metadata
+ ### Session Context

- - **Topic**: <topic name>
- - **Output Path**: <git-root>/<docsLocation>/<topic>.md
- - **Date**: <YYYY-MM-DD>
- ```
+ After writing, add a reference to the doc in `<git-root>/.bonfire/index.md` under Key Resources.

- The subagent will write the doc file directly to the Output Path.
+ ### Completion

- ### Doc Verification
+ After verification, confirm doc creation and offer options:
+ - Add more detail to any section
+ - Document related topics
+ - Proceed with other work

- After the doc-writer subagent returns, verify the doc is complete.
+ ---

- **Key sections to check** (lenient - only these 4):
- - `## Overview`
- - `## Key Files`
- - `## How It Works`
- - `## Gotchas`
+ ## Guidance (Not Rules)

- **Verification steps:**
+ These patterns tend to work well, but adapt as needed:

- 1. **Read the doc file** at `<git-root>/<docsLocation>/<topic>.md`
+ **Research before writing** - Let the codebase inform the structure.

- 2. **If file missing or empty**:
- - Warn user: "Doc file wasn't written. Writing directly..."
- - Write the doc yourself using the write tool
- - Run verification again on the written file
+ **Show your work** - Tell user what you're doing: "Exploring codebase...", "Writing documentation..."

- 3. **If file exists, check for key sections**:
- - Scan content for the 4 section headers above
- - Track which sections are present/missing
+ **Fallback gracefully** - If subagent fails, do the work in main context. Warn user but don't stop.

- 4. **If all 4 sections present**:
- - Tell user: "Doc written and verified (4/4 key sections present)."
- - Proceed to Step 6.
+ **Large codebases** - Explorer may need multiple passes. Offer to continue if findings seem incomplete for the topic.

- 5. **If 1-3 sections missing** (partial write):
- - Warn user: "Doc appears incomplete. Missing sections: [list missing]"
- - Show which sections ARE present
- - Ask: "Proceed with partial doc, retry write, or abort?"
- - **Proceed**: Continue to Step 6
- - **Retry**: Re-invoke doc-writer subagent with same input, then verify again
- - **Abort**: Stop and inform user the incomplete doc file remains at path
+ **Follow the code** - Document what the code actually does, not what comments claim or what you assume.

- 6. **If all sections missing but has content**:
- - Treat as invalid format, trigger fallback write
- - Write the doc yourself, then verify the written file
+ ---

- **On subagent failure** (timeout, error):
- - Warn user: "Doc writer failed. Writing doc directly..."
- - Write the doc yourself using the write tool
- - Run verification on the written file
+ ## Anti-Patterns

- ## Step 5: Link to Session Context
+ **Don't document assumptions** - If you can't find it in the code, don't write about it.

- Add a reference to the doc in `<git-root>/.bonfire/index.md` under Key Resources or Notes.
+ **Don't over-abstract** - Concrete file paths and function names are more useful than vague descriptions.

- ## Step 6: Confirm
+ **Don't skip verification** - Subagents can produce partial output. Always check.

- Summarize what was documented and ask if the user wants:
- - More detail on any section
- - Related topics documented
- - To proceed with other work
+ **Don't write tutorials** - This is reference documentation (how it works), not a guide (how to use it).
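The "4 required sections" check that the Verification constraint above calls for could be sketched as a shell helper. This is a sketch only: `check_required_sections` is a hypothetical name, not part of the package.

```shell
#!/bin/sh
# Sketch of the four-section verification check.
# `check_required_sections` is an illustrative helper name.
check_required_sections() {
  doc=$1
  status=0
  for section in '## Overview' '## Key Files' '## How It Works' '## Gotchas'; do
    # -x: match the whole line; -F: treat the header as a fixed string
    if ! grep -qxF "$section" "$doc"; then
      echo "missing: $section"
      status=1
    fi
  done
  return $status
}
```

A non-zero return with a `missing: ...` line per absent header gives the caller what it needs to warn the user and offer proceed / retry / abort.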
@@ -0,0 +1,229 @@
+ ---
+ description: Review a GitHub pull request and post inline comments
+ ---
+
+ # Review Pull Request
+
+ Review a GitHub PR in an isolated worktree, then post inline comments on findings.
+
+ ## Step 1: Parse Arguments
+
+ Extract PR number from `$ARGUMENTS`:
+ - `333` or `#333` → PR number 333
+ - Empty → Show usage and abort
+
+ **Usage**: `/bonfire-review-pr <pr-number>`
+
+ If no PR number provided:
+ > "Usage: `/bonfire-review-pr <pr-number>`
+ >
+ > Example: `/bonfire-review-pr 333`"
+
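The argument normalization in Step 1 could be sketched as a small shell helper; `parse_pr_number` is an illustrative name, not part of the command.

```shell
#!/bin/sh
# Sketch: normalize $ARGUMENTS into a bare PR number per Step 1.
# `parse_pr_number` is an illustrative helper name.
parse_pr_number() {
  arg=${1#\#}   # "#333" -> "333"; "333" stays "333"
  case "$arg" in
    ''|*[!0-9]*)
      # empty or non-numeric input: show usage and fail
      echo "Usage: /bonfire-review-pr <pr-number>" >&2
      return 1
      ;;
    *)
      echo "$arg"
      ;;
  esac
}
```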
+ ## Step 2: Verify Environment
+
+ Check if running inside tmux:
+
+ ```bash
+ [ -n "$TMUX" ] && echo "tmux: yes" || echo "tmux: no"
+ ```
+
+ **If not in tmux**: Provide manual instructions and abort:
+
+ > "PR review with inline comments requires tmux for worktree isolation.
+ >
+ > **Manual alternative:**
+ > 1. Create worktree: `git worktree add ../pr-<number>-review origin/<branch>`
+ > 2. Open new terminal in that directory
+ > 3. Run: `opencode 'Review this PR and help me post comments'`
+ > 4. Clean up when done: `git worktree remove ../pr-<number>-review`"
+
+ ## Step 3: Fetch PR Metadata
+
+ Get PR details:
+
+ ```bash
+ gh pr view <number> --json number,title,headRefName,baseRefName,headRefOid,url,body,files
+ ```
+
+ **If PR not found**: Abort with "PR #<number> not found in this repository."
+
+ Extract and store:
+ - `headRefName` - PR branch name
+ - `baseRefName` - Target branch (usually main)
+ - `headRefOid` - Commit SHA for inline comments
+ - `title` - PR title
+ - `url` - PR URL
+ - `files` - Changed files list
+
+ ## Step 4: Find Git Root and Compute Paths
+
+ ```bash
+ git rev-parse --show-toplevel
+ ```
+
+ Compute worktree path: `<git-root>/../<repo-name>-pr-<number>-review`
+
+ Example: `/Users/vieko/dev/gtm` → `/Users/vieko/dev/gtm-pr-333-review`
+
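The Step 4 path computation can be sketched as a one-liner helper; `compute_worktree_path` is an illustrative name, not part of the command.

```shell
#!/bin/sh
# Sketch of the Step 4 computation:
# <git-root>/../<repo-name>-pr-<number>-review
# `compute_worktree_path` is an illustrative helper name.
compute_worktree_path() {
  git_root=$1
  pr_number=$2
  repo_name=$(basename "$git_root")
  printf '%s/%s-pr-%s-review\n' "$(dirname "$git_root")" "$repo_name" "$pr_number"
}
```

With the document's own example inputs, `compute_worktree_path /Users/vieko/dev/gtm 333` prints `/Users/vieko/dev/gtm-pr-333-review`.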
+ ## Step 5: Create Worktree
+
+ Create isolated worktree for PR branch:
+
+ ```bash
+ git fetch origin <headRefName>
+ git worktree add <worktree-path> origin/<headRefName>
+ ```
+
+ **On failure** (branch conflict, dirty state, etc.):
+
+ 1. Check error message
+ 2. If worktree already exists: Ask user "Worktree already exists. Remove and recreate?"
+ - Yes: `git worktree remove <worktree-path> --force` then retry
+ - No: Abort
+ 3. If other error: Report error and abort with suggestion to check `git worktree list`
+
+ ## Step 6: Get PR Diff Summary
+
+ Get the diff for context:
+
+ ```bash
+ cd <worktree-path> && git diff origin/<baseRefName>...HEAD --stat
+ ```
+
+ Get changed files:
+
+ ```bash
+ cd <worktree-path> && git diff origin/<baseRefName>...HEAD --name-only
+ ```
+
+ ## Step 7: Generate Review Context
+
+ Create context document for spawned session.
+
+ Write to `<worktree-path>/.bonfire-pr-review-context.md`:
+
+ ```markdown
+ # PR Review Context
+
+ **PR**: #<number> - <title>
+ **URL**: <url>
+ **Branch**: <headRefName> → <baseRefName>
+ **Commit**: <headRefOid>
+
+ ## Changed Files
+
+ <list of changed files>
+
+ ## PR Description
+
+ <body from PR>
+
+ ---
+
+ ## Instructions
+
+ You are reviewing PR #<number> in an isolated worktree.
+
+ ### Step 1: Run Review
+
+ Use the task tool to invoke the **work-reviewer** subagent:
+
+ ```
+ Review this pull request for blindspots, gaps, and improvements.
+
+ **Scope**: PR #<number> - <title>
+
+ **Files changed**:
+ <list of changed files>
+
+ **PR Description**:
+ <body>
+
+ Return categorized findings with severity, effort, and specific file:line references.
+ ```
+
+ ### Step 2: Present Findings
+
+ After review completes, present findings grouped by severity.
+
+ For each finding, note:
+ - File and line number (if applicable)
+ - Severity (critical/moderate/minor)
+ - Description
+
+ ### Step 3: Batch Comment Selection
+
+ Ask user: "Which findings should I post as PR comments?"
+
+ Options:
+ 1. List findings by number, let user select (e.g., "1, 3, 5")
+ 2. "All" - post all findings
+ 3. "None" - skip commenting
+
+ ### Step 4: Post Inline Comments
+
+ For each selected finding with a file:line reference, post an inline comment:
+
+ ```bash
+ gh api repos/{owner}/{repo}/pulls/<number>/comments \
+ -f body="**Review Finding**
+
+ <finding description>
+
+ *Severity: <severity> | Effort: <effort>*" \
+ -f commit_id="<headRefOid>" \
+ -f path="<file-path>" \
+ -f line=<line-number>
+ ```
+
+ For findings without line numbers, post as general PR comment:
+
+ ```bash
+ gh pr comment <number> --body "**Review Finding**
+
+ <finding description>
+
+ *Severity: <severity> | Effort: <effort>*"
+ ```
+
+ **Note**: GitHub only allows inline comments on files that are part of the PR diff. If a finding references a file not in the diff (e.g., missing config in turbo.json when turbo.json wasn't changed), post it as a general PR comment instead.
+
+ ### Step 5: Offer Cleanup
+
+ After commenting, ask: "Review complete. Remove worktree?"
+
+ If yes:
+ ```bash
+ cd <original-git-root>
+ git worktree remove <worktree-path>
+ ```
+
+ Report: "Worktree cleaned up. PR review complete."
+ ```
+
+ ## Step 8: Spawn Review Session
+
+ Spawn a new OpenCode session in the worktree:
+
+ ```bash
+ WORKTREE_PATH="<computed-worktree-path>"
+ CONTEXT="$(cat "$WORKTREE_PATH/.bonfire-pr-review-context.md")"
+ tmux split-window -h -c "$WORKTREE_PATH" \
+ "opencode --append-system-prompt '$CONTEXT' 'Ready to review PR #<number>. Starting work-reviewer subagent...'"
+ ```
+
+ **Verify spawn succeeded**: If tmux fails (terminal too small), warn user and provide manual instructions.
+
+ ## Step 9: Confirm
+
+ Tell the user:
+
+ > **PR review session spawned.**
+ >
+ > - Worktree created at `<worktree-path>`
+ > - Review session opened in adjacent pane
+ > - The new session will run work-reviewer and help you post comments
+ >
+ > When done, the review session will offer to clean up the worktree.
+ >
+ > You can continue working here - your current branch is unchanged.
@@ -4,269 +4,111 @@ description: Create an implementation spec for a feature or task

  # Create Implementation Spec

- A hybrid approach using subagents: research in isolated context, interview in main context, write in isolated context.
+ Create an implementation spec for **$ARGUMENTS**.

- ## Step 1: Find Git Root
-
- Run `git rev-parse --show-toplevel` to locate the repository root.
-
- ## Step 2: Check Config
-
- Read `<git-root>/.bonfire/config.json` if it exists.
-
- **Specs location**: Read `specsLocation` from config. Default to `.bonfire/specs/` if not set.
-
- ## Step 3: Gather Initial Context
-
- Get the topic from $ARGUMENTS or ask if unclear.
-
- Check for existing context:
- - Read `<git-root>/.bonfire/index.md` for project state
- - Check for `SPEC.md` or `spec.md` in git root (user's spec template)
- - If issue ID provided, note for filename
-
- ## Step 4: Research Phase (Subagent)
-
- **Progress**: Tell the user "Researching codebase for patterns and constraints..."
-
- Use the task tool to invoke the **codebase-explorer** subagent for research.
-
- Provide a research directive with these questions:
-
- ```
- Research the codebase for implementing: [TOPIC]
-
- Find:
- 1. **Patterns**: How similar features are implemented, existing abstractions to reuse, naming conventions
- 2. **Constraints**: Dependencies, API boundaries, performance considerations
- 3. **Potential Conflicts**: Files that need changes, intersections with existing code, migration concerns
-
- Return structured findings only - no raw file contents.
- ```
-
- **Wait for the subagent to return findings** before proceeding.
-
- The subagent runs in isolated context (haiku model, fast), preserving main context for interview.
-
- ### Research Validation
-
- After the subagent returns, validate the response:
-
- **Valid response contains at least one of:**
- - `## Patterns Found` with content
- - `## Key Files` with entries
- - `## Constraints Discovered` with items
-
- **On valid response**: Proceed to Step 5.
-
- **On invalid/empty response**:
- 1. Warn user: "Codebase exploration returned limited results. I'll research directly."
- 2. Fall back to in-context research using glob, grep, and read:
- - Search for patterns: `glob("**/*.{ts,js,py,go}")` to find code files
- - Look for similar implementations: `grep("pattern-keyword")`
- - Read key files identified
- 3. Continue to Step 5 with in-context findings.
-
- **On subagent failure** (timeout, error):
- 1. Warn user: "Subagent research failed. Continuing with direct exploration."
- 2. Perform in-context research as above.
- 3. Continue to Step 5.
-
- ### Resumable Exploration (Large Codebases)
-
- For very large codebases, exploration may need multiple passes. The task tool returns a `session_id` you can use to resume.
-
- **When to offer resume:**
- - Subagent returns with "X additional items omitted" notes
- - Findings cover only part of the codebase (e.g., backend but not frontend)
- - User asks for deeper exploration of a specific area
-
- **To resume exploration:**
- 1. Tell user: "Exploration found [X] but there's more to explore. Continue exploring [specific area]?"
- 2. If yes, re-invoke codebase-explorer with the `session_id` parameter:
- - Pass the session_id from the previous invocation
- - Provide a refined directive: "Continue exploring: [specific area]. Focus on [what to find]."
- 3. Merge findings from resumed exploration with previous findings.
- 4. Repeat if needed, up to 3 passes maximum.
-
- **Example multi-pass scenario:**
- - Pass 1: "Research authentication" → finds auth middleware, auth service
- - Pass 2 (resume): "Continue exploring: authorization rules" → finds permissions, role checks
- - Merge: Combined findings inform better interview questions
-
- ## Step 5: Interview Phase (Main Context)
-
- **Progress**: Tell the user "Starting interview (3 rounds: core decisions, edge cases, testing & scope)..."
-
- Using the research findings, interview the user with **informed questions** via the question tool.
-
- ### Round 1: Core Decisions
-
- **Progress**: "Round 1/3: Core decisions..."
-
- Ask about fundamental approach based on patterns found:
-
- Example questions (adapt based on actual findings):
- - "I found [Pattern A] in `services/` and [Pattern B] in `handlers/`. Which pattern should this feature follow?"
- - "The existing [Component] handles [X]. Should we extend it or create a new [Y]?"
- - "I see [Library] is used for [purpose]. Should we use it here or try [Alternative]?"
-
- ### Round 2: Edge Cases & Tradeoffs
-
- **Progress**: "Round 2/3: Edge cases and tradeoffs..."
-
- Based on Round 1 answers and research, ask about:
- - Error handling approach
- - Edge cases identified in research
- - Performance vs simplicity tradeoffs
- - User experience considerations
-
- Example questions:
- - "What should happen when [edge case from research]?"
- - "I found [potential conflict]. How should we handle it?"
- - "[Approach A] is simpler but [tradeoff]. [Approach B] is more complex but [benefit]. Preference?"
-
- ### Round 3: Testing & Scope (Required)
-
- **Progress**: "Round 3/3: Testing and scope (final round)..."
-
- Always ask about testing and scope, even if user seems ready to proceed:
-
- **Testing** (must ask one):
- - "What's the testing approach? Unit tests, integration tests, manual testing, or skip tests for MVP?"
- - "Should this include tests? If so, what should be covered?"
-
- **Scope** (must ask one):
- - "What's explicitly out of scope for this implementation?"
- - "MVP vs full implementation - any features to defer?"
+ ---

- Example combined question:
- - "Two quick questions: (1) Testing approach for this feature? (2) Anything explicitly out of scope?"
+ ## Outcome

- **Do not skip Round 3.** These questions take 30 seconds and prevent spec gaps.
+ A complete implementation spec written to the configured specs location that captures:
+ - What to build and why
+ - Key decisions with rationale
+ - Concrete implementation steps
+ - Edge cases and error handling
+ - Testing approach and scope boundaries

- ## Step 6: Write the Spec (Subagent)
+ ---

- **Progress**: Tell the user "Writing implementation spec..."
+ ## Acceptance Criteria

- Use the task tool to invoke the **spec-writer** subagent.
+ The spec file must contain these sections:

- Provide the prompt in this exact format:
+ | Section | Purpose |
+ |---------|---------|
+ | `## Overview` | What this feature does and why it matters |
+ | `## Decisions` | Key technical choices with rationale |
+ | `## Implementation Steps` | Ordered, actionable steps to build it |
+ | `## Edge Cases` | Error handling, boundary conditions, failure modes |

- ```
- ## Research Findings
+ Additional sections are welcome but these four are required.

- <paste structured findings from Step 4>
+ **Quality signals:**
+ - Decisions reference actual codebase patterns (not generic advice)
+ - Implementation steps are specific to this codebase (file paths, function names)
+ - Edge cases reflect real constraints discovered in research

- ## Interview Q&A
+ ---

- ### Core Decisions
- **Q**: <question from Round 1>
- **A**: <user's answer>
+ ## Constraints

- ### Edge Cases & Tradeoffs
- **Q**: <question from Round 2>
- **A**: <user's answer>
+ ### Context Isolation

- ### Scope & Boundaries
- **Q**: <question from Round 3>
- **A**: <user's answer>
+ Research and writing happen in isolated subagent contexts to preserve main context for user interaction.

- ## Spec Metadata
+ | Phase | Agent | Model | Why |
+ |-------|-------|-------|-----|
+ | Research | `codebase-explorer` | haiku | Fast, cheap exploration without polluting main context |
+ | Writing | `spec-writer` | inherit | Synthesis in isolation; has full research + interview context |

- - **Topic**: <topic name>
- - **Issue**: <issue ID or N/A>
- - **Output Path**: <git-root>/<specsLocation>/<filename>.md
- - **Date**: <YYYY-MM-DD>
- ```
+ ### User Interview Required

- The subagent will write the spec file directly to the Output Path.
+ The user must be interviewed before writing. Research informs questions; questions surface decisions the user wouldn't think to mention.

- **Naming convention**: `<issue-id>-<topic>.md` or `<topic>.md`
+ **Interview must cover:**
+ - Core technical decisions (patterns, approaches, tradeoffs)
+ - Edge cases and error handling preferences
+ - Testing approach
+ - Scope boundaries (what's explicitly out)

- ### Spec Verification
+ Use the question tool. Good questions are informed by research, about tradeoffs, and codebase-specific.

- After the spec-writer subagent returns, verify the spec is complete.
+ ### File Locations

- **Key sections to check** (lenient - only these 4):
- - `## Overview`
- - `## Decisions`
- - `## Implementation Steps`
- - `## Edge Cases`
+ - **Config**: `<git-root>/.bonfire/config.json` contains `specsLocation`
+ - **Default**: `.bonfire/specs/` if not configured
+ - **Naming**: `<issue-id>-<topic>.md` or `<topic>.md`

- **Verification steps:**
+ ### Verification

- 1. **Read the spec file** at `<git-root>/<specsLocation>/<filename>.md`
+ After writing, verify the spec contains all 4 required sections. If incomplete:
+ - Warn user what's missing
+ - Offer: proceed / retry / abort

- 2. **If file missing or empty**:
- - Warn user: "Spec file wasn't written. Writing directly..."
- - Write the spec yourself using the write tool
- - Run verification again on the written file
+ ### Session Context

- 3. **If file exists, check for key sections**:
- - Scan content for the 4 section headers above
- - Track which sections are present/missing
+ After writing the spec, add a reference to it in `<git-root>/.bonfire/index.md` under Current State. This links the spec to the session that created it.

- 4. **If all 4 sections present**:
- - Tell user: "Spec written and verified (4/4 key sections present)."
- - Proceed to Step 7.
+ ### Completion

- 5. **If 1-3 sections missing** (partial write):
- - Warn user: "Spec appears incomplete. Missing sections: [list missing]"
- - Show which sections ARE present
- - Ask: "Proceed with partial spec, retry write, or abort?"
- - **Proceed**: Continue to Step 7
- - **Retry**: Re-invoke spec-writer subagent with same input, then verify again
- - **Abort**: Stop and inform user the incomplete spec file remains at path
+ After verification, confirm spec creation and offer options:
+ - Proceed with implementation
+ - Refine specific sections
+ - Save for later

- 6. **If all sections missing but has content**:
- - Treat as invalid format, trigger fallback write
- - Write the spec yourself, then verify the written file
+ ---

- **On subagent failure** (timeout, error):
- - Warn user: "Spec writer failed. Writing spec directly..."
- - Write the spec yourself using the write tool
- - Run verification on the written file
+ ## Guidance (Not Rules)

- ## Step 7: Link to Session Context
+ These patterns tend to work well, but adapt as needed:

- Add a reference to the spec in `<git-root>/.bonfire/index.md` under Current State.
+ **Research before interviewing** - Findings make questions specific and valuable.

- ## Step 8: Confirm
+ **Three interview rounds** - Core decisions → Edge cases → Testing & scope. But collapse if user is time-constrained.

- Read the generated spec and present a summary. Ask if user wants to:
- - Proceed with implementation
- - Refine specific sections
- - Add more detail to any area
- - Save for later
+ **Show your work** - Tell user what you're doing: "Researching...", "Starting interview...", "Writing spec..."

- ## Interview Tips
+ **Fallback gracefully** - If subagent fails, do the work in main context. Warn user but don't stop.

- **Good questions are:**
- - Informed by research (not generic)
- - About tradeoffs (not yes/no)
- - Specific to the codebase
- - Non-obvious (user wouldn't think to mention)
+ **Large codebases** - Explorer may need multiple passes. Offer to continue if findings seem incomplete.

- **Bad questions:**
- - "What features do you want?" (too broad)
- - "Should we add error handling?" (obvious)
- - Generic without codebase context
+ ---

- **Examples of good informed questions:**
- - "I found `UserService` uses repository pattern but `OrderService` uses direct DB access. Which approach?"
- - "The `auth` middleware validates JWT but doesn't check permissions. Should this feature add permission checks or assume auth is enough?"
- - "There's a `BaseController` with shared logic. Extend it or keep this feature standalone?"
+ ## Anti-Patterns

- ## Spec Lifecycle
+ **Don't ask generic questions** - "What features do you want?" wastes an interview slot.

- Specs are **temporary artifacts** - they exist to guide implementation:
+ **Don't skip the interview** - Research alone misses user intent. Interview alone misses codebase reality.

- 1. **Draft** Created, ready for review
- 2. **In Progress** → Being implemented
- 3. **Completed** → Implementation done
+ **Don't write without verification** - Subagents can produce partial output. Always check.

- **When a spec is fully implemented**:
- - If it contains reusable reference material, move to `docs/`
- - Delete the spec file - archive has the record
- - Don't let specs accumulate
+ **Don't over-specify implementation** - Steps should guide, not micromanage. Leave room for implementation judgment.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "opencode-bonfire",
- "version": "1.3.0",
+ "version": "1.5.0",
  "description": "OpenCode forgets everything between sessions. Bonfire remembers.",
  "type": "module",
  "bin": {