opencode-bonfire 1.4.0 → 2.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -19,25 +19,7 @@ If no PR number provided:
19
19
  >
20
20
  > Example: `/bonfire-review-pr 333`"
21
21
 
22
- ## Step 2: Verify Environment
23
-
24
- Check if running inside tmux:
25
-
26
- ```bash
27
- [ -n "$TMUX" ] && echo "tmux: yes" || echo "tmux: no"
28
- ```
29
-
30
- **If not in tmux**: Provide manual instructions and abort:
31
-
32
- > "PR review with inline comments requires tmux for worktree isolation.
33
- >
34
- > **Manual alternative:**
35
- > 1. Create worktree: `git worktree add ../pr-<number>-review origin/<branch>`
36
- > 2. Open new terminal in that directory
37
- > 3. Run: `opencode 'Review this PR and help me post comments'`
38
- > 4. Clean up when done: `git worktree remove ../pr-<number>-review`"
39
-
40
- ## Step 3: Fetch PR Metadata
22
+ ## Step 2: Fetch PR Metadata
41
23
 
42
24
  Get PR details:
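The fetch command itself falls between this hunk and the next; a plausible shape, assuming the `gh` CLI and the JSON fields the later steps reference:

```bash
# Sketch only: pull the fields the later steps rely on
gh pr view <number> --json number,title,body,url,headRefName,baseRefName,headRefOid,files
```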
43
25
 
@@ -55,7 +37,7 @@ Extract and store:
55
37
  - `url` - PR URL
56
38
  - `files` - Changed files list
57
39
 
58
- ## Step 4: Find Git Root and Compute Paths
40
+ ## Step 3: Compute Worktree Path
59
41
 
60
42
  ```bash
61
43
  git rev-parse --show-toplevel
@@ -65,7 +47,7 @@ Compute worktree path: `<git-root>/../<repo-name>-pr-<number>-review`
65
47
 
66
48
  Example: `/Users/vieko/dev/gtm` → `/Users/vieko/dev/gtm-pr-333-review`
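A minimal sketch of that computation, assuming the PR number is already held in `PR_NUMBER`:

```bash
# Sketch: derive the worktree path from the git root and repo name
GIT_ROOT="$(git rev-parse --show-toplevel)"
REPO_NAME="$(basename "$GIT_ROOT")"
WORKTREE_PATH="$(dirname "$GIT_ROOT")/${REPO_NAME}-pr-${PR_NUMBER}-review"
```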
67
49
 
68
- ## Step 5: Create Worktree
50
+ ## Step 4: Create Worktree
69
51
 
70
52
  Create isolated worktree for PR branch:
71
53
 
@@ -82,85 +64,78 @@ git worktree add <worktree-path> origin/<headRefName>
82
64
  - No: Abort
83
65
  3. If other error: Report error and abort with suggestion to check `git worktree list`
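A hedged sketch of the creation step with those error paths in mind (the explicit fetch is an assumption; the PR branch may not be present locally yet):

```bash
# Sketch: make sure the PR branch exists locally, then add the worktree
git fetch origin <headRefName>
if ! git worktree add <worktree-path> origin/<headRefName>; then
  # Listing existing worktrees helps diagnose "already exists" failures
  git worktree list
fi
```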
84
66
 
85
- ## Step 6: Get PR Diff Summary
67
+ ## Step 5: Get PR Diff
86
68
 
87
69
  Get the diff for context:
88
70
 
89
71
  ```bash
90
- cd <worktree-path> && git diff origin/<baseRefName>...HEAD --stat
72
+ git -C <worktree-path> diff origin/<baseRefName>...HEAD --stat
91
73
  ```
92
74
 
93
75
  Get changed files:
94
76
 
95
77
  ```bash
96
- cd <worktree-path> && git diff origin/<baseRefName>...HEAD --name-only
78
+ git -C <worktree-path> diff origin/<baseRefName>...HEAD --name-only
97
79
  ```
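If these outputs are needed again when assembling the review prompt in Step 6, they can be captured once; the variable names are illustrative:

```bash
# Sketch: capture the summary and file list for reuse in the Step 6 prompt
DIFF_STAT="$(git -C <worktree-path> diff origin/<baseRefName>...HEAD --stat)"
CHANGED_FILES="$(git -C <worktree-path> diff origin/<baseRefName>...HEAD --name-only)"
```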
98
80
 
99
- ## Step 7: Generate Review Context
100
-
101
- Create context document for spawned session.
102
-
103
- Write to `<worktree-path>/.bonfire-pr-review-context.md`:
104
-
105
- ```markdown
106
- # PR Review Context
107
-
108
- **PR**: #<number> - <title>
109
- **URL**: <url>
110
- **Branch**: <headRefName> → <baseRefName>
111
- **Commit**: <headRefOid>
112
-
113
- ## Changed Files
114
-
115
- <list of changed files>
116
-
117
- ## PR Description
118
-
119
- <body from PR>
120
-
121
- ---
122
-
123
- ## Instructions
81
+ ## Step 6: Run Review (Subagent)
124
82
 
125
- You are reviewing PR #<number> in an isolated worktree.
83
+ **Progress**: Tell the user "Reviewing PR for blindspots and gaps..."
126
84
 
127
- ### Step 1: Run Review
85
+ Use the Task tool to invoke the **work-reviewer** subagent.
128
86
 
129
- Use the task tool to invoke the **work-reviewer** subagent:
87
+ Provide the review context:
130
88
 
131
89
  ```
132
90
  Review this pull request for blindspots, gaps, and improvements.
133
91
 
134
92
  **Scope**: PR #<number> - <title>
135
93
 
94
+ **PR Description**:
95
+ <body from PR>
96
+
136
97
  **Files changed**:
137
98
  <list of changed files>
138
99
 
139
- **PR Description**:
140
- <body>
100
+ **Worktree path**: <worktree-path>
141
101
 
102
+ Read the changed files from the worktree to understand the actual changes.
142
103
  Return categorized findings with severity, effort, and specific file:line references.
143
104
  ```
144
105
 
145
- ### Step 2: Present Findings
106
+ **Wait for the subagent to return findings** before proceeding.
107
+
108
+ ### Review Validation
109
+
110
+ After the subagent returns, validate the response:
146
111
 
147
- After review completes, present findings grouped by severity.
112
+ **Valid response contains:**
113
+ - Findings with file:line references where applicable
114
+ - Severity categorization
148
115
 
149
- For each finding, note:
116
+ **On subagent failure**: Fall back to in-context review using the diff.
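Step 5 only collected `--stat` and `--name-only`; the fallback review needs the full patch over the same range:

```bash
# Full diff for the in-context fallback review
git -C <worktree-path> diff origin/<baseRefName>...HEAD
```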
117
+
118
+ ## Step 7: Present Findings
119
+
120
+ Present the findings grouped by severity.
121
+
122
+ For each finding, show:
150
123
  - File and line number (if applicable)
151
124
  - Severity (critical/moderate/minor)
152
125
  - Description
153
126
 
154
- ### Step 3: Batch Comment Selection
127
+ ## Step 8: Batch Comment Selection
155
128
 
156
129
  Ask user: "Which findings should I post as PR comments?"
157
130
 
158
- Options:
159
- 1. List findings by number, let user select (e.g., "1, 3, 5")
160
- 2. "All" - post all findings
131
+ Use the question tool with options:
132
+ 1. "All" - post all findings
133
+ 2. "Select" - user will specify which ones (e.g., "1, 3, 5")
161
134
  3. "None" - skip commenting
162
135
 
163
- ### Step 4: Post Inline Comments
136
+ If "Select" chosen, ask which finding numbers to post.
137
+
138
+ ## Step 9: Post Comments
164
139
 
165
140
  For each selected finding with a file:line reference, post an inline comment:
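The exact command sits in the portion of the file between this hunk and the next; one plausible shape, assuming the GitHub review-comments endpoint via `gh api`:

```bash
# Sketch: post an inline review comment anchored to the PR head commit
gh api repos/<owner>/<repo>/pulls/<number>/comments \
  --method POST \
  -f body="**Review Finding**: <description>" \
  -f commit_id="<headRefOid>" \
  -f path="<file>" \
  -F line=<line> \
  -f side=RIGHT
```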
166
141
 
@@ -186,44 +161,23 @@ gh pr comment <number> --body "**Review Finding**
186
161
  *Severity: <severity> | Effort: <effort>*"
187
162
  ```
188
163
 
189
- **Note**: GitHub only allows inline comments on files that are part of the PR diff. If a finding references a file not in the diff (e.g., missing config in turbo.json when turbo.json wasn't changed), post it as a general PR comment instead.
164
+ **Note**: GitHub only allows inline comments on files that are part of the PR diff. If a finding references a file not in the diff, post it as a general PR comment instead.
190
165
 
191
- ### Step 5: Offer Cleanup
166
+ ## Step 10: Cleanup Worktree
192
167
 
193
168
  After commenting, ask: "Review complete. Remove worktree?"
194
169
 
195
170
  If yes:
196
171
  ```bash
197
- cd <original-git-root>
198
172
  git worktree remove <worktree-path>
199
173
  ```
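If the worktree still has uncommitted changes, plain `remove` refuses; a hedged escape hatch, only with the user's confirmation:

```bash
# Discards leftover changes in the worktree - use only after the user agrees
git worktree remove --force <worktree-path>
```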
200
174
 
201
175
  Report: "Worktree cleaned up. PR review complete."
202
- ```
203
176
 
204
- ## Step 8: Spawn Review Session
177
+ ## Step 11: Confirm
205
178
 
206
- Spawn a new OpenCode session in the worktree:
207
-
208
- ```bash
209
- WORKTREE_PATH="<computed-worktree-path>"
210
- CONTEXT="$(cat "$WORKTREE_PATH/.bonfire-pr-review-context.md")"
211
- tmux split-window -h -c "$WORKTREE_PATH" \
212
- "opencode --append-system-prompt '$CONTEXT' 'Ready to review PR #<number>. Starting work-reviewer subagent...'"
213
- ```
214
-
215
- **Verify spawn succeeded**: If tmux fails (terminal too small), warn user and provide manual instructions.
216
-
217
- ## Step 9: Confirm
218
-
219
- Tell the user:
220
-
221
- > **PR review session spawned.**
222
- >
223
- > - Worktree created at `<worktree-path>`
224
- > - Review session opened in adjacent pane
225
- > - The new session will run work-reviewer and help you post comments
226
- >
227
- > When done, the review session will offer to clean up the worktree.
228
- >
229
- > You can continue working here - your current branch is unchanged.
179
+ Summarize:
180
+ - PR reviewed: #<number> - <title>
181
+ - Findings: <count> total, <posted> posted as comments
182
+ - PR URL for reference
183
+ - Worktree status (cleaned up or retained)
@@ -4,269 +4,111 @@ description: Create an implementation spec for a feature or task
4
4
 
5
5
  # Create Implementation Spec
6
6
 
7
- A hybrid approach using subagents: research in isolated context, interview in main context, write in isolated context.
7
+ Create an implementation spec for **$ARGUMENTS**.
8
8
 
9
- ## Step 1: Find Git Root
10
-
11
- Run `git rev-parse --show-toplevel` to locate the repository root.
12
-
13
- ## Step 2: Check Config
14
-
15
- Read `<git-root>/.bonfire/config.json` if it exists.
16
-
17
- **Specs location**: Read `specsLocation` from config. Default to `.bonfire/specs/` if not set.
18
-
19
- ## Step 3: Gather Initial Context
20
-
21
- Get the topic from $ARGUMENTS or ask if unclear.
22
-
23
- Check for existing context:
24
- - Read `<git-root>/.bonfire/index.md` for project state
25
- - Check for `SPEC.md` or `spec.md` in git root (user's spec template)
26
- - If issue ID provided, note for filename
27
-
28
- ## Step 4: Research Phase (Subagent)
29
-
30
- **Progress**: Tell the user "Researching codebase for patterns and constraints..."
31
-
32
- Use the task tool to invoke the **codebase-explorer** subagent for research.
33
-
34
- Provide a research directive with these questions:
35
-
36
- ```
37
- Research the codebase for implementing: [TOPIC]
38
-
39
- Find:
40
- 1. **Patterns**: How similar features are implemented, existing abstractions to reuse, naming conventions
41
- 2. **Constraints**: Dependencies, API boundaries, performance considerations
42
- 3. **Potential Conflicts**: Files that need changes, intersections with existing code, migration concerns
43
-
44
- Return structured findings only - no raw file contents.
45
- ```
46
-
47
- **Wait for the subagent to return findings** before proceeding.
48
-
49
- The subagent runs in isolated context (haiku model, fast), preserving main context for interview.
50
-
51
- ### Research Validation
52
-
53
- After the subagent returns, validate the response:
54
-
55
- **Valid response contains at least one of:**
56
- - `## Patterns Found` with content
57
- - `## Key Files` with entries
58
- - `## Constraints Discovered` with items
59
-
60
- **On valid response**: Proceed to Step 5.
61
-
62
- **On invalid/empty response**:
63
- 1. Warn user: "Codebase exploration returned limited results. I'll research directly."
64
- 2. Fall back to in-context research using glob, grep, and read:
65
- - Search for patterns: `glob("**/*.{ts,js,py,go}")` to find code files
66
- - Look for similar implementations: `grep("pattern-keyword")`
67
- - Read key files identified
68
- 3. Continue to Step 5 with in-context findings.
69
-
70
- **On subagent failure** (timeout, error):
71
- 1. Warn user: "Subagent research failed. Continuing with direct exploration."
72
- 2. Perform in-context research as above.
73
- 3. Continue to Step 5.
74
-
75
- ### Resumable Exploration (Large Codebases)
76
-
77
- For very large codebases, exploration may need multiple passes. The task tool returns a `session_id` you can use to resume.
78
-
79
- **When to offer resume:**
80
- - Subagent returns with "X additional items omitted" notes
81
- - Findings cover only part of the codebase (e.g., backend but not frontend)
82
- - User asks for deeper exploration of a specific area
83
-
84
- **To resume exploration:**
85
- 1. Tell user: "Exploration found [X] but there's more to explore. Continue exploring [specific area]?"
86
- 2. If yes, re-invoke codebase-explorer with the `session_id` parameter:
87
- - Pass the session_id from the previous invocation
88
- - Provide a refined directive: "Continue exploring: [specific area]. Focus on [what to find]."
89
- 3. Merge findings from resumed exploration with previous findings.
90
- 4. Repeat if needed, up to 3 passes maximum.
91
-
92
- **Example multi-pass scenario:**
93
- - Pass 1: "Research authentication" → finds auth middleware, auth service
94
- - Pass 2 (resume): "Continue exploring: authorization rules" → finds permissions, role checks
95
- - Merge: Combined findings inform better interview questions
96
-
97
- ## Step 5: Interview Phase (Main Context)
98
-
99
- **Progress**: Tell the user "Starting interview (3 rounds: core decisions, edge cases, testing & scope)..."
100
-
101
- Using the research findings, interview the user with **informed questions** via the question tool.
102
-
103
- ### Round 1: Core Decisions
104
-
105
- **Progress**: "Round 1/3: Core decisions..."
106
-
107
- Ask about fundamental approach based on patterns found:
108
-
109
- Example questions (adapt based on actual findings):
110
- - "I found [Pattern A] in `services/` and [Pattern B] in `handlers/`. Which pattern should this feature follow?"
111
- - "The existing [Component] handles [X]. Should we extend it or create a new [Y]?"
112
- - "I see [Library] is used for [purpose]. Should we use it here or try [Alternative]?"
113
-
114
- ### Round 2: Edge Cases & Tradeoffs
115
-
116
- **Progress**: "Round 2/3: Edge cases and tradeoffs..."
117
-
118
- Based on Round 1 answers and research, ask about:
119
- - Error handling approach
120
- - Edge cases identified in research
121
- - Performance vs simplicity tradeoffs
122
- - User experience considerations
123
-
124
- Example questions:
125
- - "What should happen when [edge case from research]?"
126
- - "I found [potential conflict]. How should we handle it?"
127
- - "[Approach A] is simpler but [tradeoff]. [Approach B] is more complex but [benefit]. Preference?"
128
-
129
- ### Round 3: Testing & Scope (Required)
130
-
131
- **Progress**: "Round 3/3: Testing and scope (final round)..."
132
-
133
- Always ask about testing and scope, even if user seems ready to proceed:
134
-
135
- **Testing** (must ask one):
136
- - "What's the testing approach? Unit tests, integration tests, manual testing, or skip tests for MVP?"
137
- - "Should this include tests? If so, what should be covered?"
138
-
139
- **Scope** (must ask one):
140
- - "What's explicitly out of scope for this implementation?"
141
- - "MVP vs full implementation - any features to defer?"
9
+ ---
142
10
 
143
- Example combined question:
144
- - "Two quick questions: (1) Testing approach for this feature? (2) Anything explicitly out of scope?"
11
+ ## Outcome
145
12
 
146
- **Do not skip Round 3.** These questions take 30 seconds and prevent spec gaps.
13
+ A complete implementation spec written to the configured specs location that captures:
14
+ - What to build and why
15
+ - Key decisions with rationale
16
+ - Concrete implementation steps
17
+ - Edge cases and error handling
18
+ - Testing approach and scope boundaries
147
19
 
148
- ## Step 6: Write the Spec (Subagent)
20
+ ---
149
21
 
150
- **Progress**: Tell the user "Writing implementation spec..."
22
+ ## Acceptance Criteria
151
23
 
152
- Use the task tool to invoke the **spec-writer** subagent.
24
+ The spec file must contain these sections:
153
25
 
154
- Provide the prompt in this exact format:
26
+ | Section | Purpose |
27
+ |---------|---------|
28
+ | `## Overview` | What this feature does and why it matters |
29
+ | `## Decisions` | Key technical choices with rationale |
30
+ | `## Implementation Steps` | Ordered, actionable steps to build it |
31
+ | `## Edge Cases` | Error handling, boundary conditions, failure modes |
155
32
 
156
- ```
157
- ## Research Findings
33
+ Additional sections are welcome but these four are required.
158
34
 
159
- <paste structured findings from Step 4>
35
+ **Quality signals:**
36
+ - Decisions reference actual codebase patterns (not generic advice)
37
+ - Implementation steps are specific to this codebase (file paths, function names)
38
+ - Edge cases reflect real constraints discovered in research
160
39
 
161
- ## Interview Q&A
40
+ ---
162
41
 
163
- ### Core Decisions
164
- **Q**: <question from Round 1>
165
- **A**: <user's answer>
42
+ ## Constraints
166
43
 
167
- ### Edge Cases & Tradeoffs
168
- **Q**: <question from Round 2>
169
- **A**: <user's answer>
44
+ ### Context Isolation
170
45
 
171
- ### Scope & Boundaries
172
- **Q**: <question from Round 3>
173
- **A**: <user's answer>
46
+ Research and writing happen in isolated subagent contexts to preserve main context for user interaction.
174
47
 
175
- ## Spec Metadata
48
+ | Phase | Agent | Model | Why |
49
+ |-------|-------|-------|-----|
50
+ | Research | `codebase-explorer` | haiku | Fast, cheap exploration without polluting main context |
51
+ | Writing | `spec-writer` | inherit | Synthesis in isolation; has full research + interview context |
176
52
 
177
- - **Topic**: <topic name>
178
- - **Issue**: <issue ID or N/A>
179
- - **Output Path**: <git-root>/<specsLocation>/<filename>.md
180
- - **Date**: <YYYY-MM-DD>
181
- ```
53
+ ### User Interview Required
182
54
 
183
- The subagent will write the spec file directly to the Output Path.
55
+ The user must be interviewed before writing. Research informs questions; questions surface decisions the user wouldn't think to mention.
184
56
 
185
- **Naming convention**: `<issue-id>-<topic>.md` or `<topic>.md`
57
+ **Interview must cover:**
58
+ - Core technical decisions (patterns, approaches, tradeoffs)
59
+ - Edge cases and error handling preferences
60
+ - Testing approach
61
+ - Scope boundaries (what's explicitly out)
186
62
 
187
- ### Spec Verification
63
+ Use the question tool. Good questions are informed by research, about tradeoffs, and codebase-specific.
188
64
 
189
- After the spec-writer subagent returns, verify the spec is complete.
65
+ ### File Locations
190
66
 
191
- **Key sections to check** (lenient - only these 4):
192
- - `## Overview`
193
- - `## Decisions`
194
- - `## Implementation Steps`
195
- - `## Edge Cases`
67
+ - **Config**: `<git-root>/.bonfire/config.json` contains `specsLocation`
68
+ - **Default**: `.bonfire/specs/` if not configured
69
+ - **Naming**: `<issue-id>-<topic>.md` or `<topic>.md`
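A minimal sketch of resolving that location, assuming `jq` is available; the fallback mirrors the documented default:

```bash
# Sketch: read specsLocation from config, defaulting to .bonfire/specs/
CONFIG="<git-root>/.bonfire/config.json"
SPECS_DIR="$(jq -r '.specsLocation // empty' "$CONFIG" 2>/dev/null)"
SPECS_DIR="${SPECS_DIR:-.bonfire/specs/}"
```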
196
70
 
197
- **Verification steps:**
71
+ ### Verification
198
72
 
199
- 1. **Read the spec file** at `<git-root>/<specsLocation>/<filename>.md`
73
+ After writing, verify the spec contains all 4 required sections. If incomplete:
74
+ - Warn user what's missing
75
+ - Offer: proceed / retry / abort
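One way to run that check mechanically (a sketch; the spec path follows the naming convention above):

```bash
# Sketch: report any required section missing from the spec file
for section in "## Overview" "## Decisions" "## Implementation Steps" "## Edge Cases"; do
  grep -qF "$section" "<git-root>/<specsLocation>/<filename>.md" || echo "Missing: $section"
done
```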
200
76
 
201
- 2. **If file missing or empty**:
202
- - Warn user: "Spec file wasn't written. Writing directly..."
203
- - Write the spec yourself using the write tool
204
- - Run verification again on the written file
77
+ ### Session Context
205
78
 
206
- 3. **If file exists, check for key sections**:
207
- - Scan content for the 4 section headers above
208
- - Track which sections are present/missing
79
+ After writing the spec, add a reference to it in `<git-root>/.bonfire/index.md` under Current State. This links the spec to the session that created it.
209
80
 
210
- 4. **If all 4 sections present**:
211
- - Tell user: "Spec written and verified (4/4 key sections present)."
212
- - Proceed to Step 7.
81
+ ### Completion
213
82
 
214
- 5. **If 1-3 sections missing** (partial write):
215
- - Warn user: "Spec appears incomplete. Missing sections: [list missing]"
216
- - Show which sections ARE present
217
- - Ask: "Proceed with partial spec, retry write, or abort?"
218
- - **Proceed**: Continue to Step 7
219
- - **Retry**: Re-invoke spec-writer subagent with same input, then verify again
220
- - **Abort**: Stop and inform user the incomplete spec file remains at path
83
+ After verification, confirm spec creation and offer options:
84
+ - Proceed with implementation
85
+ - Refine specific sections
86
+ - Save for later
221
87
 
222
- 6. **If all sections missing but has content**:
223
- - Treat as invalid format, trigger fallback write
224
- - Write the spec yourself, then verify the written file
88
+ ---
225
89
 
226
- **On subagent failure** (timeout, error):
227
- - Warn user: "Spec writer failed. Writing spec directly..."
228
- - Write the spec yourself using the write tool
229
- - Run verification on the written file
90
+ ## Guidance (Not Rules)
230
91
 
231
- ## Step 7: Link to Session Context
92
+ These patterns tend to work well, but adapt as needed:
232
93
 
233
- Add a reference to the spec in `<git-root>/.bonfire/index.md` under Current State.
94
+ **Research before interviewing** - Findings make questions specific and valuable.
234
95
 
235
- ## Step 8: Confirm
96
+ **Three interview rounds** - Core decisions → Edge cases → Testing & scope. Collapse them into fewer rounds if the user is time-constrained.
236
97
 
237
- Read the generated spec and present a summary. Ask if user wants to:
238
- - Proceed with implementation
239
- - Refine specific sections
240
- - Add more detail to any area
241
- - Save for later
98
+ **Show your work** - Tell the user what you're doing: "Researching...", "Starting interview...", "Writing spec..."
242
99
 
243
- ## Interview Tips
100
+ **Fall back gracefully** - If a subagent fails, do the work in the main context. Warn the user but don't stop.
244
101
 
245
- **Good questions are:**
246
- - Informed by research (not generic)
247
- - About tradeoffs (not yes/no)
248
- - Specific to the codebase
249
- - Non-obvious (user wouldn't think to mention)
102
+ **Large codebases** - Explorer may need multiple passes. Offer to continue if findings seem incomplete.
250
103
 
251
- **Bad questions:**
252
- - "What features do you want?" (too broad)
253
- - "Should we add error handling?" (obvious)
254
- - Generic without codebase context
104
+ ---
255
105
 
256
- **Examples of good informed questions:**
257
- - "I found `UserService` uses repository pattern but `OrderService` uses direct DB access. Which approach?"
258
- - "The `auth` middleware validates JWT but doesn't check permissions. Should this feature add permission checks or assume auth is enough?"
259
- - "There's a `BaseController` with shared logic. Extend it or keep this feature standalone?"
106
+ ## Anti-Patterns
260
107
 
261
- ## Spec Lifecycle
108
+ **Don't ask generic questions** - "What features do you want?" wastes an interview slot.
262
109
 
263
- Specs are **temporary artifacts** - they exist to guide implementation:
110
+ **Don't skip the interview** - Research alone misses user intent. Interview alone misses codebase reality.
264
111
 
265
- 1. **Draft** Created, ready for review
266
- 2. **In Progress** → Being implemented
267
- 3. **Completed** → Implementation done
112
+ **Don't write without verification** - Subagents can produce partial output. Always check.
268
113
 
269
- **When a spec is fully implemented**:
270
- - If it contains reusable reference material, move to `docs/`
271
- - Delete the spec file - archive has the record
272
- - Don't let specs accumulate
114
+ **Don't over-specify implementation** - Steps should guide, not micromanage. Leave room for implementation judgment.