opencode-bonfire 1.1.1 → 1.2.0

This diff shows the changes between these publicly released package versions as they appear in the public registry.
@@ -0,0 +1,332 @@
---
description: Create a Proof of Concept (POC) plan
---

# Create POC Plan

A hybrid approach using subagents: research in isolated context, interview in main context, write in isolated context.

## Step 1: Find Git Root

Run `git rev-parse --show-toplevel` to locate the repository root.

## Step 2: Check Config

Read `<git-root>/.bonfire/config.json` if it exists.

**Docs location**: Read `docsLocation` from config. Default to `.bonfire/docs/` if not set.
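
For illustration, a minimal TypeScript sketch of Steps 1-2 (assumptions: Node 18+ and a config shaped like `{ "docsLocation": ".bonfire/docs/" }`; in practice the agent performs these steps with its shell and read tools rather than code):

```ts
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";
import path from "node:path";

// Step 1: repository root via git.
const gitRoot = execSync("git rev-parse --show-toplevel").toString().trim();

// Step 2: docsLocation from .bonfire/config.json, falling back to the default.
let docsLocation = ".bonfire/docs/";
try {
  const config = JSON.parse(readFileSync(path.join(gitRoot, ".bonfire", "config.json"), "utf8"));
  if (typeof config.docsLocation === "string") docsLocation = config.docsLocation;
} catch {
  // Missing or unreadable config: keep the default.
}

const docsDir = path.join(gitRoot, docsLocation);
console.log(`Docs location: ${docsDir}`);
```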
18
+
19
+ ## Step 3: Gather Initial Context
20
+
21
+ Get the customer/project name from $ARGUMENTS or ask if unclear.
22
+
23
+ Check for existing context:
24
+ - Read `<git-root>/.bonfire/index.md` for project state
25
+ - Check for existing POC plans in docs location
26
+ - If issue ID provided, note for filename
27
+
28
+ ## Step 4: Research Phase (Subagent)
29
+
30
+ **Progress**: Tell the user "Researching codebase for POC context..."
31
+
32
+ Use the task tool to invoke the **codebase-explorer** subagent for research.
33
+
34
+ Provide a research directive with these questions:
35
+
36
+ ```
37
+ Research the codebase for POC context: [CUSTOMER/PROJECT]
38
+
39
+ Find:
40
+ 1. **Relevant Features**: Features/products being evaluated in this POC
41
+ 2. **Integration Points**: APIs, webhooks, SDKs that customer would use
42
+ 3. **Configuration**: Environment setup, feature flags, plan requirements
43
+ 4. **Limitations**: Known constraints, quotas, edge cases to test
44
+
45
+ Return structured findings only - no raw file contents.
46
+ ```
47
+
48
+ **Wait for the subagent to return findings** before proceeding.
49
+
50
+ ### Research Validation
51
+
52
+ After the subagent returns, validate the response:
53
+
54
+ **Valid response contains at least one of:**
55
+ - `## Relevant Features` or `## Key Files` with content
56
+ - `## Integration Points` with entries
57
+ - `## Configuration` or `## Limitations` with items
58
+
59
+ **On valid response**: Proceed to Step 5.
60
+
61
+ **On invalid/empty response**:
62
+ 1. Warn user: "Codebase exploration returned limited results. I'll research directly."
63
+ 2. Fall back to in-context research using glob, grep, and read.
64
+ 3. Continue to Step 5 with in-context findings.
65
+
66
+ **Note**: POCs may be less code-focused. If research returns minimal findings, that's okay - the interview will gather most context.
67
+
68
+ ## Step 5: Interview Phase (Main Context)
69
+
**Progress**: Tell the user "Starting interview (4 rounds: context, goals, scope & timeline, risks & responsibilities)..."

Using the research findings, interview the user with **informed questions** via the question tool.

### Round 1: Customer Context

**Progress**: "Round 1/4: Customer context..."

Ask about the customer and current state:

Example questions:
- "Who is the customer? Brief context on their business/use case."
- "What's their current state? (existing stack, competitor product, greenfield)"
- "Who are the DRIs on the customer side? (technical lead, decision maker)"
- "Why are they evaluating us? What triggered this POC?"

### Round 2: Goals & Success Criteria

**Progress**: "Round 2/4: Goals and success criteria..."

Based on Round 1 answers and research, ask about success:

Example questions:
- "What are the top 3 goals for this POC? What must we prove?"
- "What does success look like? (specific, measurable criteria)"
- "What would make this POC fail? (dealbreakers, must-haves)"
- "I found [feature/limitation]. Is this relevant to their evaluation?"

### Round 3: Scope & Timeline

**Progress**: "Round 3/4: Scope and timeline..."

Ask about what's included:

**Scope** (must ask):
- "What's in scope for this POC? (features, workloads, environments)"
- "What's explicitly out of scope?"

**Timeline** (must ask):
- "POC start date and target decision date?"
- "Any hard deadlines? (contract renewal, board meeting, etc.)"

### Round 4: Risks & Responsibilities (Required)

**Progress**: "Round 4/4: Risks and responsibilities (final round)..."

Always ask about logistics:

**Responsibilities** (must ask):
- "What will our team own vs what will the customer own?"
- "Who is the internal DRI for this POC?"

**Risks** (must ask):
- "What are the main risks? (technical, timeline, relationship)"
- "Any assumptions we're making that could be wrong?"

## Step 6: Write the POC Plan (Subagent)

**Progress**: Tell the user "Writing POC plan..."

Use the task tool to invoke the **doc-writer** subagent.

Provide the prompt in this exact format:

```
## Document Type

POC (Proof of Concept) Plan

## Research Findings

<paste structured findings from Step 4>

## Interview Q&A

### Customer Context
**Q**: <question from Round 1>
**A**: <user's answer>

### Goals & Success Criteria
**Q**: <question from Round 2>
**A**: <user's answer>

### Scope & Timeline
**Q**: <question from Round 3>
**A**: <user's answer>

### Risks & Responsibilities
**Q**: <question from Round 4>
**A**: <user's answer>

## Document Metadata

- **Customer**: <customer name>
- **Internal DRI**: <from interview>
- **Issue**: <issue ID or N/A>
- **Output Path**: <git-root>/<docsLocation>/poc-<customer>.md
- **Date**: <YYYY-MM-DD>

## Template

Use this structure:

# <Customer> - Proof of Concept (POC) Plan

**Customer / Partner:** <name>
**Internal DRIs:** <names & roles>
**Customer DRIs:** <names & roles>
**POC Start:** <date>
**Target Decision Date:** <date>

---

## 1. Context

<!-- Short summary: customer, current state, why this POC -->

## 2. Goals

<!-- 3-5 bullets: what we want to validate -->

- Goal 1...
- Goal 2...

## 3. Success Criteria

<!-- Concrete, measurable exit criteria -->

- Technical: ...
- Performance / reliability: ...
- DX / workflow: ...
- Commercial (optional): ...

## 4. Scope

### 4.1 In Scope

- Workloads / apps / surfaces included
- Products / features being evaluated
- Environments (staging, prod shadow, etc.)

### 4.2 Out of Scope

- What we will NOT do in this POC

## 5. Plan & Timeline

### Phase 1 - Prep

- Environment setup
- Access and security requirements
- Baseline metrics (before POC)

### Phase 2 - Implementation

- Tasks / owners (Internal vs Customer)
- Milestones

### Phase 3 - Validation

- Tests to run
- How we'll collect metrics and feedback

### Phase 4 - Review & Decision

- Joint review meeting
- Decision options: Go / No-go / Extend
- Next steps if "Go"

## 6. Responsibilities

### Internal Team

- ...

### Customer

- ...

## 7. Assumptions

<!-- Things we're assuming will be true -->

- ...

## 8. Risks & Mitigations

- Risk: ...
  Mitigation: ...

## 9. Reporting

<!-- How progress and results will be shared -->

- Weekly update: ...
- Final summary: ...

## 10. Appendix

<!-- Links to architecture, repos, dashboards, contracts -->
```

The subagent will write the POC plan directly to the Output Path.

**Naming convention**: `poc-<customer>.md` or `poc-<issue-id>-<customer>.md`

### Document Verification

After the doc-writer subagent returns, verify the POC plan is complete.

**Key sections to check** (lenient - only these 4):
- `## 2. Goals`
- `## 3. Success Criteria`
- `## 4. Scope`
- `## 5. Plan & Timeline`

**Verification steps:**

1. **Read the POC file** at the output path

2. **If file missing or empty**:
   - Warn user: "POC plan wasn't written. Writing directly..."
   - Write the POC plan yourself using the write tool

3. **If file exists, check for key sections**:
   - Scan content for the 4 section headers above
   - Track which sections are present/missing

4. **If all 4 sections present**:
   - Tell user: "POC plan written and verified (4/4 key sections present)."
   - Proceed to Step 7.

5. **If sections missing**:
   - Warn user: "POC plan appears incomplete. Missing sections: [list]"
   - Ask: "Proceed with partial plan, retry write, or abort?"
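
As a concrete illustration of checks 1-5 above, a minimal TypeScript sketch of the section scan (the `pocPath` value is hypothetical; the real path is the Output Path from the metadata block, and the agent performs this with its read tool):

```ts
import { existsSync, readFileSync } from "node:fs";

const REQUIRED_SECTIONS = [
  "## 2. Goals",
  "## 3. Success Criteria",
  "## 4. Scope",
  "## 5. Plan & Timeline",
];

// Hypothetical path; in practice this is the Output Path from the doc-writer prompt.
const pocPath = ".bonfire/docs/poc-acme.md";

if (!existsSync(pocPath) || readFileSync(pocPath, "utf8").trim() === "") {
  console.warn("POC plan wasn't written. Writing directly...");
} else {
  const content = readFileSync(pocPath, "utf8");
  const missing = REQUIRED_SECTIONS.filter((heading) => !content.includes(heading));
  if (missing.length === 0) {
    console.log("POC plan written and verified (4/4 key sections present).");
  } else {
    console.warn(`POC plan appears incomplete. Missing sections: ${missing.join(", ")}`);
  }
}
```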

## Step 7: Link to Session Context

Add a reference to the POC plan in `<git-root>/.bonfire/index.md` under Current State.

## Step 8: Confirm

Read the generated POC plan and present a summary. Ask if user wants to:
- Share with customer
- Refine specific sections
- Add more detail to timeline
- Create related issues/tasks

## POC Lifecycle

POC plans progress through states:

1. **Draft** - Initial creation
2. **Prep** - Environment setup, access provisioned
3. **Active** - POC in progress
4. **Review** - Evaluating results
5. **Decided** - Go / No-go / Extend

**When a POC concludes**:
- Document outcome and learnings
- If "Go": Create onboarding plan, handoff docs
- If "No-go": Document reasons for future reference
- Archive the POC plan with outcome notes
@@ -0,0 +1,332 @@
---
description: Create a Product Requirements Document (PRD)
---

# Create PRD

A hybrid approach using subagents: research in isolated context, interview in main context, write in isolated context.

## Step 1: Find Git Root

Run `git rev-parse --show-toplevel` to locate the repository root.

## Step 2: Check Config

Read `<git-root>/.bonfire/config.json` if it exists.

**Docs location**: Read `docsLocation` from config. Default to `.bonfire/docs/` if not set.

## Step 3: Gather Initial Context

Get the feature/product name from $ARGUMENTS or ask if unclear.

Check for existing context:
- Read `<git-root>/.bonfire/index.md` for project state
- Check for existing PRDs in docs location
- If issue ID provided, note for filename

## Step 4: Research Phase (Subagent)

**Progress**: Tell the user "Researching codebase for product context..."

Use the task tool to invoke the **codebase-explorer** subagent for research.

Provide a research directive with these questions:

```
Research the codebase for PRD context: [FEATURE/PRODUCT]

Find:
1. **Related Features**: Existing similar features, integration points, shared components
2. **User Flows**: Current user journeys, entry points, related screens/endpoints
3. **Data Model**: Relevant entities, schemas, APIs that would be affected
4. **Technical Constraints**: Performance limits, plan gating, existing quotas/limits

Return structured findings only - no raw file contents.
```

**Wait for the subagent to return findings** before proceeding.

### Research Validation

After the subagent returns, validate the response:

**Valid response contains at least one of:**
- `## Related Features` or `## Patterns Found` with content
- `## Key Files` with entries
- `## User Flows` or `## Data Model` with items

**On valid response**: Proceed to Step 5.

**On invalid/empty response**:
1. Warn user: "Codebase exploration returned limited results. I'll research directly."
2. Fall back to in-context research using glob, grep, and read.
3. Continue to Step 5 with in-context findings.
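
For illustration, a minimal TypeScript sketch of the any-of check above (assumption: `findings` is the markdown text returned by the subagent; the example content is invented):

```ts
// Headings that mark a usable research response for the PRD flow.
const EXPECTED_HEADINGS = [
  "## Related Features",
  "## Patterns Found",
  "## Key Files",
  "## User Flows",
  "## Data Model",
];

/** True when at least one expected heading appears with non-empty content under it. */
function isValidResearch(findings: string): boolean {
  return EXPECTED_HEADINGS.some((heading) => {
    const start = findings.indexOf(heading);
    if (start === -1) return false;
    const body = findings.slice(start + heading.length);
    const nextHeading = body.search(/^## /m);
    const section = nextHeading === -1 ? body : body.slice(0, nextHeading);
    return section.trim().length > 0;
  });
}

// Example: this response is valid; an empty or heading-only response would trigger the fallback.
const findings = "## Key Files\n- src/flags.ts: feature flag registry\n";
console.log(isValidResearch(findings) ? "Proceed to Step 5." : "Fall back to in-context research.");
```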

## Step 5: Interview Phase (Main Context)

**Progress**: Tell the user "Starting interview (4 rounds: problem, users, requirements, scope)..."

Using the research findings, interview the user with **informed questions** via the question tool.

### Round 1: Problem & Opportunity

**Progress**: "Round 1/4: Problem and opportunity..."

Ask about the problem and why now:

Example questions (adapt based on findings):
- "What problem does this feature solve? Who feels this pain most?"
- "Why build this now vs later? (market, competition, customer requests)"
- "I found [existing feature]. How does this relate or differ?"
- "What's the business opportunity? Revenue, retention, expansion?"

### Round 2: Target Users

**Progress**: "Round 2/4: Target users..."

Based on Round 1 answers and research, ask about users:

Example questions:
- "Who is the primary audience? (persona, plan tier, role)"
- "Secondary audiences?"
- "I see [existing user flow]. Will these same users use this feature?"
- "Any users who should NOT have access? (plan gating, permissions)"

### Round 3: Requirements & Metrics

**Progress**: "Round 3/4: Requirements and success metrics..."

Ask about what success looks like:

**Functional requirements** (must ask):
- "What must this feature do? (the 'must haves')"
- "What should it do? (the 'should haves')"

**Success metrics** (must ask):
- "How will we measure success? (adoption, retention, revenue, NPS)"
- "What are the guardrail metrics? (performance, reliability, support load)"

### Round 4: Scope (Required)

**Progress**: "Round 4/4: Scope (final round)..."

Always ask about scope:

**In scope** (must ask):
- "What's explicitly in scope for v1?"

**Out of scope** (must ask):
- "What's explicitly out of scope or deferred to later?"

**Dependencies**:
- "Any dependencies on other teams, projects, or launches?"

## Step 6: Write the PRD (Subagent)

**Progress**: Tell the user "Writing PRD document..."

Use the task tool to invoke the **doc-writer** subagent.

Provide the prompt in this exact format:

```
## Document Type

PRD (Product Requirements Document)

## Research Findings

<paste structured findings from Step 4>

## Interview Q&A

### Problem & Opportunity
**Q**: <question from Round 1>
**A**: <user's answer>

### Target Users
**Q**: <question from Round 2>
**A**: <user's answer>

### Requirements & Metrics
**Q**: <question from Round 3>
**A**: <user's answer>

### Scope
**Q**: <question from Round 4>
**A**: <user's answer>

## Document Metadata

- **Feature**: <feature name>
- **DRI (PM)**: <from interview or ask>
- **Issue**: <issue ID or N/A>
- **Output Path**: <git-root>/<docsLocation>/prd-<feature>.md
- **Date**: <YYYY-MM-DD>

## Template

Use this structure:

# PRD: <Feature Name>

**DRI (PM):** <name>
**Engineering DRI:** <name or TBD>
**Product Area:** <area/team>
**Last Updated:** <YYYY-MM-DD>

---

## 1. Overview

<!-- 3-5 sentence narrative: what, who, outcome -->

## 2. Problem

### Customer Pain Points

- ...

### Internal Pain Points

- ...

## 3. Opportunity / Why Now

<!-- Why this matters now -->

## 4. Target Audience

- **Primary:** ...
- **Secondary:** ...

## 5. Goals & Success Metrics

### Goals

- G1: ...
- G2: ...

### Metrics for Success

- Core metric(s): ...
- Guardrails: ...

## 6. Product Requirements

### 6.1 Functional Requirements

- FR1: ...
- FR2: ...

### 6.2 Non-functional Requirements

- NFR1: ...

## 7. User Stories

- As a <persona>, I want <goal> so that <outcome>.

## 8. Solution Outline

### 8.1 UX / Flows

<!-- High-level flows, link to designs -->

### 8.2 Product Details

- Plans: which plans get what
- Limits / quotas
- Interactions with existing features

## 9. Scope

### 9.1 In Scope

- ...

### 9.2 Out of Scope

- ...

## 10. Dependencies & Risks

### Dependencies

- ...

### Risks & Mitigations

- Risk: ...
  Mitigation: ...

## 11. Open Questions

- Q1...

## 12. Appendix

<!-- Links to designs, RFCs, customer notes -->
```

The subagent will write the PRD file directly to the Output Path.

**Naming convention**: `prd-<feature>.md` or `prd-<issue-id>-<feature>.md`
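
To illustrate the naming convention, a small TypeScript sketch of deriving the filename (how `<feature>` is slugified is an assumption; the command itself only fixes the `prd-` prefix, the optional `<issue-id>`, and the `.md` extension):

```ts
// Turn a feature name into a filesystem-friendly slug (assumed convention).
function slugify(name: string): string {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

function prdFilename(feature: string, issueId?: string): string {
  const base = issueId ? `prd-${slugify(issueId)}-${slugify(feature)}` : `prd-${slugify(feature)}`;
  return `${base}.md`;
}

console.log(prdFilename("Usage-Based Billing"));            // prd-usage-based-billing.md
console.log(prdFilename("Usage-Based Billing", "ENG-142"));  // prd-eng-142-usage-based-billing.md
```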

### Document Verification

After the doc-writer subagent returns, verify the PRD is complete.

**Key sections to check** (lenient - only these 4):
- `## 2. Problem`
- `## 5. Goals & Success Metrics`
- `## 6. Product Requirements`
- `## 9. Scope`

**Verification steps:**

1. **Read the PRD file** at the output path

2. **If file missing or empty**:
   - Warn user: "PRD file wasn't written. Writing directly..."
   - Write the PRD yourself using the write tool

3. **If file exists, check for key sections**:
   - Scan content for the 4 section headers above
   - Track which sections are present/missing

4. **If all 4 sections present**:
   - Tell user: "PRD written and verified (4/4 key sections present)."
   - Proceed to Step 7.

5. **If sections missing**:
   - Warn user: "PRD appears incomplete. Missing sections: [list]"
   - Ask: "Proceed with partial PRD, retry write, or abort?"

## Step 7: Link to Session Context

Add a reference to the PRD in `<git-root>/.bonfire/index.md` under Current State.

## Step 8: Confirm

Read the generated PRD and present a summary. Ask if user wants to:
- Share with stakeholders
- Refine specific sections
- Add more requirements
- Create implementation specs from this

## PRD Lifecycle

PRDs progress through states:

1. **Draft** - Initial creation, gathering input
2. **In Review** - Shared with stakeholders
3. **Approved** - Signed off, ready for engineering
4. **In Development** - Being built
5. **Shipped** - Feature launched

**When a PRD is approved**:
- Create RFCs for technical decisions
- Create implementation specs for engineering
- Link PRD in related issues/PRs
@@ -0,0 +1,267 @@
---
description: Create a Request for Comments (RFC) document
---

# Create RFC

A hybrid approach using subagents: research in isolated context, interview in main context, write in isolated context.

## Step 1: Find Git Root

Run `git rev-parse --show-toplevel` to locate the repository root.

## Step 2: Check Config

Read `<git-root>/.bonfire/config.json` if it exists.

**Docs location**: Read `docsLocation` from config. Default to `.bonfire/docs/` if not set.

## Step 3: Gather Initial Context

Get the topic from $ARGUMENTS or ask if unclear.

Check for existing context:
- Read `<git-root>/.bonfire/index.md` for project state
- Check for existing RFCs in docs location
- If issue ID provided, note for filename
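
For the "check for existing RFCs" item above, a minimal TypeScript sketch (assumption: `docsDir` is the docs location resolved in Step 2; here it stands in with the default path under the current directory):

```ts
import { existsSync, readdirSync } from "node:fs";
import path from "node:path";

// Hypothetical resolved docs location (git root + docsLocation from Step 2).
const docsDir = path.join(process.cwd(), ".bonfire", "docs");

// Existing RFCs follow the rfc-*.md naming convention used in Step 6.
const existingRfcs = existsSync(docsDir)
  ? readdirSync(docsDir).filter((f) => f.startsWith("rfc-") && f.endsWith(".md"))
  : [];

console.log(existingRfcs.length ? `Existing RFCs: ${existingRfcs.join(", ")}` : "No RFCs yet.");
```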

## Step 4: Research Phase (Subagent)

**Progress**: Tell the user "Researching codebase for context and prior art..."

Use the task tool to invoke the **codebase-explorer** subagent for research.

Provide a research directive with these questions:

```
Research the codebase for RFC context: [TOPIC]

Find:
1. **Prior Art**: Existing implementations, related features, previous approaches
2. **Architecture**: Current system design, relevant components, integration points
3. **Constraints**: Technical limitations, dependencies, performance considerations
4. **Stakeholders**: Teams/systems that would be affected by changes

Return structured findings only - no raw file contents.
```

**Wait for the subagent to return findings** before proceeding.

### Research Validation

After the subagent returns, validate the response:

**Valid response contains at least one of:**
- `## Prior Art` or `## Patterns Found` with content
- `## Key Files` with entries
- `## Architecture` or `## Constraints` with items

**On valid response**: Proceed to Step 5.

**On invalid/empty response**:
1. Warn user: "Codebase exploration returned limited results. I'll research directly."
2. Fall back to in-context research using glob, grep, and read.
3. Continue to Step 5 with in-context findings.

## Step 5: Interview Phase (Main Context)

**Progress**: Tell the user "Starting interview (3 rounds: problem, solutions, logistics)..."

Using the research findings, interview the user with **informed questions** via the question tool.

### Round 1: Problem Definition

**Progress**: "Round 1/3: Problem definition..."

Ask about the problems being solved:

Example questions (adapt based on findings):
- "What specific problems does this RFC address? I found [existing approach] - what's not working?"
- "Who experiences these problems? End users, developers, ops?"
- "How do we know these are real problems? (metrics, incidents, feedback)"
- "I see [related system]. Is this problem isolated or connected to that?"

### Round 2: Proposed Solutions

**Progress**: "Round 2/3: Proposed solutions..."

Based on Round 1 answers and research, ask about solutions:

Example questions:
- "What's your primary proposed solution?"
- "I found [existing pattern]. Should the solution extend this or take a different approach?"
- "What alternatives did you consider? Why not [alternative approach]?"
- "What are the main tradeoffs of your proposed solution?"

### Round 3: Logistics & Scope (Required)

**Progress**: "Round 3/3: Logistics and scope (final round)..."

Always ask about logistics:

**Reviewers** (must ask):
- "Who should review this RFC? Which teams need to sign off?"

**Scope** (must ask):
- "What's explicitly out of scope for this RFC?"

**Timeline** (optional):
- "Any timeline constraints or dependencies?"

## Step 6: Write the RFC (Subagent)

**Progress**: Tell the user "Writing RFC document..."

Use the task tool to invoke the **doc-writer** subagent.

Provide the prompt in this exact format:

```
## Document Type

RFC (Request for Comments)

## Research Findings

<paste structured findings from Step 4>

## Interview Q&A

### Problem Definition
**Q**: <question from Round 1>
**A**: <user's answer>

### Proposed Solutions
**Q**: <question from Round 2>
**A**: <user's answer>

### Logistics & Scope
**Q**: <question from Round 3>
**A**: <user's answer>

## Document Metadata

- **Topic**: <topic name>
- **Author**: <from git config or ask>
- **Issue**: <issue ID or N/A>
- **Output Path**: <git-root>/<docsLocation>/rfc-<topic>.md
- **Date**: <YYYY-MM-DD>

## Template

Use this structure:

# RFC: <Title>

**Author(s):** <name>
**Reviewers:** <names/teams from interview>
**Status:** Draft
**Date:** <YYYY-MM-DD>

## Abstract

<!-- 1-3 sentences summarizing proposal and why -->

## Background

<!-- Context, history, prior work, relevant links -->

## Problems We Need To Solve

<!-- Bullet out concrete problems with evidence -->

- Problem 1...
- Problem 2...

## Proposed Solution

### Overview

### Architecture / Implementation

### Pros

- ...

### Cons / Tradeoffs

- ...

## Alternatives Considered

### Alternative A

- Summary
- Pros
- Cons

## Open Questions

<!-- Unresolved items needing feedback -->

- Question 1...

## Appendix

<!-- Links to issues, docs, prior RFCs -->
```

The subagent will write the RFC file directly to the Output Path.

**Naming convention**: `rfc-<topic>.md` or `rfc-<issue-id>-<topic>.md`

### Document Verification

After the doc-writer subagent returns, verify the RFC is complete.

**Key sections to check** (lenient - only these 4):
- `## Abstract`
- `## Problems We Need To Solve`
- `## Proposed Solution`
- `## Alternatives Considered`

**Verification steps:**

1. **Read the RFC file** at the output path

2. **If file missing or empty**:
   - Warn user: "RFC file wasn't written. Writing directly..."
   - Write the RFC yourself using the write tool

3. **If file exists, check for key sections**:
   - Scan content for the 4 section headers above
   - Track which sections are present/missing

4. **If all 4 sections present**:
   - Tell user: "RFC written and verified (4/4 key sections present)."
   - Proceed to Step 7.

5. **If sections missing**:
   - Warn user: "RFC appears incomplete. Missing sections: [list]"
   - Ask: "Proceed with partial RFC, retry write, or abort?"

## Step 7: Link to Session Context

Add a reference to the RFC in `<git-root>/.bonfire/index.md` under Current State.
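
A minimal TypeScript sketch of that linking step (assumptions: `index.md` already exists and uses a `## Current State` heading; the RFC entry shown is hypothetical, and the agent normally does this with its edit tools):

```ts
import { readFileSync, writeFileSync } from "node:fs";

const indexPath = ".bonfire/index.md";                                 // session context file
const rfcRef = "- RFC: [rfc-retry-policy](docs/rfc-retry-policy.md)";  // hypothetical entry

const index = readFileSync(indexPath, "utf8");
const lines = index.split("\n");
const heading = lines.findIndex((line) => line.trim() === "## Current State");

if (heading === -1) {
  // No Current State section yet: append one with the reference.
  writeFileSync(indexPath, `${index.trimEnd()}\n\n## Current State\n\n${rfcRef}\n`);
} else {
  // Insert the reference right under the heading (after its blank line, if any).
  const insertAt = heading + (lines[heading + 1]?.trim() === "" ? 2 : 1);
  lines.splice(insertAt, 0, rfcRef);
  writeFileSync(indexPath, lines.join("\n"));
}
```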

## Step 8: Confirm

Read the generated RFC and present a summary. Ask if user wants to:
- Share with reviewers
- Refine specific sections
- Add more alternatives
- Save for later

## RFC Lifecycle

RFCs progress through states:

1. **Draft** - Initial creation, gathering feedback
2. **In Review** - Shared with reviewers, collecting comments
3. **Approved** - Accepted, ready for implementation
4. **Rejected** - Not moving forward (document why)

**When an RFC is approved**:
- Create implementation specs from it
- Link RFC in related issues/PRs
- Keep RFC as historical record
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "opencode-bonfire",
-  "version": "1.1.1",
+  "version": "1.2.0",
   "description": "OpenCode forgets everything between sessions. Bonfire remembers.",
   "type": "module",
   "bin": {