flow-cc 0.5.8 → 0.6.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,290 +1,305 @@
1
- ---
2
- name: flow:spec
3
- description: Spec interview that produces an executable PRD with wave-based phases and testable acceptance criteria
4
- user_invocable: true
5
- ---
6
-
7
- # /flow:spec — Spec Interview → Executable PRD
8
-
9
- You are executing the `/flow:spec` skill. This is the KEYSTONE skill of the flow system. You will interview the user about their milestone, then produce a detailed PRD that agents can execute without ambiguity.
10
-
11
- **Interview mode:** Always thorough by default. The user can say "done", "finalize", "that's enough", or "move on" at ANY time to wrap up early. Respect their signal and finalize with whatever depth has been achieved.
12
-
13
- **Plan mode warning:** Do NOT use this skill with plan mode enabled. Plan mode's read-only constraint prevents the PRD from being written during the interview. `/flow:spec` IS the planning phase — plan mode on top of it is redundant and breaks the workflow.
14
-
15
- **Skill boundary:** You are inside the `/flow:*` workflow. NEVER invoke, suggest, or reference skills from other workflow systems (`/lisa:*`, `/gsd:*`, `/superpowers:*`, etc.). Only suggest `/flow:*` commands as next steps. Do NOT use the Skill tool to call any non-flow skill. If the user needs a different workflow, they will invoke it themselves.
16
-
17
- ## Phase 1 — Context Gathering
18
-
19
- 1. Read `.planning/STATE.md` and `.planning/ROADMAP.md` — understand current milestone and what's done
20
- 2. Read `CLAUDE.md` — understand project rules and tech stack
21
- 3. Check for existing PRD:
22
- - List `.planning/prds/` directory (if it exists) for existing PRD files
23
- - Also check for legacy `PRD.md` at project root (backward compat)
24
- - If a PRD exists for the target milestone, note it for resume/extend flow
25
- 4. **Milestone Targeting** — determine which milestone this PRD targets before scanning the codebase:
26
-
27
- 1. **If the user passed an argument** (e.g., `/flow:spec v3: Payments`) — match against ROADMAP.md milestones. If no match, print available milestones and ask which one.
28
-
29
- 2. **If no argument** — read ROADMAP.md and list all incomplete milestones. Use AskUserQuestion to let the user pick which milestone to spec. Pre-select the next unspecced milestone as the first option. Always show the picker, even if only one milestone is listed — the user may want to confirm or choose "Other" to define a new milestone first.
30
-
31
- 3. **Derive the PRD slug:** Take the milestone's version prefix and name (e.g., "v3: Dashboard Analytics"), lowercase it, replace spaces and special characters with hyphens, collapse consecutive hyphens. Result: `v3-dashboard-analytics`. The PRD path is `.planning/prds/{slug}.md`.
32
-
33
- 4. **Check for existing PRD at that path:**
34
- - **If PRD exists** → Use AskUserQuestion: "A PRD already exists for this milestone at `.planning/prds/{slug}.md`. What would you like to do?"
35
- - "Resume editing" — load the existing PRD and continue the interview from where it left off
36
- - "Start fresh" — delete the existing PRD and start a new interview
37
- - "Pick a different milestone" — show available milestones
38
- - **If no PRD exists** → Proceed with a fresh interview
39
-
40
- 5. **Future milestone detection:** If the target milestone is NOT the current milestone in STATE.md, note this — the PRD will be written but STATE.md's "Active PRD" field will NOT be updated (it stays pointing at the current milestone's PRD). Print: "Speccing future milestone [name]. STATE.md will not be updated — this PRD will be available when you reach this milestone."
41
-
42
- 5. **Codebase scan** (brownfield projects) — spawn **3 parallel Explore subagents** via the Task tool to scan the codebase without consuming main context:
43
-
44
- | Agent | Focus | Looks For |
45
- |-------|-------|-----------|
46
- | **Structure & Config** | Project skeleton, build tooling | `package.json`, `tsconfig`, config files, entry points, CI/CD, env setup |
47
- | **UI, Pages & Routes** | Components, pages, routing | `components/`, `pages/`, `app/`, route definitions, layouts, navigation |
48
- | **Data Layer & APIs** | Database, APIs, types | `api/`, `models/`, `types/`, `schemas/`, ORM definitions, query functions |
49
-
50
- **Each subagent prompt MUST include:**
51
- - **Exclusions:** NEVER scan `node_modules/`, `.git/`, `dist/`, `build/`, `.next/`, `__pycache__/`, `*.min.js`, `*.map`, `*.lock`
52
- - **Size-adaptive scanning:** If the agent's domain has >200 files, switch to focused mode (entry points, config, and type definitions only)
53
- - **20-file sample cap per agent** (60 total across all 3)
54
- - **15-line summary max** — structured as: key files found, patterns observed, reusable code/components, notes
55
- - **Explicit instruction:** Do NOT return raw file contents — return only structured summaries
56
-
57
- 6. **Assemble summaries:** Collect the 3 agent summaries into a brief context block (~45 lines total). Print to user: "Here's what I found in the codebase: [key components, patterns, data layer]. Starting the spec interview."
58
-
59
- ## Phase 2 — Adaptive Interview
60
-
61
- ### CRITICAL RULES (follow these exactly)
62
-
63
- 1. **USE AskUserQuestion FOR ALL QUESTIONS.** Never just print questions as text. Always use the AskUserQuestion tool so the user gets structured prompts with selectable options. Provide 2-4 concrete options per question based on what you learned in Phase 1.
64
-
65
- 2. **ASK NON-OBVIOUS QUESTIONS.** Don't just ask "what features do you want?" — you already read the codebase. Ask questions that PROBE deeper:
66
- - "I see you have [existing pattern]. Should we extend that or build a new approach?"
67
- - "What happens when [edge case] occurs?"
68
- - "You mentioned [X] — does that mean [Y] is also needed, or is that separate?"
69
- - "What's the failure mode here? What does the user see when things go wrong?"
70
- - "Who else touches this data/workflow? Any downstream effects?"
71
- - "What's the minimum version of this that would be useful?"
72
-
73
- 3. **CONTINUE UNTIL THE USER SAYS STOP.** Do NOT stop after covering all 7 areas once. After each answer, immediately ask the next question. Keep going deeper until the user says "done", "finalize", "that's enough", "ship it", or similar. A thorough interview is 15-30 questions, not 5.
74
-
75
- 4. **MAINTAIN A RUNNING DRAFT.** Every 2-3 questions, update `.planning/prds/{slug}.md` with what you've learned so far (create `.planning/prds/` directory if it doesn't exist). Print: "Updated PRD draft — added [brief summary]." The user should see the spec taking shape in real-time, not all at the end.
76
-
77
- 5. **BE ADAPTIVE.** Base your next question on the previous answer. If the user reveals something surprising, probe deeper on THAT — don't robotically move to the next category. The best specs come from following interesting threads.
78
-
79
- ### First-Principles Mode (optional)
80
-
81
- If the user says "challenge this", "first principles", or "push back" — start with 3-5 challenge questions before detailed spec gathering:
82
- - "Why build this at all? What's the cost of NOT building it?"
83
- - "Is there a simpler way to achieve 80% of this value?"
84
- - "What assumptions are we making that might be wrong?"
85
- - "Who is this really for, and have they asked for it?"
86
- - "What would you cut if you had half the time?"
87
-
88
- Then proceed to the coverage areas below.
89
-
90
- ### Coverage Areas
91
-
92
- Cover these areas thoroughly. There are no "rounds" — move fluidly between areas based on the conversation. Circle back to earlier areas when later answers reveal new information.
93
-
94
- **1. Scope Definition**
95
- - What features are IN scope for this milestone? What's the MVP vs. the full vision?
96
- - What is explicitly OUT of scope / deferred to a future milestone?
97
- - Is there any code that is sacred (must NOT be touched)? Why?
98
- - What existing code/features should we ignore entirely (not break, not improve, not touch)?
99
-
100
- **2. User Stories (CRITICAL — spend the most time here)**
101
- - What does the user actually DO? Walk through the key workflows step by step.
102
- - What should they see at each step? What feedback do they get?
103
- - Frame as: "As [role], I want [action], so that [outcome]"
104
- - **Story-splitting:** If a story has more than 3-4 acceptance criteria, split it into smaller stories. Each story should be independently deliverable.
105
- - **Anti-vagueness enforcement:**
106
- - BAD acceptance criteria: "Works correctly", "Is fast", "Handles errors well", "Looks good"
107
- - GOOD acceptance criteria: "Returns 200 with JSON body for valid input", "Shows error toast with message for invalid email format", "Page renders in < 200ms on 3G", "Matches Figma comp within 4px"
108
- - If the user gives vague criteria, push back: "How would you specifically test that? What would you check?"
109
- - **Verification per story:** Each story must have at least one concrete verification step (a command to run, a page to visit, a state to check)
110
-
111
- **3. Technical Design**
112
- - What database changes are needed? (new tables, columns, indexes, migrations)
113
- - What API endpoints? (method, path, request/response shape, auth requirements)
114
- - What new files need to be created? What existing files get modified?
115
- - What existing utilities/components/DAL queries should be reused (not rebuilt)?
116
- - What's the data flow? Where does data originate, transform, and render?
117
- - Any wave-parallelization opportunities? (independent agents building separate files)
118
-
119
- **4. User Experience**
120
- - What are the key user flows? Walk through click-by-click.
121
- - What edge cases exist? (empty states, error states, loading states, partial data)
122
- - Accessibility requirements? (keyboard navigation, screen readers, ARIA labels)
123
- - Mobile/responsive behavior? (breakpoints, touch targets, layout shifts)
124
- - What does the user see while waiting? (loading spinners, skeletons, optimistic updates)
125
-
126
- **5. Trade-offs & Constraints**
127
- - Performance vs. simplicity? What's good enough for v1?
128
- - Any security considerations? (auth, data access, input validation)
129
- - Any third-party dependencies or integrations?
130
- - What technical debt is acceptable for now vs. must be done right?
131
- - Any browser/device support requirements?
132
-
133
- **6. Implementation Phases**
134
- - How should this break into sequential phases?
135
- - What can be parallelized within each phase? (wave-based agent structure)
136
- - What's the critical path — what must be built first?
137
- - What's the minimum viable first phase? (what gives us something testable fastest?)
138
- - Any phases that could be cut if time runs short?
139
-
140
- **7. Verification & Feedback Loops**
141
- - What commands verify the build works? (`tsc`, `biome`, test suite)
142
- - What does "done" look like for each phase? How do we know it worked?
143
- - Are there integration points that need end-to-end testing?
144
- - What should we check after each phase before moving to the next?
145
- - Any monitoring or logging needed to confirm production behavior?
146
-
147
- **User signals done:** If the user says "done", "finalize", "that's enough", "ship it", or similar — immediately stop interviewing and go to Phase 3. Finalize the PRD with whatever depth has been achieved.
148
-
149
- ## Phase 3 — PRD Generation
150
-
151
- ### Minimum Viable PRD Check
152
-
153
- Before generating the final PRD, validate:
154
- - At least **3 user stories** with acceptance criteria have been discussed
155
- - At least **1 implementation phase** has been defined
156
- - At least **1 verification command** has been specified
157
-
158
- If any check fails, print what's missing and use AskUserQuestion:
159
- - "The PRD is thin — [missing items]. Want to continue the interview to flesh it out, or finalize as-is?"
160
- - "Continue interview" — return to Phase 2 and probe the missing areas
161
- - "Finalize as-is" — proceed with what we have
162
-
163
- ### Write PRD
164
-
165
- Write the PRD to `.planning/prds/{slug}.md` (create `.planning/prds/` directory first if it doesn't exist) with this EXACT structure:
166
-
167
- ```markdown
168
- # [Milestone Name] — Specification
169
-
170
- **Milestone:** [full milestone name, e.g., "v3: Dashboard Analytics"]
171
- **Status:** Ready for execution
172
- **Branch:** feat/[milestone-slug]
173
- **Created:** [today's date]
174
-
175
- ## Overview
176
- [One paragraph summary of the milestone]
177
-
178
- ## Problem Statement
179
- [Why this work exists — what problem does it solve?]
180
-
181
- ## Scope
182
-
183
- ### In Scope
184
- - [Bullet list of what ships in this milestone]
185
-
186
- ### Out of Scope
187
- - [Explicitly deferred items]
188
-
189
- ### Sacred / Do NOT Touch
190
- - [Code paths that must not be modified, with reasons]
191
-
192
- ## User Stories
193
-
194
- ### US-1: [Feature Name]
195
- **Description:** As [role], I want [action], so that [outcome].
196
- **Acceptance Criteria:**
197
- - [ ] Specific, testable requirement (names actual functions/components)
198
- - [ ] Another requirement
199
- - [ ] [Verification command] passes
200
-
201
- ### US-2: [Feature Name]
202
- ...
203
-
204
- ## Technical Design
205
-
206
- ### New Database Tables
207
- [SQL DDL or schema description, with indexes]
208
-
209
- ### New API Endpoints
210
- [Method + path + request/response shape for each]
211
-
212
- ### New Files to Create
213
- [Grouped by phase. Absolute paths with one-line descriptions]
214
-
215
- ### Existing Files to Modify
216
- [Paths + what changes in each]
217
-
218
- ### Key Existing Code (DO NOT recreate — use as-is)
219
- [Functions, utilities, DAL queries, components that agents should import/reuse]
220
-
221
- ## Implementation Phases
222
-
223
- ### Phase 1: [Name]
224
- **Goal:** [One sentence]
225
-
226
- **Wave 1 — [Theme] (N agents parallel):**
227
- 1. agent-name: Creates file1.ts, file2.ts — [what it does]
228
- 2. agent-name: Modifies file3.ts — [what changes]
229
-
230
- **Wave 2 — [Theme] (N agents parallel):**
231
- 3. agent-name: Creates file4.ts — [what it does]
232
- 4. agent-name: Wires component into page — [specifics]
233
-
234
- **Wave 3 — Integration:**
235
- 5. Integration check, responsive verification, cleanup
236
-
237
- **Verification:** [Exact commands to run]
238
- **Acceptance:** US-1 criteria [list which], US-2 criteria [list which]
239
-
240
- ### Phase 2: [Name]
241
- ...
242
-
243
- ## Execution Rules
244
- 1. DELEGATE EVERYTHING. Lead context is sacred — do not read implementation files into it.
245
- 2. Verify after every phase: [verification commands from CLAUDE.md]
246
- 3. Atomic commits after each agent's work lands
247
- 4. Never `git add .` — stage specific files only
248
- 5. Read `tasks/lessons.md` before spawning agents — inject relevant lessons into agent prompts
249
-
250
- ## Definition of Done
251
- - [ ] All user story acceptance criteria pass
252
- - [ ] All phases verified with [verification commands]
253
- - [ ] Branch pushed and PR opened
254
- - [ ] STATE.md and ROADMAP.md updated
255
- ```
256
-
257
- ## Phase 4 — Post-PRD Updates
258
-
259
- After writing the PRD to `.planning/prds/{slug}.md`:
260
-
261
- 1. **Update ROADMAP.md:** Add phase breakdown under the current milestone section. Each phase gets a row in the progress table with status "Pending".
262
-
263
- 2. **Update STATE.md** (current milestone only):
264
- - Set current phase to "Phase 1 — ready for `/flow:go`"
265
- - Set "Active PRD" to `.planning/prds/{slug}.md`
266
- - Update "Next Actions" to reference the first phase
267
- - **Skip this step if speccing a future milestone** — STATE.md stays pointing at the current milestone. Print: "PRD written to `.planning/prds/{slug}.md`. STATE.md not updated (future milestone)."
268
-
269
- 3. **Generate Phase 1 handoff prompt:**
270
- ```
271
- Phase 1: [Name] — [short description]. Read STATE.md, ROADMAP.md, and .planning/prds/{slug}.md.
272
- [One sentence of context about what Phase 1 builds].
273
- ```
274
-
275
- 4. Print the handoff prompt in a fenced code block.
276
-
277
- 5. Print: "PRD ready at `.planning/prds/{slug}.md`. Run `/flow:go` to execute Phase 1, or review the PRD first."
278
-
279
- ## Quality Gates
280
-
281
- Before finalizing, self-check the PRD:
282
- - [ ] Every phase has wave-based agent assignments with explicit file lists
283
- - [ ] Every user story has checkbox acceptance criteria that are testable
284
- - [ ] Every phase has verification commands
285
- - [ ] "Key Existing Code" section references actual files/functions found in the codebase scan
286
- - [ ] PRD is written to `.planning/prds/{slug}.md`, NOT to root `PRD.md`
287
- - [ ] No phase has more than 5 agents in a single wave (too many = coordination overhead)
288
- - [ ] Sacred code section is populated (even if empty with "None identified")
289
-
290
- If any gate fails, fix the PRD before presenting it.
1
+ ---
2
+ name: flow:spec
3
+ description: Spec interview that produces an executable PRD with wave-based phases and testable acceptance criteria
4
+ user_invocable: true
5
+ ---
6
+
7
+ # /flow:spec — Spec Interview → Executable PRD
8
+
9
+ You are executing the `/flow:spec` skill. This is the KEYSTONE skill of the flow system. You will interview the user about their milestone, then produce a detailed PRD that agents can execute without ambiguity.
10
+
11
+ **Interview mode:** Always thorough by default. The user can say "done", "finalize", "that's enough", or "move on" at ANY time to wrap up early. Respect their signal and finalize with whatever depth has been achieved.
12
+
13
+ **Plan mode warning:** Do NOT use this skill with plan mode enabled. Plan mode's read-only constraint prevents the PRD from being written during the interview. `/flow:spec` IS the planning phase — plan mode on top of it is redundant and breaks the workflow.
14
+
15
+ **Skill boundary:** You are inside the `/flow:*` workflow. NEVER invoke, suggest, or reference skills from other workflow systems (`/lisa:*`, `/gsd:*`, `/superpowers:*`, etc.). Only suggest `/flow:*` commands as next steps. Do NOT use the Skill tool to call any non-flow skill. If the user needs a different workflow, they will invoke it themselves.
16
+
17
+ ## Phase 1 — Context Gathering
18
+
19
+ 1. Read `.planning/STATE.md` and `.planning/ROADMAP.md` — understand current milestone and what's done
20
+ 2. Read `CLAUDE.md` — understand project rules and tech stack
21
+ 3. Check for existing PRD:
22
+ - List `.planning/prds/` directory (if it exists) for existing PRD files
23
+ - Also check for legacy `PRD.md` at project root (backward compat)
24
+ - If a PRD exists for the target milestone, note it for resume/extend flow
25
+ 4. **Milestone Targeting** — determine which milestone this PRD targets before scanning the codebase:
26
+
27
+ 1. **If the user passed an argument** (e.g., `/flow:spec v3: Payments`) — match against ROADMAP.md milestones. If no match, print available milestones and ask which one.
28
+
29
+ 2. **If no argument** — read ROADMAP.md and list all incomplete milestones. Use AskUserQuestion to let the user pick which milestone to spec. Pre-select the next unspecced milestone as the first option. Always show the picker, even if only one milestone is listed — the user may want to confirm or choose "Other" to define a new milestone first.
30
+
31
+ 3. **Derive the PRD slug:** Take the milestone's version prefix and name (e.g., "v3: Dashboard Analytics"), lowercase it, replace spaces and special characters with hyphens, collapse consecutive hyphens. Result: `v3-dashboard-analytics`. The PRD path is `.planning/prds/{slug}.md`.
32
+
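The slug rule above (lowercase, hyphenate specials, collapse runs) can be sketched in a few lines. This is an illustrative sketch, not code shipped in the package; the helper name `deriveSlug` is hypothetical.

```typescript
// Illustrative sketch of the slug derivation described above (hypothetical helper).
function deriveSlug(milestone: string): string {
  return milestone
    .toLowerCase()
    // Replace each run of spaces/special characters with a single hyphen;
    // matching runs (+) also collapses consecutive hyphens.
    .replace(/[^a-z0-9]+/g, "-")
    // Trim any leading or trailing hyphen left over.
    .replace(/^-+|-+$/g, "");
}

// "v3: Dashboard Analytics" → "v3-dashboard-analytics"
console.log(deriveSlug("v3: Dashboard Analytics"));
```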
33
+ 4. **Check for existing PRD at that path:**
34
+ - **If PRD exists** → Use AskUserQuestion: "A PRD already exists for this milestone at `.planning/prds/{slug}.md`. What would you like to do?"
35
+ - "Resume editing" — load the existing PRD and continue the interview from where it left off
36
+ - "Start fresh" — delete the existing PRD and start a new interview
37
+ - "Pick a different milestone" — show available milestones
38
+ - **If no PRD exists** → Proceed with a fresh interview
39
+
40
+ 5. **Future milestone detection:** If the target milestone is NOT the current milestone in STATE.md, note this — the PRD will be written but STATE.md's "Active PRD" field will NOT be updated (it stays pointing at the current milestone's PRD). Print: "Speccing future milestone [name]. STATE.md will not be updated — this PRD will be available when you reach this milestone."
41
+
42
+ 5. **Codebase scan** (brownfield projects) — spawn **3 parallel Explore subagents** via the Task tool to scan the codebase without consuming main context:
43
+
44
+ | Agent | Focus | Looks For |
45
+ |-------|-------|-----------|
46
+ | **Structure & Config** | Project skeleton, build tooling | `package.json`, `tsconfig`, config files, entry points, CI/CD, env setup |
47
+ | **UI, Pages & Routes** | Components, pages, routing | `components/`, `pages/`, `app/`, route definitions, layouts, navigation |
48
+ | **Data Layer & APIs** | Database, APIs, types | `api/`, `models/`, `types/`, `schemas/`, ORM definitions, query functions |
49
+
50
+ **Each subagent prompt MUST include:**
51
+ - **Exclusions:** NEVER scan `node_modules/`, `.git/`, `dist/`, `build/`, `.next/`, `__pycache__/`, `*.min.js`, `*.map`, `*.lock`
52
+ - **Size-adaptive scanning:** If the agent's domain has >200 files, switch to focused mode (entry points, config, and type definitions only)
53
+ - **20-file sample cap per agent** (60 total across all 3)
54
+ - **15-line summary max** — structured as: key files found, patterns observed, reusable code/components, notes
55
+ - **Explicit instruction:** Do NOT return raw file contents — return only structured summaries
56
+
57
+ 6. **Assemble summaries:** Collect the 3 agent summaries into a brief context block (~45 lines total). Print to user: "Here's what I found in the codebase: [key components, patterns, data layer]. Starting the spec interview."
58
+
59
+ ## Phase 2 — Adaptive Interview
60
+
61
+ ### CRITICAL RULES (follow these exactly)
62
+
63
+ 1. **USE AskUserQuestion FOR ALL QUESTIONS.** Never just print questions as text. Always use the AskUserQuestion tool so the user gets structured prompts with selectable options. Provide 2-4 concrete options per question based on what you learned in Phase 1.
64
+
65
+ 2. **ASK NON-OBVIOUS QUESTIONS.** Don't just ask "what features do you want?" — you already read the codebase. Ask questions that PROBE deeper:
66
+ - "I see you have [existing pattern]. Should we extend that or build a new approach?"
67
+ - "What happens when [edge case] occurs?"
68
+ - "You mentioned [X] — does that mean [Y] is also needed, or is that separate?"
69
+ - "What's the failure mode here? What does the user see when things go wrong?"
70
+ - "Who else touches this data/workflow? Any downstream effects?"
71
+ - "What's the minimum version of this that would be useful?"
72
+
73
+ 3. **CONTINUE UNTIL THE USER SAYS STOP.** Do NOT stop after covering all 7 areas once. After each answer, immediately ask the next question. Keep going deeper until the user says "done", "finalize", "that's enough", "ship it", or similar. A thorough interview is 15-30 questions, not 5.
74
+
75
+ 4. **MAINTAIN A RUNNING DRAFT.** Every 2-3 questions, update `.planning/prds/{slug}.md` with what you've learned so far (create `.planning/prds/` directory if it doesn't exist). Print: "Updated PRD draft — added [brief summary]." The user should see the spec taking shape in real-time, not all at the end.
76
+
77
+ 5. **BE ADAPTIVE.** Base your next question on the previous answer. If the user reveals something surprising, probe deeper on THAT — don't robotically move to the next category. The best specs come from following interesting threads.
78
+
79
+ ### First-Principles Mode (optional)
80
+
81
+ If the user says "challenge this", "first principles", or "push back" — start with 3-5 challenge questions before detailed spec gathering:
82
+ - "Why build this at all? What's the cost of NOT building it?"
83
+ - "Is there a simpler way to achieve 80% of this value?"
84
+ - "What assumptions are we making that might be wrong?"
85
+ - "Who is this really for, and have they asked for it?"
86
+ - "What would you cut if you had half the time?"
87
+
88
+ Then proceed to the coverage areas below.
89
+
90
+ ### Coverage Areas
91
+
92
+ Cover these areas thoroughly. There are no "rounds" — move fluidly between areas based on the conversation. Circle back to earlier areas when later answers reveal new information.
93
+
94
+ **1. Scope Definition**
95
+ - What features are IN scope for this milestone? What's the MVP vs. the full vision?
96
+ - What is explicitly OUT of scope / deferred to a future milestone?
97
+ - Is there any code that is sacred (must NOT be touched)? Why?
98
+ - What existing code/features should we ignore entirely (not break, not improve, not touch)?
99
+
100
+ **2. User Stories (CRITICAL — spend the most time here)**
101
+ - What does the user actually DO? Walk through the key workflows step by step.
102
+ - What should they see at each step? What feedback do they get?
103
+ - Frame as: "As [role], I want [action], so that [outcome]"
104
+ - **Story-splitting:** If a story has more than 3-4 acceptance criteria, split it into smaller stories. Each story should be independently deliverable.
105
+ - **Anti-vagueness enforcement:**
106
+ - BAD acceptance criteria: "Works correctly", "Is fast", "Handles errors well", "Looks good"
107
+ - GOOD acceptance criteria: "Returns 200 with JSON body for valid input", "Shows error toast with message for invalid email format", "Page renders in < 200ms on 3G", "Matches Figma comp within 4px"
108
+ - If the user gives vague criteria, push back: "How would you specifically test that? What would you check?"
109
+ - **Verification per story:** Each story must have at least one concrete verification step (a command to run, a page to visit, a state to check)
110
+
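One way to make the BAD/GOOD distinction concrete: a good criterion translates directly into an assertion. A minimal sketch, where `validateEmail` is a hypothetical helper standing in for real project code:

```typescript
// Hypothetical helper standing in for real project code.
function validateEmail(input: string): { ok: boolean; error?: string } {
  const valid = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input);
  return valid ? { ok: true } : { ok: false, error: "Invalid email format" };
}

// Vague criterion ("handles errors well") cannot be asserted.
// Concrete criterion ("shows 'Invalid email format' for malformed input") becomes one check each:
if (!validateEmail("user@example.com").ok) throw new Error("valid email rejected");
if (validateEmail("not-an-email").error !== "Invalid email format") throw new Error("missing specific error");
```

The same translation works for the other GOOD examples: "Returns 200 with JSON body" is a status-code assertion, "renders in < 200ms" is a timing assertion.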
111
+ **3. Technical Design**
112
+ - What database changes are needed? (new tables, columns, indexes, migrations)
113
+ - What API endpoints? (method, path, request/response shape, auth requirements)
114
+ - What new files need to be created? What existing files get modified?
115
+ - What existing utilities/components/DAL queries should be reused (not rebuilt)?
116
+ - What's the data flow? Where does data originate, transform, and render?
117
+ - Any wave-parallelization opportunities? (independent agents building separate files)
118
+
119
+ **4. User Experience**
120
+ - What are the key user flows? Walk through click-by-click.
121
+ - What edge cases exist? (empty states, error states, loading states, partial data)
122
+ - Accessibility requirements? (keyboard navigation, screen readers, ARIA labels)
123
+ - Mobile/responsive behavior? (breakpoints, touch targets, layout shifts)
124
+ - What does the user see while waiting? (loading spinners, skeletons, optimistic updates)
125
+
126
+ **5. Trade-offs & Constraints**
127
+ - Performance vs. simplicity? What's good enough for v1?
128
+ - Any security considerations? (auth, data access, input validation)
129
+ - Any third-party dependencies or integrations?
130
+ - What technical debt is acceptable for now vs. must be done right?
131
+ - Any browser/device support requirements?
132
+
133
+ **6. Implementation Phases**
134
+ - How should this break into sequential phases?
135
+ - What can be parallelized within each phase? (wave-based agent structure)
136
+ - What's the critical path — what must be built first?
137
+ - What's the minimum viable first phase? (what gives us something testable fastest?)
138
+ - Any phases that could be cut if time runs short?
139
+ - **Assignability check:** After defining phases, probe which are independent enough for a different developer:
140
+ - "Which of these phases could a second developer work on independently?"
141
+ - Evaluate each phase for: dependency chains (does it need output from another phase?), domain independence (is it a separate concern?), context requirements (does it need deep codebase familiarity?), onboarding suitability (could a newer dev handle it?)
142
+ - Add assignability notes to each phase, e.g., "Phase 3 is independent — good for either dev" or "Phase 2 depends on Phase 1 — same dev"
143
+
144
+ **7. Verification & Feedback Loops**
145
+ - What commands verify the build works? (`tsc`, `biome`, test suite)
146
+ - What does "done" look like for each phase? How do we know it worked?
147
+ - Are there integration points that need end-to-end testing?
148
+ - What should we check after each phase before moving to the next?
149
+ - Any monitoring or logging needed to confirm production behavior?
150
+
151
+ **User signals done:** If the user says "done", "finalize", "that's enough", "ship it", or similar — immediately stop interviewing and go to Phase 3. Finalize the PRD with whatever depth has been achieved.
152
+
153
+ ## Phase 3 — PRD Generation
154
+
155
+ ### Minimum Viable PRD Check
156
+
157
+ Before generating the final PRD, validate:
158
+ - At least **3 user stories** with acceptance criteria have been discussed
159
+ - At least **1 implementation phase** has been defined
160
+ - At least **1 verification command** has been specified
161
+
162
+ If any check fails, print what's missing and use AskUserQuestion:
163
+ - "The PRD is thin — [missing items]. Want to continue the interview to flesh it out, or finalize as-is?"
164
+ - "Continue interview" — return to Phase 2 and probe the missing areas
165
+ - "Finalize as-is" — proceed with what we have
166
+
167
+ ### Write PRD
168
+
169
+ Write the PRD to `.planning/prds/{slug}.md` (create `.planning/prds/` directory first if it doesn't exist) with this EXACT structure:
170
+
171
+ ```markdown
+ # [Milestone Name] — Specification
+
+ **Milestone:** [full milestone name, e.g., "v3: Dashboard Analytics"]
+ **Status:** Ready for execution
+ **Branch:** feat/[milestone-slug]
+ **Created:** [today's date]
+ **Assigned To:** [developer name or "unassigned"]
+
+ ## Overview
+ [One paragraph summary of the milestone]
+
+ ## Problem Statement
+ [Why this work exists — what problem does it solve?]
+
+ ## Scope
+
+ ### In Scope
+ - [Bullet list of what ships in this milestone]
+
+ ### Out of Scope
+ - [Explicitly deferred items]
+
+ ### Sacred / Do NOT Touch
+ - [Code paths that must not be modified, with reasons]
+
+ ## User Stories
+
+ ### US-1: [Feature Name]
+ **Description:** As [role], I want [action], so that [outcome].
+ **Acceptance Criteria:**
+ - [ ] Specific, testable requirement (names actual functions/components)
+ - [ ] Another requirement
+ - [ ] [Verification command] passes
+
+ ### US-2: [Feature Name]
+ ...
+
+ ## Technical Design
+
+ ### New Database Tables
+ [SQL DDL or schema description, with indexes]
+
+ ### New API Endpoints
+ [Method + path + request/response shape for each]
+
+ ### New Files to Create
+ [Grouped by phase. Absolute paths with one-line descriptions]
+
+ ### Existing Files to Modify
+ [Paths + what changes in each]
+
+ ### Key Existing Code (DO NOT recreate — use as-is)
+ [Functions, utilities, DAL queries, components that agents should import/reuse]
+
+ ## Implementation Phases
+
+ ### Phase 1: [Name]
+ **Assigned To:** [developer name or "unassigned"]
+ **Goal:** [One sentence]
+
+ **Wave 1 — [Theme] (N agents parallel):**
+ 1. agent-name: Creates file1.ts, file2.ts — [what it does]
+ 2. agent-name: Modifies file3.ts — [what changes]
+
+ **Wave 2 — [Theme] (N agents parallel):**
+ 3. agent-name: Creates file4.ts — [what it does]
+ 4. agent-name: Wires component into page — [specifics]
+
+ **Wave 3 — Integration:**
+ 5. Integration check, responsive verification, cleanup
+
+ **Verification:** [Exact commands to run]
+ **Acceptance:** US-1 criteria [list which], US-2 criteria [list which]
+
+ ### Phase 2: [Name]
+ **Assigned To:** [developer name or "unassigned"]
+ ...
+
+ ## Execution Rules
+ 1. DELEGATE EVERYTHING. Lead context is sacred — do not read implementation files into it.
+ 2. Verify after every phase: [verification commands from CLAUDE.md]
+ 3. Atomic commits after each agent's work lands
+ 4. Never `git add .` — stage specific files only
+ 5. Read `tasks/lessons.md` before spawning agents — inject relevant lessons into agent prompts
+
+ ## Definition of Done
+ - [ ] All user story acceptance criteria pass
+ - [ ] All phases verified with [verification commands]
+ - [ ] Branch pushed and PR opened
+ - [ ] STATE.md and ROADMAP.md updated
+ ```
+
+ ## Phase 4 — Post-PRD Updates
+
+ After writing the PRD to `.planning/prds/{slug}.md`:
+
+ 1. **Update ROADMAP.md:** Add the phase breakdown under the current milestone section. Each phase gets a row in the progress table with status "Pending".
+
+ 2. **Update STATE.md** (current milestone only):
+ - Set current phase to "Phase 1 ready for `/flow:go`"
+ - Set "Active PRD" to `.planning/prds/{slug}.md`
+ - Update "Next Actions" to reference the first phase
+ - **Skip this step if speccing a future milestone** — STATE.md stays pointing at the current milestone. Print: "PRD written to `.planning/prds/{slug}.md`. STATE.md not updated (future milestone)."
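The branch in step 2 can be sketched as a small decision function (a sketch with illustrative names; the skill edits the planning files directly rather than through code):

```python
def post_prd_updates(slug: str, is_future_milestone: bool) -> dict:
    """Decide which planning files to touch after writing the PRD."""
    prd_path = f".planning/prds/{slug}.md"
    actions = {"update_roadmap": True, "update_state": not is_future_milestone}
    if is_future_milestone:
        # STATE.md keeps pointing at the current milestone
        actions["message"] = (
            f"PRD written to `{prd_path}`. STATE.md not updated (future milestone)."
        )
    else:
        actions["state_changes"] = {
            "current_phase": "Phase 1 ready for `/flow:go`",
            "active_prd": prd_path,
        }
    return actions
```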
+
+ 3. **Generate Phase 1 handoff prompt:**
+ ```
+ Phase 1: [Name] — [short description]. Read STATE.md, ROADMAP.md, and .planning/prds/{slug}.md.
+ [One sentence of context about what Phase 1 builds].
+ ```
+
+ 4. **Linear Issue Creation (optional):**
+ - Check if Linear MCP tools are available (try `mcp__linear__list_teams`)
+ - If available: search for a Linear project matching the milestone name (`mcp__linear__list_projects` with query)
+ - If found: create one Linear issue per phase under that project using `mcp__linear__create_issue`, with title "Phase N: [Name]" and description from the PRD phase section. Set team to "Monument Square".
+ - If not found: use AskUserQuestion to ask the user to pick a project or skip Linear integration
+ - If Linear MCP not available: skip silently (no error message)
+ - Print count: "[N] Linear issues created under project [name]" (or "Linear integration skipped" if not available)
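The fallback chain above can be sketched as follows. The `mcp__linear__*` tool names come from the steps; the `linear` client object is a hypothetical stand-in, since MCP tools are invoked by the agent, not from Python:

```python
def create_linear_issues(phases, milestone, linear=None):
    """Decision flow for optional Linear integration.

    `linear=None` models "Linear MCP not available": skip silently.
    """
    if linear is None:
        return "Linear integration skipped"
    projects = linear.list_projects(query=milestone)  # mcp__linear__list_projects
    if not projects:
        # AskUserQuestion: pick a project or skip Linear integration
        return "ask user"
    project = projects[0]
    for i, phase in enumerate(phases, start=1):
        linear.create_issue(  # mcp__linear__create_issue, one issue per phase
            title=f"Phase {i}: {phase['name']}",
            description=phase["description"],
            project=project["name"],
            team="Monument Square",
        )
    return f"{len(phases)} Linear issues created under project {project['name']}"
```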
+
+ 5. Print the handoff prompt in a fenced code block.
+
+ 6. Print: "PRD ready at `.planning/prds/{slug}.md`. Run `/flow:go` to execute Phase 1, or review the PRD first."
+
+ ## Quality Gates
+
+ Before finalizing, self-check the PRD:
+ - [ ] Every phase has wave-based agent assignments with explicit file lists
+ - [ ] Every user story has checkbox acceptance criteria that are testable
+ - [ ] Every phase has verification commands
+ - [ ] "Key Existing Code" section references actual files/functions found in the codebase scan
+ - [ ] PRD is written to `.planning/prds/{slug}.md`, NOT to root `PRD.md`
+ - [ ] No phase has more than 5 agents in a single wave (too many = coordination overhead)
+ - [ ] Sacred code section is present (state "None identified" if nothing is sacred)
+
+ If any gate fails, fix the PRD before presenting it.
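Several of these gates lend themselves to a mechanical pass. A sketch, assuming the PRD text is already loaded; the section names mirror the template earlier in this skill:

```python
import re

REQUIRED_SECTIONS = [
    "## Overview", "## Problem Statement", "## Scope",
    "## User Stories", "## Technical Design",
    "## Implementation Phases", "## Execution Rules", "## Definition of Done",
]

def check_prd(prd_text: str, max_agents_per_wave: int = 5) -> list:
    """Return a list of failed-gate descriptions (empty list = all gates pass)."""
    failures = [f"missing section: {s}" for s in REQUIRED_SECTIONS if s not in prd_text]
    if "- [ ]" not in prd_text:
        failures.append("no checkbox acceptance criteria found")
    # Wave headers carry "(N agents parallel)"; flag any wave over the agent cap
    for n in re.findall(r"\((\d+) agents parallel\)", prd_text):
        if int(n) > max_agents_per_wave:
            failures.append(f"wave has {n} agents (max {max_agents_per_wave})")
    return failures
```

Gates that require judgment, such as whether "Key Existing Code" references real files from the codebase scan, still need a manual pass.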