@hustle-together/api-dev-tools 2.0.6 → 3.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (37)
  1. package/README.md +283 -478
  2. package/bin/cli.js +55 -11
  3. package/commands/README.md +124 -251
  4. package/commands/api-create.md +318 -136
  5. package/commands/api-interview.md +252 -256
  6. package/commands/api-research.md +209 -234
  7. package/commands/api-verify.md +231 -0
  8. package/demo/audio/generate-all-narrations.js +516 -0
  9. package/demo/audio/generate-voice-previews.js +140 -0
  10. package/demo/audio/narration-adam-timing.json +3666 -0
  11. package/demo/audio/narration-adam.mp3 +0 -0
  12. package/demo/audio/narration-creature-timing.json +3666 -0
  13. package/demo/audio/narration-creature.mp3 +0 -0
  14. package/demo/audio/narration-gaming-timing.json +3666 -0
  15. package/demo/audio/narration-gaming.mp3 +0 -0
  16. package/demo/audio/narration-hope-timing.json +3666 -0
  17. package/demo/audio/narration-hope.mp3 +0 -0
  18. package/demo/audio/narration-mark-timing.json +3666 -0
  19. package/demo/audio/narration-mark.mp3 +0 -0
  20. package/demo/audio/previews/manifest.json +30 -0
  21. package/demo/audio/previews/preview-creature.mp3 +0 -0
  22. package/demo/audio/previews/preview-gaming.mp3 +0 -0
  23. package/demo/audio/previews/preview-hope.mp3 +0 -0
  24. package/demo/audio/previews/preview-mark.mp3 +0 -0
  25. package/demo/audio/voices-manifest.json +50 -0
  26. package/demo/hustle-together/blog/gemini-vs-claude-widgets.html +30 -28
  27. package/demo/hustle-together/blog/interview-driven-api-development.html +37 -23
  28. package/demo/hustle-together/index.html +142 -109
  29. package/demo/workflow-demo.html +1054 -544
  30. package/hooks/periodic-reground.py +154 -0
  31. package/hooks/session-startup.py +151 -0
  32. package/hooks/track-tool-use.py +109 -17
  33. package/hooks/verify-after-green.py +152 -0
  34. package/package.json +2 -2
  35. package/templates/api-dev-state.json +42 -7
  36. package/templates/research-index.json +6 -0
  37. package/templates/settings.json +23 -0
@@ -1,335 +1,331 @@
- # API Interview - Research-Based Structured Discovery
+ # API Interview - Research-Driven Dynamic Discovery v3.0
 
  **Usage:** `/api-interview [endpoint-name]`
 
- **Purpose:** Conduct structured interview with MULTIPLE-CHOICE questions derived from research findings to understand API endpoint purpose, usage, and requirements before any implementation.
+ **Purpose:** Conduct structured interview where questions are GENERATED FROM research findings, not generic templates. Every question is specific to the discovered API capabilities.
 
- ## v1.8.0 REQUIREMENT: Structured Questions with Options
+ ## Key Principle: Questions FROM Research
 
- **CRITICAL:** All interview questions MUST:
- 1. Be based on COMPLETED research (Context7 + WebSearch)
- 2. Use AskUserQuestion tool with the `options` parameter
- 3. Provide multiple-choice selections derived from research findings
- 4. Include a "Type something else..." option for custom input
-
- **Example of CORRECT structured question:**
+ **OLD WAY (Generic Templates):**
  ```
- AskUserQuestion(
-   question="Which AI provider should this endpoint support?",
-   options=[
-     {"value": "openai", "label": "OpenAI (GPT-4o, GPT-4-turbo)"},
-     {"value": "anthropic", "label": "Anthropic (Claude Sonnet, Opus)"},
-     {"value": "google", "label": "Google (Gemini Pro, Flash)"},
-     {"value": "all", "label": "All providers (multi-provider support)"},
-     {"value": "custom", "label": "Type something else..."}
-   ]
- )
+ "Which AI provider should this endpoint support?"
+ - OpenAI
+ - Anthropic
+ - Google
  ```
 
- This gives users clear choices from RESEARCHED capabilities, not guesses.
-
- ## Interview Methodology
-
- Based on Anthropic Interviewer approach with three phases:
-
- ### Phase 0: PREREQUISITE - Research (MANDATORY)
- **You CANNOT start the interview until research is complete.**
-
- Before asking ANY questions:
- 1. Use Context7 to get SDK/API documentation
- 2. Use WebSearch (2-3 searches) for official docs
- 3. Gather all available options, parameters, models, providers
- 4. Build your question options FROM this research
-
- Example research flow:
- ```
- 1. mcp__context7__resolve-library-id("vercel ai sdk")
- 2. mcp__context7__get-library-docs(libraryId)
- 3. WebSearch("Vercel AI SDK providers models 2025")
- 4. WebSearch("Vercel AI SDK streaming options parameters")
+ **NEW WAY (From Research):**
  ```
+ Based on research, Brandfetch API has 7 parameters:
 
- **Research informs options. No research = no good options = interview blocked.**
+ 1. DOMAIN (required) - string
+    → No question needed (always required)
 
- ### Phase 1: Planning (Internal)
- - Review endpoint name and context
- - Review research findings (what providers, models, parameters exist?)
- - Build structured question options from research
- - Prepare follow-up questions
+ 2. FORMAT: ["json", "svg", "png", "raw"]
+    Q: Which formats do you need?
+    [x] json [x] svg [x] png [ ] raw
 
- ### Phase 2: Interviewing (User Interaction with Structured Options)
+ 3. QUALITY: 1-100 (continuous range)
+    Q: How should we TEST this continuous parameter?
+    [ ] All values (100 tests)
+    [x] Boundary (1, 50, 100)
+    [ ] Sample (1, 25, 50, 75, 100)
+    [ ] Custom: ____
+ ```
 
- **EVERY question uses AskUserQuestion with options parameter.**
+ ## Interview Flow
 
- #### A. Purpose & Context
+ ### Phase 0: PREREQUISITE - Research Must Be Complete
 
- **Question 1: Primary Purpose**
- ```
- AskUserQuestion(
-   question="What is the primary purpose of this endpoint?",
-   options=[
-     {"value": "data_retrieval", "label": "Retrieve/query data"},
-     {"value": "data_transform", "label": "Transform/process data"},
-     {"value": "ai_generation", "label": "AI content generation"},
-     {"value": "ai_analysis", "label": "AI analysis/classification"},
-     {"value": "integration", "label": "Third-party integration"},
-     {"value": "custom", "label": "Type something else..."}
-   ]
- )
- ```
+ **Interview is BLOCKED until research is done.**
 
- **Question 2: Primary Users**
+ The interview READS from the research findings:
  ```
- AskUserQuestion(
-   question="Who are the primary users of this endpoint?",
-   options=[
-     {"value": "frontend", "label": "Frontend developers (React, Vue, etc.)"},
-     {"value": "backend", "label": "Backend services (server-to-server)"},
-     {"value": "mobile", "label": "Mobile app developers"},
-     {"value": "enduser", "label": "End users directly (browser)"},
-     {"value": "automation", "label": "Automated systems/bots"},
-     {"value": "custom", "label": "Type something else..."}
-   ]
- )
+ State file shows:
+ research_initial.status = "complete"
+ research_initial.sources = [...]
+
+ Discovered parameters:
+ - 5 required parameters
+ - 12 optional parameters
+ - 3 enum types
+ - 2 continuous ranges
  ```
 
- **Question 3: Usage Trigger**
- ```
- AskUserQuestion(
-   question="What triggers a call to this endpoint?",
-   options=[
-     {"value": "user_action", "label": "User action (button click, form submit)"},
-     {"value": "page_load", "label": "Page/component load"},
-     {"value": "scheduled", "label": "Scheduled/cron job"},
-     {"value": "webhook", "label": "External webhook/event"},
-     {"value": "realtime", "label": "Real-time/streaming updates"},
-     {"value": "custom", "label": "Type something else..."}
-   ]
- )
- ```
+ ### Phase 1: Parameter-Based Questions
 
- #### B. Technical Requirements (Research-Based Options)
+ For each discovered parameter, generate an appropriate question:
 
- **Question 4: AI Provider** (options from research)
+ #### Required Parameters (Confirmation Only)
  ```
- AskUserQuestion(
-   question="Which AI provider(s) should this endpoint support?",
-   options=[
-     // These options come from your Context7/WebSearch research!
-     {"value": "openai", "label": "OpenAI (gpt-4o, gpt-4-turbo)"},
-     {"value": "anthropic", "label": "Anthropic (claude-sonnet-4-20250514, claude-opus-4-20250514)"},
-     {"value": "google", "label": "Google (gemini-pro, gemini-flash)"},
-     {"value": "groq", "label": "Groq (llama-3.1-70b, mixtral)"},
-     {"value": "multiple", "label": "Multiple providers (configurable)"},
-     {"value": "custom", "label": "Type something else..."}
-   ]
- )
+ ┌────────────────────────────────────────────────────────────┐
+ │ REQUIRED PARAMETERS                                        │
+ │                                                            │
+ │ These parameters are required by the API:                  │
+ │                                                            │
+ │ 1. domain (string) - The domain to fetch brand data for    │
+ │ 2. apiKey (string) - Your Brandfetch API key               │
+ │                                                            │
+ │ Confirm these are understood? [Y/n]                        │
+ └────────────────────────────────────────────────────────────┘
  ```
 
- **Question 5: Response Format** (options from research)
+ #### Enum Parameters (Multi-Select)
  ```
- AskUserQuestion(
-   question="What response format is needed?",
-   options=[
-     // Options based on what the SDK supports (from research)
-     {"value": "streaming", "label": "Streaming (real-time chunks)"},
-     {"value": "complete", "label": "Complete response (wait for full)"},
-     {"value": "structured", "label": "Structured/JSON mode"},
-     {"value": "tool_calls", "label": "Tool calling/function calls"},
-     {"value": "custom", "label": "Type something else..."}
-   ]
- )
+ ┌────────────────────────────────────────────────────────────┐
+ │ FORMAT PARAMETER                                           │
+ │                                                            │
+ │ Research found these format options:                       │
+ │                                                            │
+ │ [x] json - Structured JSON response                        │
+ │ [x] svg - Vector logo format                               │
+ │ [x] png - Raster logo format                               │
+ │ [ ] raw - Raw API response (advanced)                      │
+ │                                                            │
+ │ Which formats should we support?                           │
+ └────────────────────────────────────────────────────────────┘
  ```
 
- **Question 6: Required Parameters**
+ #### Continuous Parameters (Test Strategy)
  ```
- AskUserQuestion(
-   question="Which parameters are REQUIRED (cannot work without)?",
-   options=[
-     // Based on researched SDK parameters
-     {"value": "prompt_only", "label": "Just the prompt/message"},
-     {"value": "prompt_model", "label": "Prompt + model selection"},
-     {"value": "prompt_model_config", "label": "Prompt + model + configuration"},
-     {"value": "full_config", "label": "Full configuration required"},
-     {"value": "custom", "label": "Type something else..."}
-   ]
- )
+ ┌────────────────────────────────────────────────────────────┐
+ │ QUALITY PARAMETER                                          │
+ │                                                            │
+ │ Research found: quality is a continuous range 1-100        │
+ │                                                            │
+ │ How should we TEST this parameter?                         │
+ │                                                            │
+ │ [ ] All values (100 tests - comprehensive but slow)        │
+ │ [x] Boundary (1, 50, 100 - 3 tests)                        │
+ │ [ ] Sample (1, 25, 50, 75, 100 - 5 tests)                  │
+ │ [ ] Custom values: ____                                    │
+ │                                                            │
+ │ Your testing strategy affects test count.                  │
+ └────────────────────────────────────────────────────────────┘
  ```
 
- **Question 7: Optional Parameters**
+ #### Boolean Parameters (Enable/Disable)
  ```
- AskUserQuestion(
-   question="Which optional parameters should be supported?",
-   options=[
-     // From research: discovered optional parameters
-     {"value": "temperature", "label": "temperature (creativity control)"},
-     {"value": "max_tokens", "label": "maxTokens (response length)"},
-     {"value": "system_prompt", "label": "system (system prompt)"},
-     {"value": "tools", "label": "tools (function calling)"},
-     {"value": "all_standard", "label": "All standard AI parameters"},
-     {"value": "custom", "label": "Type something else..."}
-   ]
- )
+ ┌────────────────────────────────────────────────────────────┐
+ │ INCLUDE_COLORS PARAMETER                                   │
+ │                                                            │
+ │ Research found: includeColors (boolean, default: true)     │
+ │                                                            │
+ │ Should we expose this parameter?                           │
+ │                                                            │
+ │ [x] Yes - Let users toggle it                              │
+ │ [ ] No - Use default (true) always                         │
+ │ [ ] Hardcode to: ____                                      │
+ └────────────────────────────────────────────────────────────┘
  ```
 
- #### C. Dependencies & Integration
+ ### Phase 2: Feature Questions
+
+ Based on discovered features, ask about priorities:
 
- **Question 8: External Services**
  ```
- AskUserQuestion(
-   question="What external services does this endpoint need?",
-   options=[
-     {"value": "ai_only", "label": "AI provider only (OpenAI, Anthropic, etc.)"},
-     {"value": "ai_search", "label": "AI + Search (Brave, Perplexity)"},
-     {"value": "ai_scrape", "label": "AI + Web scraping (Firecrawl)"},
-     {"value": "ai_db", "label": "AI + Database (Supabase)"},
-     {"value": "multiple", "label": "Multiple external services"},
-     {"value": "custom", "label": "Type something else..."}
-   ]
- )
+ ┌────────────────────────────────────────────────────────────┐
+ │ DISCOVERED FEATURES                                        │
+ │                                                            │
+ │ Research found these features. Mark priorities:            │
+ │                                                            │
+ │ [x] Basic brand fetch - Get logo, colors, fonts            │
+ │ [x] Multiple formats - Support json, svg, png              │
+ │ [ ] Webhook callbacks - Async notification (skip for now)  │
+ │ [ ] Batch processing - Multiple domains at once            │
+ │ [x] Error handling - Graceful degradation                  │
+ │                                                            │
+ │ Confirm feature scope?                                     │
+ └────────────────────────────────────────────────────────────┘
  ```
 
- **Question 9: API Key Handling**
+ ### Phase 3: Error Handling Questions
+
  ```
- AskUserQuestion(
-   question="How should API keys be handled?",
-   options=[
-     {"value": "server_only", "label": "Server environment variables only"},
-     {"value": "server_header", "label": "Server env + custom header override"},
-     {"value": "client_next", "label": "NEXT_PUBLIC_ client-side keys"},
-     {"value": "all_methods", "label": "All methods (env, header, client)"},
-     {"value": "custom", "label": "Type something else..."}
-   ]
- )
+ ┌────────────────────────────────────────────────────────────┐
+ │ ERROR HANDLING                                             │
+ │                                                            │
+ │ Research found these error cases in the API:               │
+ │                                                            │
+ │ - 400: Invalid domain format                               │
+ │ - 401: Invalid API key                                     │
+ │ - 404: Brand not found                                     │
+ │ - 429: Rate limit exceeded                                 │
+ │ - 500: Server error                                        │
+ │                                                            │
+ │ How should we handle rate limits (429)?                    │
+ │                                                            │
+ │ [x] Retry with exponential backoff                         │
+ │ [ ] Return error immediately                               │
+ │ [ ] Queue and retry later                                  │
+ │ [ ] Custom: ____                                           │
+ └────────────────────────────────────────────────────────────┘
  ```
 
- #### D. Error Handling & Edge Cases
+ ### Phase 4: Deep Research Proposal
+
+ After interview, propose additional research:
 
- **Question 10: Error Response Format**
  ```
- AskUserQuestion(
-   question="How should errors be returned?",
-   options=[
-     {"value": "simple", "label": "Simple: {error: string}"},
-     {"value": "detailed", "label": "Detailed: {error, code, details}"},
-     {"value": "ai_sdk", "label": "AI SDK standard format"},
-     {"value": "http_native", "label": "HTTP status codes + body"},
-     {"value": "custom", "label": "Type something else..."}
-   ]
- )
+ ┌────────────────────────────────────────────────────────────┐
+ │ PROPOSED DEEP RESEARCH                                     │
+ │                                                            │
+ │ Based on your selections, I recommend researching:         │
+ │                                                            │
+ │ [x] Rate limiting behavior                                 │
+ │     Reason: You selected "retry with backoff"              │
+ │                                                            │
+ │ [x] SVG optimization                                       │
+ │     Reason: You selected SVG format                        │
+ │                                                            │
+ │ [ ] Webhook format                                         │
+ │     Reason: You skipped webhook feature                    │
+ │                                                            │
+ │ Approve these searches? [Y]                                │
+ │ Add more: ____                                             │
+ │ Skip and proceed: [n]                                      │
+ └────────────────────────────────────────────────────────────┘
  ```
 
- ### Phase 3: Analysis (Documentation)
- After interview, I will:
- - Synthesize answers into structured document
- - Identify gaps or ambiguities
- - Create preliminary schema based on answers
- - Document all external links and resources
- - Outline test cases from real-world scenarios
+ ## Question Types Summary
+
+ | Discovered Type | Question Type | Example |
+ |----------------|---------------|---------|
+ | Required param | Confirmation | "Confirm these are understood?" |
+ | Enum param | Multi-select | "Which formats to support?" |
+ | Continuous range | Test strategy | "How to test 1-100 range?" |
+ | Boolean param | Enable/disable | "Expose this parameter?" |
+ | Optional feature | Priority | "Include this feature?" |
+ | Error case | Handling strategy | "How to handle rate limits?" |
+
+ ## State Tracking
+
+ All decisions are saved to `.claude/api-dev-state.json`:
+
+ ```json
+ {
+   "phases": {
+     "interview": {
+       "status": "complete",
+       "questions": [
+         {
+           "parameter": "format",
+           "type": "enum",
+           "options": ["json", "svg", "png", "raw"],
+           "selected": ["json", "svg", "png"],
+           "timestamp": "..."
+         },
+         {
+           "parameter": "quality",
+           "type": "continuous",
+           "range": [1, 100],
+           "test_strategy": "boundary",
+           "test_values": [1, 50, 100],
+           "timestamp": "..."
+         }
+       ],
+       "decisions": {
+         "format": ["json", "svg", "png"],
+         "quality_testing": "boundary",
+         "quality_values": [1, 50, 100],
+         "rate_limit_handling": "exponential_backoff"
+       }
+     }
+   }
+ }
+ ```
 
  ## Output
 
- Creates: `/src/v2/docs/endpoints/[endpoint-name].md`
+ Creates: `.claude/research/[api-name]/interview.md`
 
- **Document Structure:**
  ```markdown
- # API Endpoint: [endpoint-name]
+ # Interview: [API Name]
 
  **Date:** [current-date]
- **Interviewed by:** Claude Code
+ **Research Sources:** [list from research phase]
  **Status:** Interview Complete
- **Research Sources:** [list of Context7/WebSearch sources]
 
- ## 1. Purpose & Context
- [Synthesized understanding of why this exists]
+ ## Discovered Parameters
 
- ## 2. Users & Usage Patterns
- [Who uses it and how]
+ | Parameter | Type | Required | Decision |
+ |-----------|------|----------|----------|
+ | domain | string | Yes | Always required |
+ | format | enum | No | json, svg, png |
+ | quality | 1-100 | No | Boundary testing: 1, 50, 100 |
 
- ## 3. Request Schema (Preliminary)
- [Zod-style schema based on interview]
+ ## Feature Scope
 
- ## 4. Response Schema (Preliminary)
- [Expected response structure]
+ | Feature | Included | Reason |
+ |---------|----------|--------|
+ | Basic fetch | Yes | Core functionality |
+ | Multiple formats | Yes | User selected |
+ | Webhooks | No | Deferred to v2 |
 
- ## 5. Dependencies
- - External Services: [list]
- - API Keys Required: [list]
- - AI Models: [list]
+ ## Test Strategy
 
- ## 6. Interview Responses
- | Question | User Selection | Notes |
- |----------|---------------|-------|
- | Purpose | [selected option] | |
- | Users | [selected option] | |
- | ... | ... | |
+ - Enum parameters: Test all selected values
+ - Continuous parameters: Boundary testing (3 values)
+ - Error cases: 400, 401, 404, 429, 500
+
+ ## Decisions Summary
 
- ## 7. Real-World Scenarios
- ### Scenario 1: [Common Use Case]
- **Request:**
- ```json
- {example}
- ```
- **Response:**
  ```json
- {example}
+ {
+   "format": ["json", "svg", "png"],
+   "quality_testing": "boundary",
+   "rate_limit_handling": "exponential_backoff"
+ }
  ```
 
- ## 8. Edge Cases & Error Handling
- [Identified edge cases and how to handle them]
-
- ## 9. Validation Rules
- [What must be validated]
+ ## Deep Research Approved
 
- ## 10. Documentation Links
- - [Official docs from research]
- - [SDK docs from Context7]
- - [Related resources from WebSearch]
+ - Rate limiting behavior (for retry logic)
+ - SVG optimization (for SVG format)
 
- ## 11. Test Cases (To Implement)
- - [ ] Test: [scenario from interview]
- - [ ] Test: [edge case from interview]
- - [ ] Test: [error handling from interview]
+ ## Open Questions
 
- ## 12. Open Questions
- [Any ambiguities to resolve]
+ [Any remaining ambiguities]
  ```
 
- ## Usage Example
+ ## Integration with Hooks
+
+ The `enforce-interview.py` hook injects these decisions when Claude tries to write implementation:
 
- ```bash
- /api-interview generate-css
  ```
+ INTERVIEW CONTEXT REMINDER
+
+ When implementing, remember user decisions:
+ - format: ["json", "svg", "png"] (raw excluded)
+ - quality: boundary testing (1, 50, 100)
+ - rate limits: exponential backoff
 
- I will:
- 1. First complete research (Context7 + WebSearch)
- 2. Build structured questions with options from research
- 3. Ask all questions using AskUserQuestion with options
- 4. Document responses and create the endpoint documentation
+ Source: .claude/api-dev-state.json
+ ```
 
  <claude-commands-template>
- ## Interview Guidelines (v1.8.0)
+ ## Interview Guidelines v3.0
 
- 1. **RESEARCH FIRST** - Complete Context7 + WebSearch before ANY questions
- 2. **STRUCTURED OPTIONS** - Every question uses AskUserQuestion with options[]
- 3. **OPTIONS FROM RESEARCH** - Multiple-choice options come from discovered capabilities
- 4. **ALWAYS INCLUDE "Type something else..."** - Let user provide custom input
- 5. **ASK ONE AT A TIME** - Wait for response before next question
- 6. **DOCUMENT EVERYTHING** - Capture ALL selections and custom inputs
+ 1. **Questions FROM Research** - Never use generic templates
+ 2. **Parameter-Specific** - Each discovered param gets appropriate question
+ 3. **Test Strategy for Continuous** - Ask how to test ranges
+ 4. **Track Decisions** - Save everything to state file
+ 5. **Propose Deep Research** - Based on selections
+ 6. **No Skipped Parameters** - Every discovered param must have a decision
 
- ## Question Format Checklist
+ ## Question Generation Rules
 
- Before asking each question:
- - [ ] Is research complete?
- - [ ] Are options based on research findings?
- - [ ] Does it use AskUserQuestion with options parameter?
- - [ ] Is there a "Type something else..." option?
- - [ ] Will this help build the schema?
+ | If research finds... | Then ask... |
+ |---------------------|-------------|
+ | Enum with 3+ options | Multi-select: which to support |
+ | Continuous range | Test strategy: all/boundary/sample |
+ | Boolean param | Enable/disable/hardcode |
+ | Optional feature | Include/exclude/defer |
+ | Error case | Handling strategy |
 
  ## After Interview
 
- - Read interview document before implementing
- - Refer to it during TDD cycles
- - Update it if requirements change
- - Use it for code review context
+ - Decisions saved to state file
+ - Decisions injected during implementation via hook
+ - Consistency between interview answers and code enforced
  </claude-commands-template>