@hustle-together/api-dev-tools 1.7.1 → 1.8.0

@@ -1,120 +1,223 @@
- # API Interview - Structured API Discovery
+ # API Interview - Research-Based Structured Discovery
 
  **Usage:** `/api-interview [endpoint-name]`
 
- **Purpose:** Conduct structured interview to understand API endpoint purpose, usage, and requirements before any implementation.
+ **Purpose:** Conduct a structured interview with MULTIPLE-CHOICE questions derived from research findings to understand an API endpoint's purpose, usage, and requirements before any implementation.
+
+ ## v1.8.0 REQUIREMENT: Structured Questions with Options
+
+ **CRITICAL:** All interview questions MUST:
+ 1. Be based on COMPLETED research (Context7 + WebSearch)
+ 2. Use the AskUserQuestion tool with the `options` parameter
+ 3. Provide multiple-choice selections derived from research findings
+ 4. Include a "Type something else..." option for custom input
+
+ **Example of a CORRECT structured question:**
+ ```
+ AskUserQuestion(
+   question="Which AI provider should this endpoint support?",
+   options=[
+     {"value": "openai", "label": "OpenAI (GPT-4o, GPT-4-turbo)"},
+     {"value": "anthropic", "label": "Anthropic (Claude Sonnet, Opus)"},
+     {"value": "google", "label": "Google (Gemini Pro, Flash)"},
+     {"value": "all", "label": "All providers (multi-provider support)"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ This gives users clear choices drawn from RESEARCHED capabilities, not guesses.
 
  ## Interview Methodology
 
  Based on the Anthropic Interviewer approach, with three phases:
 
+ ### Phase 0: PREREQUISITE - Research (MANDATORY)
+ **You CANNOT start the interview until research is complete.**
+
+ Before asking ANY questions:
+ 1. Use Context7 to get SDK/API documentation
+ 2. Use WebSearch (2-3 searches) for official docs
+ 3. Gather all available options, parameters, models, and providers
+ 4. Build your question options FROM this research
+
+ Example research flow:
+ ```
+ 1. mcp__context7__resolve-library-id("vercel ai sdk")
+ 2. mcp__context7__get-library-docs(libraryId)
+ 3. WebSearch("Vercel AI SDK providers models 2025")
+ 4. WebSearch("Vercel AI SDK streaming options parameters")
+ ```
+
+ **Research informs the options. No research = no good options = interview blocked.**
+
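Editor's illustration (not part of the package): the research-to-options mapping described above can be sketched in a few lines of Python. The `build_provider_options` helper and its input shape are hypothetical; only the `{"value", "label"}` option shape and the mandatory "Type something else..." entry come from the document.

```python
# Sketch: turn researched provider/model findings into AskUserQuestion options.
# build_provider_options and its input shape are hypothetical illustrations.

def build_provider_options(researched_models: dict) -> list:
    """Map {provider: [models]} gathered during research to option dicts."""
    options = [
        {"value": provider, "label": f"{provider.title()} ({', '.join(models)})"}
        for provider, models in researched_models.items()
    ]
    # v1.8.0 requires a free-text escape hatch as the last option
    options.append({"value": "custom", "label": "Type something else..."})
    return options
```

The point of the sketch is that the options list is derived data: it is computed from research output, not hand-written guesses.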
  ### Phase 1: Planning (Internal)
  - Review endpoint name and context
- - Identify key areas to explore
+ - Review research findings (what providers, models, and parameters exist?)
+ - Build structured question options from the research
  - Prepare follow-up questions
 
- ### Phase 2: Interviewing (User Interaction)
- Ask structured questions to understand:
+ ### Phase 2: Interviewing (User Interaction with Structured Options)
+
+ **EVERY question uses AskUserQuestion with the `options` parameter.**
 
  #### A. Purpose & Context
- 1. **What problem does this API endpoint solve?**
-    - What's the business/technical need?
-    - What happens without this endpoint?
-
- 2. **Who are the primary users?**
-    - Frontend developers? End users? Other systems?
-    - What's their technical level?
-
- 3. **What triggers usage of this API?**
-    - User action? Scheduled task? Event-driven?
-    - How frequently is it called?
-
- #### B. Real-World Usage Scenarios
- 4. **Walk me through a typical request:**
-    - What data does the user provide?
-    - What do they expect back?
-    - What happens with the response?
-
- 5. **What are the most common use cases?** (Ask for 3-5 examples)
-    - Common scenario 1: ___
-    - Common scenario 2: ___
-    - Edge scenario: ___
-
- 6. **Show me an example request/response you envision:**
-    - Request body/params
-    - Expected response
-    - Error cases
-
- #### C. Technical Requirements
- 7. **What parameters are absolutely REQUIRED?**
-    - Can the API work without them?
-    - What happens if they're missing?
-
- 8. **What parameters are OPTIONAL?**
-    - What defaults make sense?
-    - How do they modify behavior?
-
- 9. **What are valid value ranges/formats?**
-    - Type constraints (string, number, enum)?
-    - Length/size limits?
-    - Format requirements (email, URL, date)?
-
- 10. **Are there parameter dependencies?**
-    - If X is provided, must Y also be provided?
-    - Mutually exclusive options?
-
- #### D. Dependencies & Integration
- 11. **What external services does this use?**
-    - AI providers (OpenAI, Anthropic, Google)?
-    - Third-party APIs (Firecrawl, Brave Search)?
-    - Database (Supabase)?
-
- 12. **What API keys are required?**
-    - Where are they configured?
-    - Are there fallback options?
-    - Can users provide their own keys?
-
- 13. **What AI models/providers are involved?**
-    - Specific models (GPT-4, Claude Sonnet)?
-    - Why those models?
-    - Are there alternatives?
-
- 14. **Are there rate limits, quotas, or costs?**
-    - Per-request costs?
-    - Rate limiting needed?
-    - Cost tracking required?
-
- #### E. Error Handling & Edge Cases
- 15. **What can go wrong?**
-    - Invalid input?
-    - External service failures?
-    - Timeout scenarios?
-
- 16. **How should errors be communicated?**
-    - HTTP status codes?
-    - Error message format?
-    - User-facing vs. technical errors?
-
- 17. **What are boundary conditions?**
-    - Very large inputs?
-    - Empty/null values?
-    - Concurrent requests?
-
- 18. **What should be validated before processing?**
-    - Input validation rules?
-    - Authentication/authorization?
-    - Resource availability?
-
- #### F. Documentation & Resources
- 19. **Where is official documentation?**
-    - Link to external API docs
-    - SDK documentation
-    - Code examples
-
- 20. **Are there similar endpoints for reference?**
-    - In this codebase?
-    - In other projects?
-    - Industry examples?
+
+ **Question 1: Primary Purpose**
+ ```
+ AskUserQuestion(
+   question="What is the primary purpose of this endpoint?",
+   options=[
+     {"value": "data_retrieval", "label": "Retrieve/query data"},
+     {"value": "data_transform", "label": "Transform/process data"},
+     {"value": "ai_generation", "label": "AI content generation"},
+     {"value": "ai_analysis", "label": "AI analysis/classification"},
+     {"value": "integration", "label": "Third-party integration"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ **Question 2: Primary Users**
+ ```
+ AskUserQuestion(
+   question="Who are the primary users of this endpoint?",
+   options=[
+     {"value": "frontend", "label": "Frontend developers (React, Vue, etc.)"},
+     {"value": "backend", "label": "Backend services (server-to-server)"},
+     {"value": "mobile", "label": "Mobile app developers"},
+     {"value": "enduser", "label": "End users directly (browser)"},
+     {"value": "automation", "label": "Automated systems/bots"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ **Question 3: Usage Trigger**
+ ```
+ AskUserQuestion(
+   question="What triggers a call to this endpoint?",
+   options=[
+     {"value": "user_action", "label": "User action (button click, form submit)"},
+     {"value": "page_load", "label": "Page/component load"},
+     {"value": "scheduled", "label": "Scheduled/cron job"},
+     {"value": "webhook", "label": "External webhook/event"},
+     {"value": "realtime", "label": "Real-time/streaming updates"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ #### B. Technical Requirements (Research-Based Options)
+
+ **Question 4: AI Provider** (options from research)
+ ```
+ AskUserQuestion(
+   question="Which AI provider(s) should this endpoint support?",
+   options=[
+     // These options come from your Context7/WebSearch research!
+     {"value": "openai", "label": "OpenAI (gpt-4o, gpt-4-turbo)"},
+     {"value": "anthropic", "label": "Anthropic (claude-sonnet-4-20250514, claude-opus-4-20250514)"},
+     {"value": "google", "label": "Google (gemini-pro, gemini-flash)"},
+     {"value": "groq", "label": "Groq (llama-3.1-70b, mixtral)"},
+     {"value": "multiple", "label": "Multiple providers (configurable)"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ **Question 5: Response Format** (options from research)
+ ```
+ AskUserQuestion(
+   question="What response format is needed?",
+   options=[
+     // Options based on what the SDK supports (from research)
+     {"value": "streaming", "label": "Streaming (real-time chunks)"},
+     {"value": "complete", "label": "Complete response (wait for full)"},
+     {"value": "structured", "label": "Structured/JSON mode"},
+     {"value": "tool_calls", "label": "Tool calling/function calls"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ **Question 6: Required Parameters**
+ ```
+ AskUserQuestion(
+   question="Which parameters are REQUIRED (cannot work without)?",
+   options=[
+     // Based on researched SDK parameters
+     {"value": "prompt_only", "label": "Just the prompt/message"},
+     {"value": "prompt_model", "label": "Prompt + model selection"},
+     {"value": "prompt_model_config", "label": "Prompt + model + configuration"},
+     {"value": "full_config", "label": "Full configuration required"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ **Question 7: Optional Parameters**
+ ```
+ AskUserQuestion(
+   question="Which optional parameters should be supported?",
+   options=[
+     // From research: discovered optional parameters
+     {"value": "temperature", "label": "temperature (creativity control)"},
+     {"value": "max_tokens", "label": "maxTokens (response length)"},
+     {"value": "system_prompt", "label": "system (system prompt)"},
+     {"value": "tools", "label": "tools (function calling)"},
+     {"value": "all_standard", "label": "All standard AI parameters"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ #### C. Dependencies & Integration
+
+ **Question 8: External Services**
+ ```
+ AskUserQuestion(
+   question="What external services does this endpoint need?",
+   options=[
+     {"value": "ai_only", "label": "AI provider only (OpenAI, Anthropic, etc.)"},
+     {"value": "ai_search", "label": "AI + Search (Brave, Perplexity)"},
+     {"value": "ai_scrape", "label": "AI + Web scraping (Firecrawl)"},
+     {"value": "ai_db", "label": "AI + Database (Supabase)"},
+     {"value": "multiple", "label": "Multiple external services"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ **Question 9: API Key Handling**
+ ```
+ AskUserQuestion(
+   question="How should API keys be handled?",
+   options=[
+     {"value": "server_only", "label": "Server environment variables only"},
+     {"value": "server_header", "label": "Server env + custom header override"},
+     {"value": "client_next", "label": "NEXT_PUBLIC_ client-side keys"},
+     {"value": "all_methods", "label": "All methods (env, header, client)"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ #### D. Error Handling & Edge Cases
+
+ **Question 10: Error Response Format**
+ ```
+ AskUserQuestion(
+   question="How should errors be returned?",
+   options=[
+     {"value": "simple", "label": "Simple: {error: string}"},
+     {"value": "detailed", "label": "Detailed: {error, code, details}"},
+     {"value": "ai_sdk", "label": "AI SDK standard format"},
+     {"value": "http_native", "label": "HTTP status codes + body"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
 
  ### Phase 3: Analysis (Documentation)
  After the interview, I will:
@@ -135,6 +238,7 @@ Creates: `/src/v2/docs/endpoints/[endpoint-name].md`
  **Date:** [current-date]
  **Interviewed by:** Claude Code
  **Status:** Interview Complete
+ **Research Sources:** [list of Context7/WebSearch sources]
 
  ## 1. Purpose & Context
  [Synthesized understanding of why this exists]
@@ -153,7 +257,14 @@ Creates: `/src/v2/docs/endpoints/[endpoint-name].md`
  - API Keys Required: [list]
  - AI Models: [list]
 
- ## 6. Real-World Scenarios
+ ## 6. Interview Responses
+ | Question | User Selection    | Notes |
+ |----------|-------------------|-------|
+ | Purpose  | [selected option] |       |
+ | Users    | [selected option] |       |
+ | ...      | ...               |       |
+
+ ## 7. Real-World Scenarios
  ### Scenario 1: [Common Use Case]
  **Request:**
  ```json
@@ -164,23 +275,23 @@ Creates: `/src/v2/docs/endpoints/[endpoint-name].md`
  {example}
  ```
 
- ## 7. Edge Cases & Error Handling
+ ## 8. Edge Cases & Error Handling
  [Identified edge cases and how to handle them]
 
- ## 8. Validation Rules
+ ## 9. Validation Rules
  [What must be validated]
 
- ## 9. Documentation Links
- - [Official docs]
- - [SDK docs]
- - [Related resources]
+ ## 10. Documentation Links
+ - [Official docs from research]
+ - [SDK docs from Context7]
+ - [Related resources from WebSearch]
 
- ## 10. Test Cases (To Implement)
+ ## 11. Test Cases (To Implement)
  - [ ] Test: [scenario from interview]
  - [ ] Test: [edge case from interview]
  - [ ] Test: [error handling from interview]
 
- ## 11. Open Questions
+ ## 12. Open Questions
  [Any ambiguities to resolve]
  ```
 
@@ -190,17 +301,30 @@ Creates: `/src/v2/docs/endpoints/[endpoint-name].md`
  /api-interview generate-css
  ```
 
- I will then ask all 20 questions, document responses, and create the endpoint documentation file ready for TDD implementation.
+ I will:
+ 1. First complete research (Context7 + WebSearch)
+ 2. Build structured questions with options from the research
+ 3. Ask all questions using AskUserQuestion with options
+ 4. Document responses and create the endpoint documentation
 
  <claude-commands-template>
- ## Interview Guidelines
-
- 1. **Ask ALL questions** - Don't skip steps even if obvious
- 2. **Request examples** - Concrete examples > abstract descriptions
- 3. **Clarify ambiguity** - If an answer is unclear, ask follow-ups
- 4. **Document links** - Capture ALL external documentation URLs
- 5. **Real scenarios** - Focus on actual usage, not hypotheticals
- 6. **Be thorough** - Better to over-document than under-document
+ ## Interview Guidelines (v1.8.0)
+
+ 1. **RESEARCH FIRST** - Complete Context7 + WebSearch before ANY questions
+ 2. **STRUCTURED OPTIONS** - Every question uses AskUserQuestion with options[]
+ 3. **OPTIONS FROM RESEARCH** - Multiple-choice options come from discovered capabilities
+ 4. **ALWAYS INCLUDE "Type something else..."** - Let the user provide custom input
+ 5. **ASK ONE AT A TIME** - Wait for a response before the next question
+ 6. **DOCUMENT EVERYTHING** - Capture ALL selections and custom inputs
+
+ ## Question Format Checklist
+
+ Before asking each question:
+ - [ ] Is research complete?
+ - [ ] Are the options based on research findings?
+ - [ ] Does it use AskUserQuestion with the options parameter?
+ - [ ] Is there a "Type something else..." option?
+ - [ ] Will this help build the schema?
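The checklist above is mechanical enough to sketch as code. This is an editorial illustration, not part of the package; `check_question_format` is a hypothetical helper, and the last checklist item (schema usefulness) is a judgment call left out of the sketch.

```python
# Sketch: validate a question payload against the v1.8.0 checklist.
# check_question_format is a hypothetical name, not a package API.

def check_question_format(question: str, options: list, research_complete: bool) -> list:
    """Return a list of checklist violations; an empty list means the question is OK."""
    issues = []
    if not research_complete:
        issues.append("research not complete")
    if not options:
        issues.append("no structured options provided")
    labels = [opt.get("label", "") for opt in options]
    if not any(label.startswith("Type something") for label in labels):
        issues.append('missing "Type something else..." option')
    if not question.strip():
        issues.append("empty question text")
    return issues
```

A hook could run a check like this before counting a question toward the structured-question minimum.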
 
  ## After Interview
 
@@ -6,10 +6,18 @@ Purpose: Block proceeding to schema/TDD if interview has no USER answers
  This hook ensures Claude actually asks the user questions and records
  their answers, rather than self-answering the interview.
 
+ v1.8.0 MAJOR UPDATE: Now requires STRUCTURED questions with multiple-choice
+ options derived from research-phase findings.
+
  It checks:
- 1. Interview status is "complete"
- 2. There are actual questions with answers
- 3. Answers don't look auto-generated (contain user-specific details)
+ 1. Research phase is complete (questions must be based on research)
+ 2. Interview status is "complete"
+ 3. Questions used the AskUserQuestion tool with STRUCTURED OPTIONS
+ 4. At least MIN_STRUCTURED_QUESTIONS have multiple-choice or typed options
+ 5. Answers don't look auto-generated (contain user-specific details)
+
+ The goal: questions like Claude Code shows - with numbered options and
+ "Type something" at the end, all based on research findings.
 
  Returns:
  - {"permissionDecision": "allow"} - Let the tool run
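The allow/deny contract described here can be reduced to a minimal shape (a sketch for orientation; the real hook's checks appear in the diff below, and `decide` is a hypothetical name):

```python
# Sketch: the decision object a permission hook prints to stdout.
# decide() is illustrative; the real hook builds richer "reason" messages.

def decide(interview_complete: bool, reason: str = "Interview phase not complete.") -> dict:
    """Return the JSON-serializable decision the hook emits."""
    if interview_complete:
        return {"permissionDecision": "allow"}
    return {"permissionDecision": "deny", "reason": reason}
```

Everything else in the hook is deciding *which* deny reason to emit and how to make it actionable.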
@@ -23,7 +31,10 @@ from pathlib import Path
  STATE_FILE = Path(__file__).parent.parent / "api-dev-state.json"
 
  # Minimum questions required for a valid interview
- MIN_QUESTIONS = 3
+ MIN_QUESTIONS = 5  # Increased - need a comprehensive interview
+
+ # Minimum questions that MUST have structured options (multiple-choice)
+ MIN_STRUCTURED_QUESTIONS = 3
 
  # Phrases that indicate self-answered (not real user input)
  SELF_ANSWER_INDICATORS = [
@@ -33,6 +44,12 @@ SELF_ANSWER_INDICATORS = [
      "typical use case",
      "standard implementation",
      "common pattern",
+     "i'll assume",
+     "assuming",
+     "probably",
+     "most likely",
+     "default to",
+     "usually",
  ]
 
 
@@ -81,40 +98,84 @@ Run /api-create [endpoint-name] to begin the interview-driven workflow."""
          sys.exit(0)
 
      phases = state.get("phases", {})
+     research = phases.get("research_initial", {})
      interview = phases.get("interview", {})
      interview_status = interview.get("status", "not_started")
      interview_desc = interview.get("description", "").lower()
      questions = interview.get("questions", [])
+     research_queries = state.get("research_queries", [])
+
+     # Check 0: Research must be complete FIRST (questions based on research)
+     research_status = research.get("status", "not_started")
+     if research_status != "complete":
+         sources_count = len(research.get("sources", []))
+         print(json.dumps({
+             "permissionDecision": "deny",
+             "reason": f"""❌ BLOCKED: Research phase must complete BEFORE interview.
+
+ Research status: {research_status}
+ Sources consulted: {sources_count}
+ Research queries: {len(research_queries)}
+
+ ═══════════════════════════════════════════════════════════
+ ⚠️ COMPLETE RESEARCH FIRST - THEN ASK QUESTIONS
+ ═══════════════════════════════════════════════════════════
+
+ The interview questions MUST be based on research findings:
+ 1. Use Context7 to get SDK/API documentation
+ 2. Use WebSearch (2-3 searches) for official docs
+ 3. THEN generate interview questions with STRUCTURED OPTIONS
+    based on what you discovered
+
+ Example: If research found 5 available models, ask:
+ "Which model should this endpoint use?"
+ 1. gpt-4o (fastest, cheapest)
+ 2. claude-sonnet-4-20250514 (best reasoning)
+ 3. gemini-pro (multimodal)
+ 4. Type something else...
+
+ Research INFORMS the options. No research = no good options."""
+         }))
+         sys.exit(0)
 
      # Check 1: Interview must be complete
      if interview_status != "complete":
+         # Build an example based on actual research
+         research_based_example = _build_research_based_example(research_queries)
+
          print(json.dumps({
              "permissionDecision": "deny",
              "reason": f"""❌ BLOCKED: Interview phase not complete.
 
 Current status: {interview_status}
 AskUserQuestion calls: {interview.get('user_question_count', 0)}
+ Structured questions: {interview.get('structured_question_count', 0)}
 
 ═══════════════════════════════════════════════════════════
- ⚠️ YOU MUST STOP AND ASK THE USER QUESTIONS NOW
+ ⚠️ USE STRUCTURED QUESTIONS WITH OPTIONS
 ═══════════════════════════════════════════════════════════
 
- Use the AskUserQuestion tool to ask EACH of these questions ONE AT A TIME:
+ Based on your research, ask questions using AskUserQuestion with
+ the 'options' parameter to provide multiple-choice selections:
 
- 1. "What is the primary purpose of this endpoint?"
- 2. "Who will use it and how?"
- 3. "What parameters are essential vs optional?"
+ {research_based_example}
 
- WAIT for the user's response after EACH question before continuing.
+ REQUIRED FORMAT for AskUserQuestion:
+ - question: "Your question text"
+ - options: [
+     {{"value": "option1", "label": "Option 1 description"}},
+     {{"value": "option2", "label": "Option 2 description"}},
+     {{"value": "custom", "label": "Type something..."}}
+   ]
 
- DO NOT:
- ❌ Make up answers yourself
- ❌ Assume what the user wants
- ❌ Mark the interview as complete without asking
- ❌ Try to write any code until you have real answers
+ You need at least {MIN_STRUCTURED_QUESTIONS} structured questions with options.
+ Current: {interview.get('structured_question_count', 0)}
 
- The system is tracking your AskUserQuestion calls. You need at least 3
- actual calls with user responses to proceed."""
+ DO NOT:
+ ❌ Ask open-ended questions without options
+ ❌ Make up options not based on research
+ ❌ Skip the AskUserQuestion tool
+ ❌ Self-answer questions"""
          }))
          sys.exit(0)
 
@@ -128,11 +189,11 @@ Questions recorded: {len(questions)}
 Minimum required: {MIN_QUESTIONS}
 
 You must ask the user more questions about their requirements.
- DO NOT proceed without understanding the user's actual needs."""
+ Use AskUserQuestion with structured options based on your research."""
          }))
          sys.exit(0)
 
-     # Check 2.5: Verify AskUserQuestion tool was actually used
+     # Check 3: Verify AskUserQuestion tool was actually used
      user_question_count = interview.get("user_question_count", 0)
      tool_used_count = sum(1 for q in questions if q.get("tool_used", False))
 
@@ -146,14 +207,43 @@ Minimum required: {MIN_QUESTIONS}
 
 You MUST use the AskUserQuestion tool to ask the user directly.
 Do NOT make up answers or mark the interview as complete without
- actually asking the user and receiving their responses.
+ actually asking the user and receiving their responses."""
+         }))
+         sys.exit(0)
 
- The system tracks when AskUserQuestion is used. Self-answering
- will be detected and blocked."""
+     # Check 4: Verify structured questions were used
+     structured_count = interview.get("structured_question_count", 0)
+     questions_with_options = sum(1 for q in questions if q.get("has_options", False))
+     actual_structured = max(structured_count, questions_with_options)
+
+     if actual_structured < MIN_STRUCTURED_QUESTIONS:
+         print(json.dumps({
+             "permissionDecision": "deny",
+             "reason": f"""❌ Not enough STRUCTURED questions with options.
+
+ Structured questions (with options): {actual_structured}
+ Minimum required: {MIN_STRUCTURED_QUESTIONS}
+
+ You MUST use AskUserQuestion with the 'options' parameter to
+ provide multiple-choice answers based on your research.
+
+ Example:
+ AskUserQuestion(
+   question="Which AI provider should this endpoint support?",
+   options=[
+     {{"value": "openai", "label": "OpenAI (GPT-4o)"}},
+     {{"value": "anthropic", "label": "Anthropic (Claude)"}},
+     {{"value": "google", "label": "Google (Gemini)"}},
+     {{"value": "all", "label": "All of the above"}},
+     {{"value": "custom", "label": "Type something else..."}}
+   ]
+ )
+
+ This gives the user clear choices based on what you researched."""
          }))
          sys.exit(0)
 
-     # Check 3: Look for self-answer indicators
+     # Check 5: Look for self-answer indicators
      for indicator in SELF_ANSWER_INDICATORS:
          if indicator in interview_desc:
              print(json.dumps({
@@ -162,15 +252,10 @@ will be detected and blocked."""
 
 Detected: "{indicator}" in interview description.
 
- You MUST actually ask the user questions using AskUserQuestion.
- Self-answering the interview defeats its purpose.
+ You MUST actually ask the user questions using AskUserQuestion
+ with structured options. Self-answering defeats the purpose.
 
- Reset the interview phase and ask the user directly:
- 1. What do you want this endpoint to do?
- 2. Which providers/models should it support?
- 3. What parameters matter most to you?
-
- Wait for their real answers before proceeding."""
+ Reset the interview and ask with options based on research."""
              }))
              sys.exit(0)
 
@@ -179,5 +264,41 @@ Wait for their real answers before proceeding."""
          sys.exit(0)
 
 
+ def _build_research_based_example(research_queries: list) -> str:
+     """Build an example question based on actual research queries."""
+     if not research_queries:
+         return """Example (generic - do research first!):
+ "What is the main use case for this endpoint?"
+ 1. Data retrieval
+ 2. Data transformation
+ 3. AI processing
+ 4. Type something..."""
+
+     # Extract terms from research to suggest relevant options
+     all_terms = []
+     for query in research_queries[-5:]:  # Last 5 queries
+         terms = query.get("terms", [])
+         all_terms.extend(terms)
+
+     # Deduplicate and keep the top terms
+     unique_terms = list(dict.fromkeys(all_terms))[:4]
+
+     if unique_terms:
+         options_example = "\n ".join([
+             f"{i+1}. {term.title()}" for i, term in enumerate(unique_terms)
+         ])
+         return f"""Example based on your research:
+ "Which of these should be the primary focus?"
+ {options_example}
+ {len(unique_terms)+1}. Type something else..."""
+
+     return """Example:
+ "What capability is most important?"
+ 1. Option based on research finding 1
+ 2. Option based on research finding 2
+ 3. Option based on research finding 3
+ 4. Type something..."""
+
+
  if __name__ == "__main__":
      main()
@@ -60,7 +60,8 @@ def main():
      interview = phases.setdefault("interview", {
          "status": "not_started",
          "questions": [],
-         "user_question_count": 0
+         "user_question_count": 0,
+         "structured_question_count": 0
      })
 
      # Track the question
@@ -68,10 +69,22 @@ def main():
      user_count = interview.get("user_question_count", 0) + 1
      interview["user_question_count"] = user_count
 
+     # Check if this question has structured options (multiple-choice)
+     options = tool_input.get("options", [])
+     has_options = len(options) > 0
+
+     # Track the structured-question count
+     if has_options:
+         structured_count = interview.get("structured_question_count", 0) + 1
+         interview["structured_question_count"] = structured_count
+
      question_entry = {
          "question": tool_input.get("question", ""),
          "timestamp": datetime.now().isoformat(),
-         "tool_used": True  # Proves AskUserQuestion was actually called
+         "tool_used": True,  # Proves AskUserQuestion was actually called
+         "has_options": has_options,
+         "options_count": len(options),
+         "options": [opt.get("label", opt.get("value", "")) for opt in options[:5]] if options else []
      }
      questions.append(question_entry)
 
@@ -82,6 +95,14 @@ def main():
 
      interview["last_activity"] = datetime.now().isoformat()
 
+     # Log the last structured question for visibility
+     if has_options:
+         interview["last_structured_question"] = {
+             "question": tool_input.get("question", "")[:100],
+             "options_count": len(options),
+             "timestamp": datetime.now().isoformat()
+         }
+
      # Save and exit
      STATE_FILE.write_text(json.dumps(state, indent=2))
      print(json.dumps({"continue": True}))
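To make the tracking behavior concrete, here is a sketch of the per-question entry the hook records (field names are taken from the diff above; the surrounding state handling is simplified, and `make_question_entry` is a hypothetical name):

```python
from datetime import datetime

# Sketch: the question entry the AskUserQuestion tracking hook appends to state.
def make_question_entry(tool_input: dict) -> dict:
    options = tool_input.get("options", [])
    return {
        "question": tool_input.get("question", ""),
        "timestamp": datetime.now().isoformat(),
        "tool_used": True,                # proves AskUserQuestion was called
        "has_options": len(options) > 0,  # marks it as a structured question
        "options_count": len(options),
        "options": [opt.get("label", opt.get("value", "")) for opt in options[:5]],
    }
```

The `has_options` flag is what the blocking hook later counts against `MIN_STRUCTURED_QUESTIONS`.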
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@hustle-together/api-dev-tools",
-   "version": "1.7.1",
+   "version": "1.8.0",
    "description": "Interview-driven API development workflow for Claude Code - Automates research, testing, and documentation",
    "main": "bin/cli.js",
    "bin": {