@hustle-together/api-dev-tools 1.7.0 → 1.8.0

@@ -1,120 +1,223 @@
-# API Interview - Structured API Discovery
+# API Interview - Research-Based Structured Discovery
 
 **Usage:** `/api-interview [endpoint-name]`
 
-**Purpose:** Conduct structured interview to understand API endpoint purpose, usage, and requirements before any implementation.
+**Purpose:** Conduct structured interview with MULTIPLE-CHOICE questions derived from research findings to understand API endpoint purpose, usage, and requirements before any implementation.
+
+## v1.8.0 REQUIREMENT: Structured Questions with Options
+
+**CRITICAL:** All interview questions MUST:
+1. Be based on COMPLETED research (Context7 + WebSearch)
+2. Use AskUserQuestion tool with the `options` parameter
+3. Provide multiple-choice selections derived from research findings
+4. Include a "Type something else..." option for custom input
+
+**Example of CORRECT structured question:**
+```
+AskUserQuestion(
+  question="Which AI provider should this endpoint support?",
+  options=[
+    {"value": "openai", "label": "OpenAI (GPT-4o, GPT-4-turbo)"},
+    {"value": "anthropic", "label": "Anthropic (Claude Sonnet, Opus)"},
+    {"value": "google", "label": "Google (Gemini Pro, Flash)"},
+    {"value": "all", "label": "All providers (multi-provider support)"},
+    {"value": "custom", "label": "Type something else..."}
+  ]
+)
+```
+
+This gives users clear choices from RESEARCHED capabilities, not guesses.
 
 ## Interview Methodology
 
 Based on Anthropic Interviewer approach with three phases:
 
+### Phase 0: PREREQUISITE - Research (MANDATORY)
+**You CANNOT start the interview until research is complete.**
+
+Before asking ANY questions:
+1. Use Context7 to get SDK/API documentation
+2. Use WebSearch (2-3 searches) for official docs
+3. Gather all available options, parameters, models, providers
+4. Build your question options FROM this research
+
+Example research flow:
+```
+1. mcp__context7__resolve-library-id("vercel ai sdk")
+2. mcp__context7__get-library-docs(libraryId)
+3. WebSearch("Vercel AI SDK providers models 2025")
+4. WebSearch("Vercel AI SDK streaming options parameters")
+```
+
+**Research informs options. No research = no good options = interview blocked.**
+
 ### Phase 1: Planning (Internal)
 - Review endpoint name and context
-- Identify key areas to explore
+- Review research findings (what providers, models, parameters exist?)
+- Build structured question options from research
 - Prepare follow-up questions
 
-### Phase 2: Interviewing (User Interaction)
-Ask structured questions to understand:
+### Phase 2: Interviewing (User Interaction with Structured Options)
+
+**EVERY question uses AskUserQuestion with options parameter.**
 
 #### A. Purpose & Context
-1. **What problem does this API endpoint solve?**
-   - What's the business/technical need?
-   - What happens without this endpoint?
-
-2. **Who are the primary users?**
-   - Frontend developers? End users? Other systems?
-   - What's their technical level?
-
-3. **What triggers usage of this API?**
-   - User action? Scheduled task? Event-driven?
-   - How frequently is it called?
-
-#### B. Real-World Usage Scenarios
-4. **Walk me through a typical request:**
-   - What data does the user provide?
-   - What do they expect back?
-   - What happens with the response?
-
-5. **What are the most common use cases?** (Ask for 3-5 examples)
-   - Common scenario 1: ___
-   - Common scenario 2: ___
-   - Edge scenario: ___
-
-6. **Show me an example request/response you envision:**
-   - Request body/params
-   - Expected response
-   - Error cases
-
-#### C. Technical Requirements
-7. **What parameters are absolutely REQUIRED?**
-   - Can the API work without them?
-   - What happens if they're missing?
-
-8. **What parameters are OPTIONAL?**
-   - What defaults make sense?
-   - How do they modify behavior?
-
-9. **What are valid value ranges/formats?**
-   - Type constraints (string, number, enum)?
-   - Length/size limits?
-   - Format requirements (email, URL, date)?
-
-10. **Are there parameter dependencies?**
-    - If X is provided, must Y also be provided?
-    - Mutually exclusive options?
-
-#### D. Dependencies & Integration
-11. **What external services does this use?**
-    - AI providers (OpenAI, Anthropic, Google)?
-    - Third-party APIs (Firecrawl, Brave Search)?
-    - Database (Supabase)?
-
-12. **What API keys are required?**
-    - Where are they configured?
-    - Are there fallback options?
-    - Can users provide their own keys?
-
-13. **What AI models/providers are involved?**
-    - Specific models (GPT-4, Claude Sonnet)?
-    - Why those models?
-    - Are there alternatives?
-
-14. **Are there rate limits, quotas, or costs?**
-    - Per-request costs?
-    - Rate limiting needed?
-    - Cost tracking required?
-
-#### E. Error Handling & Edge Cases
-15. **What can go wrong?**
-    - Invalid input?
-    - External service failures?
-    - Timeout scenarios?
-
-16. **How should errors be communicated?**
-    - HTTP status codes?
-    - Error message format?
-    - User-facing vs. technical errors?
-
-17. **What are boundary conditions?**
-    - Very large inputs?
-    - Empty/null values?
-    - Concurrent requests?
-
-18. **What should be validated before processing?**
-    - Input validation rules?
-    - Authentication/authorization?
-    - Resource availability?
-
-#### F. Documentation & Resources
-19. **Where is official documentation?**
-    - Link to external API docs
-    - SDK documentation
-    - Code examples
-
-20. **Are there similar endpoints for reference?**
-    - In this codebase?
-    - In other projects?
-    - Industry examples?
+
+**Question 1: Primary Purpose**
+```
+AskUserQuestion(
+  question="What is the primary purpose of this endpoint?",
+  options=[
+    {"value": "data_retrieval", "label": "Retrieve/query data"},
+    {"value": "data_transform", "label": "Transform/process data"},
+    {"value": "ai_generation", "label": "AI content generation"},
+    {"value": "ai_analysis", "label": "AI analysis/classification"},
+    {"value": "integration", "label": "Third-party integration"},
+    {"value": "custom", "label": "Type something else..."}
+  ]
+)
+```
+
+**Question 2: Primary Users**
+```
+AskUserQuestion(
+  question="Who are the primary users of this endpoint?",
+  options=[
+    {"value": "frontend", "label": "Frontend developers (React, Vue, etc.)"},
+    {"value": "backend", "label": "Backend services (server-to-server)"},
+    {"value": "mobile", "label": "Mobile app developers"},
+    {"value": "enduser", "label": "End users directly (browser)"},
+    {"value": "automation", "label": "Automated systems/bots"},
+    {"value": "custom", "label": "Type something else..."}
+  ]
+)
+```
+
+**Question 3: Usage Trigger**
+```
+AskUserQuestion(
+  question="What triggers a call to this endpoint?",
+  options=[
+    {"value": "user_action", "label": "User action (button click, form submit)"},
+    {"value": "page_load", "label": "Page/component load"},
+    {"value": "scheduled", "label": "Scheduled/cron job"},
+    {"value": "webhook", "label": "External webhook/event"},
+    {"value": "realtime", "label": "Real-time/streaming updates"},
+    {"value": "custom", "label": "Type something else..."}
+  ]
+)
+```
+
+#### B. Technical Requirements (Research-Based Options)
+
+**Question 4: AI Provider** (options from research)
+```
+AskUserQuestion(
+  question="Which AI provider(s) should this endpoint support?",
+  options=[
+    // These options come from your Context7/WebSearch research!
+    {"value": "openai", "label": "OpenAI (gpt-4o, gpt-4-turbo)"},
+    {"value": "anthropic", "label": "Anthropic (claude-sonnet-4-20250514, claude-opus-4-20250514)"},
+    {"value": "google", "label": "Google (gemini-pro, gemini-flash)"},
+    {"value": "groq", "label": "Groq (llama-3.1-70b, mixtral)"},
+    {"value": "multiple", "label": "Multiple providers (configurable)"},
+    {"value": "custom", "label": "Type something else..."}
+  ]
+)
+```
+
+**Question 5: Response Format** (options from research)
+```
+AskUserQuestion(
+  question="What response format is needed?",
+  options=[
+    // Options based on what the SDK supports (from research)
+    {"value": "streaming", "label": "Streaming (real-time chunks)"},
+    {"value": "complete", "label": "Complete response (wait for full)"},
+    {"value": "structured", "label": "Structured/JSON mode"},
+    {"value": "tool_calls", "label": "Tool calling/function calls"},
+    {"value": "custom", "label": "Type something else..."}
+  ]
+)
+```
+
+**Question 6: Required Parameters**
+```
+AskUserQuestion(
+  question="Which parameters are REQUIRED (cannot work without)?",
+  options=[
+    // Based on researched SDK parameters
+    {"value": "prompt_only", "label": "Just the prompt/message"},
+    {"value": "prompt_model", "label": "Prompt + model selection"},
+    {"value": "prompt_model_config", "label": "Prompt + model + configuration"},
+    {"value": "full_config", "label": "Full configuration required"},
+    {"value": "custom", "label": "Type something else..."}
+  ]
+)
+```
+
+**Question 7: Optional Parameters**
+```
+AskUserQuestion(
+  question="Which optional parameters should be supported?",
+  options=[
+    // From research: discovered optional parameters
+    {"value": "temperature", "label": "temperature (creativity control)"},
+    {"value": "max_tokens", "label": "maxTokens (response length)"},
+    {"value": "system_prompt", "label": "system (system prompt)"},
+    {"value": "tools", "label": "tools (function calling)"},
+    {"value": "all_standard", "label": "All standard AI parameters"},
+    {"value": "custom", "label": "Type something else..."}
+  ]
+)
+```
+
+#### C. Dependencies & Integration
+
+**Question 8: External Services**
+```
+AskUserQuestion(
+  question="What external services does this endpoint need?",
+  options=[
+    {"value": "ai_only", "label": "AI provider only (OpenAI, Anthropic, etc.)"},
+    {"value": "ai_search", "label": "AI + Search (Brave, Perplexity)"},
+    {"value": "ai_scrape", "label": "AI + Web scraping (Firecrawl)"},
+    {"value": "ai_db", "label": "AI + Database (Supabase)"},
+    {"value": "multiple", "label": "Multiple external services"},
+    {"value": "custom", "label": "Type something else..."}
+  ]
+)
+```
+
+**Question 9: API Key Handling**
+```
+AskUserQuestion(
+  question="How should API keys be handled?",
+  options=[
+    {"value": "server_only", "label": "Server environment variables only"},
+    {"value": "server_header", "label": "Server env + custom header override"},
+    {"value": "client_next", "label": "NEXT_PUBLIC_ client-side keys"},
+    {"value": "all_methods", "label": "All methods (env, header, client)"},
+    {"value": "custom", "label": "Type something else..."}
+  ]
+)
+```
+
+#### D. Error Handling & Edge Cases
+
+**Question 10: Error Response Format**
+```
+AskUserQuestion(
+  question="How should errors be returned?",
+  options=[
+    {"value": "simple", "label": "Simple: {error: string}"},
+    {"value": "detailed", "label": "Detailed: {error, code, details}"},
+    {"value": "ai_sdk", "label": "AI SDK standard format"},
+    {"value": "http_native", "label": "HTTP status codes + body"},
+    {"value": "custom", "label": "Type something else..."}
+  ]
+)
+```
 
 ### Phase 3: Analysis (Documentation)
 After interview, I will:
@@ -135,6 +238,7 @@ Creates: `/src/v2/docs/endpoints/[endpoint-name].md`
 **Date:** [current-date]
 **Interviewed by:** Claude Code
 **Status:** Interview Complete
+**Research Sources:** [list of Context7/WebSearch sources]
 
 ## 1. Purpose & Context
 [Synthesized understanding of why this exists]
@@ -153,7 +257,14 @@ Creates: `/src/v2/docs/endpoints/[endpoint-name].md`
 - API Keys Required: [list]
 - AI Models: [list]
 
-## 6. Real-World Scenarios
+## 6. Interview Responses
+| Question | User Selection | Notes |
+|----------|---------------|-------|
+| Purpose | [selected option] | |
+| Users | [selected option] | |
+| ... | ... | |
+
+## 7. Real-World Scenarios
 ### Scenario 1: [Common Use Case]
 **Request:**
 ```json
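The new "Interview Responses" table added to the template above is mechanical to generate from the collected selections. A minimal sketch of how that could be done (the `responses_table` helper and its data shape are illustrative assumptions, not part of the package):

```python
def responses_table(selections: dict) -> str:
    """Render interview selections as the markdown table used in section 6.

    `selections` maps a question label to a (selected_option, notes) pair.
    """
    lines = [
        "| Question | User Selection | Notes |",
        "|----------|---------------|-------|",
    ]
    for question, (selection, notes) in selections.items():
        lines.append(f"| {question} | {selection} | {notes} |")
    return "\n".join(lines)

# Build the section-6 table for two answered questions
table = responses_table({
    "Purpose": ("ai_generation", ""),
    "Users": ("frontend", "React app"),
})
```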
@@ -164,23 +275,23 @@ Creates: `/src/v2/docs/endpoints/[endpoint-name].md`
 {example}
 ```
 
-## 7. Edge Cases & Error Handling
+## 8. Edge Cases & Error Handling
 [Identified edge cases and how to handle them]
 
-## 8. Validation Rules
+## 9. Validation Rules
 [What must be validated]
 
-## 9. Documentation Links
-- [Official docs]
-- [SDK docs]
-- [Related resources]
+## 10. Documentation Links
+- [Official docs from research]
+- [SDK docs from Context7]
+- [Related resources from WebSearch]
 
-## 10. Test Cases (To Implement)
+## 11. Test Cases (To Implement)
 - [ ] Test: [scenario from interview]
 - [ ] Test: [edge case from interview]
 - [ ] Test: [error handling from interview]
 
-## 11. Open Questions
+## 12. Open Questions
 [Any ambiguities to resolve]
 ```
 
@@ -190,17 +301,30 @@ Creates: `/src/v2/docs/endpoints/[endpoint-name].md`
 /api-interview generate-css
 ```
 
-I will then ask all 20 questions, document responses, and create the endpoint documentation file ready for TDD implementation.
+I will:
+1. First complete research (Context7 + WebSearch)
+2. Build structured questions with options from research
+3. Ask all questions using AskUserQuestion with options
+4. Document responses and create the endpoint documentation
 
 <claude-commands-template>
-## Interview Guidelines
-
-1. **Ask ALL questions** - Don't skip steps even if obvious
-2. **Request examples** - Concrete examples > abstract descriptions
-3. **Clarify ambiguity** - If answer is unclear, ask follow-ups
-4. **Document links** - Capture ALL external documentation URLs
-5. **Real scenarios** - Focus on actual usage, not hypothetical
-6. **Be thorough** - Better to over-document than under-document
+## Interview Guidelines (v1.8.0)
+
+1. **RESEARCH FIRST** - Complete Context7 + WebSearch before ANY questions
+2. **STRUCTURED OPTIONS** - Every question uses AskUserQuestion with options[]
+3. **OPTIONS FROM RESEARCH** - Multiple-choice options come from discovered capabilities
+4. **ALWAYS INCLUDE "Type something else..."** - Let user provide custom input
+5. **ASK ONE AT A TIME** - Wait for response before next question
+6. **DOCUMENT EVERYTHING** - Capture ALL selections and custom inputs
+
+## Question Format Checklist
+
+Before asking each question:
+- [ ] Is research complete?
+- [ ] Are options based on research findings?
+- [ ] Does it use AskUserQuestion with options parameter?
+- [ ] Is there a "Type something else..." option?
+- [ ] Will this help build the schema?
 
 ## After Interview
 
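The "ALWAYS INCLUDE" guideline above is easy to enforce programmatically when assembling options from research findings. A minimal sketch under stated assumptions (the `build_options` helper is hypothetical; only the option shape `{"value", "label"}` comes from the command file itself):

```python
def build_options(researched: list) -> list:
    """Append the mandatory 'Type something else...' escape hatch to
    options discovered during research, without duplicating it."""
    options = list(researched)
    if not any(o.get("value") == "custom" for o in options):
        options.append({"value": "custom", "label": "Type something else..."})
    return options

# Options discovered during research; the custom escape hatch is appended
opts = build_options([
    {"value": "openai", "label": "OpenAI (GPT-4o)"},
    {"value": "anthropic", "label": "Anthropic (Claude Sonnet)"},
])
```

Calling it again on its own output leaves the list unchanged, so the guard can run at the last moment before the question is asked.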
@@ -1,13 +1,20 @@
 #!/usr/bin/env python3
 """
 Hook: UserPromptSubmit
-Purpose: Enforce research before answering external API/SDK questions
+Purpose: ALWAYS enforce research before answering technical questions
 
-This hook runs BEFORE Claude processes the user's prompt. It detects
-questions about external APIs, SDKs, or services and injects context
-requiring Claude to research first before answering.
+This hook runs BEFORE Claude processes the user's prompt. It aggressively
+detects ANY technical question and requires comprehensive research using
+BOTH Context7 AND multiple WebSearches before answering.
 
-Philosophy: "When in doubt, research. Training data is ALWAYS potentially outdated."
+Philosophy: "ALWAYS research. Training data is NEVER trustworthy for technical info."
+
+The hook triggers on:
+- ANY mention of APIs, SDKs, libraries, packages, frameworks
+- ANY technical "how to" or capability questions
+- ANY code-related questions (functions, methods, parameters, types)
+- ANY questions about tools, services, or platforms
+- ANY request for implementation, editing, or changes
 
 Returns:
 - Prints context to stdout (injected into conversation)
@@ -23,134 +30,172 @@ from datetime import datetime
 STATE_FILE = Path(__file__).parent.parent / "api-dev-state.json"
 
 # ============================================================================
-# PATTERN-BASED DETECTION
+# AGGRESSIVE DETECTION PATTERNS
 # ============================================================================
 
-# Patterns that indicate external service/API mentions
-EXTERNAL_SERVICE_PATTERNS = [
-    # Package names
-    r"@[\w-]+/[\w-]+",  # @scope/package
-    r"\b[\w-]+-(?:sdk|api|js|ts|py)\b",  # something-sdk, something-api, something-js
-
-    # API/SDK keywords
-    r"\b(?:api|sdk|library|package|module|framework)\b",
-
-    # Technical implementation terms
-    r"\b(?:endpoint|route|webhook|oauth|auth|token)\b",
-
-    # Version references
-    r"\bv?\d+\.\d+(?:\.\d+)?\b",  # version numbers like v1.2.3, 2.0
-
-    # Import/require patterns
-    r"(?:import|require|from)\s+['\"][\w@/-]+['\"]",
+# Technical terms that ALWAYS trigger research
+TECHNICAL_TERMS = [
+    # Code/Development
+    r"\b(?:function|method|class|interface|type|schema|model)\b",
+    r"\b(?:parameter|argument|option|config|setting|property)\b",
+    r"\b(?:import|export|require|module|package|library|dependency)\b",
+    r"\b(?:api|sdk|framework|runtime|engine|platform)\b",
+    r"\b(?:endpoint|route|url|path|request|response|header)\b",
+    r"\b(?:database|query|table|collection|document|record)\b",
+    r"\b(?:authentication|authorization|token|key|secret|credential)\b",
+    r"\b(?:error|exception|bug|issue|problem|fix)\b",
+    r"\b(?:test|spec|coverage|mock|stub|fixture)\b",
+    r"\b(?:deploy|build|compile|bundle|publish|release)\b",
+    r"\b(?:install|setup|configure|initialize|migrate)\b",
+    r"\b(?:provider|service|client|server|handler|middleware)\b",
+    r"\b(?:stream|async|await|promise|callback|event)\b",
+    r"\b(?:component|widget|element|view|layout|template)\b",
+    r"\b(?:state|store|reducer|action|context|hook)\b",
+    r"\b(?:validate|parse|serialize|transform|convert)\b",
+
+    # Package patterns
+    r"@[\w-]+/[\w-]+",  # @scope/package
+    r"\b[\w-]+-(?:sdk|api|js|ts|py|go|rs)\b",  # something-sdk, something-api
+
+    # Version patterns
+    r"\bv?\d+\.\d+(?:\.\d+)?(?:-[\w.]+)?\b",  # v1.2.3, 2.0.0-beta
+
+    # File patterns
+    r"\b[\w-]+\.(?:ts|js|tsx|jsx|py|go|rs|json|yaml|yml|toml|env)\b",
 ]
 
-# Patterns that indicate asking about features/capabilities
-CAPABILITY_QUESTION_PATTERNS = [
-    # "What does X support/have/do"
-    r"what\s+(?:does|can|are|is)\s+\w+",
-    r"what\s+\w+\s+(?:support|have|provide|offer)",
-
-    # "Does X support/have"
-    r"(?:does|can|will)\s+\w+\s+(?:support|have|handle|do|work)",
-
-    # "How to/do" questions
-    r"how\s+(?:to|do|does|can|should)\s+",
-
-    # Lists and availability
-    r"(?:list|show)\s+(?:of|all|available)",
-    r"which\s+\w+\s+(?:are|is)\s+(?:available|supported)",
-    r"all\s+(?:available|supported)\s+\w+",
-
-    # Examples and implementation
-    r"example\s+(?:of|for|using|with)",
-    r"how\s+to\s+(?:use|implement|integrate|connect|setup|configure)",
+# Question patterns that indicate asking about functionality
+QUESTION_PATTERNS = [
+    # Direct questions
+    r"\b(?:what|which|where|when|why|how)\b",
+    r"\b(?:can|could|would|should|will|does|do|is|are)\b.*\?",
+
+    # Requests
+    r"\b(?:show|tell|explain|describe|list|find|get|give)\b",
+    r"\b(?:help|need|want|looking for|trying to)\b",
+
+    # Actions
+    r"\b(?:create|make|build|add|implement|write|generate)\b",
+    r"\b(?:update|change|modify|edit|fix|refactor|improve)\b",
+    r"\b(?:delete|remove|drop|clear|reset)\b",
+    r"\b(?:connect|integrate|link|sync|merge)\b",
+    r"\b(?:debug|trace|log|monitor|track)\b",
+
+    # Comparisons
+    r"\b(?:difference|compare|versus|vs|between|or)\b",
+    r"\b(?:better|best|recommended|preferred|alternative)\b",
 ]
 
-# Common external service/company names (partial list - patterns catch the rest)
-KNOWN_SERVICES = [
-    # AI/ML
-    "openai", "anthropic", "google", "gemini", "gpt", "claude", "llama",
-    "groq", "perplexity", "mistral", "cohere", "huggingface", "replicate",
-
-    # Cloud/Infrastructure
-    "aws", "azure", "gcp", "vercel", "netlify", "cloudflare", "supabase",
-    "firebase", "mongodb", "postgres", "redis", "elasticsearch",
-
-    # APIs/Services
-    "stripe", "twilio", "sendgrid", "mailchimp", "slack", "discord",
-    "github", "gitlab", "bitbucket", "jira", "notion", "airtable",
-    "shopify", "salesforce", "hubspot", "zendesk",
-
-    # Data/Analytics
-    "segment", "mixpanel", "amplitude", "datadog", "sentry", "grafana",
-
-    # Media/Content
-    "cloudinary", "imgix", "mux", "brandfetch", "unsplash", "pexels",
-
-    # Auth
-    "auth0", "okta", "clerk", "nextauth", "passport",
+# Phrases that ALWAYS require research (no exceptions)
+ALWAYS_RESEARCH_PHRASES = [
+    r"how (?:to|do|does|can|should|would)",
+    r"what (?:is|are|does|can|should)",
+    r"(?:does|can|will|should) .+ (?:support|have|handle|work|do)",
+    r"(?:list|show|get|find) (?:all|available|supported)",
+    r"example (?:of|for|using|with|code)",
+    r"(?:implement|add|create|build|write|generate) .+",
+    r"(?:update|change|modify|edit|fix) .+",
+    r"(?:configure|setup|install|deploy) .+",
+    r"(?:error|issue|problem|bug|not working)",
+    r"(?:api|sdk|library|package|module|framework)",
+    r"(?:documentation|docs|reference|guide)",
+]
 
-    # Payments
-    "paypal", "square", "braintree", "adyen",
+# Exclusion patterns - things that DON'T need research
+EXCLUDE_PATTERNS = [
+    r"^(?:hi|hello|hey|thanks|thank you|ok|okay|yes|no|sure)[\s!?.]*$",
+    r"^(?:good morning|good afternoon|good evening|goodbye|bye)[\s!?.]*$",
+    r"^(?:please|sorry|excuse me)[\s!?.]*$",
+    r"^(?:\d+[\s+\-*/]\d+|calculate|math).*$",  # Simple math
 ]
 
 # ============================================================================
 # DETECTION LOGIC
 # ============================================================================
 
-def detect_external_api_question(prompt: str) -> dict:
+def is_excluded(prompt: str) -> bool:
+    """Check if prompt is a simple greeting or non-technical."""
+    prompt_clean = prompt.strip().lower()
+
+    # Very short prompts that are just greetings
+    if len(prompt_clean) < 20:
+        for pattern in EXCLUDE_PATTERNS:
+            if re.match(pattern, prompt_clean, re.IGNORECASE):
+                return True
+
+    return False
+
+
+def detect_technical_question(prompt: str) -> dict:
     """
-    Detect if the prompt is asking about external APIs/SDKs.
+    Aggressively detect if the prompt is technical and requires research.
 
     Returns:
         {
            "detected": bool,
            "terms": list of detected terms,
-           "patterns_matched": list of pattern types matched,
-           "confidence": "high" | "medium" | "low"
+           "patterns_matched": list of pattern types,
+           "confidence": "critical" | "high" | "medium" | "low" | "none"
        }
    """
+    if is_excluded(prompt):
+        return {
+            "detected": False,
+            "terms": [],
+            "patterns_matched": [],
+            "confidence": "none",
+        }
+
    prompt_lower = prompt.lower()
    detected_terms = []
    patterns_matched = []
 
-    # Check for known services
-    for service in KNOWN_SERVICES:
-        if service in prompt_lower:
-            detected_terms.append(service)
-            patterns_matched.append("known_service")
-
-    # Check external service patterns
-    for pattern in EXTERNAL_SERVICE_PATTERNS:
+    # Check for ALWAYS_RESEARCH_PHRASES first (highest priority)
+    for pattern in ALWAYS_RESEARCH_PHRASES:
+        if re.search(pattern, prompt_lower, re.IGNORECASE):
+            patterns_matched.append("always_research")
+            # Extract the matched phrase
+            match = re.search(pattern, prompt_lower, re.IGNORECASE)
+            if match:
+                detected_terms.append(match.group(0)[:50])
+
+    # Check technical terms
+    for pattern in TECHNICAL_TERMS:
        matches = re.findall(pattern, prompt_lower, re.IGNORECASE)
        if matches:
-            detected_terms.extend(matches)
-            patterns_matched.append("external_service_pattern")
+            detected_terms.extend(matches[:3])  # Limit per pattern
+            patterns_matched.append("technical_term")
 
-    # Check capability question patterns
-    for pattern in CAPABILITY_QUESTION_PATTERNS:
+    # Check question patterns
+    for pattern in QUESTION_PATTERNS:
        if re.search(pattern, prompt_lower, re.IGNORECASE):
-            patterns_matched.append("capability_question")
+            patterns_matched.append("question_pattern")
            break
 
    # Deduplicate
-    detected_terms = list(set(detected_terms))
+    detected_terms = list(dict.fromkeys(detected_terms))[:10]
    patterns_matched = list(set(patterns_matched))
 
-    # Determine confidence
-    if "known_service" in patterns_matched and "capability_question" in patterns_matched:
+    # Determine confidence - MUCH more aggressive
+    if "always_research" in patterns_matched:
+        confidence = "critical"
+    elif "technical_term" in patterns_matched and "question_pattern" in patterns_matched:
        confidence = "high"
-    elif "known_service" in patterns_matched or len(detected_terms) >= 2:
-        confidence = "medium"
-    elif patterns_matched:
-        confidence = "low"
+    elif "technical_term" in patterns_matched:
+        confidence = "high"  # Technical terms alone = high
+    elif "question_pattern" in patterns_matched and len(prompt) > 30:
+        confidence = "medium"  # Questions longer than 30 chars
+    elif len(prompt) > 50:
+        confidence = "low"  # Longer prompts default to low (still triggers)
    else:
        confidence = "none"
 
+    # AGGRESSIVE: Trigger on anything except "none"
+    detected = confidence != "none"
+
    return {
-        "detected": confidence in ["high", "medium"],
-        "terms": detected_terms[:10],  # Limit to 10 terms
+        "detected": detected,
+        "terms": detected_terms,
        "patterns_matched": patterns_matched,
        "confidence": confidence,
    }
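The hunk above replaces the old high/medium/low ladder with a five-level one. The interaction of the new rules is easier to see in isolation; the sketch below condenses them with deliberately trimmed pattern lists (the `classify` helper and its reduced regexes are illustrative, not the package's code):

```python
import re

# Heavily trimmed stand-ins for the package's pattern lists
TECHNICAL = re.compile(r"\b(?:api|sdk|endpoint|function|parameter)\b", re.I)
QUESTION = re.compile(r"\b(?:what|which|how|can|should)\b", re.I)
ALWAYS = re.compile(r"how (?:to|do|does|can|should|would)", re.I)
GREETING = re.compile(r"^(?:hi|hello|hey|thanks|ok)[\s!?.]*$", re.I)

def classify(prompt: str) -> str:
    """Condensed version of the v1.8.0 confidence ladder."""
    p = prompt.strip().lower()
    if len(p) < 20 and GREETING.match(p):
        return "none"        # excluded: short greeting
    if ALWAYS.search(p):
        return "critical"    # always-research phrase
    if TECHNICAL.search(p):
        return "high"        # technical terms alone = high
    if QUESTION.search(p) and len(p) > 30:
        return "medium"      # question words in a longer prompt
    if len(p) > 50:
        return "low"         # long prompts still trigger
    return "none"
```

Everything except `"none"` sets `detected = True`, which is what makes the v1.8.0 hook so much more aggressive than the 1.7.0 known-services approach.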
@@ -165,11 +210,11 @@ def check_active_workflow() -> bool:
        state = json.loads(STATE_FILE.read_text())
        phases = state.get("phases", {})
 
-        # Check if any phase is in progress
        for phase_key, phase_data in phases.items():
            if isinstance(phase_data, dict):
                status = phase_data.get("status", "")
-                if status in ["in_progress", "pending"]:
+                if status in ["in_progress", "pending", "complete"]:
+                    # If ANY phase has been touched, we're in a workflow
                    return True
 
        return False
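The broadened semantics in the hunk above (a phase that is pending, running, or already complete all count as "in a workflow") can be checked against a synthetic state dict. A standalone sketch (the `workflow_active` name is mine; the `phases`/`status` shape comes from the hook):

```python
def workflow_active(state: dict) -> bool:
    """v1.8.0 semantics of check_active_workflow: any phase that has
    ever been touched means the workflow is active."""
    for phase in state.get("phases", {}).values():
        if isinstance(phase, dict):
            if phase.get("status", "") in ["in_progress", "pending", "complete"]:
                return True
    return False
```

Under 1.7.0 a `"complete"` phase would have returned False here, letting enforcement lapse once research finished; 1.8.0 keeps it on for the rest of the workflow.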
@@ -177,50 +222,13 @@ def check_active_workflow() -> bool:
    return False
 
 
-def check_already_researched(terms: list) -> list:
-    """Check which terms have already been researched."""
-    if not STATE_FILE.exists():
-        return []
-
-    try:
-        state = json.loads(STATE_FILE.read_text())
-        research_queries = state.get("research_queries", [])
-
-        # Also check sources in phases
-        phases = state.get("phases", {})
-        all_sources = []
-        for phase_data in phases.values():
-            if isinstance(phase_data, dict):
-                sources = phase_data.get("sources", [])
-                all_sources.extend(sources)
-
-        # Combine all research text
-        all_research_text = " ".join(str(s) for s in all_sources)
-        all_research_text += " ".join(
-            str(q.get("query", "")) + " " + str(q.get("term", ""))
-            for q in research_queries
-            if isinstance(q, dict)
-        )
-        all_research_text = all_research_text.lower()
-
-        # Find which terms were already researched
-        already_researched = []
-        for term in terms:
-            if term.lower() in all_research_text:
-                already_researched.append(term)
-
-        return already_researched
-    except (json.JSONDecodeError, Exception):
-        return []
-
-
-def log_detection(prompt: str, detection: dict) -> None:
+def log_detection(prompt: str, detection: dict, injected: bool) -> None:
    """Log this detection for debugging/auditing."""
-    if not STATE_FILE.exists():
-        return
-
    try:
-        state = json.loads(STATE_FILE.read_text())
+        if STATE_FILE.exists():
+            state = json.loads(STATE_FILE.read_text())
+        else:
+            state = {"prompt_detections": []}
 
        if "prompt_detections" not in state:
            state["prompt_detections"] = []
@@ -229,11 +237,13 @@ def log_detection(prompt: str, detection: dict) -> None:
            "timestamp": datetime.now().isoformat(),
            "prompt_preview": prompt[:100] + "..." if len(prompt) > 100 else prompt,
            "detection": detection,
+            "injected": injected,
        })
 
-        # Keep only last 20 detections
-        state["prompt_detections"] = state["prompt_detections"][-20:]
+        # Keep only last 50 detections
+        state["prompt_detections"] = state["prompt_detections"][-50:]
 
+        STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
        STATE_FILE.write_text(json.dumps(state, indent=2))
    except Exception:
        pass  # Don't fail the hook on logging errors
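The logging changes in the hunk above amount to: create the state file on demand, record whether the context was injected, and cap the log at the 50 most recent entries. A self-contained sketch of that behavior against a throwaway state file (the `append_detection` helper is mine, not the hook's API):

```python
import json
import tempfile
from pathlib import Path

def append_detection(state_file: Path, entry: dict, cap: int = 50) -> None:
    """Append a detection record, creating state on first use and
    trimming the log to the most recent `cap` entries."""
    state = json.loads(state_file.read_text()) if state_file.exists() else {}
    log = state.setdefault("prompt_detections", [])
    log.append(entry)
    state["prompt_detections"] = log[-cap:]
    state_file.parent.mkdir(parents=True, exist_ok=True)
    state_file.write_text(json.dumps(state, indent=2))

# Demo: 55 appends leave only the last 50 records on disk
with tempfile.TemporaryDirectory() as tmp:
    sf = Path(tmp) / "api-dev-state.json"
    for i in range(55):
        append_detection(sf, {"n": i})
    kept = json.loads(sf.read_text())["prompt_detections"]
```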
@@ -248,69 +258,69 @@ def main():
    try:
        input_data = json.load(sys.stdin)
    except json.JSONDecodeError:
-        # If we can't parse input, allow without injection
        sys.exit(0)
 
    prompt = input_data.get("prompt", "")
 
-    if not prompt:
+    if not prompt or len(prompt.strip()) < 5:
        sys.exit(0)
 
-    # Check if in active workflow mode (stricter enforcement)
+    # Check if in active workflow mode
    active_workflow = check_active_workflow()
 
-    # Detect external API questions
-    detection = detect_external_api_question(prompt)
-
-    # Log for debugging
-    if detection["detected"] or active_workflow:
-        log_detection(prompt, detection)
+    # Detect technical questions
+    detection = detect_technical_question(prompt)
 
-    # Determine if we should inject research requirement
-    should_inject = False
-    inject_reason = ""
+    # In active workflow, ALWAYS inject (even for low confidence)
+    if active_workflow and detection["confidence"] != "none":
+        detection["detected"] = True
 
-    if active_workflow:
-        # In active workflow, ALWAYS inject for technical questions
-        if detection["confidence"] in ["high", "medium", "low"]:
-            should_inject = True
-            inject_reason = "active_workflow"
-    elif detection["detected"]:
-        # Check if already researched
-        already_researched = check_already_researched(detection["terms"])
-        unresearched_terms = [t for t in detection["terms"] if t not in already_researched]
+    # Log all detections
+    log_detection(prompt, detection, detection["detected"])
 
-        if unresearched_terms:
-            should_inject = True
-            inject_reason = "unresearched_terms"
-            detection["unresearched"] = unresearched_terms
-
-    # Inject context if needed
-    if should_inject:
-        terms_str = ", ".join(detection.get("unresearched", detection["terms"])[:5])
+    # Inject context if detected
+    if detection["detected"]:
+        terms_str = ", ".join(detection["terms"][:5]) if detection["terms"] else "technical question"
+        confidence = detection["confidence"]
 
+        # Build the injection message
        injection = f"""
 <user-prompt-submit-hook>
-EXTERNAL API/SDK DETECTED: {terms_str}
-Confidence: {detection["confidence"]}
-{"Mode: Active API Development Workflow" if active_workflow else ""}
-
-MANDATORY RESEARCH REQUIREMENT:
-Before answering this question, you MUST:
-
-1. Use Context7 (mcp__context7__resolve-library-id + get-library-docs) to look up current documentation
302
- 2. Use WebSearch to find official documentation and recent updates
303
- 3. NEVER answer from training data alone - it may be outdated
304
-
305
- Training data can be months or years old. APIs change constantly.
306
- Research first. Then answer with verified, current information.
307
-
308
- After researching, cite your sources in your response.
289
+ RESEARCH REQUIRED - {confidence.upper()} CONFIDENCE
290
+ Detected: {terms_str}
291
+ {"MODE: Active API Development Workflow - STRICT ENFORCEMENT" if active_workflow else ""}
292
+
293
+ MANDATORY BEFORE ANSWERING:
294
+
295
+ 1. USE CONTEXT7 FIRST:
296
+ - Call mcp__context7__resolve-library-id to find the library
297
+ - Call mcp__context7__get-library-docs to get CURRENT documentation
298
+ - This gives you the ACTUAL source of truth
299
+
300
+ 2. USE WEBSEARCH (2-3 SEARCHES MINIMUM):
301
+ - Search for official documentation
302
+ - Search with different phrasings to get comprehensive coverage
303
+ - Search for recent updates, changes, or known issues
304
+ - Example searches:
305
+ * "[topic] official documentation"
306
+ * "[topic] API reference guide"
307
+ * "[topic] latest updates 2024 2025"
308
+
309
+ 3. NEVER TRUST TRAINING DATA:
310
+ - Training data can be months or years outdated
311
+ - APIs change constantly
312
+ - Features get added, deprecated, or modified
313
+ - Parameter names and types change
314
+
315
+ 4. CITE YOUR SOURCES:
316
+ - After researching, mention where the information came from
317
+ - Include links when available
318
+
319
+ RESEARCH FIRST. ANSWER SECOND.
309
320
  </user-prompt-submit-hook>
310
321
  """
311
322
  print(injection)
312
323
 
313
- # Always allow the prompt to proceed
314
324
  sys.exit(0)
315
325
 
316
326
 
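The hunk above ends the rewritten UserPromptSubmit hook. Its contract: read a JSON payload from stdin, print any context to inject into the conversation, and always exit 0 so the prompt is annotated but never blocked. A minimal sketch of that shape (the three-term keyword detector here is a stand-in for illustration, not the package's actual `detect_technical_question` logic):

```python
import json


def build_injection(payload: dict) -> str:
    """Return hook output to inject for this prompt ("" means inject nothing)."""
    prompt = payload.get("prompt", "")
    if not prompt or len(prompt.strip()) < 5:  # Same short-prompt guard as the hook
        return ""
    # Stand-in detector: the real hook's term list and confidence scoring are richer.
    terms = [t for t in ("sdk", "api", "endpoint") if t in prompt.lower()]
    if not terms:
        return ""
    return (
        "<user-prompt-submit-hook>\n"
        f"RESEARCH REQUIRED\nDetected: {', '.join(terms)}\n"
        "</user-prompt-submit-hook>"
    )


# The real hook's main() would json.load(sys.stdin), print the injection,
# and always sys.exit(0) -- the prompt proceeds either way.
print(build_injection({"prompt": "How do I call the OpenAI API?"}))
```

Anything printed to stdout lands in the model's context ahead of the user's prompt, which is why the injection is wrapped in a `<user-prompt-submit-hook>` marker.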
@@ -6,10 +6,18 @@ Purpose: Block proceeding to schema/TDD if interview has no USER answers
 This hook ensures Claude actually asks the user questions and records
 their answers, rather than self-answering the interview.
 
+v1.8.0 MAJOR UPDATE: Now requires STRUCTURED questions with multiple-choice
+options derived from research phase findings.
+
 It checks:
-1. Interview status is "complete"
-2. There are actual questions with answers
-3. Answers don't look auto-generated (contain user-specific details)
+1. Research phase is complete (questions must be based on research)
+2. Interview status is "complete"
+3. Questions used AskUserQuestion tool with STRUCTURED OPTIONS
+4. At least MIN_STRUCTURED_QUESTIONS have multiple-choice or typed options
+5. Answers don't look auto-generated (contain user-specific details)
+
+The goal: Questions like Claude Code shows - with numbered options and
+"Type something" at the end, all based on research findings.
 
 Returns:
 - {"permissionDecision": "allow"} - Let the tool run
@@ -23,7 +31,10 @@ from pathlib import Path
 STATE_FILE = Path(__file__).parent.parent / "api-dev-state.json"
 
 # Minimum questions required for a valid interview
-MIN_QUESTIONS = 3
+MIN_QUESTIONS = 5  # Increased - need comprehensive interview
+
+# Minimum questions that MUST have structured options (multiple-choice)
+MIN_STRUCTURED_QUESTIONS = 3
 
 # Phrases that indicate self-answered (not real user input)
 SELF_ANSWER_INDICATORS = [
@@ -33,6 +44,12 @@ SELF_ANSWER_INDICATORS = [
     "typical use case",
     "standard implementation",
     "common pattern",
+    "i'll assume",
+    "assuming",
+    "probably",
+    "most likely",
+    "default to",
+    "usually",
 ]
 
 
@@ -81,40 +98,84 @@ Run /api-create [endpoint-name] to begin the interview-driven workflow."""
         sys.exit(0)
 
     phases = state.get("phases", {})
+    research = phases.get("research_initial", {})
     interview = phases.get("interview", {})
     interview_status = interview.get("status", "not_started")
     interview_desc = interview.get("description", "").lower()
     questions = interview.get("questions", [])
+    research_queries = state.get("research_queries", [])
+
+    # Check 0: Research must be complete FIRST (questions based on research)
+    research_status = research.get("status", "not_started")
+    if research_status != "complete":
+        sources_count = len(research.get("sources", []))
+        print(json.dumps({
+            "permissionDecision": "deny",
+            "reason": f"""❌ BLOCKED: Research phase must complete BEFORE interview.
+
+Research status: {research_status}
+Sources consulted: {sources_count}
+Research queries: {len(research_queries)}
+
+═══════════════════════════════════════════════════════════
+⚠️ COMPLETE RESEARCH FIRST - THEN ASK QUESTIONS
+═══════════════════════════════════════════════════════════
+
+The interview questions MUST be based on research findings:
+1. Use Context7 to get SDK/API documentation
+2. Use WebSearch (2-3 searches) for official docs
+3. THEN generate interview questions with STRUCTURED OPTIONS
+   based on what you discovered
+
+Example: If research found 5 available models, ask:
+  "Which model should this endpoint use?"
+  1. gpt-4o (fastest, cheapest)
+  2. claude-sonnet-4-20250514 (best reasoning)
+  3. gemini-pro (multimodal)
+  4. Type something else...
+
+Research INFORMS the options. No research = no good options."""
+        }))
+        sys.exit(0)
 
     # Check 1: Interview must be complete
     if interview_status != "complete":
+        # Build example based on actual research
+        research_based_example = _build_research_based_example(research_queries)
+
         print(json.dumps({
             "permissionDecision": "deny",
             "reason": f"""❌ BLOCKED: Interview phase not complete.
 
Current status: {interview_status}
AskUserQuestion calls: {interview.get('user_question_count', 0)}
+Structured questions: {interview.get('structured_question_count', 0)}
 
═══════════════════════════════════════════════════════════
-⚠️ YOU MUST STOP AND ASK THE USER QUESTIONS NOW
+⚠️ USE STRUCTURED QUESTIONS WITH OPTIONS
═══════════════════════════════════════════════════════════
 
-Use the AskUserQuestion tool to ask EACH of these questions ONE AT A TIME:
+Based on your research, ask questions using AskUserQuestion with
+the 'options' parameter to provide multiple-choice selections:
 
-1. "What is the primary purpose of this endpoint?"
-2. "Who will use it and how?"
-3. "What parameters are essential vs optional?"
+{research_based_example}
 
-WAIT for the user's response after EACH question before continuing.
+REQUIRED FORMAT for AskUserQuestion:
+- question: "Your question text"
+- options: [
+    {{"value": "option1", "label": "Option 1 description"}},
+    {{"value": "option2", "label": "Option 2 description"}},
+    {{"value": "custom", "label": "Type something..."}}
+  ]
 
-DO NOT:
-❌ Make up answers yourself
-❌ Assume what the user wants
-❌ Mark the interview as complete without asking
-❌ Try to write any code until you have real answers
+You need at least {MIN_STRUCTURED_QUESTIONS} structured questions with options.
+Current: {interview.get('structured_question_count', 0)}
 
-The system is tracking your AskUserQuestion calls. You need at least 3
-actual calls with user responses to proceed."""
+DO NOT:
+❌ Ask open-ended questions without options
+❌ Make up options not based on research
+❌ Skip the AskUserQuestion tool
+❌ Self-answer questions"""
         }))
         sys.exit(0)
 
@@ -128,11 +189,11 @@ Questions recorded: {len(questions)}
Minimum required: {MIN_QUESTIONS}
 
You must ask the user more questions about their requirements.
-DO NOT proceed without understanding the user's actual needs."""
+Use AskUserQuestion with structured options based on your research."""
         }))
         sys.exit(0)
 
-    # Check 2.5: Verify AskUserQuestion tool was actually used
+    # Check 3: Verify AskUserQuestion tool was actually used
    user_question_count = interview.get("user_question_count", 0)
    tool_used_count = sum(1 for q in questions if q.get("tool_used", False))
 
@@ -146,14 +207,43 @@ Minimum required: {MIN_QUESTIONS}
 
You MUST use the AskUserQuestion tool to ask the user directly.
Do NOT make up answers or mark the interview as complete without
-actually asking the user and receiving their responses.
+actually asking the user and receiving their responses."""
+        }))
+        sys.exit(0)
 
-The system tracks when AskUserQuestion is used. Self-answering
-will be detected and blocked."""
+    # Check 4: Verify structured questions were used
+    structured_count = interview.get("structured_question_count", 0)
+    questions_with_options = sum(1 for q in questions if q.get("has_options", False))
+    actual_structured = max(structured_count, questions_with_options)
+
+    if actual_structured < MIN_STRUCTURED_QUESTIONS:
+        print(json.dumps({
+            "permissionDecision": "deny",
+            "reason": f"""❌ Not enough STRUCTURED questions with options.
+
+Structured questions (with options): {actual_structured}
+Minimum required: {MIN_STRUCTURED_QUESTIONS}
+
+You MUST use AskUserQuestion with the 'options' parameter to
+provide multiple-choice answers based on your research.
+
+Example:
+  AskUserQuestion(
+    question="Which AI provider should this endpoint support?",
+    options=[
+      {{"value": "openai", "label": "OpenAI (GPT-4o)"}},
+      {{"value": "anthropic", "label": "Anthropic (Claude)"}},
+      {{"value": "google", "label": "Google (Gemini)"}},
+      {{"value": "all", "label": "All of the above"}},
+      {{"value": "custom", "label": "Type something else..."}}
+    ]
+  )
+
+This gives the user clear choices based on what you researched."""
         }))
         sys.exit(0)
 
-    # Check 3: Look for self-answer indicators
+    # Check 5: Look for self-answer indicators
     for indicator in SELF_ANSWER_INDICATORS:
         if indicator in interview_desc:
             print(json.dumps({
@@ -162,15 +252,10 @@ will be detected and blocked."""
 
Detected: "{indicator}" in interview description.
 
-You MUST actually ask the user questions using AskUserQuestion.
-Self-answering the interview defeats its purpose.
+You MUST actually ask the user questions using AskUserQuestion
+with structured options. Self-answering defeats the purpose.
 
-Reset the interview phase and ask the user directly:
-1. What do you want this endpoint to do?
-2. Which providers/models should it support?
-3. What parameters matter most to you?
-
-Wait for their real answers before proceeding."""
+Reset the interview and ask with options based on research."""
             }))
             sys.exit(0)
 
@@ -179,5 +264,41 @@ Wait for their real answers before proceeding."""
     sys.exit(0)
 
 
+def _build_research_based_example(research_queries: list) -> str:
+    """Build an example question based on actual research queries."""
+    if not research_queries:
+        return """Example (generic - do research first!):
+  "What is the main use case for this endpoint?"
+  1. Data retrieval
+  2. Data transformation
+  3. AI processing
+  4. Type something..."""
+
+    # Extract terms from research to suggest relevant options
+    all_terms = []
+    for query in research_queries[-5:]:  # Last 5 queries
+        terms = query.get("terms", [])
+        all_terms.extend(terms)
+
+    # Deduplicate and get top terms
+    unique_terms = list(dict.fromkeys(all_terms))[:4]
+
+    if unique_terms:
+        options_example = "\n  ".join([
+            f"{i+1}. {term.title()}" for i, term in enumerate(unique_terms)
+        ])
+        return f"""Example based on your research:
+  "Which of these should be the primary focus?"
+  {options_example}
+  {len(unique_terms)+1}. Type something else..."""
+
+    return """Example:
+  "What capability is most important?"
+  1. Option based on research finding 1
+  2. Option based on research finding 2
+  3. Option based on research finding 3
+  4. Type something..."""
+
+
 if __name__ == "__main__":
     main()
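The gate above is a ladder of deny checks, each printing a `permissionDecision` JSON object and exiting. Its control flow can be condensed into one function; this is a simplified sketch that omits the self-answer indicator scan and the long reason strings, with thresholds mirroring the `MIN_QUESTIONS` and `MIN_STRUCTURED_QUESTIONS` constants in the diff:

```python
import json

MIN_QUESTIONS = 5           # Mirrors the hook's thresholds
MIN_STRUCTURED_QUESTIONS = 3


def gate_decision(state: dict) -> dict:
    """Walk the allow/deny ladder over the persisted workflow state."""
    phases = state.get("phases", {})
    # Check 0: research must finish before the interview is acceptable
    if phases.get("research_initial", {}).get("status") != "complete":
        return {"permissionDecision": "deny", "reason": "research not complete"}
    interview = phases.get("interview", {})
    if interview.get("status") != "complete":
        return {"permissionDecision": "deny", "reason": "interview not complete"}
    questions = interview.get("questions", [])
    if len(questions) < MIN_QUESTIONS:
        return {"permissionDecision": "deny", "reason": "too few questions"}
    # Structured count: trust whichever signal is higher, as the hook does
    structured = sum(1 for q in questions if q.get("has_options"))
    if max(structured, interview.get("structured_question_count", 0)) < MIN_STRUCTURED_QUESTIONS:
        return {"permissionDecision": "deny", "reason": "too few structured questions"}
    return {"permissionDecision": "allow"}


state = {
    "phases": {
        "research_initial": {"status": "complete"},
        "interview": {
            "status": "complete",
            "structured_question_count": 3,
            "questions": [{"has_options": True}] * 3 + [{"has_options": False}] * 2,
        },
    }
}
print(json.dumps(gate_decision(state)))  # The real hook prints this and exits 0
```

The ladder ordering matters: research gates the interview, which gates the question-count and structure checks, so the first unmet prerequisite is always the one reported.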
@@ -60,7 +60,8 @@ def main():
     interview = phases.setdefault("interview", {
         "status": "not_started",
         "questions": [],
-        "user_question_count": 0
+        "user_question_count": 0,
+        "structured_question_count": 0
     })
 
     # Track the question
@@ -68,10 +69,22 @@
     user_count = interview.get("user_question_count", 0) + 1
     interview["user_question_count"] = user_count
 
+    # Check if this question has structured options (multiple-choice)
+    options = tool_input.get("options", [])
+    has_options = len(options) > 0
+
+    # Track structured questions count
+    if has_options:
+        structured_count = interview.get("structured_question_count", 0) + 1
+        interview["structured_question_count"] = structured_count
+
     question_entry = {
         "question": tool_input.get("question", ""),
         "timestamp": datetime.now().isoformat(),
-        "tool_used": True  # Proves AskUserQuestion was actually called
+        "tool_used": True,  # Proves AskUserQuestion was actually called
+        "has_options": has_options,
+        "options_count": len(options),
+        "options": [opt.get("label", opt.get("value", "")) for opt in options[:5]] if options else []
     }
     questions.append(question_entry)
 
@@ -82,6 +95,14 @@
 
     interview["last_activity"] = datetime.now().isoformat()
 
+    # Log for visibility
+    if has_options:
+        interview["last_structured_question"] = {
+            "question": tool_input.get("question", "")[:100],
+            "options_count": len(options),
+            "timestamp": datetime.now().isoformat()
+        }
+
     # Save and exit
     STATE_FILE.write_text(json.dumps(state, indent=2))
     print(json.dumps({"continue": True}))
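The tracking hook above folds each AskUserQuestion call into `api-dev-state.json`. The bookkeeping can be sketched in isolation (file I/O omitted; the field names match the `question_entry` dict in the hunk, while the helper name `record_question` is introduced here for illustration):

```python
from datetime import datetime


def record_question(interview: dict, tool_input: dict) -> dict:
    """Fold one AskUserQuestion call into the interview state."""
    options = tool_input.get("options", [])
    has_options = len(options) > 0
    interview["user_question_count"] = interview.get("user_question_count", 0) + 1
    if has_options:  # Only option-bearing questions count as structured
        interview["structured_question_count"] = interview.get("structured_question_count", 0) + 1
    entry = {
        "question": tool_input.get("question", ""),
        "timestamp": datetime.now().isoformat(),
        "tool_used": True,  # Proves AskUserQuestion was actually called
        "has_options": has_options,
        "options_count": len(options),
        # Prefer labels, fall back to values, keep at most five
        "options": [opt.get("label", opt.get("value", "")) for opt in options[:5]],
    }
    interview.setdefault("questions", []).append(entry)
    return entry


interview = {"user_question_count": 0, "structured_question_count": 0}
entry = record_question(interview, {
    "question": "Which AI provider should this endpoint support?",
    "options": [
        {"value": "openai", "label": "OpenAI (GPT-4o)"},
        {"value": "custom", "label": "Type something else..."},
    ],
})
print(interview["structured_question_count"])  # → 1
```

Counting structured questions separately is what lets the enforcement hook distinguish option-based questions from open-ended ones when it applies the `MIN_STRUCTURED_QUESTIONS` check.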
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@hustle-together/api-dev-tools",
-  "version": "1.7.0",
+  "version": "1.8.0",
   "description": "Interview-driven API development workflow for Claude Code - Automates research, testing, and documentation",
   "main": "bin/cli.js",
   "bin": {