@hustle-together/api-dev-tools 1.7.1 → 1.9.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -30,9 +30,9 @@ Six Python hooks that provide **real programmatic guarantees**:
 - **`enforce-external-research.py`** - (v1.7.0) Detects external API questions and requires research before answering
 - **`enforce-research.py`** - Blocks API code writing until research is complete
- - **`enforce-interview.py`** - Verifies user questions were actually asked (prevents self-answering)
+ - **`enforce-interview.py`** - (v1.8.0+) Verifies structured questions with options were asked; (v1.9.0+) Injects decision reminders on writes
 - **`verify-implementation.py`** - Checks implementation matches interview requirements
- - **`track-tool-use.py`** - Logs all research activity (Context7, WebSearch, WebFetch, AskUserQuestion)
+ - **`track-tool-use.py`** - (v1.9.0+) Captures user decisions from AskUserQuestion; logs all research activity
 - **`api-workflow-check.py`** - Prevents stopping until required phases are complete + git diff verification

 ### State Tracking
@@ -470,6 +470,34 @@ INJECTS: "RESEARCH REQUIRED: Use Context7/WebSearch before answering"
 CLAUDE: Researches first → Gives accurate answer
 ```

+ ### Gap 7: Interview Decisions Not Used During Implementation (v1.9.0+)
+ **Problem:** AI asks good interview questions but then ignores the answers when writing code.
+
+ **Example:**
+ - Interview: User selected "server environment variables only" for API key handling
+ - Implementation: AI writes code with custom header overrides (not what the user wanted!)
+
+ **Fix:** Two-part solution in `track-tool-use.py` and `enforce-interview.py`:
+
+ 1. **track-tool-use.py** now:
+    - Captures the user's actual response from AskUserQuestion
+    - Matches responses to option values
+    - Stores decisions in a categorized `decisions` dict (purpose, api_key_handling, etc.)
+
+ 2. **enforce-interview.py** now injects a decision summary on EVERY write:
+    ```
+    ✅ Interview complete. REMEMBER THE USER'S DECISIONS:
+
+    • Primary Purpose: full_brand_kit
+    • API Key Handling: server_only
+    • Response Format: JSON with asset URLs
+    • Error Handling: detailed (error, code, details)
+
+    Your implementation MUST align with these choices.
+    ```
+
+ This ensures the AI is constantly reminded of what the user actually wanted throughout the entire implementation phase.
+
 ## 🔧 Requirements

 - **Node.js** 14.0.0 or higher
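
A minimal sketch of the two-part mechanism described in Gap 7: capture a decision by matching the user's response back to an option value, then render the reminder that gets injected on writes. The field names (`category`, `options`, `value`, `label`) are assumptions for illustration, not the hooks' real schema.

```python
def capture_decision(question: dict, user_response: str, decisions: dict) -> dict:
    """Record the user's answer, matching free text back to an option value."""
    for opt in question.get("options", []):
        if user_response in (opt["value"], opt["label"]):
            decisions[question["category"]] = opt["value"]
            break
    else:
        # No option matched: treat it as a custom ("Type something else...") answer.
        decisions[question["category"]] = user_response
    return decisions

def decision_reminder(decisions: dict) -> str:
    """Build the summary that would be injected on every write after the interview."""
    lines = ["✅ Interview complete. REMEMBER THE USER'S DECISIONS:", ""]
    lines += [f"• {key.replace('_', ' ').title()}: {value}"
              for key, value in decisions.items()]
    lines += ["", "Your implementation MUST align with these choices."]
    return "\n".join(lines)
```

With `{"category": "api_key_handling", ...}` and the response `"server_only"`, `capture_decision` stores `{"api_key_handling": "server_only"}`, and `decision_reminder` then emits a bullet list in the shape shown above.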
@@ -1,120 +1,223 @@
- # API Interview - Structured API Discovery
+ # API Interview - Research-Based Structured Discovery

 **Usage:** `/api-interview [endpoint-name]`

- **Purpose:** Conduct a structured interview to understand API endpoint purpose, usage, and requirements before any implementation.
+ **Purpose:** Conduct a structured interview with MULTIPLE-CHOICE questions derived from research findings to understand API endpoint purpose, usage, and requirements before any implementation.
+
+ ## v1.8.0 REQUIREMENT: Structured Questions with Options
+
+ **CRITICAL:** All interview questions MUST:
+ 1. Be based on COMPLETED research (Context7 + WebSearch)
+ 2. Use the AskUserQuestion tool with the `options` parameter
+ 3. Provide multiple-choice selections derived from research findings
+ 4. Include a "Type something else..." option for custom input
+
+ **Example of a CORRECT structured question:**
+ ```
+ AskUserQuestion(
+   question="Which AI provider should this endpoint support?",
+   options=[
+     {"value": "openai", "label": "OpenAI (GPT-4o, GPT-4-turbo)"},
+     {"value": "anthropic", "label": "Anthropic (Claude Sonnet, Opus)"},
+     {"value": "google", "label": "Google (Gemini Pro, Flash)"},
+     {"value": "all", "label": "All providers (multi-provider support)"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ This gives users clear choices from RESEARCHED capabilities, not guesses.
 
 ## Interview Methodology

 Based on the Anthropic Interviewer approach with three phases:

+ ### Phase 0: PREREQUISITE - Research (MANDATORY)
+ **You CANNOT start the interview until research is complete.**
+
+ Before asking ANY questions:
+ 1. Use Context7 to get SDK/API documentation
+ 2. Use WebSearch (2-3 searches) for official docs
+ 3. Gather all available options, parameters, models, providers
+ 4. Build your question options FROM this research
+
+ Example research flow:
+ ```
+ 1. mcp__context7__resolve-library-id("vercel ai sdk")
+ 2. mcp__context7__get-library-docs(libraryId)
+ 3. WebSearch("Vercel AI SDK providers models 2025")
+ 4. WebSearch("Vercel AI SDK streaming options parameters")
+ ```
+
+ **Research informs options. No research = no good options = interview blocked.**
+
 ### Phase 1: Planning (Internal)
 - Review endpoint name and context
- - Identify key areas to explore
+ - Review research findings (what providers, models, parameters exist?)
+ - Build structured question options from research
 - Prepare follow-up questions

- ### Phase 2: Interviewing (User Interaction)
- Ask structured questions to understand:
+ ### Phase 2: Interviewing (User Interaction with Structured Options)
+
+ **EVERY question uses AskUserQuestion with the options parameter.**
 
 #### A. Purpose & Context
- 1. **What problem does this API endpoint solve?**
-    - What's the business/technical need?
-    - What happens without this endpoint?
-
- 2. **Who are the primary users?**
-    - Frontend developers? End users? Other systems?
-    - What's their technical level?
-
- 3. **What triggers usage of this API?**
-    - User action? Scheduled task? Event-driven?
-    - How frequently is it called?
-
- #### B. Real-World Usage Scenarios
- 4. **Walk me through a typical request:**
-    - What data does the user provide?
-    - What do they expect back?
-    - What happens with the response?
-
- 5. **What are the most common use cases?** (Ask for 3-5 examples)
-    - Common scenario 1: ___
-    - Common scenario 2: ___
-    - Edge scenario: ___
-
- 6. **Show me an example request/response you envision:**
-    - Request body/params
-    - Expected response
-    - Error cases
-
- #### C. Technical Requirements
- 7. **What parameters are absolutely REQUIRED?**
-    - Can the API work without them?
-    - What happens if they're missing?
-
- 8. **What parameters are OPTIONAL?**
-    - What defaults make sense?
-    - How do they modify behavior?
-
- 9. **What are valid value ranges/formats?**
-    - Type constraints (string, number, enum)?
-    - Length/size limits?
-    - Format requirements (email, URL, date)?
-
- 10. **Are there parameter dependencies?**
-    - If X is provided, must Y also be provided?
-    - Mutually exclusive options?
-
- #### D. Dependencies & Integration
- 11. **What external services does this use?**
-    - AI providers (OpenAI, Anthropic, Google)?
-    - Third-party APIs (Firecrawl, Brave Search)?
-    - Database (Supabase)?
-
- 12. **What API keys are required?**
-    - Where are they configured?
-    - Are there fallback options?
-    - Can users provide their own keys?
-
- 13. **What AI models/providers are involved?**
-    - Specific models (GPT-4, Claude Sonnet)?
-    - Why those models?
-    - Are there alternatives?
-
- 14. **Are there rate limits, quotas, or costs?**
-    - Per-request costs?
-    - Rate limiting needed?
-    - Cost tracking required?
-
- #### E. Error Handling & Edge Cases
- 15. **What can go wrong?**
-    - Invalid input?
-    - External service failures?
-    - Timeout scenarios?
-
- 16. **How should errors be communicated?**
-    - HTTP status codes?
-    - Error message format?
-    - User-facing vs. technical errors?
-
- 17. **What are boundary conditions?**
-    - Very large inputs?
-    - Empty/null values?
-    - Concurrent requests?
-
- 18. **What should be validated before processing?**
-    - Input validation rules?
-    - Authentication/authorization?
-    - Resource availability?
-
- #### F. Documentation & Resources
- 19. **Where is official documentation?**
-    - Link to external API docs
-    - SDK documentation
-    - Code examples
-
- 20. **Are there similar endpoints for reference?**
-    - In this codebase?
-    - In other projects?
-    - Industry examples?
+
+ **Question 1: Primary Purpose**
+ ```
+ AskUserQuestion(
+   question="What is the primary purpose of this endpoint?",
+   options=[
+     {"value": "data_retrieval", "label": "Retrieve/query data"},
+     {"value": "data_transform", "label": "Transform/process data"},
+     {"value": "ai_generation", "label": "AI content generation"},
+     {"value": "ai_analysis", "label": "AI analysis/classification"},
+     {"value": "integration", "label": "Third-party integration"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ **Question 2: Primary Users**
+ ```
+ AskUserQuestion(
+   question="Who are the primary users of this endpoint?",
+   options=[
+     {"value": "frontend", "label": "Frontend developers (React, Vue, etc.)"},
+     {"value": "backend", "label": "Backend services (server-to-server)"},
+     {"value": "mobile", "label": "Mobile app developers"},
+     {"value": "enduser", "label": "End users directly (browser)"},
+     {"value": "automation", "label": "Automated systems/bots"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ **Question 3: Usage Trigger**
+ ```
+ AskUserQuestion(
+   question="What triggers a call to this endpoint?",
+   options=[
+     {"value": "user_action", "label": "User action (button click, form submit)"},
+     {"value": "page_load", "label": "Page/component load"},
+     {"value": "scheduled", "label": "Scheduled/cron job"},
+     {"value": "webhook", "label": "External webhook/event"},
+     {"value": "realtime", "label": "Real-time/streaming updates"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ #### B. Technical Requirements (Research-Based Options)
+
+ **Question 4: AI Provider** (options from research)
+ ```
+ AskUserQuestion(
+   question="Which AI provider(s) should this endpoint support?",
+   options=[
+     // These options come from your Context7/WebSearch research!
+     {"value": "openai", "label": "OpenAI (gpt-4o, gpt-4-turbo)"},
+     {"value": "anthropic", "label": "Anthropic (claude-sonnet-4-20250514, claude-opus-4-20250514)"},
+     {"value": "google", "label": "Google (gemini-pro, gemini-flash)"},
+     {"value": "groq", "label": "Groq (llama-3.1-70b, mixtral)"},
+     {"value": "multiple", "label": "Multiple providers (configurable)"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ **Question 5: Response Format** (options from research)
+ ```
+ AskUserQuestion(
+   question="What response format is needed?",
+   options=[
+     // Options based on what the SDK supports (from research)
+     {"value": "streaming", "label": "Streaming (real-time chunks)"},
+     {"value": "complete", "label": "Complete response (wait for full)"},
+     {"value": "structured", "label": "Structured/JSON mode"},
+     {"value": "tool_calls", "label": "Tool calling/function calls"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ **Question 6: Required Parameters**
+ ```
+ AskUserQuestion(
+   question="Which parameters are REQUIRED (cannot work without)?",
+   options=[
+     // Based on researched SDK parameters
+     {"value": "prompt_only", "label": "Just the prompt/message"},
+     {"value": "prompt_model", "label": "Prompt + model selection"},
+     {"value": "prompt_model_config", "label": "Prompt + model + configuration"},
+     {"value": "full_config", "label": "Full configuration required"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ **Question 7: Optional Parameters**
+ ```
+ AskUserQuestion(
+   question="Which optional parameters should be supported?",
+   options=[
+     // From research: discovered optional parameters
+     {"value": "temperature", "label": "temperature (creativity control)"},
+     {"value": "max_tokens", "label": "maxTokens (response length)"},
+     {"value": "system_prompt", "label": "system (system prompt)"},
+     {"value": "tools", "label": "tools (function calling)"},
+     {"value": "all_standard", "label": "All standard AI parameters"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ #### C. Dependencies & Integration
+
+ **Question 8: External Services**
+ ```
+ AskUserQuestion(
+   question="What external services does this endpoint need?",
+   options=[
+     {"value": "ai_only", "label": "AI provider only (OpenAI, Anthropic, etc.)"},
+     {"value": "ai_search", "label": "AI + Search (Brave, Perplexity)"},
+     {"value": "ai_scrape", "label": "AI + Web scraping (Firecrawl)"},
+     {"value": "ai_db", "label": "AI + Database (Supabase)"},
+     {"value": "multiple", "label": "Multiple external services"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ **Question 9: API Key Handling**
+ ```
+ AskUserQuestion(
+   question="How should API keys be handled?",
+   options=[
+     {"value": "server_only", "label": "Server environment variables only"},
+     {"value": "server_header", "label": "Server env + custom header override"},
+     {"value": "client_next", "label": "NEXT_PUBLIC_ client-side keys"},
+     {"value": "all_methods", "label": "All methods (env, header, client)"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```
+
+ #### D. Error Handling & Edge Cases
+
+ **Question 10: Error Response Format**
+ ```
+ AskUserQuestion(
+   question="How should errors be returned?",
+   options=[
+     {"value": "simple", "label": "Simple: {error: string}"},
+     {"value": "detailed", "label": "Detailed: {error, code, details}"},
+     {"value": "ai_sdk", "label": "AI SDK standard format"},
+     {"value": "http_native", "label": "HTTP status codes + body"},
+     {"value": "custom", "label": "Type something else..."}
+   ]
+ )
+ ```

 ### Phase 3: Analysis (Documentation)
 After interview, I will:
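
The structured-question contract above lends itself to a mechanical check. The following is a sketch of the kind of validation `enforce-interview.py` could apply; the dict shape and the rule set are assumed for illustration, not taken from the package:

```python
def check_structured_question(question: dict) -> list:
    """Return a list of problems; an empty list means the question passes."""
    problems = []
    options = question.get("options", [])
    if not options:
        problems.append("no options[] - open-ended questions are blocked")
    if not any("Type something else" in opt.get("label", "") for opt in options):
        problems.append('missing the "Type something else..." custom-input option')
    values = [opt.get("value") for opt in options]
    if len(values) != len(set(values)):
        problems.append("duplicate option values")
    return problems
```

Run against a question shaped like Question 9 above it returns an empty list; run against an empty dict it reports both the missing options and the missing custom-input escape hatch.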
@@ -135,6 +238,7 @@ Creates: `/src/v2/docs/endpoints/[endpoint-name].md`
 **Date:** [current-date]
 **Interviewed by:** Claude Code
 **Status:** Interview Complete
+ **Research Sources:** [list of Context7/WebSearch sources]

 ## 1. Purpose & Context
 [Synthesized understanding of why this exists]
@@ -153,7 +257,14 @@ Creates: `/src/v2/docs/endpoints/[endpoint-name].md`
 - API Keys Required: [list]
 - AI Models: [list]

- ## 6. Real-World Scenarios
+ ## 6. Interview Responses
+ | Question | User Selection | Notes |
+ |----------|---------------|-------|
+ | Purpose | [selected option] | |
+ | Users | [selected option] | |
+ | ... | ... | |
+
+ ## 7. Real-World Scenarios
 ### Scenario 1: [Common Use Case]
 **Request:**
 ```json
@@ -164,23 +275,23 @@ Creates: `/src/v2/docs/endpoints/[endpoint-name].md`
 {example}
 ```

- ## 7. Edge Cases & Error Handling
+ ## 8. Edge Cases & Error Handling
 [Identified edge cases and how to handle them]

- ## 8. Validation Rules
+ ## 9. Validation Rules
 [What must be validated]

- ## 9. Documentation Links
- - [Official docs]
- - [SDK docs]
- - [Related resources]
+ ## 10. Documentation Links
+ - [Official docs from research]
+ - [SDK docs from Context7]
+ - [Related resources from WebSearch]

- ## 10. Test Cases (To Implement)
+ ## 11. Test Cases (To Implement)
 - [ ] Test: [scenario from interview]
 - [ ] Test: [edge case from interview]
 - [ ] Test: [error handling from interview]

- ## 11. Open Questions
+ ## 12. Open Questions
 [Any ambiguities to resolve]
 ```

@@ -190,17 +301,30 @@ Creates: `/src/v2/docs/endpoints/[endpoint-name].md`
 /api-interview generate-css
 ```

- I will then ask all 20 questions, document responses, and create the endpoint documentation file ready for TDD implementation.
+ I will:
+ 1. First complete research (Context7 + WebSearch)
+ 2. Build structured questions with options from research
+ 3. Ask all questions using AskUserQuestion with options
+ 4. Document responses and create the endpoint documentation

 <claude-commands-template>
- ## Interview Guidelines
-
- 1. **Ask ALL questions** - Don't skip steps even if obvious
- 2. **Request examples** - Concrete examples > abstract descriptions
- 3. **Clarify ambiguity** - If answer is unclear, ask follow-ups
- 4. **Document links** - Capture ALL external documentation URLs
- 5. **Real scenarios** - Focus on actual usage, not hypothetical
- 6. **Be thorough** - Better to over-document than under-document
+ ## Interview Guidelines (v1.8.0)
+
+ 1. **RESEARCH FIRST** - Complete Context7 + WebSearch before ANY questions
+ 2. **STRUCTURED OPTIONS** - Every question uses AskUserQuestion with options[]
+ 3. **OPTIONS FROM RESEARCH** - Multiple-choice options come from discovered capabilities
+ 4. **ALWAYS INCLUDE "Type something else..."** - Let the user provide custom input
+ 5. **ASK ONE AT A TIME** - Wait for a response before the next question
+ 6. **DOCUMENT EVERYTHING** - Capture ALL selections and custom inputs
+
+ ## Question Format Checklist
+
+ Before asking each question:
+ - [ ] Is research complete?
+ - [ ] Are options based on research findings?
+ - [ ] Does it use AskUserQuestion with the options parameter?
+ - [ ] Is there a "Type something else..." option?
+ - [ ] Will this help build the schema?

 ## After Interview