@hustle-together/api-dev-tools 3.3.0 → 3.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (45)
  1. package/README.md +712 -377
  2. package/commands/api-create.md +68 -23
  3. package/demo/hustle-together/blog/gemini-vs-claude-widgets.html +1 -1
  4. package/demo/hustle-together/blog/interview-driven-api-development.html +1 -1
  5. package/demo/hustle-together/blog/tdd-for-ai.html +1 -1
  6. package/demo/hustle-together/index.html +2 -2
  7. package/demo/workflow-demo-v3.5-backup.html +5008 -0
  8. package/demo/workflow-demo.html +5137 -3805
  9. package/hooks/enforce-deep-research.py +6 -1
  10. package/hooks/enforce-disambiguation.py +7 -1
  11. package/hooks/enforce-documentation.py +6 -1
  12. package/hooks/enforce-environment.py +5 -1
  13. package/hooks/enforce-interview.py +5 -1
  14. package/hooks/enforce-refactor.py +3 -1
  15. package/hooks/enforce-schema.py +0 -0
  16. package/hooks/enforce-scope.py +5 -1
  17. package/hooks/enforce-tdd-red.py +5 -1
  18. package/hooks/enforce-verify.py +0 -0
  19. package/hooks/track-tool-use.py +167 -0
  20. package/hooks/verify-implementation.py +0 -0
  21. package/package.json +1 -1
  22. package/templates/api-dev-state.json +24 -0
  23. package/demo/audio/audio-sync.js +0 -295
  24. package/demo/audio/generate-all-narrations.js +0 -581
  25. package/demo/audio/generate-narration.js +0 -486
  26. package/demo/audio/generate-voice-previews.js +0 -140
  27. package/demo/audio/narration-adam-timing.json +0 -4675
  28. package/demo/audio/narration-adam.mp3 +0 -0
  29. package/demo/audio/narration-creature-timing.json +0 -4675
  30. package/demo/audio/narration-creature.mp3 +0 -0
  31. package/demo/audio/narration-gaming-timing.json +0 -4675
  32. package/demo/audio/narration-gaming.mp3 +0 -0
  33. package/demo/audio/narration-hope-timing.json +0 -4675
  34. package/demo/audio/narration-hope.mp3 +0 -0
  35. package/demo/audio/narration-mark-timing.json +0 -4675
  36. package/demo/audio/narration-mark.mp3 +0 -0
  37. package/demo/audio/narration-timing.json +0 -3614
  38. package/demo/audio/narration-timing.sample.json +0 -48
  39. package/demo/audio/narration.mp3 +0 -0
  40. package/demo/audio/previews/manifest.json +0 -30
  41. package/demo/audio/previews/preview-creature.mp3 +0 -0
  42. package/demo/audio/previews/preview-gaming.mp3 +0 -0
  43. package/demo/audio/previews/preview-hope.mp3 +0 -0
  44. package/demo/audio/previews/preview-mark.mp3 +0 -0
  45. package/demo/audio/voices-manifest.json +0 -50
package/README.md CHANGED
@@ -1,460 +1,795 @@
1
- # API Development Tools for Claude Code v3.0
1
+ # API Development Tools for Claude Code v3.6.0
2
2
 
3
- **Interview-driven, research-first API development workflow with continuous verification loops**
3
+ **Interview-driven, research-first API development with 100% phase enforcement**
4
4
 
5
- Automates the complete API development lifecycle from understanding requirements to deployment through structured interviews, adaptive research, test-driven development, and comprehensive documentation with automatic re-grounding.
5
+ ```
6
+ ┌──────────────────────────────────────────────────────────────────────────────┐
7
+ │ ❯ npx @hustle-together/api-dev-tools --scope=project │
8
+ │ │
9
+ │ 🚀 Installing API Development Tools v3.6.0... │
10
+ │ │
11
+ │ ✅ Found Python 3.12.0 │
12
+ │ 📦 Installing commands: 24 slash commands │
13
+ │ 🔒 Installing hooks: 18 enforcement hooks │
14
+ │ ⚙️ Configuring settings.json │
15
+ │ 📊 Creating api-dev-state.json │
16
+ │ 📚 Setting up research cache │
17
+ │ 🔌 Configuring MCP servers: context7, github │
18
+ │ │
19
+ │ ════════════════════════════════════════════════════════════════════════ │
20
+ │ 🎉 API Development Tools installed successfully! │
21
+ │ ════════════════════════════════════════════════════════════════════════ │
22
+ │ │
23
+ │ Quick Start: /api-create my-endpoint │
24
+ └──────────────────────────────────────────────────────────────────────────────┘
25
+ ```
6
26
 
7
- ## What's New in v3.0
27
+ ---
8
28
 
9
- - **Phase 0: Pre-Research Disambiguation** - Search variations to clarify ambiguous terms
10
- - **Phase 9: Verify** - Re-research after tests pass to catch memory-based errors
11
- - **Adaptive Research** - Propose searches based on interview answers, not shotgun approach
12
- - **Questions FROM Research** - Interview questions generated from discovered parameters
13
- - **7-Turn Re-grounding** - Periodic context injection prevents memory dilution
14
- - **Research Freshness** - 7-day cache with automatic staleness warnings
15
- - **Loop-Back Architecture** - Every verification loops back if not successful
29
+ ## What Problem Does This Solve?
16
30
 
17
- ## Quick Start
31
+ LLMs have predictable failure modes when building APIs:
18
32
 
19
- ```bash
20
- # Install in your project
21
- npx @hustle-together/api-dev-tools --scope=project
33
+ 1. **Wrong Documentation** - Uses training data instead of current docs
34
+ 2. **Memory-Based Implementation** - Forgets research by implementation time
35
+ 3. **Self-Answering Questions** - Asks itself questions instead of the user
36
+ 4. **Context Dilution** - Loses focus in long sessions
37
+ 5. **Skipped Steps** - Jumps to implementation without proper research
22
38
 
23
- # Start developing an API
24
- /api-create my-endpoint
25
- ```
26
-
27
- ## What This Installs
28
-
29
- ```
30
- .claude/
31
- ├── commands/ # Slash commands (*.md)
32
- ├── hooks/ # Enforcement hooks (*.py)
33
- ├── settings.json # Hook configuration
34
- ├── api-dev-state.json # Workflow state tracking
35
- └── research/ # Cached research with freshness
36
-
37
- scripts/
38
- └── api-dev-tools/ # Manifest generation scripts (*.ts)
39
-
40
- src/app/
41
- ├── api-test/ # Test UI page (if Next.js)
42
- └── api/test-structure/# Parser API route (if Next.js)
43
- ```
44
-
45
- ### Slash Commands
46
-
47
- Six powerful slash commands for Claude Code:
48
-
49
- - **`/api-create [endpoint]`** - Complete workflow (Disambiguate → Research → Interview → TDD → Verify → Docs)
50
- - **`/api-interview [endpoint]`** - Dynamic questions GENERATED from research findings
51
- - **`/api-research [library]`** - Adaptive propose-approve research flow
52
- - **`/api-verify [endpoint]`** - Re-research and compare implementation to docs
53
- - **`/api-env [endpoint]`** - Check required API keys and environment setup
54
- - **`/api-status [endpoint]`** - Track implementation progress and phase completion
55
-
56
- ### Enforcement Hooks (8 total)
57
-
58
- Python hooks that provide **real programmatic guarantees**:
59
-
60
- | Hook | Event | Purpose |
61
- |------|-------|---------|
62
- | `session-startup.py` | SessionStart | Inject current state at session start |
63
- | `enforce-external-research.py` | UserPromptSubmit | Detect API terms, require research |
64
- | `enforce-research.py` | PreToolUse | Block writes until research complete |
65
- | `enforce-interview.py` | PreToolUse | Inject interview decisions on writes |
66
- | `verify-implementation.py` | PreToolUse | Check test file exists before route |
67
- | `track-tool-use.py` | PostToolUse | Log research, count turns |
68
- | `periodic-reground.py` | PostToolUse | Re-inject context every 7 turns |
69
- | `verify-after-green.py` | PostToolUse | Trigger Phase 9 after tests pass |
70
- | `api-workflow-check.py` | Stop | Block completion if phases incomplete |
71
-
72
- ### State Tracking
73
-
74
- - **`.claude/api-dev-state.json`** - Persistent workflow state with turn counting
75
- - **`.claude/research/`** - Cached research with freshness tracking
76
- - **`.claude/research/index.json`** - Research catalog with 7-day validity
77
-
78
- ### Manifest Generation Scripts (Programmatic, NO LLM)
79
-
80
- Scripts that automatically generate documentation from test files:
81
-
82
- | Script | Output | Purpose |
83
- |--------|--------|---------|
84
- | `generate-test-manifest.ts` | `api-tests-manifest.json` | Parses Vitest tests → full API manifest |
85
- | `extract-parameters.ts` | `parameter-matrix.json` | Extracts Zod params → coverage matrix |
86
- | `collect-test-results.ts` | `test-results.json` | Runs tests → results with pass/fail |
87
-
88
- **Key principle:** Tests are the SOURCE OF TRUTH. Scripts parse source files programmatically. **NO LLM involvement** in manifest generation.
89
-
90
- Scripts are triggered automatically after tests pass (Phase 8 → Phase 9) via `verify-after-green.py` hook.
91
-
92
- ## Complete Phase Flow (12 Phases)
93
-
94
- ```
95
- ┌─ PHASE 0: DISAMBIGUATION ─────────────────────┐
96
- │ Search 3-5 variations, clarify ambiguity │
97
- │ Loop back if still unclear │
98
- └───────────────────────────────────────────────┘
99
-
100
-
101
- ┌─ PHASE 1: SCOPE CONFIRMATION ─────────────────┐
102
- │ Confirm understanding of endpoint purpose │
103
- └───────────────────────────────────────────────┘
104
-
105
-
106
- ┌─ PHASE 2: INITIAL RESEARCH ───────────────────┐
107
- │ 2-3 targeted searches (Context7, WebSearch) │
108
- │ Present summary, ask to proceed or search more│
109
- │ Loop back if user wants more │
110
- └───────────────────────────────────────────────┘
111
-
112
-
113
- ┌─ PHASE 3: INTERVIEW ──────────────────────────┐
114
- │ Questions GENERATED from research findings │
115
- │ - Enum params → multi-select │
116
- │ - Continuous ranges → test strategy │
117
- │ - Boolean params → enable/disable │
118
- └───────────────────────────────────────────────┘
119
-
120
-
121
- ┌─ PHASE 4: DEEP RESEARCH (Adaptive) ───────────┐
122
- │ PROPOSE searches based on interview answers │
123
- │ User approves/modifies/skips │
124
- │ Not shotgun - targeted to selections │
125
- └───────────────────────────────────────────────┘
126
-
127
-
128
- ┌─ PHASE 5: SCHEMA DESIGN ──────────────────────┐
129
- │ Create Zod schema from research + interview │
130
- │ Loop back if schema wrong │
131
- └───────────────────────────────────────────────┘
132
-
133
-
134
- ┌─ PHASE 6: ENVIRONMENT CHECK ──────────────────┐
135
- │ Verify API keys exist │
136
- │ Loop back if keys missing │
137
- └───────────────────────────────────────────────┘
138
-
139
-
140
- ┌─ PHASE 7: TDD RED ────────────────────────────┐
141
- │ Write failing tests from schema + decisions │
142
- │ Test matrix derived from interview │
143
- └───────────────────────────────────────────────┘
144
-
145
-
146
- ┌─ PHASE 8: TDD GREEN ──────────────────────────┐
147
- │ Minimal implementation to pass tests │
148
- └───────────────────────────────────────────────┘
149
-
150
-
151
- ┌─ MANIFEST GENERATION (Automatic) ─────────────┐
152
- │ verify-after-green.py hook triggers: │
153
- │ • generate-test-manifest.ts │
154
- │ • extract-parameters.ts │
155
- │ • collect-test-results.ts │
156
- │ │
157
- │ Output: api-tests-manifest.json (NO LLM) │
158
- └───────────────────────────────────────────────┘
159
-
160
-
161
- ┌─ PHASE 9: VERIFY (NEW) ───────────────────────┐
162
- │ Re-read original documentation │
163
- │ Compare implementation to docs feature-by- │
164
- │ feature. Find gaps. │
165
- │ │
166
- │ Fix gaps? → Loop back to Phase 7 │
167
- │ Skip (intentional)? → Document as omissions │
168
- └───────────────────────────────────────────────┘
169
-
170
-
171
- ┌─ PHASE 10: TDD REFACTOR ──────────────────────┐
172
- │ Clean up code, tests still pass │
173
- └───────────────────────────────────────────────┘
174
-
175
-
176
- ┌─ PHASE 11: DOCUMENTATION ─────────────────────┐
177
- │ Update manifests, OpenAPI, cache research │
178
- └───────────────────────────────────────────────┘
179
-
180
-
181
- ┌─ PHASE 12: COMPLETION ────────────────────────┐
182
- │ All phases verified, commit │
183
- └───────────────────────────────────────────────┘
184
- ```
185
-
186
- ## Key Features
187
-
188
- ### Adaptive Research (Not Shotgun)
189
-
190
- **OLD:** Run 20 searches blindly
191
-
192
- **NEW:**
193
- 1. Run 2-3 initial searches
194
- 2. Present summary
195
- 3. PROPOSE additional searches based on context
196
- 4. User approves/modifies
197
- 5. Execute only approved searches
198
-
199
- ### Questions FROM Research
200
-
201
- **OLD:** Generic template questions
202
- ```
203
- "Which AI provider should this endpoint support?"
204
- ```
205
-
206
- **NEW:** Questions generated from discovered parameters
207
- ```
208
- Based on research, Brandfetch API has 7 parameters:
209
-
210
- 1. DOMAIN (required) - string
211
- → No question needed
212
-
213
- 2. FORMAT: ["json", "svg", "png", "raw"]
214
- Q: Which formats do you need?
215
-
216
- 3. QUALITY: 1-100 (continuous range)
217
- Q: How should we TEST this?
218
- [ ] All values (100 tests)
219
- [x] Boundary (1, 50, 100)
220
- ```
221
-
222
- ### Phase 9: Verify (Catches Memory Errors)
223
-
224
- After tests pass, automatically:
225
- 1. Re-read original documentation
226
- 2. Compare feature-by-feature
227
- 3. Report discrepancies:
228
-
229
- ```
230
- │ Feature │ In Docs │ Implemented │ Status │
231
- ├───────────────┼─────────┼─────────────┼─────────────│
232
- │ domain param │ ✓ │ ✓ │ ✅ Match │
233
- │ format opts │ 4 │ 3 │ ⚠️ Missing │
234
- │ webhook │ ✓ │ ✗ │ ℹ️ Optional │
235
-
236
- Fix gaps? [Y] → Return to Phase 7
237
- Skip? [n] → Document as omissions
238
- ```
39
+ **This package enforces a 12-phase workflow with Python hooks that BLOCK progress until each phase is complete with explicit user approval.**
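
The enforcement mechanism is worth making concrete. The sketch below is illustrative, not the shipped hook code: it assumes the standard Claude Code hook payload on stdin (`tool_name`), the `.claude/api-dev-state.json` layout shown later in this README, and an illustrative `REQUIRED_PHASE` constant; the exit-code-2 behavior it relies on is described under "What's New in v3.6.0".

```python
#!/usr/bin/env python3
"""Minimal sketch of the blocking pattern -- illustrative, not the shipped hook code."""
import json
import sys
from pathlib import Path

STATE_FILE = Path(".claude/api-dev-state.json")  # state file used throughout this README
REQUIRED_PHASE = "research_initial"              # illustrative: the phase being gated

def main() -> None:
    payload = json.load(sys.stdin)               # Claude Code passes hook input as JSON on stdin
    if payload.get("tool_name") not in ("Write", "Edit"):
        sys.exit(0)                              # only gate file-writing tools

    try:
        state = json.loads(STATE_FILE.read_text())
    except FileNotFoundError:
        sys.exit(0)                              # no active workflow, nothing to enforce

    phase = state.get("phases", {}).get(REQUIRED_PHASE, {})
    if phase.get("status") == "complete" and phase.get("phase_exit_confirmed"):
        sys.exit(0)                              # phase done and confirmed by the user: allow the write

    # Exit code 2 blocks the tool call and feeds the stderr message back to Claude.
    print(f"BLOCKED: complete phase '{REQUIRED_PHASE}' and get user confirmation first.",
          file=sys.stderr)
    sys.exit(2)

if __name__ == "__main__":
    main()
```

Registering a script like this under a PreToolUse matcher in `.claude/settings.json` (listed under templates below) is what turns "BLOCK progress" into an actual guarantee rather than a prompt instruction.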
239
40
 
240
- ### 7-Turn Re-grounding
41
+ ---
241
42
 
242
- Prevents context dilution in long sessions:
243
- - Every 7 turns, hook injects state summary
244
- - Current phase, decisions, research cache location
245
- - Ensures Claude stays aware of workflow
43
+ ## Complete Workflow Simulation
246
44
 
247
- ### Research Freshness
45
+ Here's exactly what happens when you run `/api-create brandfetch`:
248
46
 
249
- Research is cached with 7-day validity:
250
47
  ```
251
- .claude/research/
252
- ├── brandfetch/
253
- ├── 2025-12-08_initial.md
254
- └── CURRENT.md
255
- └── index.json Tracks freshness
48
+ ┌──────────────────────────────────────────────────────────────────────────────┐
49
+ │ ❯ /api-create brandfetch
50
+
51
+ ╭─────────────────────────────────────────────────────────────────────────╮
52
+ PHASE 0: DISAMBIGUATION │ │
53
+ │ │ ─────────────────────────────────────────────────────────────────────── │ │
54
+ │ │ │ │
55
+ │ │ I found multiple things matching "brandfetch": │ │
56
+ │ │ │ │
57
+ │ │ [A] Brandfetch REST API (api.brandfetch.io) │ │
58
+ │ │ [B] @brandfetch/sdk npm package │ │
59
+ │ │ [C] Brandfetch WordPress plugin │ │
60
+ │ │ │ │
61
+ │ │ Which interpretation matches your intent? │ │
62
+ │ ╰─────────────────────────────────────────────────────────────────────────╯ │
63
+ │ │
64
+ │ ┌─────────────────────────────────────────────────────────────────────────┐ │
65
+ │ │ 🔒 ENFORCED BY: enforce-disambiguation.py │ │
66
+ │ │ ├─ Blocks Write/Edit until user selects an option │ │
67
+ │ │ ├─ Requires: user_question_asked = true │ │
68
+ │ │ └─ Requires: phase_exit_confirmed = true (user said "yes") │ │
69
+ │ └─────────────────────────────────────────────────────────────────────────┘ │
70
+ │ │
71
+ │ User selects: [A] Brandfetch REST API │
72
+ │ │
73
+ └──────────────────────────────────────────────────────────────────────────────┘
256
74
  ```
257
75
 
258
- If research is >7 days old:
259
76
  ```
260
- ⚠️ Research for "brandfetch" is 15 days old.
261
- Re-research before using? [Y/n]
77
+ ┌──────────────────────────────────────────────────────────────────────────────┐
78
+ │ ╭─────────────────────────────────────────────────────────────────────────╮
79
+ │ │ PHASE 1: SCOPE CONFIRMATION │ │
80
+ │ │ ─────────────────────────────────────────────────────────────────────── │ │
81
+ │ │ │ │
82
+ │ │ I'll create: /api/v2/brandfetch │ │
83
+ │ │ │ │
84
+ │ │ Purpose: Fetch brand assets (logos, colors, fonts) from Brandfetch API │ │
85
+ │ │ │ │
86
+ │ │ Is this scope correct? [Yes / No, I need...] │ │
87
+ │ ╰─────────────────────────────────────────────────────────────────────────╯ │
88
+ │ │
89
+ │ ┌─────────────────────────────────────────────────────────────────────────┐ │
90
+ │ │ 🔒 ENFORCED BY: enforce-scope.py │ │
91
+ │ │ ├─ Blocks until user confirms scope │ │
92
+ │ │ └─ Requires: phase_exit_confirmed = true │ │
93
+ │ └─────────────────────────────────────────────────────────────────────────┘ │
94
+ │ │
95
+ │ User: "Yes, but also add a logo-only mode" │
96
+ │ │
97
+ └──────────────────────────────────────────────────────────────────────────────┘
262
98
  ```
263
99
 
264
- ## Installation
100
+ ```
101
+ ┌──────────────────────────────────────────────────────────────────────────────┐
102
+ │ ╭─────────────────────────────────────────────────────────────────────────╮ │
103
+ │ │ PHASE 2: INITIAL RESEARCH │ │
104
+ │ │ ─────────────────────────────────────────────────────────────────────── │ │
105
+ │ │ │ │
106
+ │ │ 🔍 Searching... │ │
107
+ │ │ │ │
108
+ │ │ [Context7] brandfetch → Found SDK documentation │ │
109
+ │ │ [WebSearch] "Brandfetch API documentation 2025" → Official docs │ │
110
+ │ │ [WebSearch] "Brandfetch API rate limits" → 5 req/sec │ │
111
+ │ │ │ │
112
+ │ │ ────────────────────────────────────────────────────────────────────── │ │
113
+ │ │ │ │
114
+ │ │ RESEARCH SUMMARY: │ │
115
+ │ │ • Base URL: https://api.brandfetch.io/v2 │ │
116
+ │ │ • Auth: Bearer token (BRANDFETCH_API_KEY) │ │
117
+ │ │ • Rate limit: 5 requests/second │ │
118
+ │ │ • Returns: logos, colors, fonts, images │ │
119
+ │ │ │ │
120
+ │ │ Proceed to interview? [Yes / Search more first] │ │
121
+ │ ╰─────────────────────────────────────────────────────────────────────────╯ │
122
+ │ │
123
+ │ ┌─────────────────────────────────────────────────────────────────────────┐ │
124
+ │ │ 🔒 ENFORCED BY: enforce-research.py │ │
125
+ │ │ ├─ Blocks Write/Edit until research complete │ │
126
+ │ │ ├─ Requires: sources.length >= 2 │ │
127
+ │ │ └─ Requires: phase_exit_confirmed = true │ │
128
+ │ │ │ │
129
+ │ │ 📝 TRACKED BY: track-tool-use.py │ │
130
+ │ │ └─ Logs all WebSearch/Context7 calls to state │ │
131
+ │ └─────────────────────────────────────────────────────────────────────────┘ │
132
+ │ │
133
+ │ User: "Yes, proceed" │
134
+ │ │
135
+ └──────────────────────────────────────────────────────────────────────────────┘
136
+ ```
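
The "TRACKED BY" note above is easy to picture as code. Here is a minimal sketch of what a PostToolUse tracker might do, assuming the `sources` and `turn_count` fields from the state file shown later in this README; the tool names in `RESEARCH_TOOLS` are assumptions for illustration, not taken from the package.

```python
#!/usr/bin/env python3
"""Sketch of research tracking in a PostToolUse hook -- illustrative only."""
import json
import sys
from pathlib import Path

STATE_FILE = Path(".claude/api-dev-state.json")
RESEARCH_TOOLS = {"WebSearch", "mcp__context7__get-library-docs"}  # assumed tool names

def main() -> None:
    payload = json.load(sys.stdin)          # PostToolUse hooks also receive JSON on stdin
    tool = payload.get("tool_name", "")
    if tool not in RESEARCH_TOOLS:
        sys.exit(0)

    state = json.loads(STATE_FILE.read_text())
    phase = state.setdefault("phases", {}).setdefault("research_initial", {})
    sources = phase.setdefault("sources", [])

    # Record a short label such as "websearch:brandfetch api rate limits".
    query = payload.get("tool_input", {}).get("query", "")
    label = f"{tool.lower()}:{query}".strip(":")
    if label not in sources:
        sources.append(label)

    state["turn_count"] = state.get("turn_count", 0) + 1   # each tracked call counts as a turn
    STATE_FILE.write_text(json.dumps(state, indent=2))
    sys.exit(0)

if __name__ == "__main__":
    main()
```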
265
137
 
266
- ```bash
267
- cd your-project
268
- npx @hustle-together/api-dev-tools --scope=project
138
+ ```
139
+ ┌──────────────────────────────────────────────────────────────────────────────┐
140
+ │ ╭─────────────────────────────────────────────────────────────────────────╮
141
+ │ │ PHASE 3: STRUCTURED INTERVIEW │ │
142
+ │ │ ─────────────────────────────────────────────────────────────────────── │ │
143
+ │ │ │ │
144
+ │ │ Based on research, I have 5 questions: │ │
145
+ │ │ │ │
146
+ │ │ Q1: Which response format do you need? │ │
147
+ │ │ [ ] JSON only │ │
148
+ │ │ [x] JSON with asset URLs │ │
149
+ │ │ [ ] Include raw base64 assets │ │
150
+ │ │ │ │
151
+ │ │ Q2: Caching strategy? │ │
152
+ │ │ [ ] No caching │ │
153
+ │ │ [ ] Short (5 minutes) │ │
154
+ │ │ [x] Long (24 hours) - brand data rarely changes │ │
155
+ │ │ │ │
156
+ │ │ Q3: Error handling approach? │ │
157
+ │ │ [ ] Throw errors (caller handles) │ │
158
+ │ │ [x] Return error objects { success: false, error: {...} } │ │
159
+ │ │ │ │
160
+ │ │ Q4: Rate limit handling? │ │
161
+ │ │ [ ] Client handles retry │ │
162
+ │ │ [ ] Server-side retry with backoff │ │
163
+ │ │ [x] Expose rate limit headers (X-RateLimit-*) │ │
164
+ │ │ │ │
165
+ │ │ Q5: Which brand assets to include? │ │
166
+ │ │ [x] Logos │ │
167
+ │ │ [x] Colors │ │
168
+ │ │ [ ] Fonts │ │
169
+ │ │ [ ] Images │ │
170
+ │ │ │ │
171
+ │ │ Interview complete? [Yes, these answers are final / Modify answers] │ │
172
+ │ ╰─────────────────────────────────────────────────────────────────────────╯ │
173
+ │ │
174
+ │ ┌─────────────────────────────────────────────────────────────────────────┐ │
175
+ │ │ 🔒 ENFORCED BY: enforce-interview.py │ │
176
+ │ │ ├─ Blocks until structured_question_count >= 3 │ │
177
+ │ │ ├─ Uses AskUserQuestion tool (Claude's built-in dialog) │ │
178
+ │ │ └─ Requires: phase_exit_confirmed = true │ │
179
+ │ │ │ │
180
+ │ │ 💡 LLM RESPONSIBILITY: │ │
181
+ │ │ └─ Generate questions FROM research (not generic templates) │ │
182
+ │ └─────────────────────────────────────────────────────────────────────────┘ │
183
+ │ │
184
+ │ User confirms selections │
185
+ │ │
186
+ └──────────────────────────────────────────────────────────────────────────────┘
269
187
  ```
270
188
 
271
- ### Team-Wide Auto-Installation
189
+ ```
190
+ ┌──────────────────────────────────────────────────────────────────────────────┐
191
+ │ ╭─────────────────────────────────────────────────────────────────────────╮ │
192
+ │ │ PHASE 4: DEEP RESEARCH (Adaptive) │ │
193
+ │ │ ─────────────────────────────────────────────────────────────────────── │ │
194
+ │ │ │ │
195
+ │ │ Based on your selections, I want to research: │ │
196
+ │ │ │ │
197
+ │ │ [x] "Brandfetch error response format" - for error objects │ │
198
+ │ │ [x] "Brandfetch rate limit headers" - for X-RateLimit-* headers │ │
199
+ │ │ [ ] "Brandfetch font API" - SKIPPED (you don't need fonts) │ │
200
+ │ │ │ │
201
+ │ │ Approve these searches? [Yes / Add more / Skip all] │ │
202
+ │ ╰─────────────────────────────────────────────────────────────────────────╯ │
203
+ │ │
204
+ │ ┌─────────────────────────────────────────────────────────────────────────┐ │
205
+ │ │ 🔒 ENFORCED BY: enforce-deep-research.py │ │
206
+ │ │ ├─ Blocks if approved_searches not executed │ │
207
+ │ │ └─ Requires: phase_exit_confirmed = true │ │
208
+ │ │ │ │
209
+ │ │ 💡 LLM RESPONSIBILITY: │ │
210
+ │ │ └─ Propose searches based on interview (not shotgun approach) │ │
211
+ │ └─────────────────────────────────────────────────────────────────────────┘ │
212
+ │ │
213
+ │ User: "Yes, also add authentication methods" │
214
+ │ │
215
+ └──────────────────────────────────────────────────────────────────────────────┘
216
+ ```
272
217
 
273
- Add to `package.json`:
274
- ```json
275
- {
276
- "scripts": {
277
- "postinstall": "npx @hustle-together/api-dev-tools --scope=project"
278
- }
279
- }
218
+ ```
219
+ ┌──────────────────────────────────────────────────────────────────────────────┐
220
+ │ ╭─────────────────────────────────────────────────────────────────────────╮ │
221
+ │ PHASE 5: SCHEMA CREATION │ │
222
+ │ │ ───────────────────────────────────────────────────────────────────────
223
+ │ │ │
224
+ │ │ Creating Zod schema from research + interview: │ │
225
+ │ │ │ │
226
+ │ │ // Request Schema │ │
227
+ │ │ const BrandfetchRequest = z.object({ │ │
228
+ │ │ domain: z.string().min(1), │ │
229
+ │ │ mode: z.enum(["full", "logo-only"]).default("full"), │ │
230
+ │ │ include: z.object({ │ │
231
+ │ │ logos: z.boolean().default(true), │ │
232
+ │ │ colors: z.boolean().default(true), │ │
233
+ │ │ }).optional(), │ │
234
+ │ │ }); │ │
235
+ │ │ │ │
236
+ │ │ // Response Schema (success) │ │
237
+ │ │ const BrandfetchResponse = z.object({ │ │
238
+ │ │ success: z.literal(true), │ │
239
+ │ │ data: z.object({ │ │
240
+ │ │ logos: z.array(LogoSchema), │ │
241
+ │ │ colors: z.array(ColorSchema), │ │
242
+ │ │ }), │ │
243
+ │ │ cached: z.boolean(), │ │
244
+ │ │ }); │ │
245
+ │ │ │ │
246
+ │ │ Schema matches your interview answers? [Yes / Modify] │ │
247
+ │ ╰─────────────────────────────────────────────────────────────────────────╯ │
248
+ │ │
249
+ │ ┌─────────────────────────────────────────────────────────────────────────┐ │
250
+ │ │ 🔒 ENFORCED BY: enforce-schema.py │ │
251
+ │ │ ├─ Blocks if schema not shown to user │ │
252
+ │ │ └─ Requires: phase_exit_confirmed = true │ │
253
+ │ └─────────────────────────────────────────────────────────────────────────┘ │
254
+ │ │
255
+ │ User: "Yes, looks good" │
256
+ │ │
257
+ └──────────────────────────────────────────────────────────────────────────────┘
280
258
  ```
281
259
 
282
- ## Command Reference
260
+ ```
261
+ ┌──────────────────────────────────────────────────────────────────────────────┐
262
+ │ ╭─────────────────────────────────────────────────────────────────────────╮ │
263
+ │ │ PHASE 6: ENVIRONMENT CHECK │ │
264
+ │ │ ─────────────────────────────────────────────────────────────────────── │ │
265
+ │ │ │ │
266
+ │ │ Checking required API keys: │ │
267
+ │ │ │ │
268
+ │ │ ✅ BRANDFETCH_API_KEY found in .env │ │
269
+ │ │ ✅ NODE_ENV = development │ │
270
+ │ │ │ │
271
+ │ │ Ready to begin TDD? [Yes / Need to set up keys first] │ │
272
+ │ ╰─────────────────────────────────────────────────────────────────────────╯ │
273
+ │ │
274
+ │ ┌─────────────────────────────────────────────────────────────────────────┐ │
275
+ │ │ 🔒 ENFORCED BY: enforce-environment.py │ │
276
+ │ │ ├─ Checks env vars based on endpoint name + interview │ │
277
+ │ │ └─ Requires: phase_exit_confirmed = true │ │
278
+ │ └─────────────────────────────────────────────────────────────────────────┘ │
279
+ │ │
280
+ │ User: "Yes, ready" │
281
+ │ │
282
+ └──────────────────────────────────────────────────────────────────────────────┘
283
+ ```
283
284
 
284
- ### `/api-create [endpoint-name]`
285
+ ```
286
+ ┌──────────────────────────────────────────────────────────────────────────────┐
287
+ │ ╭─────────────────────────────────────────────────────────────────────────╮ │
288
+ │ │ PHASE 7: TDD RED (Write Failing Tests) │ │
289
+ │ │ ─────────────────────────────────────────────────────────────────────── │ │
290
+ │ │ │ │
291
+ │ │ TEST MATRIX (from interview decisions): │ │
292
+ │ │ │ │
293
+ │ │ ✅ Success Scenarios: │ │
294
+ │ │ • GET with valid domain → 200 + brand data │ │
295
+ │ │ • mode=logo-only → 200 + logos only │ │
296
+ │ │ • Cache hit → 200 + cached: true │ │
297
+ │ │ │ │
298
+ │ │ ✅ Error Scenarios (your choice: return objects): │ │
299
+ │ │ • Invalid domain → 400 + { success: false, error: {...} } │ │
300
+ │ │ • Missing API key → 401 + error object │ │
301
+ │ │ • Not found → 404 + error object │ │
302
+ │ │ • Rate limited → 429 + X-RateLimit-* headers │ │
303
+ │ │ │ │
304
+ │ │ ✅ Edge Cases: │ │
305
+ │ │ • Empty colors array → 200 + empty array (not error) │ │
306
+ │ │ • Timeout → 504 + error object │ │
307
+ │ │ │ │
308
+ │ │ Total: 9 test scenarios │ │
309
+ │ │ │ │
310
+ │ │ Test plan looks good? [Yes, write tests / Add more tests] │ │
311
+ │ ╰─────────────────────────────────────────────────────────────────────────╯ │
312
+ │ │
313
+ │ ┌─────────────────────────────────────────────────────────────────────────┐ │
314
+ │ │ 🔒 ENFORCED BY: enforce-tdd-red.py │ │
315
+ │ │ ├─ Blocks route.ts if no .test.ts exists │ │
316
+ │ │ ├─ Requires: test matrix shown to user │ │
317
+ │ │ └─ Requires: phase_exit_confirmed = true │ │
318
+ │ │ │ │
319
+ │ │ 💡 LLM RESPONSIBILITY: │ │
320
+ │ │ └─ Derive test scenarios from interview decisions │ │
321
+ │ └─────────────────────────────────────────────────────────────────────────┘ │
322
+ │ │
323
+ │ User: "Yes, write them" │
324
+ │ │
325
+ │ 📝 Creating: src/app/api/v2/brandfetch/__tests__/route.test.ts │
326
+ │ ⏳ Running tests... 0/9 passing (expected - RED phase) │
327
+ │ │
328
+ └──────────────────────────────────────────────────────────────────────────────┘
329
+ ```
285
330
 
286
- Complete 12-phase workflow. See [commands/api-create.md](commands/api-create.md).
331
+ ```
332
+ ┌──────────────────────────────────────────────────────────────────────────────┐
333
+ │ ╭─────────────────────────────────────────────────────────────────────────╮ │
334
+ │ │ PHASE 8: TDD GREEN (Make Tests Pass) │ │
335
+ │ │ ─────────────────────────────────────────────────────────────────────── │ │
336
+ │ │ │ │
337
+ │ │ 📝 Creating: src/app/api/v2/brandfetch/route.ts │ │
338
+ │ │ │ │
339
+ │ │ ⏳ Running tests... │ │
340
+ │ │ │ │
341
+ │ │ ✅ GET with valid domain PASS │ │
342
+ │ │ ✅ mode=logo-only PASS │ │
343
+ │ │ ✅ Cache hit PASS │ │
344
+ │ │ ✅ Invalid domain → 400 PASS │ │
345
+ │ │ ✅ Missing API key → 401 PASS │ │
346
+ │ │ ✅ Not found → 404 PASS │ │
347
+ │ │ ✅ Rate limited → 429 PASS │ │
348
+ │ │ ✅ Empty colors array PASS │ │
349
+ │ │ ✅ Timeout → 504 PASS │ │
350
+ │ │ │ │
351
+ │ │ Tests: 9/9 passing │ │
352
+ │ ╰─────────────────────────────────────────────────────────────────────────╯ │
353
+ │ │
354
+ │ ┌─────────────────────────────────────────────────────────────────────────┐ │
355
+ │ │ 🔒 ENFORCED BY: verify-after-green.py (PostToolUse on Bash) │ │
356
+ │ │ ├─ Detects "pnpm test" or "vitest" in command │ │
357
+ │ │ ├─ Parses output for pass/fail │ │
358
+ │ │ └─ Auto-triggers Phase 9 when all tests pass │ │
359
+ │ │ │ │
360
+ │ │ 💡 LLM RESPONSIBILITY: │ │
361
+ │ │ └─ Write minimal implementation to pass tests │ │
362
+ │ └─────────────────────────────────────────────────────────────────────────┘ │
363
+ │ │
364
+ └──────────────────────────────────────────────────────────────────────────────┘
365
+ ```
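
A rough sketch of the auto-detection described in that box follows. It assumes a PostToolUse payload with `tool_name`, `tool_input`, and a `tool_response` field carrying the command output (field name assumed); it is not the shipped `verify-after-green.py`.

```python
#!/usr/bin/env python3
"""Sketch of detecting a passing test run in a PostToolUse hook -- illustrative only."""
import json
import re
import sys

TEST_MARKERS = ("pnpm test", "vitest", "npm test")   # commands treated as test runs

def main() -> None:
    payload = json.load(sys.stdin)
    if payload.get("tool_name") != "Bash":
        sys.exit(0)

    command = payload.get("tool_input", {}).get("command", "")
    if not any(marker in command for marker in TEST_MARKERS):
        sys.exit(0)

    # Field name assumed; the hook needs whatever output Claude Code hands back for the Bash call.
    output = str(payload.get("tool_response", ""))

    # Vitest summaries look like "Tests  9 passed (9)"; treat "passed with nothing failed" as green.
    failed = re.search(r"(\d+)\s+failed", output)
    passed = re.search(r"(\d+)\s+passed", output)
    if passed and not failed:
        # Surface a nudge toward Phase 9; how the real hook delivers it to Claude is a detail
        # of the shipped implementation, not shown here.
        print("All tests passing. Begin Phase 9: re-read the original docs and "
              "compare the implementation feature-by-feature.")
    sys.exit(0)

if __name__ == "__main__":
    main()
```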
287
366
 
288
- ### `/api-interview [endpoint-name]`
367
+ ```
368
+ ┌──────────────────────────────────────────────────────────────────────────────┐
369
+ │ ╭─────────────────────────────────────────────────────────────────────────╮ │
370
+ │ │ PHASE 9: VERIFY (Re-Research After Green) │ │
371
+ │ │ ─────────────────────────────────────────────────────────────────────── │ │
372
+ │ │ │ │
373
+ │ │ ⚠️ AUTO-TRIGGERED: Tests passed. Now verifying against docs... │ │
374
+ │ │ │ │
375
+ │ │ Re-reading Brandfetch API documentation... │ │
376
+ │ │ │ │
377
+ │ │ ┌────────────────────────────────────────────────────────────────────┐ │ │
378
+ │ │ │ Feature │ In Docs │ Implemented │ Status │ │ │
379
+ │ │ ├────────────────┼─────────┼─────────────┼──────────────────────────│ │ │
380
+ │ │ │ domain param │ ✓ │ ✓ │ ✅ Match │ │ │
381
+ │ │ │ logo formats │ 4 │ 4 │ ✅ Match │ │ │
382
+ │ │ │ colors │ ✓ │ ✓ │ ✅ Match │ │ │
383
+ │ │ │ include_fonts │ ✓ │ ✗ │ ⚠️ GAP FOUND │ │ │
384
+ │ │ │ webhook │ ✓ │ ✗ │ ℹ️ Optional │ │ │
385
+ │ │ └────────────────────────────────────────────────────────────────────┘ │ │
386
+ │ │ │ │
387
+ │ │ GAPS FOUND: 1 │ │
388
+ │ │ • include_fonts parameter (documented but not implemented) │ │
389
+ │ │ │ │
390
+ │ │ Fix gaps? [Yes, loop back to Phase 7 / Skip - document as omission] │ │
391
+ │ ╰─────────────────────────────────────────────────────────────────────────╯ │
392
+ │ │
393
+ │ ┌─────────────────────────────────────────────────────────────────────────┐ │
394
+ │ │ 🔒 ENFORCED BY: enforce-verify.py │ │
395
+ │ │ ├─ Blocks refactoring until verification complete │ │
396
+ │ │ ├─ Requires: gaps_fixed OR documented as omissions │ │
397
+ │ │ └─ Requires: phase_exit_confirmed = true │ │
398
+ │ │ │ │
399
+ │ │ 💡 LLM RESPONSIBILITY: │ │
400
+ │ │ └─ Re-read docs and compare feature-by-feature │ │
401
+ │ └─────────────────────────────────────────────────────────────────────────┘ │
402
+ │ │
403
+ │ User: "Skip - I don't need fonts" │
404
+ │ │
405
+ │ 📝 Documenting as intentional omission: include_fonts │
406
+ │ │
407
+ └──────────────────────────────────────────────────────────────────────────────┘
408
+ ```
289
409
 
290
- Dynamic interview with questions FROM research. See [commands/api-interview.md](commands/api-interview.md).
410
+ ```
411
+ ┌──────────────────────────────────────────────────────────────────────────────┐
412
+ │ ╭─────────────────────────────────────────────────────────────────────────╮ │
413
+ │ │ PHASE 10: TDD REFACTOR │ │
414
+ │ │ ─────────────────────────────────────────────────────────────────────── │ │
415
+ │ │ │ │
416
+ │ │ Cleaning up implementation while keeping tests green... │ │
417
+ │ │ │ │
418
+ │ │ • Extracted rate limit logic to helper function │ │
419
+ │ │ • Added JSDoc comments │ │
420
+ │ │ • Removed dead code │ │
421
+ │ │ │ │
422
+ │ │ Tests: 9/9 still passing │ │
423
+ │ │ │ │
424
+ │ │ Refactoring complete? [Yes / Continue refactoring] │ │
425
+ │ ╰─────────────────────────────────────────────────────────────────────────╯ │
426
+ │ │
427
+ │ ┌─────────────────────────────────────────────────────────────────────────┐ │
428
+ │ │ 🔒 ENFORCED BY: enforce-refactor.py │ │
429
+ │ │ ├─ Only allows after verify phase complete │ │
430
+ │ │ └─ Requires: phase_exit_confirmed = true │ │
431
+ │ │ │ │
432
+ │ │ 💡 LLM RESPONSIBILITY: │ │
433
+ │ │ └─ Improve code without changing behavior │ │
434
+ │ └─────────────────────────────────────────────────────────────────────────┘ │
435
+ │ │
436
+ │ User: "Yes" │
437
+ │ │
438
+ └──────────────────────────────────────────────────────────────────────────────┘
439
+ ```
291
440
 
292
- ### `/api-research [library-name]`
441
+ ```
442
+ ┌──────────────────────────────────────────────────────────────────────────────┐
443
+ │ ╭─────────────────────────────────────────────────────────────────────────╮ │
444
+ │ │ PHASE 11: DOCUMENTATION │ │
445
+ │ │ ─────────────────────────────────────────────────────────────────────── │ │
446
+ │ │ │ │
447
+ │ │ DOCUMENTATION CHECKLIST: │ │
448
+ │ │ │ │
449
+ │ │ ✅ api-tests-manifest.json │ │
450
+ │ │ • Added /api/v2/brandfetch entry │ │
451
+ │ │ • 9 test scenarios documented │ │
452
+ │ │ • Request/response schemas included │ │
453
+ │ │ │ │
454
+ │ │ ✅ Research Cache │ │
455
+ │ │ • .claude/research/brandfetch/sources.json │ │
456
+ │ │ • .claude/research/brandfetch/interview.json │ │
457
+ │ │ │ │
458
+ │ │ ⏭️ OpenAPI Spec (skipped - internal API) │ │
459
+ │ │ │ │
460
+ │ │ Documentation complete? [Yes / Need to add something] │ │
461
+ │ ╰─────────────────────────────────────────────────────────────────────────╯ │
462
+ │ │
463
+ │ ┌─────────────────────────────────────────────────────────────────────────┐ │
464
+ │ │ 🔒 ENFORCED BY: enforce-documentation.py │ │
465
+ │ │ ├─ Blocks completion until documentation updated │ │
466
+ │ │ ├─ Checks: manifest_updated OR research_cached │ │
467
+ │ │ └─ Requires: phase_exit_confirmed = true │ │
468
+ │ │ │ │
469
+ │ │ 🔒 ENFORCED BY: api-workflow-check.py (Stop hook) │ │
470
+ │ │ └─ Blocks "stop" if required phases not complete │ │
471
+ │ └─────────────────────────────────────────────────────────────────────────┘ │
472
+ │ │
473
+ │ User: "Yes, all done" │
474
+ │ │
475
+ └──────────────────────────────────────────────────────────────────────────────┘
476
+ ```
293
477
 
294
- Adaptive propose-approve research. See [commands/api-research.md](commands/api-research.md).
478
+ ```
479
+ ┌──────────────────────────────────────────────────────────────────────────────┐
480
+ │ ╭─────────────────────────────────────────────────────────────────────────╮ │
481
+ │ │ ✅ WORKFLOW COMPLETE │ │
482
+ │ │ ─────────────────────────────────────────────────────────────────────── │ │
483
+ │ │ │ │
484
+ │ │ /api/v2/brandfetch created successfully! │ │
485
+ │ │ │ │
486
+ │ │ Files created: │ │
487
+ │ │ • src/app/api/v2/brandfetch/route.ts │ │
488
+ │ │ • src/app/api/v2/brandfetch/__tests__/route.test.ts │ │
489
+ │ │ • src/lib/schemas/brandfetch.ts │ │
490
+ │ │ │ │
491
+ │ │ Tests: 9/9 passing │ │
492
+ │ │ Coverage: 100% │ │
493
+ │ │ │ │
494
+ │ │ Interview decisions preserved in: │ │
495
+ │ │ • .claude/api-dev-state.json │ │
496
+ │ │ • .claude/research/brandfetch/ │ │
497
+ │ │ │ │
498
+ │ │ Intentional omissions documented: │ │
499
+ │ │ • include_fonts parameter │ │
500
+ │ │ • webhook support │ │
501
+ │ ╰─────────────────────────────────────────────────────────────────────────╯ │
502
+ │ │
503
+ └──────────────────────────────────────────────────────────────────────────────┘
504
+ ```
295
505
 
296
- ### `/api-verify [endpoint-name]`
506
+ ---
297
507
 
298
- Manual Phase 9 verification. See [commands/api-verify.md](commands/api-verify.md).
508
+ ## Enforcement vs LLM Responsibility
509
+
510
+ ### 🔒 ENFORCED (Hooks block progress if not complete)
511
+
512
+ | Phase | Hook File | What It Enforces |
513
+ |-------|-----------|------------------|
514
+ | 0 | `enforce-disambiguation.py` | User must select from options |
515
+ | 1 | `enforce-scope.py` | User must confirm scope |
516
+ | 2 | `enforce-research.py` | Minimum 2 sources required |
517
+ | 3 | `enforce-interview.py` | Minimum 3 structured questions |
518
+ | 4 | `enforce-deep-research.py` | Approved searches must execute |
519
+ | 5 | `enforce-schema.py` | Schema must be shown to user |
520
+ | 6 | `enforce-environment.py` | API keys must be verified |
521
+ | 7 | `enforce-tdd-red.py` | Test file must exist before route |
522
+ | 8 | `verify-after-green.py` | Tests must pass (auto-detect) |
523
+ | 9 | `enforce-verify.py` | Gaps must be fixed or documented |
524
+ | 10 | `enforce-refactor.py` | Verify phase must be complete |
525
+ | 11 | `enforce-documentation.py` | Docs must be updated |
526
+
527
+ ### 💡 LLM RESPONSIBILITY (Not enforced, but guided)
528
+
529
+ | Task | Guidance |
530
+ |------|----------|
531
+ | Generate disambiguation options | Based on search results |
532
+ | Create interview questions | FROM research findings |
533
+ | Propose deep research searches | Based on interview answers |
534
+ | Write test scenarios | Derived from interview decisions |
535
+ | Compare docs to implementation | Feature-by-feature comparison |
536
+ | Refactor code | Without changing behavior |
537
+
538
+ ### 🔄 AUTOMATIC (No user action required)
539
+
540
+ | Hook | Trigger | Action |
541
+ |------|---------|--------|
542
+ | `session-startup.py` | Session start | Inject state context |
543
+ | `track-tool-use.py` | Any tool use | Log activity, count turns |
544
+ | `periodic-reground.py` | Every 7 turns | Re-inject context summary |
545
+ | `verify-after-green.py` | Tests pass | Trigger Phase 9 |
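
The 7-turn re-grounding row is the least obvious of these, so here is a minimal sketch of how it could work against the `turn_count` and `reground_history` fields shown in the state file section below. It is illustrative only, not the shipped `periodic-reground.py`.

```python
#!/usr/bin/env python3
"""Sketch of 7-turn re-grounding -- illustrative only."""
import json
import sys
from pathlib import Path

STATE_FILE = Path(".claude/api-dev-state.json")
REGROUND_EVERY = 7

def current_phase(state: dict) -> str:
    """Return the first phase that is not complete, or 'documentation' if all are."""
    for name, phase in state.get("phases", {}).items():
        if phase.get("status") != "complete":
            return name
    return "documentation"

def main() -> None:
    json.load(sys.stdin)                       # payload unused; the hook keys off the turn counter
    state = json.loads(STATE_FILE.read_text())
    turns = state.get("turn_count", 0)
    if turns == 0 or turns % REGROUND_EVERY:
        sys.exit(0)

    phase = current_phase(state)
    state.setdefault("reground_history", []).append({"turn": turns, "phase": phase})
    STATE_FILE.write_text(json.dumps(state, indent=2))

    # Re-inject a short summary so the workflow survives long sessions.
    print(f"RE-GROUNDING (turn {turns}): current phase is '{phase}'. "
          f"Decisions and research live under .claude/research/ and {STATE_FILE}.")
    sys.exit(0)

if __name__ == "__main__":
    main()
```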
299
546
 
300
- ### `/api-env [endpoint-name]`
547
+ ---
301
548
 
302
- Environment and API key check. See [commands/api-env.md](commands/api-env.md).
549
+ ## File Structure
303
550
 
304
- ### `/api-status [endpoint-name]`
551
+ ```
552
+ @hustle-together/api-dev-tools v3.6.0
553
+
554
+ ├── bin/
555
+ │ └── cli.js # NPX installer (verifies 18 hooks)
556
+
557
+ ├── commands/ # 24 slash commands
558
+ │ ├── api-create.md # Main 12-phase workflow
559
+ │ ├── api-interview.md # Structured interview
560
+ │ ├── api-research.md # Adaptive research
561
+ │ ├── api-verify.md # Manual verification
562
+ │ ├── api-env.md # Environment check
563
+ │ ├── api-status.md # Progress tracking
564
+ │ ├── red.md, green.md, refactor.md # TDD commands
565
+ │ ├── cycle.md # Full TDD cycle
566
+ │ ├── commit.md, pr.md # Git operations
567
+ │ └── ... (13 more)
568
+
569
+ ├── hooks/ # 18 Python enforcement hooks
570
+ │ │
571
+ │ │ # Session lifecycle
572
+ │ ├── session-startup.py # SessionStart → inject context
573
+ │ │
574
+ │ │ # User prompt processing
575
+ │ ├── enforce-external-research.py # UserPromptSubmit → detect API terms
576
+ │ │
577
+ │ │ # PreToolUse (Write/Edit) - BLOCKING
578
+ │ ├── enforce-disambiguation.py # Phase 0
579
+ │ ├── enforce-scope.py # Phase 1
580
+ │ ├── enforce-research.py # Phase 2
581
+ │ ├── enforce-interview.py # Phase 3
582
+ │ ├── enforce-deep-research.py # Phase 4
583
+ │ ├── enforce-schema.py # Phase 5
584
+ │ ├── enforce-environment.py # Phase 6
585
+ │ ├── enforce-tdd-red.py # Phase 7
586
+ │ ├── verify-implementation.py # Phase 8 helper
587
+ │ ├── enforce-verify.py # Phase 9
588
+ │ ├── enforce-refactor.py # Phase 10
589
+ │ ├── enforce-documentation.py # Phase 11
590
+ │ │
591
+ │ │ # PostToolUse - TRACKING
592
+ │ ├── track-tool-use.py # Log all tool usage
593
+ │ ├── periodic-reground.py # Re-inject context every 7 turns
594
+ │ ├── verify-after-green.py # Trigger Phase 9 after tests pass
595
+ │ │
596
+ │ │ # Stop - BLOCKING
597
+ │ └── api-workflow-check.py # Block completion if incomplete
598
+
599
+ ├── templates/
600
+ │ ├── api-dev-state.json # State file (12 phases + phase_exit_confirmed)
601
+ │ ├── settings.json # Hook registrations
602
+ │ ├── research-index.json # Research freshness tracking
603
+ │ └── CLAUDE-SECTION.md # CLAUDE.md injection
604
+
605
+ ├── scripts/
606
+ │ ├── generate-test-manifest.ts # Parse tests → manifest (NO LLM)
607
+ │ ├── extract-parameters.ts # Extract Zod params
608
+ │ └── collect-test-results.ts # Run tests → results
609
+
610
+ └── package.json # v3.6.0
611
+ ```
305
612
 
306
- Progress tracking. See [commands/api-status.md](commands/api-status.md).
613
+ ---
307
614
 
308
- ## State File Structure (v3.0)
615
+ ## State File Structure (v3.6.0)
309
616
 
310
617
  ```json
311
618
  {
312
619
  "version": "3.0.0",
313
620
  "endpoint": "brandfetch",
314
- "turn_count": 23,
621
+ "session_id": "abc123",
622
+ "turn_count": 47,
315
623
  "phases": {
316
- "disambiguation": { "status": "complete" },
317
- "scope": { "status": "complete" },
318
- "research_initial": { "status": "complete", "sources": [...] },
624
+ "disambiguation": {
625
+ "status": "complete",
626
+ "user_question_asked": true,
627
+ "user_selected": "Brandfetch REST API",
628
+ "phase_exit_confirmed": true,
629
+ "last_question_type": "exit_confirmation"
630
+ },
631
+ "scope": {
632
+ "status": "complete",
633
+ "confirmed": true,
634
+ "phase_exit_confirmed": true
635
+ },
636
+ "research_initial": {
637
+ "status": "complete",
638
+ "sources": ["context7", "websearch:brandfetch-api", "websearch:rate-limits"],
639
+ "phase_exit_confirmed": true
640
+ },
319
641
  "interview": {
320
642
  "status": "complete",
643
+ "structured_question_count": 5,
321
644
  "decisions": {
322
- "format": ["json", "svg", "png"],
323
- "quality_testing": "boundary"
324
- }
645
+ "response_format": "json_with_urls",
646
+ "caching": "24h",
647
+ "error_handling": "return_objects",
648
+ "rate_limiting": "expose_headers",
649
+ "assets": ["logos", "colors"]
650
+ },
651
+ "phase_exit_confirmed": true
325
652
  },
326
653
  "research_deep": {
327
654
  "status": "complete",
328
- "proposed_searches": [...],
329
- "approved_searches": [...],
330
- "skipped_searches": [...]
655
+ "proposed_searches": ["error-format", "rate-headers", "auth"],
656
+ "approved_searches": ["error-format", "rate-headers", "auth"],
657
+ "executed_searches": ["error-format", "rate-headers", "auth"],
658
+ "phase_exit_confirmed": true
659
+ },
660
+ "schema_creation": {
661
+ "status": "complete",
662
+ "schema_file": "src/lib/schemas/brandfetch.ts",
663
+ "phase_exit_confirmed": true
664
+ },
665
+ "environment_check": {
666
+ "status": "complete",
667
+ "keys_found": ["BRANDFETCH_API_KEY"],
668
+ "phase_exit_confirmed": true
669
+ },
670
+ "tdd_red": {
671
+ "status": "complete",
672
+ "test_file": "src/app/api/v2/brandfetch/__tests__/route.test.ts",
673
+ "test_count": 9,
674
+ "phase_exit_confirmed": true
675
+ },
676
+ "tdd_green": {
677
+ "status": "complete",
678
+ "all_tests_passing": true
331
679
  },
332
- "schema_creation": { "status": "complete" },
333
- "environment_check": { "status": "complete" },
334
- "tdd_red": { "status": "complete", "test_count": 23 },
335
- "tdd_green": { "status": "complete" },
336
680
  "verify": {
337
681
  "status": "complete",
338
682
  "gaps_found": 2,
339
- "gaps_fixed": 2,
340
- "intentional_omissions": ["webhook support"]
683
+ "gaps_fixed": 0,
684
+ "intentional_omissions": ["include_fonts", "webhook"],
685
+ "phase_exit_confirmed": true
341
686
  },
342
- "tdd_refactor": { "status": "complete" },
343
- "documentation": { "status": "complete" }
344
- },
345
- "manifest_generation": {
346
- "last_run": "2025-12-09T10:30:00.000Z",
347
- "manifest_generated": true,
348
- "parameters_extracted": true,
349
- "test_results_collected": true,
350
- "output_files": {
351
- "manifest": "src/app/api-test/api-tests-manifest.json",
352
- "parameters": "src/app/api-test/parameter-matrix.json",
353
- "results": "src/app/api-test/test-results.json"
687
+ "tdd_refactor": {
688
+ "status": "complete",
689
+ "phase_exit_confirmed": true
690
+ },
691
+ "documentation": {
692
+ "status": "complete",
693
+ "manifest_updated": true,
694
+ "research_cached": true,
695
+ "phase_exit_confirmed": true
354
696
  }
355
697
  },
356
698
  "reground_history": [
357
699
  { "turn": 7, "phase": "interview" },
358
- { "turn": 14, "phase": "tdd_red" }
700
+ { "turn": 14, "phase": "tdd_red" },
701
+ { "turn": 21, "phase": "tdd_green" }
359
702
  ]
360
703
  }
361
704
  ```
362
705
 
363
- ## Hook Architecture
364
-
365
- ```
366
- ┌──────────────────────────────────────────────────────────┐
367
- │ SESSION START │
368
- │ → session-startup.py injects current state │
369
- └──────────────────────────────────────────────────────────┘
370
-
371
-
372
- ┌──────────────────────────────────────────────────────────┐
373
- │ USER PROMPT │
374
- │ → enforce-external-research.py detects API terms │
375
- └──────────────────────────────────────────────────────────┘
376
-
377
-
378
- ┌──────────────────────────────────────────────────────────┐
379
- │ RESEARCH TOOLS (WebSearch, Context7) │
380
- │ → track-tool-use.py logs activity, counts turns │
381
- │ → periodic-reground.py injects summary every 7 turns │
382
- └──────────────────────────────────────────────────────────┘
383
-
384
-
385
- ┌──────────────────────────────────────────────────────────┐
386
- │ WRITE/EDIT TOOLS │
387
- │ → enforce-research.py blocks if no research │
388
- │ → enforce-interview.py injects decisions │
389
- │ → verify-implementation.py checks test file exists │
390
- └──────────────────────────────────────────────────────────┘
391
-
392
-
393
- ┌──────────────────────────────────────────────────────────┐
394
- │ TEST COMMANDS (pnpm test) │
395
- │ → verify-after-green.py triggers Phase 9 after pass │
396
- └──────────────────────────────────────────────────────────┘
397
-
398
-
399
- ┌──────────────────────────────────────────────────────────┐
400
- │ STOP │
401
- │ → api-workflow-check.py blocks if phases incomplete │
402
- └──────────────────────────────────────────────────────────┘
403
- ```
404
-
405
- ## Manual Script Usage
406
-
407
- While scripts run automatically after tests pass, you can also run them manually:
706
+ ---
707
+
708
+ ## Installation
408
709
 
409
710
  ```bash
410
- # Generate manifest from test files
411
- npx tsx scripts/api-dev-tools/generate-test-manifest.ts
412
-
413
- # Extract parameters and calculate coverage
414
- npx tsx scripts/api-dev-tools/extract-parameters.ts
711
+ # Install in your project
712
+ npx @hustle-together/api-dev-tools --scope=project
415
713
 
416
- # Collect test results (runs Vitest)
417
- npx tsx scripts/api-dev-tools/collect-test-results.ts
714
+ # Team-wide auto-installation (add to package.json)
715
+ {
716
+ "scripts": {
717
+ "postinstall": "npx @hustle-together/api-dev-tools --scope=project"
718
+ }
719
+ }
418
720
  ```
419
721
 
420
- Output files are written to `src/app/api-test/`:
421
- - `api-tests-manifest.json` - Complete API documentation
422
- - `parameter-matrix.json` - Parameter coverage analysis
423
- - `test-results.json` - Latest test run results
424
-
425
- ## Requirements
722
+ ### Requirements
426
723
 
427
- - **Node.js** 14.0.0 or higher
724
+ - **Node.js** 14.0.0+
428
725
  - **Python 3** (for enforcement hooks)
429
- - **Claude Code** (CLI tool for Claude)
430
- - **Project structure** with `.claude/commands/` support
726
+ - **Claude Code** CLI tool
431
727
 
432
- ## MCP Servers (Auto-installed)
728
+ ---
433
729
 
434
- ### Context7
435
- - Live documentation from library source code
436
- - Current API parameters (not training data)
730
+ ## What's New in v3.6.0
437
731
 
438
- ### GitHub
439
- - Issue management
440
- - Pull request creation
732
+ ### Exit Code 2 for Stronger Enforcement
441
733
 
442
- Set `GITHUB_PERSONAL_ACCESS_TOKEN` for GitHub MCP.
734
+ **Problem:** JSON `permissionDecision: "deny"` blocks actions but Claude may continue normally. We needed a way to force Claude to **actively respond** to the error.
443
735
 
444
- ## Acknowledgments
736
+ **Solution:** All blocking hooks now use `sys.exit(2)` with stderr messages instead of JSON deny:
445
737
 
446
- - **[@wbern/claude-instructions](https://github.com/wbern/claude-instructions)** - TDD commands
447
- - **[Anthropic Interviewer](https://www.anthropic.com/news/anthropic-interviewer)** - Interview methodology
448
- - **[Context7](https://context7.com)** - Live documentation lookup
738
+ ```python
739
+ # Before (passive - Claude sees reason but may continue)
740
+ print(json.dumps({"permissionDecision": "deny", "reason": "..."}))
741
+ sys.exit(0)
449
742
 
450
- ## License
743
+ # After (active - forces Claude into feedback loop)
744
+ print("BLOCKED: ...", file=sys.stderr)
745
+ sys.exit(2)
746
+ ```
747
+
748
+ **Upgraded hooks:**
749
+ - `enforce-research.py` - Forces `/api-research` before implementation
750
+ - `enforce-interview.py` - Forces structured interview completion
751
+ - `api-workflow-check.py` - Forces all phases complete before stopping
752
+ - `verify-implementation.py` - Forces fix of critical mismatches
753
+
754
+ From [Anthropic's docs](https://code.claude.com/docs/en/hooks):
755
+ > "Exit code 2 creates a feedback loop directly to Claude. Claude sees your error message. **Claude adjusts. Claude tries something different.**"
756
+
757
+ ---
758
+
759
+ ## What's New in v3.5.0
760
+
761
+ ### `phase_exit_confirmed` Enforcement
451
762
 
452
- MIT License
763
+ **Problem:** Claude was calling `AskUserQuestion` but immediately self-answering without waiting for user input.
764
+
765
+ **Solution:** Every phase now requires:
766
+ 1. An "exit confirmation" question (detected by patterns like "proceed", "ready", "confirm")
767
+ 2. An affirmative user response (detected by patterns like "yes", "approve", "looks good")
768
+ 3. Only when both are satisfied does the hook set `phase_exit_confirmed = true` in state
769
+
770
+ ```python
771
+ # In track-tool-use.py
772
+ def _detect_question_type(question_text, options):
773
+ """Detects: 'exit_confirmation', 'data_collection', 'clarification'"""
774
+ exit_patterns = ["proceed", "continue", "ready to", "approve", "confirm", ...]
775
+
776
+ def _is_affirmative_response(response, options):
777
+ """Checks for: 'yes', 'proceed', 'approve', 'confirm', 'ready', ..."""
778
+ ```
779
+
780
+ ---
453
781
 
454
782
  ## Links
455
783
 
456
- - **Repository:** https://github.com/hustle-together/api-dev-tools
457
784
  - **NPM:** https://www.npmjs.com/package/@hustle-together/api-dev-tools
785
+ - **GitHub:** https://github.com/hustle-together/api-dev-tools
786
+ - **Demo:** https://hustle-together.github.io/api-dev-tools/demo/
787
+
788
+ ---
789
+
790
+ ## License
791
+
792
+ MIT
458
793
 
459
794
  ---
460
795