@hustle-together/api-dev-tools 3.5.0 → 3.6.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +1643 -589
- package/package.json +1 -1
package/README.md
CHANGED
@@ -1,558 +1,1591 @@

# API Development Tools for Claude Code v3.6.1

**Interview-driven, research-first API development with 100% phase enforcement**

```
┌──────────────────────────────────────────────┐
│ ❯ npx @hustle-together/api-dev-tools         │
│   --scope=project                            │
│                                              │
│ 🚀 Installing v3.6.1...                      │
│                                              │
│ ✅ Python 3.12.0                             │
│ 📦 24 slash commands                         │
│ 🔒 18 enforcement hooks                      │
│ ⚙️ settings.json configured                  │
│ 📊 api-dev-state.json created                │
│ 📚 research cache ready                      │
│ 🔌 MCP: context7, github                     │
│                                              │
│ ═════════════════════════════════════        │
│ 🎉 Installation complete!                    │
│ ═════════════════════════════════════        │
│                                              │
│ Quick Start: /api-create my-endpoint         │
└──────────────────────────────────────────────┘
```

---

## Why This Exists

Building high-quality APIs with AI assistance requires consistency and rigor. We created this tool to solve a fundamental problem: **LLMs are powerful but unreliable without enforcement.**

This package was built to enable teams to produce higher-quality, more consistent API implementations by enforcing a proven workflow that prevents common mistakes **before** they become production issues.

### The Core Problem

When developers use Claude (or any LLM) to build APIs, five predictable failure modes emerge:

1. **Outdated Documentation** - LLMs use training data from months or years ago. APIs change constantly - endpoints get deprecated, parameters are renamed, authentication methods evolve. Building from stale knowledge means broken code.

2. **Memory-Based Implementation** - Even after doing research, Claude often implements from memory by the time it gets to Phase 8 (writing code). It forgets the specific parameter names, error codes, and edge cases it discovered 20 messages ago.

3. **Self-Answering Questions** - Claude asks "Which format do you want?" then immediately answers "I'll use JSON" without waiting for your response. You lose control of the decision-making process.

4. **Context Dilution** - In a 50-message session, the context from Phase 3 (research) is diluted by Phase 9 (implementation). Critical details get lost. Claude starts guessing instead of referencing research.

5. **Skipped Steps** - Without enforcement, Claude jumps directly to implementation. No tests. No verification. No documentation. You get code that "works" but fails in production because edge cases weren't tested.

### Our Solution

**A 12-phase workflow enforced by Python hooks that BLOCK progress** until each phase is complete with explicit user approval. Not suggestions. Not guidelines. **Hard stops using Exit Code 2.**

This means:

- Claude cannot skip research
- Claude cannot answer its own questions
- Claude cannot implement before writing tests
- Claude cannot finish without verifying against current docs
- Claude cannot close without updating documentation

Every decision is tracked. Every phase is verified. Every step is enforced.
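
The hard stop is easiest to see in code. Below is a simplified sketch of what a hook using this pattern can look like; it is **not** the package's actual source, and the state-file path, field names, and message text are illustrative. Claude Code pipes the pending tool call to a PreToolUse hook as JSON on stdin, and exiting with code 2 blocks the call while routing stderr back to Claude:

```python
"""Minimal sketch of a phase-enforcement PreToolUse hook (illustrative only)."""
import json
import sys
from pathlib import Path

STATE_FILE = Path("api-dev-state.json")  # illustrative path, not the package's actual layout


def phase_gate(state: dict, phase: str) -> tuple[bool, str]:
    """Allow file edits only once the named phase was explicitly confirmed by the user."""
    if state.get(phase, {}).get("phase_exit_confirmed") is True:
        return True, "ok"
    return False, f"BLOCKED: phase '{phase}' is not complete. Ask the user and wait for confirmation."


def main() -> None:
    call = json.load(sys.stdin)  # the pending tool call, piped in by Claude Code
    if call.get("tool_name") not in ("Write", "Edit"):
        sys.exit(0)  # only file modifications are gated
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    allowed, message = phase_gate(state, "disambiguation")
    if not allowed:
        print(message, file=sys.stderr)  # on exit code 2, stderr is fed back to Claude
        sys.exit(2)  # the hard stop
    sys.exit(0)
# (Registered under PreToolUse in settings.json; the real script would invoke main().)
```

Because the block happens at the tool-call level, there is nothing for the model to "decide" - the edit simply does not go through until the state file says the phase is done.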

---

## The 12-Phase Workflow

When you run `/api-create brandfetch`, Claude is guided through 12 enforced phases. Each phase has three components:

1. **The Problem** - What goes wrong without enforcement
2. **The Solution** - How the phase prevents the problem
3. **The Hook** - Python script that blocks progress until complete

---

### Phase 0: Disambiguation

**The Problem We're Solving:**
Ambiguous terms waste hours of work. When you say "brandfetch," do you mean:
- The Brandfetch REST API at api.brandfetch.io?
- The @brandfetch/sdk npm package?
- A WordPress plugin?
- Custom code you want to write from scratch?

Without disambiguation, Claude guesses wrong, researches the wrong thing, and builds an implementation for the wrong target. You realize the mistake 90 minutes later when tests fail.

**How This Phase Works:**
Claude uses Context7 or WebSearch to discover all possible interpretations, then presents them as multiple-choice options:

```
I found multiple things matching "brandfetch":
[A] Brandfetch REST API (api.brandfetch.io)
[B] @brandfetch/sdk npm package
[C] Brandfetch WordPress plugin

Which interpretation matches your intent?
```

Claude uses the `AskUserQuestion` tool (Claude Code's built-in dialog) to present the options. The dialog appears in your terminal with a text input field. You type "[A]" and press Enter.

**The Enforcement:**
- **Hook:** `enforce-disambiguation.py`
- **Blocks:** All Write/Edit operations until disambiguation is complete
- **Requires:** `user_question_asked = true` AND `phase_exit_confirmed = true`
- **Detection:** Hook checks the state file for both flags before allowing file edits

**State Tracking:**
```json
{
  "disambiguation": {
    "status": "complete",
    "user_question_asked": true,
    "user_selected": "Brandfetch REST API",
    "phase_exit_confirmed": true
  }
}
```

**Why This Matters:**
Five seconds of clarification saves an hour of wasted research. Everyone starts on the same page. No ambiguity. No guesswork.

---

<details>
<summary><strong>Phase 1: Scope Confirmation</strong></summary>

**The Problem We're Solving:**
Misunderstood requirements lead to wrong implementations. You say "fetch brand assets" but Claude thinks you want a full-featured CMS with caching, authentication, rate limiting, and database persistence. Or worse - Claude thinks you want something simpler and omits features you need.

Scope drift happens when assumptions aren't confirmed. Claude infers requirements instead of asking.

**How This Phase Works:**
Claude summarizes its understanding of the endpoint in plain language:

```
I'll create: /api/v2/brandfetch

Purpose: Fetch brand assets (logos, colors, fonts) from Brandfetch API

Input: domain name (string)
Output: JSON with brand data

Is this scope correct? [Yes / No, I need...]
```

You can correct it: "Yes, but also add a logo-only mode for faster responses."

Claude updates the scope and asks again: "Updated scope to include logo-only mode. Correct now?"

You confirm: "Yes."

**The Enforcement:**
- **Hook:** `enforce-scope.py`
- **Blocks:** Write/Edit until `phase_exit_confirmed = true`
- **Requires:** Explicit affirmative response from the user ("yes", "correct", "looks good")
- **Detection:** `track-tool-use.py` monitors `AskUserQuestion` calls and responses

**State Tracking:**
```json
{
  "scope": {
    "status": "complete",
    "confirmed": true,
    "scope_description": "Fetch brand assets with optional logo-only mode",
    "phase_exit_confirmed": true
  }
}
```

**Why This Matters:**
Requirements are locked in before research begins. No backtracking. No "wait, that's not what I meant" after implementation. Saves time and ensures alignment.

</details>

---

<details>
<summary><strong>Phase 2: Initial Research</strong></summary>

**The Problem We're Solving:**
Training data is stale. Claude's knowledge cutoff might be January 2024, but the Brandfetch API was updated in March 2024 with new rate limit headers. Building from outdated knowledge means broken integrations.

APIs evolve constantly:
- New endpoints added
- Parameters renamed (v1 used `format`, v2 uses `imageFormat`)
- Authentication methods changed (API key → Bearer token)
- Rate limits adjusted (100/min → 5/sec)

**How This Phase Works:**
Claude must fetch CURRENT documentation using two methods:

1. **Context7 MCP Server:**
   - Resolves the package name to a library ID: `/brandfetch/api-docs`
   - Fetches live docs from npm, GitHub, official sites
   - Returns 23 endpoints, 47 parameters, code examples
   - Gets version-specific documentation (v2.3 vs v2.0)

2. **WebSearch:**
   - Searches "Brandfetch API authentication 2025"
   - Confirms the current auth method: Bearer token
   - Searches "Brandfetch API rate limits"
   - Finds the current limits: 5 requests/second

Claude cross-references both sources before presenting findings:

```
Research Summary:
• Base URL: https://api.brandfetch.io/v2
• Auth: Bearer token (BRANDFETCH_API_KEY)
• Rate limit: 5 requests/second
• Returns: logos (SVG, PNG, JPG), colors, fonts

Sources:
[1] Context7: /brandfetch/api-docs
[2] WebSearch: "Brandfetch API documentation 2025"

Proceed to interview? [Yes / Search more first]
```

**The Enforcement:**
- **Hook:** `enforce-research.py`
- **Blocks:** Write/Edit until `sources.length >= 2`
- **Exit Code 2:** Triggers if Claude tries to implement without research
- **Message:** "BLOCKED: Research required. Sources consulted: 0. Required: 2."
- **Detection:** Hook checks the `research_initial.sources` array in state

**State Tracking:**
```json
{
  "research_initial": {
    "status": "complete",
    "sources": [
      "context7:/brandfetch/api-docs",
      "websearch:brandfetch-api-2025"
    ],
    "findings": {
      "base_url": "https://api.brandfetch.io/v2",
      "auth_method": "bearer_token",
      "rate_limit": "5/second"
    },
    "phase_exit_confirmed": true
  }
}
```

**Why Context7 Matters:**
Context7 doesn't just fetch docs - it gets CODE EXAMPLES from real implementations. When researching Brandfetch, Context7 found:
- 15 more parameters than documented in the official API reference
- The error response structure (not documented in the v2.0 release notes)
- Working curl examples with actual requests/responses

Training data would have missed all of this.

**Why This Matters:**
Current, accurate documentation prevents integration failures. Cross-referencing catches discrepancies. Your implementation works with TODAY's API, not last year's.
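
As an illustration of how small this gate can be, here is a hypothetical sketch of the `sources.length >= 2` check (field names follow the state example above; this is not the actual `enforce-research.py` source):

```python
def research_gate(state: dict, required: int = 2) -> tuple[bool, str]:
    """Sketch: allow implementation only after enough research sources are recorded."""
    sources = state.get("research_initial", {}).get("sources", [])
    if len(sources) < required:
        return False, (f"BLOCKED: Research required. "
                       f"Sources consulted: {len(sources)}. Required: {required}.")
    return True, "ok"


# The state example above passes the gate:
state = {"research_initial": {"sources": [
    "context7:/brandfetch/api-docs",
    "websearch:brandfetch-api-2025",
]}}
allowed, message = research_gate(state)  # allowed is True
```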

</details>

---

<details>
<summary><strong>Phase 3: Structured Interview</strong></summary>

**The Problem We're Solving:**
Generic template questions don't reflect actual API capabilities. Without research, Claude asks:
- "What format do you want?" (doesn't know which formats exist)
- "How should errors be handled?" (hasn't seen actual error responses)
- "What caching strategy?" (hasn't checked if the API supports caching)

This leads to impossible requirements ("I want XML format" when the API only returns JSON) or missed opportunities (the API supports SVG but Claude never mentioned it).

**How This Phase Works:**
Questions are generated FROM research findings. Claude discovered (in Phase 2) that Brandfetch returns SVG, PNG, and JPG. So those become the options:

```
Based on research, I have 5 questions:

Q1: Which response format do you need?
[ ] JSON only
[x] JSON with asset URLs
[ ] Include raw base64 assets

Q2: Caching strategy?
[ ] No caching
[ ] Short (5 minutes)
[x] Long (24 hours) - brand data rarely changes

Q3: Error handling approach?
[ ] Throw errors (caller handles)
[x] Return error objects { success: false, error: {...} }

Q4: Rate limit handling?
[ ] Client handles retry
[ ] Server-side retry with backoff
[x] Expose rate limit headers (X-RateLimit-*)

Q5: Which brand assets to include?
[x] Logos
[x] Colors
[ ] Fonts
[ ] Images

Interview complete? [Yes, these answers are final / Modify answers]
```

Each question reflects discovered capabilities. The options aren't generic - they're specific to Brandfetch's actual API.

**The Enforcement:**
- **Hook:** `enforce-interview.py`
- **Blocks:** Until `structured_question_count >= 3`
- **Exit Code 2:** Triggers if Claude self-answers questions
- **Detection:** Hook checks for `AskUserQuestion` tool calls, waits for user responses
- **Prevents:** Claude from saying "I'll assume you want JSON" without asking

**How Self-Answering Detection Works:**
The hook monitors tool calls:
1. Claude calls `AskUserQuestion` → Hook sets `waiting_for_response = true`
2. Claude tries to call Write/Edit → Hook blocks with Exit Code 2
3. User provides a response → Hook sets `waiting_for_response = false`
4. Now Write/Edit is allowed

**State Tracking:**
```json
{
  "interview": {
    "status": "complete",
    "structured_question_count": 5,
    "decisions": {
      "response_format": "json_with_urls",
      "caching": "24h",
      "error_handling": "return_objects",
      "rate_limiting": "expose_headers",
      "assets": ["logos", "colors"]
    },
    "phase_exit_confirmed": true
  }
}
```

**Critical Insight:**
These decisions are saved and automatically injected during implementation phases. When Claude writes tests (Phase 7), it references `decisions.error_handling = "return_objects"` to know what to test. When implementing (Phase 8), it uses `decisions.caching = "24h"` to set cache headers.

Your choices drive the entire implementation. No guessing. No forgetting.

**Why This Matters:**
Questions based on research prevent impossible requirements. Interview answers become the contract for implementation. Claude can't forget your decisions because they're in state.
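
The four-step handshake in "How Self-Answering Detection Works" amounts to a tiny state machine. A sketch, assuming the real hook persists the `waiting_for_response` flag in the state file between invocations:

```python
class InterviewTracker:
    """Sketch of the self-answering guard: once a question is asked, edits stay blocked until a reply."""

    def __init__(self) -> None:
        self.waiting_for_response = False

    def on_tool_call(self, tool_name: str) -> bool:
        """Return True if the tool call is allowed to proceed."""
        if tool_name == "AskUserQuestion":
            self.waiting_for_response = True   # a question is now pending
            return True
        if tool_name in ("Write", "Edit") and self.waiting_for_response:
            return False                       # the real hook exits with code 2 here
        return True

    def on_user_response(self) -> None:
        self.waiting_for_response = False      # the user answered; edits are unblocked
```

The key property: the flag is flipped by the user's reply, not by anything Claude says, so "I'll assume you want JSON" cannot unblock the edit.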

</details>

---

<details>
<summary><strong>Phase 4: Deep Research</strong></summary>

**The Problem We're Solving:**
Initial research provides the overview, but your specific choices need deeper investigation. You chose "Expose rate limit headers" - but which headers does Brandfetch actually return? You chose "Return error objects" - but what's the exact structure?

Without targeted follow-up research, Claude implements from assumptions. It guesses that the rate limit headers are `X-RateLimit-Limit` and `X-RateLimit-Remaining` (standard names), but Brandfetch actually uses `X-Plan-RateLimit-Limit` and `X-Plan-RateLimit-Remaining` (non-standard).

**How This Phase Works:**
Claude proposes searches based on YOUR interview answers (not shotgun exploration):

```
Based on your selections, I want to research:

[x] "Brandfetch error response format"
    Why: You chose "Return error objects" - need exact structure

[x] "Brandfetch rate limit headers"
    Why: You chose "Expose headers" - need header names

[ ] "Brandfetch font API"
    Why: SKIPPED - you don't need fonts

Approve these searches? [Yes / Add more / Skip all]
```

You can modify: "Yes, also add authentication error examples."

Claude runs the approved searches and reports findings:

```
Deep Research Results:

Error Response Format:
{
  "success": false,
  "error": {
    "code": "INVALID_DOMAIN",
    "message": "Domain not found",
    "details": {}
  }
}

Rate Limit Headers:
- X-Plan-RateLimit-Limit: 5
- X-Plan-RateLimit-Remaining: 4
- X-Plan-RateLimit-Reset: 1640000000

Authentication Errors:
- 401: Missing API key
- 403: Invalid API key
- 429: Rate limit exceeded
```

**The Enforcement:**
- **Hook:** `enforce-deep-research.py`
- **Blocks:** If approved searches are not executed
- **Tracks:** `proposed_searches`, `approved_searches`, `executed_searches`
- **Detection:** Compares the arrays - every approved search must appear in executed

**State Tracking:**
```json
{
  "research_deep": {
    "status": "complete",
    "proposed_searches": ["error-format", "rate-headers", "auth-errors"],
    "approved_searches": ["error-format", "rate-headers", "auth-errors"],
    "executed_searches": ["error-format", "rate-headers", "auth-errors"],
    "findings": {
      "error_structure": { "documented": true },
      "rate_headers": ["X-Plan-RateLimit-Limit", "X-Plan-RateLimit-Remaining"],
      "auth_errors": { "401": "missing_key", "403": "invalid_key" }
    },
    "phase_exit_confirmed": true
  }
}
```

**Adaptive Research Logic:**
The hook enforces that proposed searches align with interview decisions:
- If `decisions.error_handling = "return_objects"` → must search the error format
- If `decisions.rate_limiting = "expose_headers"` → must search the rate headers
- If `decisions.assets` doesn't include "fonts" → skip font research

Your requirements drive the research. No wasted effort on features you don't need.

**Why This Matters:**
Targeted research prevents assumption-based implementation. You get exact header names, exact error codes, exact field names. No guessing. No "close enough."
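
The approved-versus-executed comparison reduces to a set difference. A sketch under the same assumptions as the state example above (illustrative, not the actual hook source):

```python
def deep_research_gate(state: dict) -> tuple[bool, str]:
    """Sketch: every search the user approved must also have been executed."""
    deep = state.get("research_deep", {})
    approved = set(deep.get("approved_searches", []))
    executed = set(deep.get("executed_searches", []))
    missing = sorted(approved - executed)   # approved but never run
    if missing:
        return False, "BLOCKED: approved searches not executed: " + ", ".join(missing)
    return True, "ok"


# One approved search was skipped, so the gate blocks:
partial = {"research_deep": {"approved_searches": ["error-format", "rate-headers"],
                             "executed_searches": ["error-format"]}}
allowed, message = deep_research_gate(partial)  # allowed is False
```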

</details>

---

<details>
<summary><strong>Phase 5: Schema Creation</strong></summary>

**The Problem We're Solving:**
Without a contract between tests and implementation, they drift apart. Tests check for `logoUrl` but the implementation returns `logo_url`. Tests expect 400 errors but the implementation returns 500. The API "works" but doesn't match the spec.

Validation gaps emerge:
- Frontend sends `domain` (string) but backend expects `{ domain: string, tld: string }`
- API returns extra fields that aren't typed
- Optional fields marked as required (or vice versa)

**How This Phase Works:**
Claude creates Zod schemas encoding ALL research findings and interview decisions:

```typescript
import { z } from "zod";

// Request Schema (from research + interview)
const BrandfetchRequest = z.object({
  domain: z.string().min(1), // From research: required field
  mode: z.enum(["full", "logo-only"]).default("full"), // From interview Q1
  include: z.object({
    logos: z.boolean().default(true), // From interview Q5
    colors: z.boolean().default(true), // From interview Q5
  }).optional(),
});

// Response Schema (success case)
const BrandfetchResponse = z.object({
  success: z.literal(true),
  data: z.object({
    logos: z.array(z.object({
      format: z.enum(["svg", "png", "jpg"]), // From research: available formats
      url: z.string().url(),
      size: z.object({ width: z.number(), height: z.number() }).optional(),
    })),
    colors: z.array(z.object({
      hex: z.string().regex(/^#[0-9A-F]{6}$/i),
      type: z.enum(["primary", "secondary", "accent"]),
    })),
  }),
  cached: z.boolean(), // From interview Q2: caching enabled
});

// Error Response Schema (from deep research)
const BrandfetchError = z.object({
  success: z.literal(false),
  error: z.object({
    code: z.enum(["INVALID_DOMAIN", "NOT_FOUND", "RATE_LIMIT", "AUTH_FAILED"]),
    message: z.string(),
    details: z.record(z.any()).optional(),
  }),
});
```

Claude presents the schema for review:

```
Schema created from research + interview.

Does this match your expectations? [Yes / Modify]
```

**The Enforcement:**
- **Hook:** `enforce-schema.py`
- **Blocks:** Until the schema is shown to the user and confirmed
- **Checks:** Schema file exists at the expected path
- **Validates:** Schema references interview decisions

**State Tracking:**
```json
{
  "schema_creation": {
    "status": "complete",
    "schema_file": "src/lib/schemas/brandfetch.ts",
    "includes_interview_decisions": true,
    "includes_research_findings": true,
    "phase_exit_confirmed": true
  }
}
```

**Why Schema Matters:**
The schema becomes the single source of truth:
- Tests import it to validate responses
- Implementation imports it to validate inputs
- Documentation generation reads it to show types
- Frontend can generate TypeScript types from it

If the schema says `logo_url` (snake_case), both tests AND implementation use `logo_url`. No drift. No mismatches.

**Why This Matters:**
Type safety from research to production. Tests verify the schema. Implementation matches the schema. Frontend types match the schema. One contract, enforced everywhere.
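
A sketch of the checks `enforce-schema.py` is described as performing, assuming the state fields shown above (illustrative only, not the hook's real source):

```python
from pathlib import Path


def schema_gate(state: dict) -> tuple[bool, str]:
    """Sketch: the schema file must exist and encode both inputs before tests are written."""
    sc = state.get("schema_creation", {})
    path = sc.get("schema_file", "")
    if not (path and Path(path).exists()):
        return False, f"BLOCKED: schema file not found: {path or '(none recorded)'}"
    if not (sc.get("includes_interview_decisions") and sc.get("includes_research_findings")):
        return False, "BLOCKED: schema must encode interview decisions and research findings."
    return True, "ok"
```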

</details>

---

<details>
<summary><strong>Phase 6: Environment Check</strong></summary>

**The Problem We're Solving:**
Tests pass locally but fail in CI. Why? A missing `BRANDFETCH_API_KEY` environment variable. You waste 30 minutes debugging, checking logs, and re-running tests before realizing the environment isn't set up.

Or worse: you write tests that mock the API because the real API key isn't available. Tests pass with mocks but fail in production with real API calls. Mocks drift from actual behavior.

**How This Phase Works:**
Claude checks required environment variables BEFORE writing any tests:

```
Checking environment setup...

Required variables for /api/v2/brandfetch:
✅ BRANDFETCH_API_KEY - found in .env
✅ NODE_ENV - set to development

Optional variables:
⚠️ CACHE_TTL - not set (will use default: 24h)

Ready to begin TDD? [Yes / Need to set up keys first]
```

If keys are missing, Claude provides setup instructions:

```
Missing: BRANDFETCH_API_KEY

Setup instructions:
1. Get API key: https://brandfetch.com/dashboard/api
2. Add to .env: BRANDFETCH_API_KEY=your_key_here
3. Verify: echo $BRANDFETCH_API_KEY

Reply "ready" when complete.
```

**The Enforcement:**
- **Hook:** `enforce-environment.py`
- **Blocks:** Test file creation until env vars are verified
- **Checks:** `.env` file OR environment variables
- **Validates:** Key format (not just presence)

**How Key Detection Works:**
```python
import os


def read_env_file(path=".env"):
    """Collect the variable names defined in a local .env file."""
    if not os.path.exists(path):
        return set()
    with open(path) as f:
        return {line.split("=", 1)[0].strip() for line in f if "=" in line}


def check_environment(endpoint_name, interview_decisions):
    # Infer required keys from the endpoint name
    if "brandfetch" in endpoint_name.lower():
        required_keys = ["BRANDFETCH_API_KEY"]
    elif "openai" in endpoint_name.lower():
        required_keys = ["OPENAI_API_KEY"]
    else:
        required_keys = []  # ... etc.

    for key in required_keys:
        if not os.getenv(key) and key not in read_env_file():
            return False, f"Missing: {key}"

    return True, "Environment ready"
```

**State Tracking:**
```json
{
  "environment_check": {
    "status": "complete",
    "keys_found": ["BRANDFETCH_API_KEY"],
    "keys_missing": [],
    "validated": true,
    "phase_exit_confirmed": true
  }
}
```

**Why This Matters:**
No surprises in CI. No mock/real API drift. Tests run with real API calls from day one. If tests pass locally, they pass in CI. Environment consistency from development to production.

</details>

---
|
|
619
|
+
|
|
620
|
+
<details>
|
|
621
|
+
<summary><strong>Phase 7: TDD Red (Write Failing Tests)</strong></summary>
|
|
622
|
+
|
|
623
|
+
**The Problem We're Solving:**
|
|
624
|
+
Without tests, you don't know if your code works. You ship bugs. You break things in production. You discover edge cases after users report them.
|
|
625
|
+
|
|
626
|
+
Test-after-code is reactive:
|
|
627
|
+
- "It works on my machine" (but breaks in production)
|
|
628
|
+
- "I didn't think to test that case" (user finds it)
|
|
629
|
+
- "I'll add tests later" (never happens)
|
|
630
|
+
|
|
631
|
+
**How This Phase Works:**
|
|
632
|
+
Claude writes failing tests FIRST, derived from interview decisions and research findings:
|
|
633
|
+
|
|
634
|
+
```
|
|
635
|
+
Test Matrix (from interview + research):
|
|
636
|
+
|
|
637
|
+
Success Scenarios:
|
|
638
|
+
✅ GET /api/v2/brandfetch?domain=google.com → 200 + brand data
|
|
639
|
+
✅ GET with mode=logo-only → 200 + logos only (no colors)
|
|
640
|
+
✅ Cache hit (second request) → 200 + cached: true
|
|
641
|
+
✅ SVG format → logo.format === "svg"
|
|
642
|
+
|
|
643
|
+
Error Scenarios (from interview Q3: return error objects):
|
|
644
|
+
✅ Invalid domain → 400 + { success: false, error: {...} }
|
|
645
|
+
✅ Missing API key → 401 + error object
|
|
646
|
+
✅ Not found → 404 + error object
|
|
647
|
+
✅ Rate limited → 429 + X-Plan-RateLimit-* headers
|
|
648
|
+
|
|
649
|
+
Edge Cases (from deep research):
|
|
650
|
+
✅ Empty colors array → 200 + [] (not error)
|
|
651
|
+
✅ Timeout (>30s) → 504 + error object
|
|
652
|
+
|
|
653
|
+
Total: 10 test scenarios
|
|
654
|
+
|
|
655
|
+
Write these tests? [Yes / Add more / Modify]
|
|
656
|
+
```
|
|
657
|
+
|
|
658
|
+
You confirm. Claude creates the test file:
|
|
659
|
+
|
|
660
|
+
```typescript
|
|
661
|
+
describe('/api/v2/brandfetch', () => {
|
|
662
|
+
it('returns brand data for valid domain', async () => {
|
|
663
|
+
const response = await GET('/api/v2/brandfetch?domain=google.com');
|
|
664
|
+
expect(response.status).toBe(200);
|
|
665
|
+
expect(response.body.success).toBe(true);
|
|
666
|
+
expect(response.body.data.logos).toBeInstanceOf(Array);
|
|
667
|
+
expect(response.body.data.colors).toBeInstanceOf(Array);
|
|
668
|
+
});
|
|
669
|
+
|
|
670
|
+
it('returns error object for invalid domain', async () => {
|
|
671
|
+
const response = await GET('/api/v2/brandfetch?domain=invalid');
|
|
672
|
+
expect(response.status).toBe(400);
|
|
673
|
+
expect(response.body.success).toBe(false);
|
|
674
|
+
expect(response.body.error.code).toBe('INVALID_DOMAIN');
|
|
675
|
+
});
|
|
676
|
+
|
|
677
|
+
// ... 8 more tests
|
|
678
|
+
});
|
|
679
|
+
```

Claude runs the tests:

```
⏳ Running tests...

FAIL src/app/api/v2/brandfetch/__tests__/route.test.ts
  ✗ returns brand data for valid domain (0 ms)
  ✗ returns error object for invalid domain (0 ms)
  ✗ ... 8 more failures

Tests: 0 passing, 10 failing

This is CORRECT. RED phase means the tests exist and fail.
```

**The Enforcement:**
- **Hook:** `enforce-tdd-red.py`
- **Blocks:** `route.ts` creation if no `.test.ts` exists
- **Validates:** The test file has at least 3 scenarios
- **Exit Code 2:** Triggers if Claude tries to implement without tests

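The gate itself is simple to picture. Below is a minimal sketch of the check a PreToolUse hook like `enforce-tdd-red.py` could run - the function name, argument shapes, and CLI wiring are illustrative assumptions, not the package's actual code:

```python
import sys
from pathlib import Path

def should_block(file_path: str, test_files: list[str]) -> bool:
    """TDD RED gate: block route.ts writes until a .test.ts file exists."""
    if not file_path.endswith("route.ts"):
        return False  # only route implementations are guarded
    return not any(name.endswith(".test.ts") for name in test_files)

if __name__ == "__main__":
    # Hypothetical wiring: the target path arrives as a CLI argument.
    target = sys.argv[1] if len(sys.argv) > 1 else ""
    tests = [p.name for p in (Path(target).parent / "__tests__").glob("*.test.ts")]
    if should_block(target, tests):
        print("BLOCKED: write failing tests before route.ts", file=sys.stderr)
        sys.exit(2)  # exit code 2 is what blocks the tool call
```

The key design point is that the decision is pure and file-system based, so the hook can run on every tool call without shared state.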
**State Tracking:**
```json
{
  "tdd_red": {
    "status": "complete",
    "test_file": "src/app/api/v2/brandfetch/__tests__/route.test.ts",
    "test_count": 10,
    "scenarios": ["success", "errors", "edge_cases"],
    "all_failing": true,
    "phase_exit_confirmed": true
  }
}
```
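Each per-phase record like this lives in the same `.claude/api-dev-state.json` file. A hedged sketch of the read-merge-write update a hook might perform (the helper below is illustrative, not the package's actual code):

```python
import json

def mark_phase(state_json: str, phase: str, **fields) -> str:
    """Return updated state JSON with `fields` merged into one phase's entry.

    Illustrative: a hook would read .claude/api-dev-state.json, apply an
    update like this, and write the result back.
    """
    state = json.loads(state_json) if state_json else {}
    entry = state.setdefault("phases", {}).setdefault(phase, {})
    entry.update(fields)
    return json.dumps(state, indent=2)
```

For example, `mark_phase(raw, "tdd_red", test_count=10, all_failing=True)` records the RED result without disturbing other phases' entries.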

**Why This Matters:**
Failing tests define success BEFORE writing code: requirements are encoded as assertions, so you know exactly what to build, and if the implementation passes the tests, the requirements are met. Bugs are caught before code is written. No guessing, no assumptions, no "I think it works" - the tests prove it works.

</details>

---

<details>
<summary><strong>Phase 8: TDD Green (Make Tests Pass)</strong></summary>

**The Problem We're Solving:**
Over-engineering wastes time. Developers often write more code than needed:
- Adding features that weren't requested ("I'll also support XML just in case")
- Optimizing prematurely ("I'll add connection pooling and caching layers")
- Gold-plating ("I'll make it pluggable for future extensions")

Result: more code to maintain, more bugs to fix, more complexity for no benefit.

**How This Phase Works:**
Claude writes a MINIMAL implementation - just enough to pass the tests:

```typescript
export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const domain = searchParams.get('domain');
  const mode = searchParams.get('mode') || 'full';

  // Validate input (test: invalid domain → 400)
  if (!domain || domain.length === 0) {
    return Response.json({
      success: false,
      error: { code: 'INVALID_DOMAIN', message: 'Domain required' }
    }, { status: 400 });
  }

  // Check API key (test: missing key → 401)
  const apiKey = process.env.BRANDFETCH_API_KEY;
  if (!apiKey) {
    return Response.json({
      success: false,
      error: { code: 'AUTH_FAILED', message: 'Missing API key' }
    }, { status: 401 });
  }

  // Fetch from Brandfetch (test: valid domain → 200)
  const response = await fetch(
    `https://api.brandfetch.io/v2/${domain}`,
    { headers: { 'Authorization': `Bearer ${apiKey}` } }
  );

  if (!response.ok) {
    // Tests: not found → 404, rate limit → 429
    const status = response.status;
    const errorMap: Record<number, { code: string; message: string }> = {
      404: { code: 'NOT_FOUND', message: 'Domain not found' },
      429: { code: 'RATE_LIMIT', message: 'Rate limit exceeded' },
    };
    return Response.json({
      success: false,
      error: errorMap[status] || { code: 'ERROR', message: 'Request failed' }
    }, { status });
  }

  const data = await response.json();

  // Filter by mode (test: mode=logo-only → logos only)
  if (mode === 'logo-only') {
    return Response.json({
      success: true,
      data: { logos: data.logos, colors: [] },
      cached: false
    });
  }

  // Test: full mode → logos + colors
  return Response.json({
    success: true,
    data: { logos: data.logos, colors: data.colors },
    cached: false
  });
}
```

Claude runs the tests:

```
✅ Running tests...

PASS src/app/api/v2/brandfetch/__tests__/route.test.ts
  ✓ returns brand data for valid domain (124 ms)
  ✓ returns error object for invalid domain (23 ms)
  ✓ returns logos only in logo-only mode (98 ms)
  ✓ returns 401 without API key (12 ms)
  ✓ returns 404 for not found (145 ms)
  ✓ returns 429 on rate limit (178 ms)
  ✓ handles empty colors array (87 ms)
  ✓ ... 3 more passing

Tests: 10 passing, 0 failing
Coverage: 100%
```

**The Enforcement:**
- **Hook:** `verify-after-green.py` (PostToolUse on Bash)
- **Detects:** "pnpm test" or "vitest" in the command
- **Parses:** stdout for pass/fail counts
- **Auto-triggers:** Phase 9 when all tests pass
- **No manual step:** The workflow continues automatically

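Parsing the summary line is the fragile part of that flow. Here is a small sketch of how such a hook might extract pass/fail counts from a vitest-style summary line - the regexes and function names are assumptions for illustration, not the shipped implementation:

```python
import re

def parse_test_summary(stdout: str) -> tuple[int, int]:
    """Extract (passing, failing) counts from a 'Tests: N passing, M failing' line."""
    passing_match = re.search(r"(\d+)\s+passing", stdout)
    failing_match = re.search(r"(\d+)\s+failing", stdout)
    passing = int(passing_match.group(1)) if passing_match else 0
    failing = int(failing_match.group(1)) if failing_match else 0
    return passing, failing

def is_green(stdout: str) -> bool:
    # "Green" means at least one test ran and none failed.
    passing, failing = parse_test_summary(stdout)
    return passing > 0 and failing == 0
```

Requiring `passing > 0` guards against treating an empty or skipped run as a green suite.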

**How Auto-Trigger Works:**
```python
import sys

def on_bash_complete(command, stdout, exit_code):
    if "test" in command and exit_code == 0:
        if "✓" in stdout and "✗" not in stdout:
            # All tests passing - record it in the state file
            state["phases"]["tdd_green"]["all_tests_passing"] = True
            print("AUTO-TRIGGERING Phase 9: Verification", file=sys.stderr)
            # The Phase 9 hook will now allow verification
```

**State Tracking:**
```json
{
  "tdd_green": {
    "status": "complete",
    "all_tests_passing": true,
    "coverage": "100%",
    "auto_triggered_verify": true
  }
}
```

**Minimal Implementation Philosophy:**
- No XML support (not in tests)
- No caching layer (not required yet)
- No connection pooling (not tested)
- No extra endpoints (not requested)

Just enough code to make the tests pass. Nothing more.

**Why This Matters:**
Minimal code = minimal bugs. Tests define "done." If tests pass, you're done. No over-engineering. No "just in case" features. Ship fast, iterate based on real needs.

</details>

---

<details>
<summary><strong>Phase 9: Verification (Re-Research After Green)</strong></summary>

**The Problem We're Solving:**
Even after researching in Phase 2, Claude implements from memory in Phase 8. Twenty messages have passed. Context has diluted. Claude forgets specific details:
- Used a `format` parameter when the docs say `imageFormat`
- Returned `logo_url` when the API returns `logoUrl`
- Forgot that the API supports an `include_fonts` parameter

Memory-based errors are subtle. The tests pass (they test what was implemented), but the implementation drifts from the actual API spec.

**How This Phase Works:**
Claude RE-FETCHES the documentation after tests pass and builds a feature-by-feature comparison:

```
Re-reading Brandfetch API documentation...

Verification Table:
┌────────────────┬─────────┬─────────────┬───────────────────┐
│ Feature        │ In Docs │ Implemented │ Status            │
├────────────────┼─────────┼─────────────┼───────────────────┤
│ domain param   │    ✓    │      ✓      │ ✅ Match          │
│ mode param     │    ✗    │      ✓      │ ⚠️ Extra feature  │
│ logo formats   │    4    │      4      │ ✅ Match          │
│ colors         │    ✓    │      ✓      │ ✅ Match          │
│ include_fonts  │    ✓    │      ✗      │ ⚠️ GAP FOUND      │
│ webhook_url    │    ✓    │      ✗      │ ℹ️ Optional       │
│ auth method    │ Bearer  │   Bearer    │ ✅ Match          │
│ rate headers   │ X-Plan- │   X-Plan-   │ ✅ Match          │
└────────────────┴─────────┴─────────────┴───────────────────┘

GAPS FOUND: 1
• include_fonts parameter (documented but not implemented)

OPTIONS:
[A] Loop back to Phase 7 - Write tests for missing features
[B] Skip - Document as intentional omission

Your choice?
```

You choose: "[B] - I don't need fonts."

Claude documents the omission:

```
Documented as intentional omission:
- include_fonts parameter (user confirmed not needed)

Verification complete. No gaps requiring fixes.
```

**The Enforcement:**
- **Hook:** `enforce-verify.py`
- **Blocks:** Refactoring until verification is complete
- **Requires:** Gaps either fixed OR documented as intentional omissions
- **Re-research:** Must call Context7/WebSearch again

**Loop-Back Logic:**
If gaps require fixes:
1. The hook sets `phases.verify.loop_back = true`
2. State resets to Phase 7
3. Claude writes tests for the missing features
4. Phase 8 implements the missing features
5. Phase 9 runs again
6. Repeat until no gaps remain
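The loop-back decision reduces to simple set arithmetic. Here is a hedged sketch (function names and phase labels are illustrative) of how documented features, implemented features, and user-approved omissions could determine whether the workflow loops back:

```python
def find_gaps(documented: set[str], implemented: set[str], omissions: set[str]) -> set[str]:
    """Features in the docs but missing from the code and not waived by the user."""
    return documented - implemented - omissions

def next_phase(gaps: set[str]) -> str:
    # Any unresolved gap loops back to TDD RED (Phase 7);
    # otherwise refactoring (Phase 10) is unlocked.
    return "tdd_red" if gaps else "tdd_refactor"
```

Intentional omissions are subtracted explicitly, which is why "document as omission" is enough to unblock the workflow without writing code.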

**State Tracking:**
```json
{
  "verify": {
    "status": "complete",
    "re_researched": true,
    "gaps_found": 1,
    "gaps_fixed": 0,
    "intentional_omissions": ["include_fonts"],
    "comparison_table": { ... },
    "phase_exit_confirmed": true
  }
}
```

**Why Re-Research Matters:**
This catches errors that slip through the research → implementation gap:
- Claude researched `imageFormat` but implemented `format` (wrong)
- Claude saw `include_fonts` in the docs but forgot to implement it (gap)
- Claude used the wrong error codes (research said 400, implemented 500)

Re-fetching the current docs and comparing feature-by-feature catches these memory errors.

**Why This Matters:**
The verification loop ensures the implementation matches the current docs. Memory errors are caught before production. Gaps are filled or explicitly documented. No silent drift from the spec.

</details>

---

<details>
<summary><strong>Phase 10: TDD Refactor</strong></summary>

**The Problem We're Solving:**
First implementations are messy. Variable names are unclear. Logic is duplicated. Comments are missing. But cleaning up code might break things - you change a condition, forget a case, introduce a regression.

Without tests, refactoring is risky. You can't confidently improve code because you don't know if you broke something.

**How This Phase Works:**
Claude refactors while keeping the tests green. Every change is verified:

```
Refactoring opportunities identified:

1. Extract rate limit logic to a helper function
2. Add JSDoc comments for public functions
3. Rename the `data` variable to `brandData` (clearer)
4. Remove duplicate error handling code

Apply these refactors? [Yes / Skip / Customize]
```

You approve. Claude makes the changes:

```typescript
// Before
export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const domain = searchParams.get('domain');
  // ... 50 lines of logic
}

// After - extracted helper
async function fetchBrandData(domain: string, apiKey: string) {
  const response = await fetch(
    `https://api.brandfetch.io/v2/${domain}`,
    { headers: { 'Authorization': `Bearer ${apiKey}` } }
  );
  return handleBrandfetchResponse(response);
}

/**
 * GET /api/v2/brandfetch
 * Fetches brand assets from the Brandfetch API
 * @param domain - Domain name (e.g., "google.com")
 * @param mode - Response mode: "full" or "logo-only"
 */
export async function GET(request: Request) {
  const { domain, mode } = parseRequest(request);
  const brandData = await fetchBrandData(domain, getApiKey());
  return formatResponse(brandData, mode);
}
```
|
|
1018
|
+
|
|
1019
|
+
After each change, tests run:
|
|
1020
|
+
|
|
1021
|
+
```
|
|
1022
|
+
✅ Extracted fetchBrandData helper
|
|
1023
|
+
Tests: 10 passing
|
|
1024
|
+
|
|
1025
|
+
✅ Added JSDoc comments
|
|
1026
|
+
Tests: 10 passing
|
|
1027
|
+
|
|
1028
|
+
✅ Renamed variables for clarity
|
|
1029
|
+
Tests: 10 passing
|
|
1030
|
+
|
|
1031
|
+
✅ Removed duplicate error handling
|
|
1032
|
+
Tests: 10 passing
|
|
1033
|
+
|
|
1034
|
+
Refactor complete. All tests still passing.
|
|
1035
|
+
```
|
|
1036
|
+
|
|
1037
|
+
**The Enforcement:**
|
|
1038
|
+
- **Hook:** `enforce-refactor.py`
|
|
1039
|
+
- **Blocks:** Refactoring until verify phase complete
|
|
1040
|
+
- **Prevents:** Changing behavior (tests must stay green)
|
|
1041
|
+
- **Validates:** Test count stays the same (no deletions)
|
|
1042
|
+
|
|
1043
|
+
**Allowed Changes:**
|
|
1044
|
+
- Extract functions
|
|
1045
|
+
- Rename variables
|
|
1046
|
+
- Add comments
|
|
1047
|
+
- Remove duplication
|
|
1048
|
+
- Improve readability
|
|
1049
|
+
|
|
1050
|
+
**Forbidden Changes:**
|
|
1051
|
+
- Change logic
|
|
1052
|
+
- Add features
|
|
1053
|
+
- Remove functionality
|
|
1054
|
+
- Modify behavior
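The guard described above can be sketched in a few lines. This is a hypothetical simplification (the helper name and exact state keys are illustrative, not the actual `enforce-refactor.py` source): it blocks refactoring until the verify phase is complete, and blocks any change that drops the test count below the baseline recorded in the TDD Red phase.

```python
import sys

def check_refactor_allowed(state: dict, current_test_count: int) -> None:
    """Sketch of the refactor-phase guard: exit code 2 blocks the action."""
    baseline = state["phases"]["tdd_red"]["test_count"]
    if state["phases"]["verify"]["status"] != "complete":
        print("BLOCKED: verify phase must complete before refactor",
              file=sys.stderr)
        sys.exit(2)
    if current_test_count < baseline:
        # Fewer tests than the Red-phase baseline means tests were deleted
        print(f"BLOCKED: test count dropped ({current_test_count} < {baseline})",
              file=sys.stderr)
        sys.exit(2)
```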
|
|
1055
|
+
|
|
1056
|
+
**State Tracking:**
|
|
1057
|
+
```json
|
|
1058
|
+
{
|
|
1059
|
+
"tdd_refactor": {
|
|
1060
|
+
"status": "complete",
|
|
1061
|
+
"changes_made": [
|
|
1062
|
+
"extracted_helpers",
|
|
1063
|
+
"added_docs",
|
|
1064
|
+
"renamed_variables"
|
|
1065
|
+
],
|
|
1066
|
+
"tests_still_passing": true,
|
|
1067
|
+
"phase_exit_confirmed": true
|
|
1068
|
+
}
|
|
1069
|
+
}
|
|
504
1070
|
```
|
|
505
1071
|
|
|
1072
|
+
**Why This Matters:**
|
|
1073
|
+
Refactor confidently. Tests prove nothing broke. Code gets cleaner without risk. Maintainability improves without regressions.
|
|
1074
|
+
|
|
1075
|
+
</details>
|
|
1076
|
+
|
|
506
1077
|
---
|
|
507
1078
|
|
|
508
|
-
|
|
509
|
-
|
|
510
|
-
|
|
511
|
-
|
|
512
|
-
|
|
513
|
-
|
|
514
|
-
|
|
515
|
-
|
|
516
|
-
|
|
517
|
-
|
|
518
|
-
|
|
519
|
-
|
|
520
|
-
|
|
521
|
-
|
|
522
|
-
|
|
523
|
-
|
|
524
|
-
|
|
525
|
-
|
|
526
|
-
|
|
527
|
-
|
|
528
|
-
|
|
529
|
-
|
|
530
|
-
|
|
531
|
-
|
|
532
|
-
|
|
533
|
-
|
|
534
|
-
|
|
535
|
-
|
|
536
|
-
|
|
537
|
-
|
|
538
|
-
|
|
539
|
-
|
|
540
|
-
|
|
541
|
-
|
|
542
|
-
|
|
543
|
-
|
|
544
|
-
|
|
545
|
-
|
|
1079
|
+
<details>
|
|
1080
|
+
<summary><strong>Phase 11: Documentation</strong></summary>
|
|
1081
|
+
|
|
1082
|
+
**The Problem We're Solving:**
|
|
1083
|
+
Knowledge gets lost. The next developer (or the next Claude session) starts from scratch:
|
|
1084
|
+
- Makes the same mistakes you just fixed
|
|
1085
|
+
- Asks the same questions you already answered
|
|
1086
|
+
- Researches the same APIs you already researched
|
|
1087
|
+
- Discovers the same edge cases you already handled
|
|
1088
|
+
|
|
1089
|
+
Without documentation, every session is reinventing the wheel.
|
|
1090
|
+
|
|
1091
|
+
**How This Phase Works:**
|
|
1092
|
+
Claude updates three types of documentation:
|
|
1093
|
+
|
|
1094
|
+
1. **Research Cache** (7-day freshness tracking):
|
|
1095
|
+
```markdown
|
|
1096
|
+
# Brandfetch API Research
|
|
1097
|
+
Last updated: 2024-12-11
|
|
1098
|
+
Freshness: 7 days
|
|
1099
|
+
|
|
1100
|
+
## Base URL
|
|
1101
|
+
https://api.brandfetch.io/v2
|
|
1102
|
+
|
|
1103
|
+
## Authentication
|
|
1104
|
+
Bearer token via Authorization header
|
|
1105
|
+
Required: BRANDFETCH_API_KEY
|
|
1106
|
+
|
|
1107
|
+
## Endpoints
|
|
1108
|
+
- GET /:domain - Fetch brand data
|
|
1109
|
+
|
|
1110
|
+
## Parameters
|
|
1111
|
+
- domain (required): Domain name
|
|
1112
|
+
- include_fonts (optional): Include font data
|
|
1113
|
+
|
|
1114
|
+
## Rate Limits
|
|
1115
|
+
- 5 requests/second
|
|
1116
|
+
- Headers: X-Plan-RateLimit-Limit, X-Plan-RateLimit-Remaining
|
|
1117
|
+
|
|
1118
|
+
## Error Codes
|
|
1119
|
+
- 400: Invalid domain
|
|
1120
|
+
- 401: Missing API key
|
|
1121
|
+
- 403: Invalid API key
|
|
1122
|
+
- 404: Domain not found
|
|
1123
|
+
- 429: Rate limit exceeded
|
|
1124
|
+
```
|
|
1125
|
+
|
|
1126
|
+
2. **API Test Manifest** (for /api-test UI):
|
|
1127
|
+
```json
|
|
1128
|
+
{
|
|
1129
|
+
"endpoints": [
|
|
1130
|
+
{
|
|
1131
|
+
"path": "/api/v2/brandfetch",
|
|
1132
|
+
"method": "GET",
|
|
1133
|
+
"description": "Fetch brand assets from Brandfetch API",
|
|
1134
|
+
"request": {
|
|
1135
|
+
"query": {
|
|
1136
|
+
"domain": { "type": "string", "required": true },
|
|
1137
|
+
"mode": { "type": "enum", "values": ["full", "logo-only"], "default": "full" }
|
|
1138
|
+
}
|
|
1139
|
+
},
|
|
1140
|
+
"response": {
|
|
1141
|
+
"success": {
|
|
1142
|
+
"status": 200,
|
|
1143
|
+
"body": {
|
|
1144
|
+
"success": true,
|
|
1145
|
+
"data": {
|
|
1146
|
+
"logos": "LogoSchema[]",
|
|
1147
|
+
"colors": "ColorSchema[]"
|
|
1148
|
+
},
|
|
1149
|
+
"cached": "boolean"
|
|
1150
|
+
}
|
|
1151
|
+
},
|
|
1152
|
+
"errors": {
|
|
1153
|
+
"400": "Invalid domain",
|
|
1154
|
+
"401": "Missing API key",
|
|
1155
|
+
"404": "Domain not found",
|
|
1156
|
+
"429": "Rate limit exceeded"
|
|
1157
|
+
}
|
|
1158
|
+
},
|
|
1159
|
+
"examples": [
|
|
1160
|
+
{
|
|
1161
|
+
"request": "GET /api/v2/brandfetch?domain=google.com",
|
|
1162
|
+
"response": { ... }
|
|
1163
|
+
}
|
|
1164
|
+
],
|
|
1165
|
+
"testCount": 10,
|
|
1166
|
+
"coverage": "100%"
|
|
1167
|
+
}
|
|
1168
|
+
]
|
|
1169
|
+
}
|
|
1170
|
+
```
|
|
1171
|
+
|
|
1172
|
+
3. **State File** (interview decisions + phase completion):
|
|
1173
|
+
Already tracked throughout the workflow - now marked as `documentation.manifest_updated = true`
|
|
1174
|
+
|
|
1175
|
+
**The Enforcement:**
|
|
1176
|
+
- **Hook:** `enforce-documentation.py`
|
|
1177
|
+
- **Blocks:** Completion until docs updated
|
|
1178
|
+
- **Checks:** Research cache updated OR fresh (<7 days)
|
|
1179
|
+
- **Validates:** Manifest includes new endpoint
|
|
1180
|
+
|
|
1181
|
+
**Freshness Tracking:**
|
|
1182
|
+
```json
|
|
1183
|
+
{
|
|
1184
|
+
"research_index": {
|
|
1185
|
+
"brandfetch": {
|
|
1186
|
+
"last_updated": "2024-12-11T10:30:00Z",
|
|
1187
|
+
"freshness_days": 7,
|
|
1188
|
+
"is_fresh": true,
|
|
1189
|
+
"sources": ["context7", "websearch"]
|
|
1190
|
+
}
|
|
1191
|
+
}
|
|
1192
|
+
}
|
|
1193
|
+
```
|
|
1194
|
+
|
|
1195
|
+
When freshness expires (>7 days), hook prompts:
|
|
1196
|
+
|
|
1197
|
+
```
|
|
1198
|
+
⚠️ Research cache stale (8 days old)
|
|
1199
|
+
|
|
1200
|
+
OPTIONS:
|
|
1201
|
+
[A] Re-research (fetch current docs)
|
|
1202
|
+
[B] Mark as reviewed (still accurate)
|
|
1203
|
+
[C] Skip (use stale cache)
|
|
1204
|
+
|
|
1205
|
+
Recommendation: [A] - APIs change frequently
|
|
1206
|
+
```
|
|
1207
|
+
|
|
1208
|
+
**State Tracking:**
|
|
1209
|
+
```json
|
|
1210
|
+
{
|
|
1211
|
+
"documentation": {
|
|
1212
|
+
"status": "complete",
|
|
1213
|
+
"manifest_updated": true,
|
|
1214
|
+
"research_cached": true,
|
|
1215
|
+
"cache_freshness": "7 days",
|
|
1216
|
+
"phase_exit_confirmed": true
|
|
1217
|
+
}
|
|
1218
|
+
}
|
|
1219
|
+
```
|
|
1220
|
+
|
|
1221
|
+
**Why This Matters:**
|
|
1222
|
+
Future sessions benefit from today's work:
|
|
1223
|
+
- Research cached → Skip Phase 2 if fresh
|
|
1224
|
+
- Interview decisions preserved → No re-asking same questions
|
|
1225
|
+
- Test examples documented → Copy/paste for similar endpoints
|
|
1226
|
+
|
|
1227
|
+
Knowledge compounds instead of resetting.
|
|
1228
|
+
|
|
1229
|
+
</details>
|
|
1230
|
+
|
|
1231
|
+
---
|
|
1232
|
+
|
|
1233
|
+
<details>
|
|
1234
|
+
<summary><strong>Phase 12: Completion</strong></summary>
|
|
1235
|
+
|
|
1236
|
+
**The Problem We're Solving:**
|
|
1237
|
+
How do you know everything is actually done? Claude might claim "finished" but skip phases:
|
|
1238
|
+
- Skipped verification (Phase 9)
|
|
1239
|
+
- Forgot documentation (Phase 11)
|
|
1240
|
+
- Never wrote tests (Phase 7)
|
|
1241
|
+
|
|
1242
|
+
Without verification, "done" means "Claude stopped talking," not "workflow complete."
|
|
1243
|
+
|
|
1244
|
+
**How This Phase Works:**
|
|
1245
|
+
The `api-workflow-check.py` hook runs on the Stop event (when you try to close Claude):
|
|
1246
|
+
|
|
1247
|
+
```
|
|
1248
|
+
Checking workflow completion...
|
|
1249
|
+
|
|
1250
|
+
Phase 0: Disambiguation ✅ Complete
|
|
1251
|
+
Phase 1: Scope ✅ Complete
|
|
1252
|
+
Phase 2: Research ✅ Complete (2 sources)
|
|
1253
|
+
Phase 3: Interview ✅ Complete (5 questions)
|
|
1254
|
+
Phase 4: Deep Research ✅ Complete (3 searches)
|
|
1255
|
+
Phase 5: Schema ✅ Complete
|
|
1256
|
+
Phase 6: Environment ✅ Complete (1 key)
|
|
1257
|
+
Phase 7: TDD Red ✅ Complete (10 tests)
|
|
1258
|
+
Phase 8: TDD Green ✅ Complete (10/10 passing)
|
|
1259
|
+
Phase 9: Verification ✅ Complete (1 omission)
|
|
1260
|
+
Phase 10: Refactor ✅ Complete
|
|
1261
|
+
Phase 11: Documentation ✅ Complete
|
|
1262
|
+
Phase 12: Completion ✅ In progress
|
|
1263
|
+
|
|
1264
|
+
All phases verified. Workflow complete.
|
|
1265
|
+
|
|
1266
|
+
Files created:
|
|
1267
|
+
• src/app/api/v2/brandfetch/route.ts
|
|
1268
|
+
• src/app/api/v2/brandfetch/__tests__/route.test.ts
|
|
1269
|
+
• src/lib/schemas/brandfetch.ts
|
|
1270
|
+
|
|
1271
|
+
Tests: 10/10 passing
|
|
1272
|
+
Coverage: 100%
|
|
1273
|
+
|
|
1274
|
+
Interview decisions preserved in state.
|
|
1275
|
+
Research cached for 7 days.
|
|
1276
|
+
|
|
1277
|
+
You may close this session.
|
|
1278
|
+
```
|
|
1279
|
+
|
|
1280
|
+
If phases are incomplete:
|
|
1281
|
+
|
|
1282
|
+
```
|
|
1283
|
+
⚠️ WORKFLOW INCOMPLETE
|
|
1284
|
+
|
|
1285
|
+
Phase 9: Verification ✗ Not started
|
|
1286
|
+
Phase 11: Documentation ✗ Not started
|
|
1287
|
+
|
|
1288
|
+
BLOCKED: Cannot close until workflow complete.
|
|
1289
|
+
|
|
1290
|
+
Continue? [Yes / Force close anyway]
|
|
1291
|
+
```
|
|
1292
|
+
|
|
1293
|
+
**The Enforcement:**
|
|
1294
|
+
- **Hook:** `api-workflow-check.py` (Stop event)
|
|
1295
|
+
- **Blocks:** Closing Claude until all phases complete
|
|
1296
|
+
- **Exit Code 2:** If phases incomplete
|
|
1297
|
+
- **Validates:** Each phase has `status = "complete"`
|
|
1298
|
+
|
|
1299
|
+
**Stop Hook Behavior:**
|
|
1300
|
+
```python
|
|
1301
|
+
def on_stop_request(state):
|
|
1302
|
+
incomplete = []
|
|
1303
|
+
for phase_name, phase_data in state["phases"].items():
|
|
1304
|
+
if phase_data["status"] != "complete":
|
|
1305
|
+
incomplete.append(phase_name)
|
|
1306
|
+
|
|
1307
|
+
if incomplete:
|
|
1308
|
+
print(f"BLOCKED: Phases incomplete: {incomplete}", file=sys.stderr)
|
|
1309
|
+
sys.exit(2) # Prevent stop
|
|
1310
|
+
|
|
1311
|
+
print("Workflow complete. Safe to close.", file=sys.stderr)
|
|
1312
|
+
sys.exit(0) # Allow stop
|
|
1313
|
+
```
|
|
1314
|
+
|
|
1315
|
+
**State Tracking:**
|
|
1316
|
+
```json
|
|
1317
|
+
{
|
|
1318
|
+
"completion": {
|
|
1319
|
+
"all_phases_complete": true,
|
|
1320
|
+
"files_created": [
|
|
1321
|
+
"route.ts",
|
|
1322
|
+
"route.test.ts",
|
|
1323
|
+
"schemas/brandfetch.ts"
|
|
1324
|
+
],
|
|
1325
|
+
"tests_passing": "10/10",
|
|
1326
|
+
"documentation_updated": true,
|
|
1327
|
+
"research_cached": true
|
|
1328
|
+
}
|
|
1329
|
+
}
|
|
1330
|
+
```
|
|
1331
|
+
|
|
1332
|
+
**Why This Matters:**
|
|
1333
|
+
"Done" is verified, not claimed. All phases complete. All tests pass. All docs updated. Close confidently knowing nothing was skipped.
|
|
1334
|
+
|
|
1335
|
+
</details>
|
|
1336
|
+
|
|
1337
|
+
---
|
|
1338
|
+
|
|
1339
|
+
## Key Enforcement Mechanisms
|
|
1340
|
+
|
|
1341
|
+
### Exit Code 2 (Active Blocking)
|
|
1342
|
+
|
|
1343
|
+
**The Technical Mechanism:**
|
|
1344
|
+
Python hooks exit with code 2 instead of returning JSON deny:
|
|
1345
|
+
|
|
1346
|
+
```python
|
|
1347
|
+
# Old approach (passive - Claude sees reason but may continue)
|
|
1348
|
+
print(json.dumps({
|
|
1349
|
+
"permissionDecision": "deny",
|
|
1350
|
+
"reason": "Research required before implementation"
|
|
1351
|
+
}))
|
|
1352
|
+
sys.exit(0)
|
|
1353
|
+
|
|
1354
|
+
# New approach (active - forces Claude to respond)
|
|
1355
|
+
print("BLOCKED: Research required before implementation", file=sys.stderr)
|
|
1356
|
+
print("Run /api-research first, then try again.", file=sys.stderr)
|
|
1357
|
+
sys.exit(2)
|
|
1358
|
+
```
|
|
1359
|
+
|
|
1360
|
+
**Why Exit Code 2 Matters:**
|
|
1361
|
+
From [Anthropic's documentation](https://code.claude.com/docs/en/hooks):
|
|
1362
|
+
> "Exit code 2 creates a feedback loop directly to Claude. Claude sees your error message. **Claude adjusts. Claude tries something different.**"
|
|
1363
|
+
|
|
1364
|
+
With JSON deny:
|
|
1365
|
+
- Claude sees "permission denied"
|
|
1366
|
+
- Claude might continue with alternative approach
|
|
1367
|
+
- Block is passive
|
|
1368
|
+
|
|
1369
|
+
With Exit Code 2:
|
|
1370
|
+
- Claude sees stderr message
|
|
1371
|
+
- Claude MUST respond to error
|
|
1372
|
+
- Claude cannot proceed without fixing
|
|
1373
|
+
- Block is active
|
|
1374
|
+
|
|
1375
|
+
**Upgraded Hooks Using Exit Code 2:**
|
|
1376
|
+
- `enforce-research.py` - Forces `/api-research` before implementation
|
|
1377
|
+
- `enforce-interview.py` - Forces structured interview completion
|
|
1378
|
+
- `api-workflow-check.py` - Forces all phases complete before stopping
|
|
1379
|
+
- `verify-implementation.py` - Forces fix of critical mismatches
|
|
1380
|
+
|
|
1381
|
+
**Example Error Flow:**
|
|
1382
|
+
```
|
|
1383
|
+
1. Claude: "I'll write route.ts now"
|
|
1384
|
+
2. Hook: BLOCKED (Exit Code 2)
|
|
1385
|
+
Message: "Research required. Sources: 0. Required: 2."
|
|
1386
|
+
3. Claude: "Let me research first using Context7..."
|
|
1387
|
+
4. Claude calls Context7
|
|
1388
|
+
5. Hook: Allowed (research done)
|
|
1389
|
+
6. Claude: "Now I'll write route.ts based on research"
|
|
1390
|
+
```
|
|
1391
|
+
|
|
1392
|
+
### phase_exit_confirmed Enforcement
|
|
1393
|
+
|
|
1394
|
+
**The Problem:**
|
|
1395
|
+
Claude calls `AskUserQuestion` but immediately self-answers without waiting:
|
|
1396
|
+
|
|
1397
|
+
```
|
|
1398
|
+
Claude: "Which format do you want? [JSON / XML]"
|
|
1399
|
+
Claude: "I'll use JSON since it's most common."
|
|
1400
|
+
(User never had a chance to respond)
|
|
1401
|
+
```
|
|
1402
|
+
|
|
1403
|
+
**The Solution:**
|
|
1404
|
+
Every phase requires TWO things:
|
|
1405
|
+
1. An "exit confirmation" question (detected by patterns)
|
|
1406
|
+
2. An affirmative user response (detected by patterns)
|
|
1407
|
+
|
|
1408
|
+
**Detection Logic:**
|
|
1409
|
+
```python
|
|
1410
|
+
def _detect_question_type(question_text, options):
|
|
1411
|
+
"""Classifies questions into types"""
|
|
1412
|
+
exit_patterns = [
|
|
1413
|
+
"proceed", "continue", "ready to", "move to",
|
|
1414
|
+
"approve", "confirm", "looks good", "correct"
|
|
1415
|
+
]
|
|
1416
|
+
|
|
1417
|
+
for pattern in exit_patterns:
|
|
1418
|
+
if pattern in question_text.lower():
|
|
1419
|
+
return "exit_confirmation"
|
|
1420
|
+
|
|
1421
|
+
return "data_collection"
|
|
1422
|
+
|
|
1423
|
+
def _is_affirmative_response(response, options):
|
|
1424
|
+
"""Checks if user approved"""
|
|
1425
|
+
affirmative = [
|
|
1426
|
+
"yes", "proceed", "approve", "confirm",
|
|
1427
|
+
"ready", "looks good", "correct", "go ahead"
|
|
1428
|
+
]
|
|
1429
|
+
|
|
1430
|
+
for pattern in affirmative:
|
|
1431
|
+
if pattern in response.lower():
|
|
1432
|
+
return True
|
|
1433
|
+
|
|
1434
|
+
return False
|
|
1435
|
+
```
|
|
1436
|
+
|
|
1437
|
+
**Enforcement Flow:**
|
|
1438
|
+
```
|
|
1439
|
+
1. Claude asks: "Research complete. Proceed to interview?"
|
|
1440
|
+
2. Hook detects: exit_confirmation question
|
|
1441
|
+
3. Hook sets: waiting_for_response = true
|
|
1442
|
+
4. Claude tries: Write/Edit operation
|
|
1443
|
+
5. Hook blocks: Exit Code 2 (still waiting for response)
|
|
1444
|
+
6. User responds: "yes"
|
|
1445
|
+
7. Hook detects: affirmative response
|
|
1446
|
+
8. Hook sets: phase_exit_confirmed = true
|
|
1447
|
+
9. Claude tries: Write/Edit operation
|
|
1448
|
+
10. Hook allows: phase_exit_confirmed = true
|
|
1449
|
+
```
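The write-side of this flow can be sketched as a small PreToolUse gate. This is an illustrative simplification (the function name and exact state keys are assumptions, not the shipped hook source): while an exit-confirmation question is awaiting the user's answer, any Write/Edit attempt is blocked with exit code 2.

```python
import sys

def gate_write(state: dict, phase: str) -> None:
    """Block Write/Edit while a phase-exit question is unanswered."""
    p = state["phases"][phase]
    if p.get("waiting_for_response") and not p.get("phase_exit_confirmed"):
        print(f"BLOCKED: awaiting user confirmation for phase '{phase}'",
              file=sys.stderr)
        sys.exit(2)  # exit code 2 forces Claude to respond to stderr
```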
|
|
1450
|
+
|
|
1451
|
+
**State Tracking:**
|
|
1452
|
+
```json
|
|
1453
|
+
{
|
|
1454
|
+
"research_initial": {
|
|
1455
|
+
"phase_exit_confirmed": false,
|
|
1456
|
+
"last_question_type": "exit_confirmation",
|
|
1457
|
+
"waiting_for_response": true
|
|
1458
|
+
}
|
|
1459
|
+
}
|
|
1460
|
+
```
|
|
1461
|
+
|
|
1462
|
+
After user responds "yes":
|
|
1463
|
+
```json
|
|
1464
|
+
{
|
|
1465
|
+
"research_initial": {
|
|
1466
|
+
"phase_exit_confirmed": true,
|
|
1467
|
+
"last_question_type": "exit_confirmation",
|
|
1468
|
+
"waiting_for_response": false,
|
|
1469
|
+
"user_response": "yes"
|
|
1470
|
+
}
|
|
1471
|
+
}
|
|
1472
|
+
```
|
|
1473
|
+
|
|
1474
|
+
### Context7 Integration
|
|
1475
|
+
|
|
1476
|
+
**What Context7 Is:**
|
|
1477
|
+
Context7 is an MCP server that fetches live documentation from npm, GitHub, and official API docs. It doesn't use Claude's training data - it gets TODAY's docs.
|
|
1478
|
+
|
|
1479
|
+
**How It Works:**
|
|
1480
|
+
1. **Resolve library ID:**
|
|
1481
|
+
```
|
|
1482
|
+
Input: "brandfetch"
|
|
1483
|
+
Context7: Searches npm, GitHub, official sites
|
|
1484
|
+
Output: /brandfetch/api-docs (Context7 ID)
|
|
1485
|
+
```
|
|
1486
|
+
|
|
1487
|
+
2. **Fetch documentation:**
|
|
1488
|
+
```
|
|
1489
|
+
Input: /brandfetch/api-docs
|
|
1490
|
+
Context7: Retrieves docs, parses endpoints, extracts parameters
|
|
1491
|
+
Output: 23 endpoints, 47 parameters, code examples
|
|
1492
|
+
```
|
|
1493
|
+
|
|
1494
|
+
**Why It Matters:**
|
|
1495
|
+
Training data example (wrong):
|
|
1496
|
+
```
|
|
1497
|
+
Brandfetch API (from Claude's training - 2023):
|
|
1498
|
+
- Auth: API key in query parameter
|
|
1499
|
+
- Base URL: https://api.brandfetch.io/v1
|
|
1500
|
+
- Format parameter: "format"
|
|
1501
|
+
```
|
|
1502
|
+
|
|
1503
|
+
Context7 result (correct - 2024):
|
|
1504
|
+
```
|
|
1505
|
+
Brandfetch API (from Context7 - today):
|
|
1506
|
+
- Auth: Bearer token in Authorization header
|
|
1507
|
+
- Base URL: https://api.brandfetch.io/v2
|
|
1508
|
+
- Format parameter: "imageFormat"
|
|
1509
|
+
```
|
|
1510
|
+
|
|
1511
|
+
**Real Example:**
|
|
1512
|
+
When researching Brandfetch, Context7 found:
|
|
1513
|
+
- `include_fonts` parameter (added in v2.3, not in training data)
|
|
1514
|
+
- `X-Plan-RateLimit-*` headers (non-standard names)
|
|
1515
|
+
- Error response structure (undocumented in official API reference)
|
|
1516
|
+
|
|
1517
|
+
Training data would have missed all three.
|
|
1518
|
+
|
|
1519
|
+
**MCP Configuration:**
|
|
1520
|
+
Automatically installed by `npx @hustle-together/api-dev-tools`:
|
|
1521
|
+
```json
|
|
1522
|
+
{
|
|
1523
|
+
"mcpServers": {
|
|
1524
|
+
"context7": {
|
|
1525
|
+
"command": "npx",
|
|
1526
|
+
"args": ["-y", "@context7/mcp-server"]
|
|
1527
|
+
}
|
|
1528
|
+
}
|
|
1529
|
+
}
|
|
1530
|
+
```
|
|
1531
|
+
|
|
1532
|
+
### Automatic Re-grounding
|
|
1533
|
+
|
|
1534
|
+
**The Problem:**
|
|
1535
|
+
Long sessions lose context. Phase 3 (interview) happens at turn 15. Phase 9 (verification) happens at turn 47. By turn 47, Claude has forgotten your interview decisions from turn 15.
|
|
1536
|
+
|
|
1537
|
+
**The Solution:**
|
|
1538
|
+
`periodic-reground.py` re-injects a state summary every 7 turns.
|
|
1539
|
+
|
|
1540
|
+
**What Gets Re-injected:**
|
|
1541
|
+
```
|
|
1542
|
+
Turn 7: [RE-GROUNDING]
|
|
1543
|
+
Current phase: interview
|
|
1544
|
+
Interview decisions so far:
|
|
1545
|
+
- response_format: json_with_urls
|
|
1546
|
+
- caching: 24h
|
|
1547
|
+
Research sources:
|
|
1548
|
+
- context7:/brandfetch/api-docs
|
|
1549
|
+
- websearch:brandfetch-api-2025
|
|
1550
|
+
Intentional omissions:
|
|
1551
|
+
- None yet
|
|
1552
|
+
```
|
|
1553
|
+
|
|
1554
|
+
Turns 14, 21, 28, 35, 42, 49... the same summary is refreshed.
|
|
1555
|
+
|
|
1556
|
+
**State Tracking:**
|
|
1557
|
+
```json
|
|
1558
|
+
{
|
|
1559
|
+
"turn_count": 47,
|
|
1560
|
+
"reground_history": [
|
|
1561
|
+
{ "turn": 7, "phase": "interview" },
|
|
1562
|
+
{ "turn": 14, "phase": "tdd_red" },
|
|
1563
|
+
{ "turn": 21, "phase": "tdd_green" },
|
|
1564
|
+
{ "turn": 28, "phase": "verify" },
|
|
1565
|
+
{ "turn": 35, "phase": "refactor" },
|
|
1566
|
+
{ "turn": 42, "phase": "documentation" }
|
|
1567
|
+
]
|
|
1568
|
+
}
|
|
1569
|
+
```
|
|
1570
|
+
|
|
1571
|
+
**Why Every 7 Turns:**
|
|
1572
|
+
Research shows context dilution accelerates after 7-10 messages. Re-grounding every 7 turns keeps critical decisions fresh without overwhelming Claude with redundant information.
|
|
1573
|
+
|
|
1574
|
+
**What Gets Preserved:**
|
|
1575
|
+
- Interview decisions (so implementation matches choices)
|
|
1576
|
+
- Research sources (so verification can re-check)
|
|
1577
|
+
- Intentional omissions (so they aren't flagged as gaps)
|
|
1578
|
+
- Current phase (so Claude knows where it is)
|
|
546
1579
|
|
|
547
1580
|
---
|
|
548
1581
|
|
|
549
1582
|
## File Structure
|
|
550
1583
|
|
|
551
1584
|
```
|
|
552
|
-
@hustle-together/api-dev-tools v3.6.
|
|
1585
|
+
@hustle-together/api-dev-tools v3.6.1
|
|
553
1586
|
│
|
|
554
1587
|
├── bin/
|
|
555
|
-
│ └── cli.js # NPX installer
|
|
1588
|
+
│ └── cli.js # NPX installer
|
|
556
1589
|
│
|
|
557
1590
|
├── commands/ # 24 slash commands
|
|
558
1591
|
│ ├── api-create.md # Main 12-phase workflow
|
|
@@ -561,18 +1594,32 @@ Here's exactly what happens when you run `/api-create brandfetch`:
|
|
|
561
1594
|
│ ├── api-verify.md # Manual verification
|
|
562
1595
|
│ ├── api-env.md # Environment check
|
|
563
1596
|
│ ├── api-status.md # Progress tracking
|
|
564
|
-
│ ├── red.md
|
|
1597
|
+
│ ├── red.md # TDD Red phase
|
|
1598
|
+
│ ├── green.md # TDD Green phase
|
|
1599
|
+
│ ├── refactor.md # TDD Refactor phase
|
|
565
1600
|
│ ├── cycle.md # Full TDD cycle
|
|
566
|
-
│ ├──
|
|
567
|
-
│
|
|
1601
|
+
│ ├── spike.md # Exploration mode
|
|
1602
|
+
│ ├── commit.md # Git commit
|
|
1603
|
+
│ ├── pr.md # Pull request
|
|
1604
|
+
│ ├── busycommit.md # Atomic commits
|
|
1605
|
+
│ ├── plan.md # Implementation planning
|
|
1606
|
+
│ ├── gap.md # Requirement gaps
|
|
1607
|
+
│ ├── issue.md # GitHub issue workflow
|
|
1608
|
+
│ ├── tdd.md # TDD reminder
|
|
1609
|
+
│ ├── summarize.md # Session summary
|
|
1610
|
+
│ ├── beepboop.md # AI attribution
|
|
1611
|
+
│ ├── add-command.md # Create slash commands
|
|
1612
|
+
│ ├── worktree-add.md # Git worktree management
|
|
1613
|
+
│ ├── worktree-cleanup.md # Worktree cleanup
|
|
1614
|
+
│ └── README.md # Command reference
|
|
568
1615
|
│
|
|
569
1616
|
├── hooks/ # 18 Python enforcement hooks
|
|
570
1617
|
│ │
|
|
571
1618
|
│ │ # Session lifecycle
|
|
572
|
-
│ ├── session-startup.py #
|
|
1619
|
+
│ ├── session-startup.py # Inject state on start
|
|
573
1620
|
│ │
|
|
574
1621
|
│ │ # User prompt processing
|
|
575
|
-
│ ├── enforce-external-research.py #
|
|
1622
|
+
│ ├── enforce-external-research.py # Detect API terms, require research
|
|
576
1623
|
│ │
|
|
577
1624
|
│ │ # PreToolUse (Write/Edit) - BLOCKING
|
|
578
1625
|
│ ├── enforce-disambiguation.py # Phase 0
|
|
@@ -591,15 +1638,15 @@ Here's exactly what happens when you run `/api-create brandfetch`:
|
|
|
591
1638
|
│ │ # PostToolUse - TRACKING
|
|
592
1639
|
│ ├── track-tool-use.py # Log all tool usage
|
|
593
1640
|
│ ├── periodic-reground.py # Re-inject context every 7 turns
|
|
594
|
-
│ ├── verify-after-green.py #
|
|
1641
|
+
│ ├── verify-after-green.py # Auto-trigger Phase 9
|
|
595
1642
|
│ │
|
|
596
1643
|
│ │ # Stop - BLOCKING
|
|
597
|
-
│ └── api-workflow-check.py #
|
|
1644
|
+
│ └── api-workflow-check.py # Phase 12 (block if incomplete)
|
|
598
1645
|
│
|
|
599
1646
|
├── templates/
|
|
600
|
-
│ ├── api-dev-state.json #
|
|
1647
|
+
│ ├── api-dev-state.json # 12 phases + phase_exit_confirmed
|
|
601
1648
|
│ ├── settings.json # Hook registrations
|
|
602
|
-
│ ├── research-index.json #
|
|
1649
|
+
│ ├── research-index.json # 7-day freshness tracking
|
|
603
1650
|
│ └── CLAUDE-SECTION.md # CLAUDE.md injection
|
|
604
1651
|
│
|
|
605
1652
|
├── scripts/
|
|
@@ -607,12 +1654,14 @@ Here's exactly what happens when you run `/api-create brandfetch`:
|
|
|
607
1654
|
│ ├── extract-parameters.ts # Extract Zod params
|
|
608
1655
|
│ └── collect-test-results.ts # Run tests → results
|
|
609
1656
|
│
|
|
610
|
-
└── package.json # v3.6.
|
|
1657
|
+
└── package.json # v3.6.1
|
|
611
1658
|
```
|
|
612
1659
|
|
|
613
1660
|
---
|
|
614
1661
|
|
|
615
|
-
## State File Structure
|
|
1662
|
+
## State File Structure
|
|
1663
|
+
|
|
1664
|
+
The `.claude/api-dev-state.json` file tracks workflow progress:
|
|
616
1665
|
|
|
617
1666
|
```json
|
|
618
1667
|
{
|
|
@@ -631,11 +1680,21 @@ Here's exactly what happens when you run `/api-create brandfetch`:
|
|
|
631
1680
|
"scope": {
|
|
632
1681
|
"status": "complete",
|
|
633
1682
|
"confirmed": true,
|
|
1683
|
+
"scope_description": "Fetch brand assets with logo-only mode",
|
|
634
1684
|
"phase_exit_confirmed": true
|
|
635
1685
|
},
|
|
636
1686
|
"research_initial": {
|
|
637
1687
|
"status": "complete",
|
|
638
|
-
"sources": [
|
|
1688
|
+
"sources": [
|
|
1689
|
+
"context7:/brandfetch/api-docs",
|
|
1690
|
+
"websearch:brandfetch-api-2025",
|
|
1691
|
+
"websearch:brandfetch-rate-limits"
|
|
1692
|
+
],
|
|
1693
|
+
"findings": {
|
|
1694
|
+
"base_url": "https://api.brandfetch.io/v2",
|
|
1695
|
+
"auth_method": "bearer_token",
|
|
1696
|
+
"rate_limit": "5/second"
|
|
1697
|
+
},
|
|
639
1698
|
"phase_exit_confirmed": true
|
|
640
1699
|
},
|
|
641
1700
|
"interview": {
|
|
@@ -652,53 +1711,71 @@ Here's exactly what happens when you run `/api-create brandfetch`:
|
|
|
652
1711
|
},
|
|
653
1712
|
"research_deep": {
|
|
654
1713
|
"status": "complete",
|
|
655
|
-
"proposed_searches": ["error-format", "rate-headers", "auth"],
|
|
656
|
-
"approved_searches": ["error-format", "rate-headers", "auth"],
|
|
657
|
-
"executed_searches": ["error-format", "rate-headers", "auth"],
|
|
1714
|
+
"proposed_searches": ["error-format", "rate-headers", "auth-errors"],
|
|
1715
|
+
"approved_searches": ["error-format", "rate-headers", "auth-errors"],
|
|
1716
|
+
"executed_searches": ["error-format", "rate-headers", "auth-errors"],
|
|
1717
|
+
"findings": {
|
|
1718
|
+
"error_structure": { "documented": true },
|
|
1719
|
+
"rate_headers": ["X-Plan-RateLimit-Limit", "X-Plan-RateLimit-Remaining"]
|
|
1720
|
+
},
|
|
658
1721
|
"phase_exit_confirmed": true
|
|
659
1722
|
},
|
|
660
1723
|
"schema_creation": {
|
|
661
1724
|
"status": "complete",
|
|
662
1725
|
"schema_file": "src/lib/schemas/brandfetch.ts",
|
|
1726
|
+
"includes_interview_decisions": true,
|
|
1727
|
+
"includes_research_findings": true,
|
|
663
1728
|
"phase_exit_confirmed": true
|
|
664
1729
|
},
|
|
665
1730
|
"environment_check": {
|
|
666
1731
|
"status": "complete",
|
|
667
1732
|
"keys_found": ["BRANDFETCH_API_KEY"],
|
|
1733
|
+
"keys_missing": [],
|
|
1734
|
+
"validated": true,
|
|
668
1735
|
"phase_exit_confirmed": true
|
|
669
1736
|
},
|
|
670
1737
|
"tdd_red": {
|
|
671
1738
|
"status": "complete",
|
|
672
1739
|
"test_file": "src/app/api/v2/brandfetch/__tests__/route.test.ts",
|
|
673
|
-
"test_count":
|
|
1740
|
+
"test_count": 10,
|
|
1741
|
+
"scenarios": ["success", "errors", "edge_cases"],
|
|
1742
|
+
"all_failing": true,
|
|
674
1743
|
"phase_exit_confirmed": true
|
|
675
1744
|
},
|
|
676
1745
|
"tdd_green": {
|
|
677
1746
|
"status": "complete",
|
|
678
|
-
"all_tests_passing": true
|
|
1747
|
+
"all_tests_passing": true,
|
|
1748
|
+
"coverage": "100%",
|
|
1749
|
+
"auto_triggered_verify": true
|
|
679
1750
|
},
|
|
680
1751
|
"verify": {
|
|
681
1752
|
"status": "complete",
|
|
682
|
-
"
|
|
1753
|
+
"re_researched": true,
|
|
1754
|
+
"gaps_found": 1,
|
|
683
1755
|
"gaps_fixed": 0,
|
|
684
|
-
"intentional_omissions": ["include_fonts"
|
|
1756
|
+
"intentional_omissions": ["include_fonts"],
|
|
1757
|
+
"comparison_table": { ... },
|
|
685
1758
|
"phase_exit_confirmed": true
|
|
686
1759
|
},
|
|
687
1760
|
"tdd_refactor": {
|
|
688
1761
|
"status": "complete",
|
|
1762
|
+
"changes_made": ["extracted_helpers", "added_docs"],
|
|
1763
|
+
"tests_still_passing": true,
|
|
689
1764
|
"phase_exit_confirmed": true
|
|
690
1765
|
},
|
|
691
1766
|
"documentation": {
|
|
692
1767
|
"status": "complete",
|
|
693
1768
|
"manifest_updated": true,
|
|
694
1769
|
"research_cached": true,
|
|
1770
|
+
"cache_freshness": "7 days",
|
|
695
1771
|
"phase_exit_confirmed": true
|
|
696
1772
|
}
|
|
697
1773
|
},
|
|
698
1774
|
"reground_history": [
|
|
699
1775
|
{ "turn": 7, "phase": "interview" },
|
|
700
1776
|
{ "turn": 14, "phase": "tdd_red" },
|
|
701
|
-
{ "turn": 21, "phase": "tdd_green" }
|
|
1777
|
+
{ "turn": 21, "phase": "tdd_green" },
|
|
1778
|
+
{ "turn": 28, "phase": "verify" }
|
|
702
1779
|
]
|
|
703
1780
|
}
|
|
704
1781
|
```
|
|
@@ -727,55 +1804,32 @@ npx @hustle-together/api-dev-tools --scope=project
|
|
|
727
1804
|
|
|
728
1805
|
---
|
|
729
1806
|
|
|
730
|
-
## What's New in v3.6.
|
|
731
|
-
|
|
732
|
-
### Exit Code 2 for Stronger Enforcement
|
|
733
|
-
|
|
734
|
-
**Problem:** JSON `permissionDecision: "deny"` blocks actions but Claude may continue normally. We needed a way to force Claude to **actively respond** to the error.
|
|
1807
|
+
## What's New in v3.6.1
|
|
735
1808
|
|
|
736
|
-
|
|
737
|
-
|
|
738
|
-
|
|
739
|
-
|
|
740
|
-
|
|
741
|
-
|
|
742
|
-
|
|
743
|
-
# After (active - forces Claude into feedback loop)
|
|
744
|
-
print("BLOCKED: ...", file=sys.stderr)
|
|
745
|
-
sys.exit(2)
|
|
746
|
-
```
|
|
747
|
-
|
|
748
|
-
**Upgraded hooks:**
|
|
749
|
-
- `enforce-research.py` - Forces `/api-research` before implementation
|
|
750
|
-
- `enforce-interview.py` - Forces structured interview completion
|
|
751
|
-
- `api-workflow-check.py` - Forces all phases complete before stopping
|
|
752
|
-
- `verify-implementation.py` - Forces fix of critical mismatches
|
|
753
|
-
|
|
754
|
-
From [Anthropic's docs](https://code.claude.com/docs/en/hooks):
|
|
755
|
-
> "Exit code 2 creates a feedback loop directly to Claude. Claude sees your error message. **Claude adjusts. Claude tries something different.**"
|
|
1809
|
+
### README Improvements
|
|
1810
|
+
- Removed verbose ASCII workflow simulations (was 798 lines)
|
|
1811
|
+
- Added comprehensive explanations for all 12 phases
|
|
1812
|
+
- Detailed sections on Exit Code 2, Context7, phase_exit_confirmed
|
|
1813
|
+
- Better mobile/narrow display formatting (50-char width ASCII)
|
|
1814
|
+
- Collapsible sections for easier scanning
|
|
756
1815
|
|
|
757
1816
|
---
|
|
758
1817
|
|
|
759
|
-
## What's New in v3.
|
|
1818
|
+
## What's New in v3.6.0
|
|
760
1819
|
|
|
761
|
-
###
|
|
1820
|
+
### Exit Code 2 for Stronger Enforcement
|
|
762
1821
|
|
|
763
|
-
|
|
1822
|
+
All blocking hooks now use `sys.exit(2)` instead of JSON deny. This creates an active feedback loop - Claude must respond to the error.
|
|
764
1823
|
|
|
765
|
-
|
|
766
|
-
|
|
767
|
-
|
|
768
|
-
|
|
1824
|
+
Upgraded hooks:
|
|
1825
|
+
- `enforce-research.py`
|
|
1826
|
+
- `enforce-interview.py`
|
|
1827
|
+
- `api-workflow-check.py`
|
|
1828
|
+
- `verify-implementation.py`
|
|
769
1829
|
|
|
770
|
-
|
|
771
|
-
# In track-tool-use.py
|
|
772
|
-
def _detect_question_type(question_text, options):
|
|
773
|
-
"""Detects: 'exit_confirmation', 'data_collection', 'clarification'"""
|
|
774
|
-
exit_patterns = ["proceed", "continue", "ready to", "approve", "confirm", ...]
|
|
1830
|
+
### phase_exit_confirmed Enforcement
|
|
775
1831
|
|
|
776
|
-
|
|
777
|
-
"""Checks for: 'yes', 'proceed', 'approve', 'confirm', 'ready', ..."""
|
|
778
|
-
```
|
|
1832
|
+
Every phase requires an "exit confirmation" question and affirmative user response before advancing. Prevents Claude from self-answering questions.
|
|
779
1833
|
|
|
780
1834
|
---
|
|
781
1835
|
|