forgedev 1.1.3 → 1.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +2 -1
- package/bin/devforge.js +2 -1
- package/docs/00-README.md +310 -0
- package/docs/01-universal-prompt-library.md +1049 -0
- package/docs/02-claude-code-mastery-playbook.md +283 -0
- package/docs/03-multi-agent-verification.md +565 -0
- package/docs/04-errata-and-verification-checklist.md +284 -0
- package/docs/05-universal-scaffolder-vision.md +452 -0
- package/docs/06-confidence-assessment-and-repo-prompt.md +407 -0
- package/docs/errata.md +58 -0
- package/docs/multi-agent-verification.md +66 -0
- package/docs/plans/.gitkeep +0 -0
- package/docs/playbook.md +95 -0
- package/docs/prompt-library.md +160 -0
- package/docs/uat/UAT_CHECKLIST.csv +9 -0
- package/docs/uat/UAT_TEMPLATE.md +163 -0
- package/package.json +10 -2
- package/src/claude-configurator.js +1 -0
- package/src/cli.js +5 -5
- package/src/index.js +3 -3
- package/src/utils.js +1 -1
- package/templates/base/docs/plans/.gitkeep +0 -0
- package/templates/base/docs/uat/UAT_CHECKLIST.csv.template +2 -0
- package/templates/base/docs/uat/UAT_TEMPLATE.md.template +22 -0
- package/templates/claude-code/agents/build-error-resolver.md +3 -2
- package/templates/claude-code/agents/code-quality-reviewer.md +1 -1
- package/templates/claude-code/agents/database-reviewer.md +1 -1
- package/templates/claude-code/agents/doc-updater.md +1 -1
- package/templates/claude-code/agents/harness-optimizer.md +26 -0
- package/templates/claude-code/agents/loop-operator.md +2 -1
- package/templates/claude-code/agents/product-strategist.md +124 -0
- package/templates/claude-code/agents/security-reviewer.md +1 -0
- package/templates/claude-code/agents/spec-validator.md +31 -1
- package/templates/claude-code/agents/uat-validator.md +4 -0
- package/templates/claude-code/claude-md/base.md +1 -0
- package/templates/claude-code/claude-md/nextjs.md +1 -1
- package/templates/claude-code/commands/code-review.md +7 -1
- package/templates/claude-code/commands/full-audit.md +3 -2
- package/templates/claude-code/commands/workflows.md +3 -0
- package/templates/claude-code/hooks/scripts/autofix-polyglot.mjs +20 -10
- package/templates/claude-code/hooks/scripts/autofix-python.mjs +3 -4
- package/templates/claude-code/hooks/scripts/autofix-typescript.mjs +3 -3
- package/templates/claude-code/hooks/scripts/guard-protected-files.mjs +2 -2
- package/templates/claude-code/skills/git-workflow/SKILL.md +2 -2
- package/templates/claude-code/skills/nextjs/SKILL.md +1 -1
- package/templates/claude-code/skills/playwright/SKILL.md +6 -5
- package/templates/claude-code/skills/security-web/SKILL.md +1 -0
- package/templates/infra/github-actions/.github/workflows/ci.yml.template +49 -0
- package/templates/testing/pytest/backend/tests/__init__.py +0 -0
- package/templates/testing/pytest/backend/tests/conftest.py.template +11 -0
- package/templates/testing/pytest/backend/tests/test_health.py.template +10 -0
- package/templates/testing/vitest/vitest.config.ts.template +18 -0
- package/CLAUDE.md +0 -38
package/docs/01-universal-prompt-library.md

@@ -0,0 +1,1049 @@
# Universal Claude Code Prompt Library
## From Idea to Production — Every Prompt You'll Ever Need

*Works in: Claude Code CLI, VS Code Extension, Cursor, Windsurf*

---

## Does This Work in VS Code?

**Yes.** Everything in this playbook works identically in VS Code. Claude Code has an official VS Code extension — install it from the marketplace (search "Claude Code" by Anthropic). CLAUDE.md, hooks, skills, subagents, commands — all of it reads from the same `.claude/` directory in your project root regardless of whether you're in the terminal or VS Code.

The only difference is the interface: VS Code gives you inline diffs with accept/reject buttons, a sidebar chat panel, and `@-mention` file references. The underlying engine is identical.

**Setup:** `Ctrl+Shift+X` → search "Claude Code" → Install → click the Spark icon in the sidebar.

---

## The Master Workflow (Mermaid)

```mermaid
graph TD
    A["💡 IDEA / BRAINSTORM"] --> B["📋 SITUATION ASSESSMENT"]

    B --> C{"What type of work?"}

    C -->|"New project"| D["🏗️ NEW PROJECT FLOW"]
    C -->|"New feature"| E["✨ NEW FEATURE FLOW"]
    C -->|"Bug fix"| F["🐛 BUG FIX FLOW"]
    C -->|"Cleanup / Refactor"| G["🧹 CLEANUP FLOW"]
    C -->|"Joining existing project"| H["🔍 PROJECT ONBOARDING FLOW"]
    C -->|"Validate existing work"| I["✅ AUDIT FLOW"]
    C -->|"UAT / acceptance testing"| UAT["🧪 UAT FLOW"]
    C -->|"Preparing for production"| PROD["🚀 PRODUCTION READINESS FLOW"]

    D --> D1["Define product vision"] --> D2["Create structured spec"] --> D3["Set up project scaffolding"]
    D3 --> D4["Create CLAUDE.md + hooks + skills"] --> D5["Plan Mode: architecture review"]
    D5 --> D6["Implement phase by phase"] --> D7["Verification chain"]

    E --> E1["Read spec + existing patterns"] --> E2["Plan Mode: impact analysis"]
    E2 --> E3["Backend tasks"] --> E4["Frontend tasks"]
    E4 --> E5["Tests"] --> E6["Verification chain"]

    F --> F1["Reproduce the bug"] --> F2["Root cause analysis"]
    F2 --> F3["Write failing test FIRST"] --> F4["Implement fix"]
    F4 --> F5["Verify test passes"] --> F6["Run full suite for regressions"]

    G --> G1["Read codebase thoroughly"] --> G2["Identify scope + risks"]
    G2 --> G3["Plan Mode: refactor strategy"] --> G4["Atomic changes with tests"]
    G4 --> G5["Verification chain"]

    H --> H1["Explore + map codebase"] --> H2["Read all docs + specs"]
    H2 --> H3["Generate CLAUDE.md"] --> H4["Identify issues + priorities"]
    H4 --> H5["Plan approach with owner"]

    I --> I1["Spec compliance audit"] --> I2["Wiring audit"]
    I2 --> I3["Security audit"] --> I4["AI prompt quality audit"]
    I4 --> I5["Production readiness audit"] --> I6["Consolidated report"]

    UAT --> UAT1["Generate scenarios from spec"] --> UAT2["Map to automated tests"]
    UAT2 --> UAT3["Execute UAT"] --> UAT4["Smoke test after deploy"]

    PROD --> PROD1["Pre-deployment checklist"] --> PROD2["Add missing failover patterns"]
    PROD2 --> PROD3["Smoke test"] --> PROD4["Monitor first 24 hours"]

    D7 --> J["🚀 PR + REVIEW"]
    E6 --> J
    F6 --> J
    G5 --> J

    J --> K["Claude Code Review<br/>(multi-agent PR review)"]
    K --> L["✅ MERGE"]

    style A fill:#4CAF50,color:#fff
    style B fill:#2196F3,color:#fff
    style C fill:#FF9800,color:#fff
    style D fill:#9C27B0,color:#fff
    style E fill:#00BCD4,color:#fff
    style F fill:#f44336,color:#fff
    style G fill:#607D8B,color:#fff
    style H fill:#795548,color:#fff
    style I fill:#FF5722,color:#fff
    style J fill:#4CAF50,color:#fff
    style L fill:#4CAF50,color:#fff
```

---

## Flow 1: NEW PROJECT (From Idea to First Commit)

### Step 1.1 — Define Product Vision

```
I'm starting a new project. Here's my brainstorm/idea:

[paste your brainstorm, notes, sketches, voice memo transcript — raw is fine]

Before writing ANY code, help me structure this into a Product Requirements Document:

1. Product vision (one paragraph: what it is, who it's for, why it matters)
2. Core user personas (2-3 max, with their primary goals)
3. Feature list grouped into MVP vs Phase 2 vs Future
4. For each MVP feature:
   - User story (As a [role], I want [X] so that [Y])
   - Acceptance criteria (Given/When/Then checkboxes)
   - Technical constraints (if any)
5. Non-functional requirements (performance, security, compliance)
6. Tech stack recommendation with justification
7. Out of scope (explicitly state what this is NOT)

Output as a structured markdown document. Do NOT write any code yet.
Save to docs/PRD.md
```

### Step 1.2 — Create Architecture Spec

```
Read docs/PRD.md.

Now create an architecture document covering:

1. System architecture diagram (describe in text, I'll draw it)
2. Data model — every entity, every field, every relationship
3. API design — every endpoint with HTTP method, path, request/response types
4. Frontend component tree — pages, major components, data flow
5. Third-party integrations needed
6. Infrastructure requirements (database, hosting, storage, AI services)

For each major component, specify:
- Input: what data it receives
- Output: what data it produces
- Dependencies: what it needs to function
- Error cases: what can go wrong

Save to docs/ARCHITECTURE.md
Do NOT write any code yet.
```

### Step 1.3 — Project Scaffolding

```
Read docs/PRD.md and docs/ARCHITECTURE.md.

Set up the project structure. Create:

1. Directory structure following the architecture
2. Package.json / requirements.txt with dependencies
3. TypeScript/Python config files
4. ESLint, Prettier, Ruff config files
5. Git initialization with .gitignore
6. Empty placeholder files for major modules (just the files, no implementation)
7. README.md with setup instructions

Follow 2026 best practices for [framework].
Use the latest stable versions of all dependencies.
Do NOT implement any features yet — scaffolding only.
```

### Step 1.4 — Create CLAUDE.md + Hooks + Skills

```
Read docs/PRD.md and docs/ARCHITECTURE.md.

Now create the development infrastructure:

1. CLAUDE.md (under 150 lines):
   - Project identity (what this is, tech stack)
   - Folder structure map
   - Verification commands (lint, type check, test, build)
   - Key patterns to follow
   - Known constraints

2. Hooks (.claude/settings.json):
   - PostToolUse: auto-lint after every file edit
   - PreToolUse: protect .env, lockfiles, migrations
   - Stop: run type check + lint before marking done

3. Directory-scoped CLAUDE.md files:
   - backend/CLAUDE.md (backend-specific rules)
   - src/CLAUDE.md or frontend/CLAUDE.md (frontend-specific rules)
   - tests/CLAUDE.md or e2e/CLAUDE.md (test-specific rules)

4. Any project-specific skills if needed (.claude/skills/)

Make the hooks executable (chmod +x).
Test that each hook works by running it manually.
```
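The hook wiring this prompt asks for can be sketched as a `.claude/settings.json` like the one below. This is a minimal illustration, not DevForge's exact generated file; the script paths mirror the `hooks/scripts/*.mjs` files shipped in this package, and the matchers and commands should be adapted per project.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "node .claude/hooks/scripts/autofix-typescript.mjs" }
        ]
      }
    ],
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "node .claude/hooks/scripts/guard-protected-files.mjs" }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "npm run typecheck && npm run lint" }
        ]
      }
    ]
  }
}
```

PostToolUse fires after each edit (auto-fix), PreToolUse gates writes to protected files, and the Stop hook blocks "done" until type check and lint pass.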

### Step 1.5 — Plan the First Phase

```
Read docs/PRD.md and docs/ARCHITECTURE.md.

Enter Plan Mode. For the MVP's first feature:
1. List every file you will create or modify
2. Separate into BACKEND tasks and FRONTEND tasks
3. For each task, describe the exact changes (2-3 sentences)
4. Identify any risks or dependencies
5. List the tests you will write for each task
6. Estimate complexity (simple/medium/complex)

Save the plan to docs/plans/phase-1-plan.md.
DO NOT write any code until I confirm this plan.
```

### Step 1.6 — Implement (Phase by Phase)

See "Flow 2: NEW FEATURE" below — each phase follows that pattern.

---

## Flow 2: NEW FEATURE (Adding to Existing Project)

### Step 2.1 — Understand Context

```
I need to add a new feature: [description].

Before doing anything:
1. Read CLAUDE.md and MEMORY.md (if they exist)
2. Read the spec at [path/to/spec.md] (if it exists)
3. Search the codebase for similar features already built
4. Identify 2-3 existing files that follow the pattern this feature should follow
5. List all files that will need to change

Tell me:
- Which existing patterns should this follow?
- What are the risks of breaking existing functionality?
- What dependencies does this feature have?

DO NOT write any code yet.
```

### Step 2.2 — Plan (Always Before Implementation)

```
Based on your analysis, create a plan for this feature.

Separate into BACKEND tasks and FRONTEND tasks.

For each task:
- Files to create or modify
- What changes specifically
- What test covers this
- Estimated complexity

Save to docs/plans/[feature-name]-plan.md.
DO NOT write any code until I approve.
```

### Step 2.3 — Backend Implementation (Atomic Tasks)

```
BACKEND TASK [N] of [total]: [task description]

Follow the pattern in [reference file].
Read that file first before writing anything.

Requirements:
[paste specific acceptance criteria]

After implementation:
- Run all verification checks
- Confirm the new code passes type checking
- Do NOT touch any frontend files

When done, show me:
- What you created/changed (file paths)
- How it connects to existing code
```

*Run `/clear` between backend tasks.*

### Step 2.4 — Frontend Implementation (Atomic Tasks)

```
FRONTEND TASK [N] of [total]: [task description]

Follow the pattern in [reference component].
Read that file first before writing anything.

Requirements:
[paste specific acceptance criteria]

Wire this to the api.ts methods created in the backend tasks.
Add data-testid attributes to all interactive elements.
Handle loading, empty, and error states.

After implementation:
- Run all verification checks
- Do NOT touch any backend files

When done, show me:
- What you created/changed
- The full wire: api.ts method → component → rendered in page
```

*Run `/clear` between frontend tasks.*

### Step 2.5 — Tests

```
Write tests for the [feature name] feature.

1. Unit tests:
   - Backend: test each endpoint (happy path + error cases)
   - Frontend: test each component's key behaviors

2. E2E tests (Playwright):
   - Happy path: user completes the primary workflow
   - Error path: what happens when the API fails
   - Edge case: empty state, max length, invalid input

Follow the test patterns in [reference test file].
Import test and expect from ./fixtures (not @playwright/test).
Use getByRole/getByLabel/getByTestId — NEVER CSS selectors.
NEVER use page.waitForTimeout().

Run the tests and fix until all pass.
Then run the FULL test suite to check for regressions.
```

### Step 2.6 — Verify

```
/project:verify-all [path/to/spec.md]
```

Or manually:

```
Run the verification chain:

1. Check spec compliance: every acceptance criterion in [spec file]
   — is it IMPLEMENTED, PARTIAL, MISSING, or DIVERGED?
2. Check wiring: every backend endpoint → api.ts → component → rendered UI
3. Check security: auth on every endpoint, input validation, no bare except
4. Check production readiness: no hardcoded URLs, error handling, pagination

Output a single consolidated report grouped by severity.
```

---

## Flow 3: BUG FIX

### Step 3.1 — Reproduce

```
Bug report: [paste the bug description, error message, or user report]

Before fixing anything:
1. Read the relevant code files
2. Understand the current behavior vs expected behavior
3. Identify the root cause (not just the symptom)
4. Explain to me: why is this happening?

DO NOT fix anything yet.
```

### Step 3.2 — Write Failing Test First

```
Now write a test that REPRODUCES this bug.

The test should:
- Set up the conditions that trigger the bug
- Assert the EXPECTED behavior (which will currently FAIL)
- Be named descriptively: "regression: [brief description of bug]"

Run the test and confirm it FAILS.
Show me the test and the failure output.
DO NOT fix the bug yet.
```

### Step 3.3 — Implement Fix

```
Now fix the bug.

Requirements:
- Minimal change — fix only what's broken
- Don't refactor unrelated code in the same change
- Preserve existing behavior for non-bug-related paths
- If the fix requires changing >3 files, list them all and wait for my approval

After the fix:
- Run the regression test — it should now PASS
- Run the full test suite — no existing tests should break
- Run all 5 verification checks
```

### Step 3.4 — Verify No Side Effects

```
The bug is fixed. Now verify there are no side effects:

1. The regression test passes
2. All existing tests still pass (npm run test, npm run e2e)
3. All 5 validation checks pass
4. Search the codebase for similar patterns that might have the same bug
   (e.g., if you fixed a missing null check, are there other places with
   the same missing check?)

If you find similar patterns, list them but DO NOT fix them —
I'll decide if those are separate tickets.
```

---

## Flow 4: CLEANUP / REFACTOR

### Step 4.1 — Assessment

```
I need to clean up / refactor [area/module/file].

Before touching anything:
1. Read all files in the affected area
2. Map all callers/importers/consumers of the code being refactored
3. Identify all tests that cover this code
4. List every file that would need to change

Tell me:
- Scope: how many files are affected?
- Risk: what could break?
- Strategy: should this be done in one pass or broken into steps?
- Tests: are existing tests sufficient, or do we need more before refactoring?

DO NOT make any changes yet.
```

### Step 4.2 — Add Test Coverage First

```
Before refactoring, ensure we have sufficient test coverage.

Look at the code being refactored:
- Which behaviors are currently tested?
- Which behaviors are NOT tested?

Write tests for any UNTESTED behaviors FIRST, using the current
implementation. These tests become our safety net — if they pass
before AND after the refactor, we know we didn't break anything.

Run all tests. Everything should pass.
DO NOT start refactoring yet.
```

### Step 4.3 — Refactor in Atomic Steps

```
Now refactor step [N]: [specific change]

Rules:
- One logical change at a time
- Run tests after EACH change
- If any test fails, fix before proceeding
- Do NOT batch unrelated changes
- Preserve all external behavior (same inputs → same outputs)

After this step:
- All tests still pass
- All 5 verification checks pass
- Show me what changed and why
```

### Step 4.4 — Verify

```
Refactoring complete. Final verification:

1. Run the full test suite
2. Run all 5 validation checks
3. Compare the external behavior: same inputs should produce same outputs
4. Check that no callers/consumers were broken by the internal changes
5. If performance was a goal, show before/after metrics

List any follow-up items discovered during refactoring.
```

---

## Flow 5: JOINING AN EXISTING PROJECT (Project Onboarding)

### Step 5.1 — Explore and Map

```
I've just joined this project and need to understand the codebase.

Explore the project and give me:

1. Project overview: what does this app do?
2. Tech stack: languages, frameworks, major dependencies
3. Architecture: how is the code organized? (folder structure + purpose)
4. Data model: what are the main entities and their relationships?
5. Key entry points: where does the app start? main routes? API endpoints?
6. Build and test commands: how to run, test, lint, build
7. Configuration: env vars needed, config files, secrets
8. Code quality: are there lints, type checking, tests? What's the coverage like?
9. Pain points: obvious code smells, outdated patterns, missing tests
10. Documentation: what docs exist? are they accurate?

Be thorough but concise. I need the mental model, not a file-by-file listing.
```

### Step 5.2 — Generate CLAUDE.md

```
Based on your exploration, generate a CLAUDE.md for this project.

Include:
- Project identity and purpose
- Tech stack summary
- Folder structure map (key directories and what they contain)
- Verification commands (lint, type check, test, build — whatever exists)
- Key patterns the project follows (naming, architecture, error handling)
- Known issues or quirks you discovered

Keep it under 150 lines. Focus on what an AI agent needs to know
to contribute correctly without breaking things.

Save to CLAUDE.md.
```

### Step 5.3 — Identify Issues and Priorities

```
Now do a health check on this codebase:

1. Security: any obvious vulnerabilities? (missing auth, unsanitized input, exposed secrets)
2. Code quality: deprecated patterns, dead code, duplicated logic
3. Testing: what's tested vs what's not? any flaky tests?
4. Performance: any obvious N+1 queries, missing pagination, unbounded lists?
5. Dependencies: any outdated or vulnerable packages?

For each finding:
- Severity: CRITICAL / HIGH / MEDIUM / LOW
- Location: file path and line
- Description: what's wrong
- Recommended fix: how to address it

DO NOT fix anything yet — I need to align with the project owner first.
```

---

## Flow 6: VALIDATE EXISTING WORK (Audit)

### Step 6.1 — Spec Compliance

```
Validate the current implementation against the spec at [path/to/spec.md].

For every requirement in the spec:
1. Search the codebase for the implementation
2. Trace the full wire: backend → API client → frontend → rendered UI
3. Rate: IMPLEMENTED | PARTIAL | MISSING | DIVERGED

Output as a table with evidence (file:line).
```

### Step 6.2 — Dead Feature Detection

```
For the [module name] module:

Find ALL backend endpoints in the relevant router.
For each endpoint, trace:
- Backend endpoint → api.ts method → component → page/route

Mark each as:
- LIVE: complete wire from endpoint to rendered UI
- DEAD ENDPOINT: backend exists, nothing calls it
- DEAD METHOD: api.ts method exists, no component uses it
- DEAD COMPONENT: component exists, not in any route
- ORPHANED PROP: prop declared but never read

This is not about code quality — it's about finding features that
look implemented but aren't actually wired up.
```
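The LIVE/DEAD classification above boils down to set differences across the three layers. A sketch with invented endpoint names (the real audit extracts these sets from the router, api.ts, and component imports):

```python
# Names extracted from each layer — invented example data.
endpoints = {"listUsers", "createUser", "exportUsers"}        # backend router
api_methods = {"listUsers", "createUser", "archiveUser"}      # api.ts
used_by_components = {"listUsers", "createUser"}              # component calls

def classify(name):
    # Complete wire from endpoint to rendered UI.
    if name in endpoints and name in api_methods and name in used_by_components:
        return "LIVE"
    # Backend exists, nothing calls it.
    if name in endpoints and name not in api_methods:
        return "DEAD ENDPOINT"
    # api.ts method exists, no component uses it.
    if name in api_methods and name not in used_by_components:
        return "DEAD METHOD"
    return "UNKNOWN"

report = {n: classify(n) for n in sorted(endpoints | api_methods)}
```

`exportUsers` comes back DEAD ENDPOINT and `archiveUser` DEAD METHOD — exactly the "looks implemented but isn't wired" cases the prompt targets.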

### Step 6.3 — AI Prompt Quality Audit

```
Find all AI prompt construction in the codebase.
(Search for functions that build prompts for LLM calls)

For each prompt function, score against these 7 criteria:
1. Embeds concrete data (actual values, not descriptions)?
2. Specifies exact output JSON schema with types?
3. Includes anti-hallucination boundary?
4. Specifies audience and purpose?
5. Requires citations for findings?
6. Includes confidence scoring?
7. Has graceful fallback in the calling function?

Output: table with file, function, score (X/7), and what's missing.
```
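One way to keep the X/7 scoring consistent across prompt functions is to treat the rubric as data. A sketch, with an invented `checks` dict standing in for the auditing agent's per-criterion judgment:

```python
# The 7-point rubric as data, so audit results can be tallied mechanically.
CRITERIA = [
    "embeds concrete data",
    "exact output JSON schema",
    "anti-hallucination boundary",
    "audience and purpose",
    "citations required",
    "confidence scoring",
    "graceful fallback",
]

def score(checks):
    """checks maps criterion -> bool, as judged by the auditing agent."""
    missing = [c for c in CRITERIA if not checks.get(c)]
    return f"{len(CRITERIA) - len(missing)}/{len(CRITERIA)}", missing

# Invented example: a prompt function that only satisfies two criteria.
s, missing = score({"embeds concrete data": True, "audience and purpose": True})
```

The `missing` list drops straight into the "what's missing" column of the output table.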

### Step 6.4 — Security Audit

```
Run a security review of the codebase.

Check:
1. Authentication: every endpoint requires auth?
2. Authorization: every endpoint enforces RBAC/scope?
3. Input validation: user text sanitized? file uploads validated?
4. AI safety: PII redacted before AI calls? injection boundaries?
5. Output safety: no stack traces in errors? export formula injection prevented?
6. Secrets: no hardcoded credentials? env vars used?

For each finding, attempt to DISPROVE it (check for middleware, decorators,
base classes that might handle it elsewhere). Only report CONFIRMED issues.

Output: findings table with severity, file, issue, and remediation.
```

---

## Flow 7: UAT (User Acceptance Testing)

### Step 7.1 — Generate UAT Scenarios From Spec

```
Read the spec at [path/to/spec.md].

Generate a UAT scenario pack covering:

For each user-facing feature:
1. Happy path scenario (P0)
2. Error/edge case scenario (P1)
3. Boundary condition scenario (P2)

Format each scenario as:

### UAT-[NNN]: [Feature] — [Scenario Type]
**Priority:** P0 | P1 | P2
**Preconditions:** [what must be true before testing]
**Steps:**
1. [concrete user action]
2. [concrete user action]
3. [concrete user action]
**Expected Result:** [exactly what should happen]
**Status:** NOT RUN
**Tester:** ___
**Date:** ___

Save to docs/uat/UAT_SCENARIOS.md

Also generate docs/uat/UAT_CHECKLIST.csv:
UAT_ID,Feature,Priority,Scenario,Status,Tester,Date,Defect_ID,Notes
```
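The CSV contract at the end of the prompt can be exercised with a small round-trip — a sketch using Python's `csv` module, with an invented first row; the later "execute UAT" step uses the same read-modify-write cycle to update `Status` per scenario:

```python
import csv
import io

# The exact header the prompt specifies for docs/uat/UAT_CHECKLIST.csv.
HEADER = ["UAT_ID", "Feature", "Priority", "Scenario", "Status",
          "Tester", "Date", "Defect_ID", "Notes"]

buf = io.StringIO()  # stands in for the checklist file
writer = csv.DictWriter(buf, fieldnames=HEADER)
writer.writeheader()
# Unset fields (Tester, Date, Defect_ID, Notes) are written as empty cells.
writer.writerow({"UAT_ID": "UAT-001", "Feature": "Login", "Priority": "P0",
                 "Scenario": "Happy path", "Status": "NOT RUN"})

rows = list(csv.DictReader(io.StringIO(buf.getvalue())))
```

Using `DictWriter`/`DictReader` keyed on the fixed header keeps the checklist machine-updatable without column-order bugs.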
|
|
654
|
+
|
|
655
|
+
### Step 7.2 — Map UAT to Automated Tests
|
|
656
|
+
|
|
657
|
+
```
|
|
658
|
+
Read docs/uat/UAT_SCENARIOS.md.
|
|
659
|
+
Read the test files in [e2e/ or tests/].
|
|
660
|
+
|
|
661
|
+
Create a traceability matrix:
|
|
662
|
+
|
|
663
|
+
| UAT ID | Scenario | Has Automated Test? | Test File:Line | Coverage Gap |
|
|
664
|
+
|--------|----------|--------------------:|----------------|-------------|
|
|
665
|
+
|
|
666
|
+
For each UAT scenario WITHOUT an automated test:
|
|
667
|
+
- Can it be automated? (yes/no + reason)
|
|
668
|
+
- If yes: write the test. Name it with the UAT ID (e.g., "UAT-003: user can reset password")
|
|
669
|
+
- If no: document what manual verification is needed
|
|
670
|
+
|
|
671
|
+
Output the updated matrix.
|
|
672
|
+
```
|
|
673
|
+
|
|
674
|
+
### Step 7.3 — Execute UAT
|
|
675
|
+
|
|
676
|
+
```
|
|
677
|
+
Run all automated UAT scenarios and update the checklist.
|
|
678
|
+
|
|
679
|
+
For each P0 scenario:
|
|
680
|
+
1. Find the corresponding automated test
|
|
681
|
+
2. Run it
|
|
682
|
+
3. Record PASS or FAIL in docs/uat/UAT_CHECKLIST.csv
|
|
683
|
+
|
|
684
|
+
For scenarios without automated tests:
|
|
685
|
+
- Mark as MANUAL_REQUIRED in the checklist
|
|
686
|
+
- List the manual steps needed
|
|
687
|
+
|
|
688
|
+
Summary:
|
|
689
|
+
- P0: X/Y passed, Z failed, W need manual testing
|
|
690
|
+
- P1: X/Y passed, Z failed, W need manual testing
|
|
691
|
+
- BLOCKING: list any P0 failures that must be fixed before deployment
|
|
692
|
+
```

### Step 7.4 — Smoke Test After Deployment

```
The app has been deployed to [staging URL / production URL].

Run a deployment smoke test:

1. Can the home page load? (check for 200 response)
2. Can a user log in? (if auth exists)
3. Do critical API endpoints respond? (list the 3-5 most important)
4. Are database connections working? (hit an endpoint that queries data)
5. Are external services reachable? (AI API, email, storage, etc.)
6. Is the health check endpoint responding? (/health or /healthz)

For each check: PASS or FAIL with response time.

If ANY critical check fails:
- Recommend rollback: yes/no
- Identify the likely cause
- Suggest immediate fix
```
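The checks above boil down to "request it, time it, classify it". A sketch using `fetch` and `AbortSignal.timeout` (both built into Node 18+); the 5-second timeout default is an illustrative choice:

```javascript
// Sketch of one smoke check plus the rollback decision.
async function smokeCheck(name, url, timeoutMs = 5000) {
  const started = Date.now();
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
    return { name, pass: res.ok, status: res.status, ms: Date.now() - started };
  } catch (err) {
    // Network failure or timeout counts as FAIL, with the error preserved.
    return { name, pass: false, error: String(err), ms: Date.now() - started };
  }
}

function smokeReport(results) {
  const failed = results.filter((r) => !r.pass);
  return {
    lines: results.map((r) => `${r.pass ? "PASS" : "FAIL"} ${r.name} (${r.ms}ms)`),
    rollbackRecommended: failed.length > 0,
  };
}
```

Run `smokeCheck` once per critical endpoint, then feed the results to `smokeReport` for the PASS/FAIL summary and rollback recommendation.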

---

## Flow 8: PRODUCTION READINESS & FAILOVER

### Step 8.1 — Pre-Deployment Verification

```
This feature/release is about to go to production.

Run a production readiness review:

### Environment & Config
- [ ] All env vars documented in .env.example (no missing vars)
- [ ] No hardcoded localhost URLs or ports
- [ ] No hardcoded secrets or API keys
- [ ] CORS configuration is restrictive (not wildcard *)
- [ ] Rate limiting is configured on public endpoints

### Health & Monitoring
- [ ] Health check endpoint exists (/health or /healthz)
- [ ] Health check verifies: app running + database connected + critical services reachable
- [ ] Structured logging is configured (not console.log in production)
- [ ] Error tracking is configured (Sentry, LogRocket, or equivalent)

### Failover & Resilience
- [ ] Database connection has retry with exponential backoff
- [ ] External API calls have timeouts configured (not infinite)
- [ ] External API calls have retry logic for transient failures
- [ ] AI/LLM calls have fallback (rule-based when AI is unavailable)
- [ ] AI/LLM responses are validated through schemas (not raw strings)
- [ ] Background jobs have dead letter / retry mechanism
- [ ] Graceful shutdown handler exists (finish in-flight requests, close DB connections)

### Data Safety
- [ ] Database migrations are backward-compatible
- [ ] New columns have default values (won't break existing rows)
- [ ] No destructive migrations (column drops, table drops) without data migration
- [ ] Backups are configured and tested

### Rollback Plan
- [ ] Previous version can be redeployed in < 5 minutes
- [ ] Database changes are forward-compatible (old code works with new schema)
- [ ] Feature flags exist for risky features (can disable without redeploy)

For each UNCHECKED item: file path, what's missing, and priority to fix.
```

### Step 8.2 — Add Missing Failover Patterns

```
Based on the production readiness review, the following failover patterns
are missing. Implement them:

1. Health check endpoint (if missing):
- GET /health returns { "status": "ok", "db": "connected", "version": "..." }
- Returns 503 if database is unreachable

2. Graceful shutdown (if missing):
- Listen for SIGTERM/SIGINT
- Stop accepting new requests
- Finish in-flight requests (with timeout)
- Close database connections
- Exit cleanly

3. External service timeouts (if missing):
- Set explicit timeout on every HTTP client call
- Add retry with exponential backoff for transient errors (5xx, timeout)
- Log failures with enough context to debug

4. AI/LLM fallback (if applicable and missing):
- Try primary AI call
- On failure: retry once with backoff
- On second failure: fall back to rule-based/cached response
- Log the fallback event
- Surface to user with subtle indicator (not error)

Implement only what's missing. Show me what you added.
```

### Step 8.3 — UAT Readiness Check

```
Before declaring this feature/phase complete, run a UAT readiness check.

1. Read docs/uat/UAT_TEMPLATE.md (or create one if it doesn't exist)
2. For each P0 scenario:
- Does the feature exist in the codebase?
- Is there an automated test that covers this scenario's steps?
- If not automated, flag as MANUAL_REQUIRED
3. For each P1 scenario:
- Same checks, but non-blocking

Output:
| UAT ID | Scenario | Priority | Automated? | Test File | Status |
|--------|----------|----------|------------|-----------|--------|

Summary:
- P0 automated coverage: X/Y scenarios
- P0 manual required: Z scenarios
- Blocking: list any P0 scenarios with NO coverage (automated or manual)

DO NOT mark this feature as complete if any P0 scenario has zero coverage.
```
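The "DO NOT mark complete" rule above can be enforced mechanically rather than by convention. A sketch of such a gate; the scenario object shape is an assumption for illustration:

```javascript
// Sketch: block completion while any P0 scenario has neither an automated
// test nor a planned manual check.
function p0CoverageGate(scenarios) {
  const uncovered = scenarios.filter(
    (s) => s.priority === "P0" && !s.automated && !s.manualPlanned
  );
  return { blocked: uncovered.length > 0, uncovered: uncovered.map((s) => s.id) };
}

console.log(p0CoverageGate([
  { id: "UAT-001", priority: "P0", automated: true },
  { id: "UAT-002", priority: "P0" },              // no coverage at all: blocks
  { id: "UAT-010", priority: "P1" },              // P1: never blocks
]));
// → { blocked: true, uncovered: [ 'UAT-002' ] }
```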

### Step 8.4 — Failover & Resilience Audit

```
Check the codebase for production resilience patterns.

1. Health checks:
- Does a /health or /healthz endpoint exist?
- Does it check database connectivity?
- Does it check external service availability?

2. Graceful shutdown:
- Does the server handle SIGTERM/SIGINT?
- Does it drain active connections before stopping?
- Does it close database pools cleanly?

3. Retry & timeout patterns:
- Do external API calls have timeouts configured?
- Do database connections retry with backoff on failure?
- Do AI/LLM calls have timeout + fallback?

4. Error isolation:
- Can one failed external service take down the whole app?
- Are there circuit breaker patterns (or at minimum, timeout + catch)?
- Do async jobs have dead letter / retry queues?

5. Data safety:
- Are database transactions used for multi-step operations?
- Is there rollback on partial failure?
- Are idempotency keys used for payment/critical operations?

For each gap found:
- File: [where it should be]
- Issue: [what's missing]
- Risk: [what happens in production without it]
- Fix: [concrete implementation suggestion]

Rate overall resilience: PRODUCTION-READY | NEEDS WORK | NOT READY
```
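The retry-and-timeout questions in item 3 imply a pattern like the following sketch: every attempt gets a hard timeout, 4xx responses return immediately (they are not transient), and 5xx or network errors retry with exponential backoff. The defaults are illustrative:

```javascript
// Backoff schedule as a pure function: 200, 400, 800, ... ms.
function backoffDelays(retries, baseDelayMs) {
  return Array.from({ length: retries }, (_, i) => baseDelayMs * 2 ** i);
}

// Sketch: fetch with per-attempt timeout and retry on transient failures.
async function fetchWithRetry(url, { retries = 3, baseDelayMs = 200, timeoutMs = 5000 } = {}) {
  const delays = backoffDelays(retries, baseDelayMs);
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
      if (res.status < 500) return res;     // success or 4xx: do not retry
      if (attempt === retries) return res;  // out of attempts: return the 5xx
    } catch (err) {
      if (attempt === retries) throw err;   // out of attempts: surface the error
    }
    await new Promise((resolve) => setTimeout(resolve, delays[attempt]));
  }
}
```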

### Step 8.5 — Staging/Pre-Production Smoke Test

```
We're about to deploy. Run a pre-deployment smoke test checklist.

Verify:
1. Environment configuration:
- All required env vars are documented (not just in .env.example)
- No dev-only values (localhost, debug=true) in staging config
- Secrets are in secret manager, not committed to repo

2. Database:
- All migrations run cleanly on a fresh database
- Seed data exists for required lookup tables
- No pending migrations that haven't been applied

3. External services:
- API keys for all external services are valid for the target environment
- Webhook URLs point to the correct environment
- CORS allows the correct origins

4. Critical paths (run these manually or via E2E):
- [ ] User can sign up / log in
- [ ] Core workflow completes end-to-end
- [ ] Error states show user-friendly messages (not stack traces)
- [ ] File uploads work within size limits
- [ ] Email/notification delivery works

5. Rollback readiness:
- Can we revert the deployment in under 5 minutes?
- Are database migrations reversible?
- Is the previous version still deployable?

Output: GO / NO-GO with specific blockers for NO-GO.
```
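Check 1 can be automated with a small validator that runs at startup or in CI. A sketch; the env var names are examples, not a list this tool mandates:

```javascript
// Sketch: fail fast when required env vars are missing, and flag obvious
// dev-only values (localhost URLs) before they reach staging.
function validateEnv(env, required) {
  const missing = required.filter((key) => !env[key]);
  const suspicious = required.filter((key) => /localhost|127\.0\.0\.1/.test(env[key] ?? ""));
  return { ok: missing.length === 0 && suspicious.length === 0, missing, suspicious };
}

console.log(validateEnv(
  { DATABASE_URL: "postgres://localhost:5432/dev", API_KEY: "" },
  ["DATABASE_URL", "API_KEY"]
));
// → { ok: false, missing: [ 'API_KEY' ], suspicious: [ 'DATABASE_URL' ] }
```

In a real deployment you would call `validateEnv(process.env, REQUIRED_VARS)` and exit non-zero when `ok` is false.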

---

## Utility Prompts (Use Anytime)

### "Run UAT" — Execute Acceptance Testing

```
Run UAT for the [feature/module] we just completed.

1. Read docs/uat/UAT_TEMPLATE.md
2. For each scenario relevant to this feature:
- If an automated test exists: run it, report PASS/FAIL
- If no automated test: describe what manual steps are needed
3. For any FAIL: identify the root cause and whether it's a code bug or test bug
4. Update docs/uat/UAT_CHECKLIST.csv with results

Output:
- P0 results: X passed, Y failed, Z need manual testing
- P1 results: X passed, Y failed, Z need manual testing
- Blocking issues: [list any P0 failures with root cause]
- Recommendation: READY TO SHIP / FIX REQUIRED / MANUAL TESTING NEEDED
```

### "I'm Lost" — Context Recovery

```
I've been working on this for a while and lost track.

Read the git log (last 20 commits), the current diff, and MEMORY.md.

Tell me:
1. What was I working on?
2. What's the current state? (what's done, what's in progress)
3. What's left to do?
4. Are there any broken tests or lint errors right now?

Show me a checklist of remaining tasks.
```

### "Is This Right?" — Quick Verification

```
I just finished [task description].

Quick check:
1. Does it compile? (run type check)
2. Do tests pass? (run test suite)
3. Is it wired correctly? (trace from source to consumer)
4. Did I miss anything from the original instruction?

Be concise — yes/no with evidence for each.
```

### "Before I PR" — Pre-PR Checklist

```
I'm about to create a PR for this work.

Run the full verification:
1. All 5 validation checks (syntax, types, lint for both languages)
2. Full test suite
3. Check for console.log/print statements in production code
4. Check for hardcoded URLs, secrets, or credentials
5. Check for TODO/FIXME/HACK comments that should be resolved
6. Check the git diff for any accidental changes to unrelated files

Give me a GO / NO-GO with specific issues for NO-GO.
```
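Checks 3 and 5 are mechanical scans. A sketch of a scanner over file contents; the patterns are illustrative and deliberately incomplete (e.g. `console.error` may be intentional and is not flagged):

```javascript
// Sketch: find leftover debug statements and unresolved markers in a
// file's contents, reporting line numbers for the PR review.
function scanForLeftovers(source) {
  const findings = [];
  source.split("\n").forEach((line, i) => {
    if (/\bconsole\.log\(/.test(line)) findings.push({ line: i + 1, kind: "console.log" });
    if (/\b(TODO|FIXME|HACK)\b/.test(line)) findings.push({ line: i + 1, kind: "unresolved-marker" });
  });
  return findings;
}

console.log(scanForLeftovers("const x = 1;\nconsole.log(x);\n// FIXME before merge\n"));
```

Run it over each file in the PR diff; an empty result for every file is one piece of evidence toward GO.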

### "Update This Document" — Keep Memory Fresh

```
I just finished [task/feature].

Update the following (only the sections that changed):
1. MEMORY.md — add any new facts (stable, verified info only)
2. CLAUDE.md — if any new patterns or pitfalls were discovered
3. The relevant phase/spec file — mark completed items

Do NOT rewrite sections you didn't change.
Show me the diff of what you updated.
```

### "Explain This Code" — Understanding Existing Code

```
Explain [file path or function name]:

1. What does it do? (one sentence)
2. How does it work? (step by step, with the key decisions)
3. What calls it? (trace upstream callers)
4. What does it call? (trace downstream dependencies)
5. What could go wrong? (error cases, edge cases)
6. How is it tested? (find the relevant tests)

Keep it concise. I need to understand it, not read a novel.
```

### "Compare Approaches" — Decision Making

```
I need to decide between:
A) [approach A description]
B) [approach B description]

For each approach, analyze:
1. Implementation complexity (how many files, how much new code)
2. Performance implications
3. Maintenance burden (how easy to change later)
4. Risk (what could go wrong)
5. Compatibility with existing patterns in this codebase

Recommend one with clear reasoning. If it's a close call, say so.
```

---

## Command Quick Reference

Create these as files in `.claude/commands/` for one-command access:

| Command | File | Usage |
|---------|------|-------|
| Verify all | `verify-all.md` | `/project:verify-all [spec-path]` |
| Audit spec | `audit-spec.md` | `/project:audit-spec [spec-path]` |
| Audit wiring | `audit-wiring.md` | `/project:audit-wiring [module]` |
| Audit prompts | `audit-prompts.md` | `/project:audit-prompts` |
| Audit security | `audit-security.md` | `/project:audit-security` |
| Audit resilience | `audit-resilience.md` | `/project:audit-resilience` |
| Run UAT | `run-uat.md` | `/project:run-uat [feature]` |
| Pre-deploy smoke | `pre-deploy.md` | `/project:pre-deploy` |
| Health check | `health-check.md` | `/project:health-check` |
| Context recovery | `where-am-i.md` | `/project:where-am-i` |
| Pre-PR check | `pre-pr.md` | `/project:pre-pr` |

---

## Cheat Sheet: Which Prompt for Which Situation

| Situation | Flow | First Prompt |
|-----------|------|--------------|
| "I have a product idea" | Flow 1 | Step 1.1 (Define Product Vision) |
| "I need to add a feature" | Flow 2 | Step 2.1 (Understand Context) |
| "Something is broken" | Flow 3 | Step 3.1 (Reproduce) |
| "This code is messy" | Flow 4 | Step 4.1 (Assessment) |
| "I just joined this project" | Flow 5 | Step 5.1 (Explore and Map) |
| "Does this match the spec?" | Flow 6 | Step 6.1 (Spec Compliance) |
| "Generate UAT scenarios" | Flow 7 | Step 7.1 (Generate Scenarios From Spec) |
| "Run acceptance tests" | Flow 7 | Step 7.3 (Execute UAT) |
| "We just deployed — is it working?" | Flow 7 | Step 7.4 (Smoke Test After Deploy) |
| "Is this production-ready?" | Flow 8 | Step 8.1 (Pre-Deployment Verification) |
| "Add failover / resilience patterns" | Flow 8 | Step 8.2 (Add Missing Failover) |
| "I'm lost, where was I?" | Utility | Context Recovery |
| "Am I ready to PR?" | Utility | Pre-PR Checklist |
| "Which approach should I use?" | Utility | Compare Approaches |