create-sdd-project 0.9.8 → 0.10.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md
CHANGED
@@ -294,19 +294,28 @@ SDD DevFlow combines three proven practices:
 | `qa-engineer` | Edge cases, spec verification | 5 |
 | `database-architect` | Schema design, optimization | Any |
 
-###
+### 4 Skills (Slash Commands)
 
 | Skill | Trigger | What it does |
 |-------|---------|-------------|
 | `development-workflow` | `start task F001`, `next task`, `add feature` | Orchestrates the complete 7-step workflow |
 | `bug-workflow` | `report bug`, `fix bug`, `hotfix needed` | Bug triage, investigation, and resolution |
 | `project-memory` | `set up project memory`, `log a bug fix` | Maintains institutional knowledge |
+| `health-check` | `health check`, `project health` | Quick scan: tests, build, specs sync, secrets, docs freshness |
+
+### 3 Custom Commands
+
+| Command | What it does |
+|---------|-------------|
+| `/review-plan` | Sends Implementation Plan to external AI models (Codex CLI, Gemini CLI) for independent critique |
+| `/context-prompt` | Generates a context recovery prompt after `/compact` with Workflow Recovery to prevent checkpoint skipping |
+| `/review-project` | Comprehensive project-level review using up to 3 AI models in parallel — 6 domains, audit context, consolidated report with action plan |
 
 ### Plan Quality
 
 Every Standard/Complex feature plan goes through a **built-in self-review** (Step 2.4) where the agent re-reads its own plan and checks for errors, vague steps, wrong assumptions, and over-engineering before requesting approval.
 
-For additional confidence, the optional `/review-plan` command sends the plan to
+For additional confidence, the optional `/review-plan` command sends the plan to external AI models (Codex CLI and/or Gemini CLI in parallel) for independent critique — catching blind spots that same-model review misses.
 
 ### Workflow (Steps 0–6)
 
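As a rough illustration of the pattern behind `/review-plan` (a sketch, not the shipped command; the plan path is hypothetical), the command first probes which external CLIs are installed before piping the plan to them:

```bash
# Probe for the external review CLIs; only pipe the plan to the ones that exist.
PLAN="ai-specs/specs/F001/implementation-plan.md"   # hypothetical path
for cli in gemini codex; do
  if command -v "$cli" >/dev/null 2>&1; then
    echo "$cli: available"
  else
    echo "$cli: not installed, skipping"
  fi
done
```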
@@ -336,6 +345,16 @@ For additional confidence, the optional `/review-plan` command sends the plan to
 
 Quality gates (tests, lint, build, validators) **always run** regardless of level.
 
+### Merge Checklist (B+D Mechanism)
+
+Every ticket includes a `## Merge Checklist Evidence` table that the agent must fill before requesting merge approval. This mechanism:
+
+- **Survives context compaction** — the ticket is always re-read via product tracker, so the empty evidence table acts as a persistent reminder
+- **Forces sequential execution** — agent must read `references/merge-checklist.md`, execute 9 actions (0–8), and record evidence
+- **Works at all tiers** — Simple tasks get a lite ticket with the same evidence table
+
+Validated across 16+ features with 87% first-attempt pass rate (failures led to iterative improvements in v0.8.7–v0.9.8).
+
 ### Project Memory
 
 Tracks institutional knowledge across sessions in `docs/project_notes/`:
@@ -399,14 +418,19 @@ project/
 │   │   ├── development-workflow/  # Main task workflow (Steps 0-6)
 │   │   │   └── references/        # Templates, guides, examples
 │   │   ├── bug-workflow/          # Bug triage and resolution
+│   │   ├── health-check/          # Project health diagnostics
 │   │   └── project-memory/        # Memory system setup
+│   ├── commands/                  # Custom slash commands
+│   │   ├── review-plan.md         # Cross-model plan review
+│   │   ├── context-prompt.md      # Post-compact context recovery
+│   │   └── review-project.md      # Multi-model project review
 │   ├── hooks/quick-scan.sh        # Post-developer quality scan
 │   └── settings.json              # Shared hooks (git-tracked)
 │
 ├── .gemini/
 │   ├── agents/                    # 9 agents (Gemini format)
-│   ├── skills/                    # Same
-│   ├── commands/                  # Slash
+│   ├── skills/                    # Same 4 skills
+│   ├── commands/                  # Slash commands (workflow + review + context + project review)
 │   └── settings.json              # Gemini configuration
 │
 ├── ai-specs/specs/
@@ -462,11 +486,10 @@ cp -r node_modules/create-sdd-project/template/ /path/to/your-project/
 
 ## Roadmap
 
+- **PM Agent + L5 Autonomous**: AI-driven feature orchestration — sequential feature loop with automatic checkpoint approval and session state persistence
 - **Monorepo improvements**: Better support for pnpm workspaces and Turbo
-- **SDD Upgrade/Migration**: Version bumps for projects already using SDD
 - **Retrofit Testing**: Automated test generation for existing projects with low coverage
 - **Agent Teams**: Parallel execution of independent tasks
-- **PM Agent + L5 Autonomous**: AI-driven feature orchestration with human review at milestone boundaries
 
 ## Contributing
 
package/lib/config.js
CHANGED
package/package.json
CHANGED
@@ -0,0 +1,377 @@
+Perform a comprehensive project review using multiple AI models in parallel. This is a 4-phase process designed for MVP milestones.
+
+After /compact, re-invoke `/review-project` to resume. Completed work is preserved in /tmp/review-project-{project}/.
+
+## Phase 0: Discovery
+
+Detect project context without heavy file reading:
+
+```bash
+# Project type and SDD version
+cat .sdd-version 2>/dev/null || echo "no .sdd-version"
+head -30 docs/project_notes/key_facts.md 2>/dev/null
+
+# Detect dominant source extensions (adapts to any JS/TS framework)
+echo "=== Source extensions found ==="
+find . -type f -not -path "*/node_modules/*" -not -path "*/dist/*" -not -path "*/.next/*" \
+  -not -path "*/.nuxt/*" -not -path "*/build/*" -not -path "*/coverage/*" \
+  \( -name "*.ts" -o -name "*.js" -o -name "*.tsx" -o -name "*.jsx" \
+     -o -name "*.vue" -o -name "*.svelte" -o -name "*.astro" \
+     -o -name "*.mjs" -o -name "*.cjs" \) \
+  | head -100
+
+# Scale
+echo "Source files:" && find . -type f -not -path "*/node_modules/*" -not -path "*/dist/*" \
+  -not -path "*/.next/*" -not -path "*/.nuxt/*" -not -path "*/build/*" \
+  \( -name "*.ts" -o -name "*.js" -o -name "*.tsx" -o -name "*.jsx" \
+     -o -name "*.vue" -o -name "*.svelte" -o -name "*.astro" \) | wc -l
+echo "Test files:" && find . -type f -not -path "*/node_modules/*" \
+  \( -name "*.test.*" -o -name "*.spec.*" \) | wc -l
+
+# Detect stack signals
+echo "=== Stack signals ==="
+[ -f "package.json" ] && echo "package.json: exists" || echo "package.json: not found"
+[ -d "prisma" ] && echo "prisma/: found"
+find . -maxdepth 3 -name "*.prisma" -not -path "*/node_modules/*" 2>/dev/null | head -3
+find . -maxdepth 3 -type d -name "models" -not -path "*/node_modules/*" 2>/dev/null | head -3
+[ -f "tsconfig.json" ] && echo "tsconfig.json: exists"
+[ -f "next.config.js" ] || [ -f "next.config.mjs" ] || [ -f "next.config.ts" ] && echo "Next.js project"
+[ -f "nuxt.config.ts" ] || [ -f "nuxt.config.js" ] && echo "Nuxt project"
+[ -f "vite.config.ts" ] || [ -f "vite.config.js" ] && echo "Vite project"
+[ -f "angular.json" ] && echo "Angular project"
+[ -f "svelte.config.js" ] && echo "Svelte project"
+[ -f "astro.config.mjs" ] && echo "Astro project"
+
+# Detect available CLIs (robust — test real invocation, not just path lookup)
+if command -v gemini >/dev/null 2>&1; then
+  GEMINI_TEST=$(echo "Reply OK" | gemini 2>&1 | head -1)
+  echo "gemini: $GEMINI_TEST"
+else
+  echo "gemini: unavailable"
+fi
+if command -v codex >/dev/null 2>&1; then
+  codex --version >/dev/null 2>&1 && echo "codex: available" || echo "codex: unavailable"
+else
+  echo "codex: unavailable"
+fi
+```
+
+Create project-scoped working directory. Check for resume state:
+
+```bash
+REVIEW_DIR="/tmp/review-project-$(basename "$PWD")"
+mkdir -p "$REVIEW_DIR"
+echo "$REVIEW_DIR" > /tmp/.review-project-dir
+cat "$REVIEW_DIR/progress.txt" 2>/dev/null || echo "No previous progress — starting fresh"
+```
+
+Use `$REVIEW_DIR` in all subsequent commands (or re-read from `/tmp/.review-project-dir` after /compact).
+
+**Adapt domains by project type** (detected from key_facts.md, package.json, and stack signals above):
+- Backend-only → skip frontend-specific checks in domain 2
+- Frontend-only → skip domain 3 (Data Layer); domain 5 focuses on client-side security (XSS, CSP, token storage, route guards)
+- Fullstack → all 6 domains
+
+## Phase 1: Prepare Audit Context + External Digest + Launch
+
+This phase has two sub-steps. Do NOT read the digest into your own context — assemble it entirely via bash.
+
+### Step 1a: Generate Audit Context
+
+Read **whichever of these files exist** to understand the project, then write a concise audit context to `$REVIEW_DIR/audit-context.md`:
+
+**SDD project docs** (created by both `create-sdd-project` and `--init`):
+- `docs/project_notes/key_facts.md` — stack, architecture, components
+- `docs/project_notes/decisions.md` — ADRs and rationale
+- `docs/specs/api-spec.yaml` or `docs/specs/api-spec.json` (first 100 lines)
+
+**Standard project files** (any project):
+- `package.json` — dependencies, scripts, project name
+- `README.md` (first 100 lines) — project description, setup
+- `tsconfig.json` — TypeScript config and paths
+
+**Schema/ORM files** (read whichever exists):
+- `prisma/schema.prisma` or any `*.prisma` file
+- `src/models/` or `models/` directory (Mongoose, Sequelize, TypeORM entities)
+- `drizzle/` or `src/db/schema.*` (Drizzle schemas)
+
+**If key_facts.md is missing or minimal**, infer the stack from `package.json` dependencies and the directory structure detected in Phase 0.
+
+The audit context should include (aim for 100-200 lines, not more):
+1. **Project purpose** — what it does, who it's for (from README or key_facts)
+2. **Architecture** — stack, key patterns, data flow, framework conventions
+3. **Key decisions** — ADRs summarized in 1 line each (if decisions.md exists)
+4. **Known issues** — from decisions.md, bugs.md, or TODO comments
+5. **Specific audit focus areas** — based on the detected stack's risk profile:
+   - Express/Fastify: middleware ordering, input validation, error handling
+   - Next.js/Nuxt: SSR data fetching, API routes security, hydration issues
+   - Vue/Svelte/Astro: component reactivity, XSS in templates, state management
+   - Prisma: raw queries, migration safety, relation loading
+   - Mongoose: schema validation gaps, injection in query operators
+   - Auth: timing-safe comparison, token storage, session handling
+
+Write this to disk:
+```bash
+REVIEW_DIR=$(cat /tmp/.review-project-dir)
+cat > "$REVIEW_DIR/audit-context.md" <<'EOF'
+[Your generated audit context here]
+EOF
+```
+
+### Step 1b: Assemble Digest + Launch External Models
+
+**Resume check**: if `$REVIEW_DIR/digest.txt` already exists, skip Step 1b entirely (digest was built in a previous run).
+
+```bash
+REVIEW_DIR=$(cat /tmp/.review-project-dir)
+
+# 1. Review prompt header
+cat > "$REVIEW_DIR/digest.txt" <<'HEADER'
+You are performing a comprehensive review of a software project.
+Your job is to find real problems — security, reliability, performance, architecture.
+Do NOT manufacture issues. If code is solid, say so. Note uncertainty rather than flagging as issue.
+
+For each issue: [CRITICAL/IMPORTANT/SUGGESTION] Category — Description
+File: exact/path (line N if possible) — Proposed fix
+
+Review criteria:
+1. Security — injection, secrets, auth bypass, XSS, CSRF
+2. Reliability — error handling, edge cases, race conditions, validation gaps
+3. Performance — N+1 queries, missing indexes, memory leaks, unnecessary computation
+4. Architecture — layer violations, tight coupling, SRP violations, dead code
+5. Testing — coverage gaps, test quality, missing edge cases, flaky patterns
+6. Documentation — spec/code mismatches, stale docs, missing API contracts
+
+End with: VERDICT: HEALTHY | NEEDS_WORK (if any CRITICAL or 3+ IMPORTANT)
+---
+HEADER
+
+# 2. Prepend audit context (project understanding for the external model)
+echo "PROJECT CONTEXT:" >> "$REVIEW_DIR/digest.txt"
+cat "$REVIEW_DIR/audit-context.md" >> "$REVIEW_DIR/digest.txt"
+printf "\n---\nPROJECT FILES:\n" >> "$REVIEW_DIR/digest.txt"
+
+# 3. Concatenate source files (all supported extensions, exclude tests/generated)
+find . -type f -not -path "*/node_modules/*" -not -path "*/dist/*" -not -path "*/.next/*" \
+  -not -path "*/.nuxt/*" -not -path "*/coverage/*" -not -path "*/build/*" -not -path "*/.svelte-kit/*" \
+  \( -name "*.ts" -o -name "*.js" -o -name "*.tsx" -o -name "*.jsx" \
+     -o -name "*.vue" -o -name "*.svelte" -o -name "*.astro" \
+     -o -name "*.mjs" -o -name "*.cjs" \) \
+  -not -name "*.test.*" -not -name "*.spec.*" -not -name "*.min.*" -not -name "*.d.ts" \
+  | sort | while IFS= read -r f; do
+    echo "=== FILE: $f ===" >> "$REVIEW_DIR/digest.txt"
+    cat "$f" >> "$REVIEW_DIR/digest.txt"
+    echo "" >> "$REVIEW_DIR/digest.txt"
+done
+
+# 4. Add non-source config and documentation files (*.js/*.ts configs already captured by Step 3)
+for doc in \
+  package.json tsconfig.json angular.json \
+  .env.example Dockerfile docker-compose.yml docker-compose.yaml \
+  docs/project_notes/key_facts.md docs/project_notes/decisions.md \
+  docs/specs/api-spec.yaml docs/specs/api-spec.json \
+  .eslintrc .eslintrc.json \
+; do
+  if [ -f "$doc" ]; then
+    echo "=== FILE: $doc ===" >> "$REVIEW_DIR/digest.txt"
+    cat "$doc" >> "$REVIEW_DIR/digest.txt"
+    echo "" >> "$REVIEW_DIR/digest.txt"
+  fi
+done
+
+# 5. Add Prisma schema files (*.ts/*.js models already captured by Step 3)
+find . -type f -name "*.prisma" -not -path "*/node_modules/*" | sort | while IFS= read -r f; do
+  echo "=== FILE: $f ===" >> "$REVIEW_DIR/digest.txt"
+  cat "$f" >> "$REVIEW_DIR/digest.txt"
+  echo "" >> "$REVIEW_DIR/digest.txt"
+done
+
+# 6. Test file list (paths only)
+echo "=== TEST FILES (paths only) ===" >> "$REVIEW_DIR/digest.txt"
+find . -type f -not -path "*/node_modules/*" \( -name "*.test.*" -o -name "*.spec.*" \) \
+  | sort >> "$REVIEW_DIR/digest.txt"
+
+# 7. Check size
+wc -c "$REVIEW_DIR/digest.txt"
+```
+
+Launch external models based on availability detected in Phase 0:
+
+### Path A: Both CLIs available
+
+```bash
+REVIEW_DIR=$(cat /tmp/.review-project-dir)
+export REVIEW_DIR
+sh -c 'cat "$REVIEW_DIR/digest.txt" | gemini > "$REVIEW_DIR/review-gemini.txt" 2>&1; touch "$REVIEW_DIR/gemini.done"' &
+DIGEST_SIZE=$(wc -c < "$REVIEW_DIR/digest.txt" | tr -d ' ')
+if [ "$DIGEST_SIZE" -gt 600000 ]; then
+  sh -c 'head -c 600000 "$REVIEW_DIR/digest.txt" | codex exec --full-auto - > "$REVIEW_DIR/review-codex.txt" 2>&1; touch "$REVIEW_DIR/codex.done"' &
+else
+  sh -c 'cat "$REVIEW_DIR/digest.txt" | codex exec --full-auto - > "$REVIEW_DIR/review-codex.txt" 2>&1; touch "$REVIEW_DIR/codex.done"' &
+fi
+echo "External models launched in background"
+```
+
+### Path B: One CLI available
+
+```bash
+REVIEW_DIR=$(cat /tmp/.review-project-dir)
+export REVIEW_DIR
+# Gemini only:
+sh -c 'cat "$REVIEW_DIR/digest.txt" | gemini > "$REVIEW_DIR/review-gemini.txt" 2>&1; touch "$REVIEW_DIR/gemini.done"' &
+# OR Codex only:
+sh -c 'cat "$REVIEW_DIR/digest.txt" | codex exec --full-auto - > "$REVIEW_DIR/review-codex.txt" 2>&1; touch "$REVIEW_DIR/codex.done"' &
+```
+
+### Path C: No external CLI available — skip this phase. Claude-only review (Phase 2) still provides 6 domain reviews.
+
+## Phase 2: Claude Deep Review (domain-by-domain, resumable)
+
+While external models run, review the project by reading files directly. 6 domains, each written to disk immediately after completion.
+
+**Check progress before each domain** — if `$REVIEW_DIR/progress.txt` shows `domain-N: DONE`, skip it (resume support).
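A minimal sketch of that resume check (the domain number and the simulated progress line are illustrative only):

```bash
# Skip a domain whose DONE line is already recorded in progress.txt.
REVIEW_DIR="/tmp/review-project-$(basename "$PWD")"
mkdir -p "$REVIEW_DIR"
echo "domain-1: DONE (3 issues)" >> "$REVIEW_DIR/progress.txt"  # simulate a prior run
N=1
if grep -q "domain-$N: DONE" "$REVIEW_DIR/progress.txt" 2>/dev/null; then
  echo "domain $N: already reviewed, skipping"
else
  echo "domain $N: reviewing now"
fi
```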
+
+**Important**: adapt each domain's focus to the actual stack detected in Phase 0. The descriptions below are guidelines — prioritize reading files that exist in this specific project.
+
+### Domain 1: Architecture & Config
+Read: package.json, tsconfig, framework config (next.config/nuxt.config/vite.config/angular.json), entry points, key_facts.md, decisions.md
+Focus: structure, dependencies, config correctness, missing configs, framework best practices
+
+### Domain 2: Source Code Quality
+Read: routes/pages/components, services, models, utils, middleware (sample representative files)
+Focus: naming, duplication, complexity, patterns, code smells, framework-specific anti-patterns
+
+### Domain 3: Data Layer (skip for frontend-only)
+Read: schema files (Prisma, Mongoose models, Sequelize/TypeORM entities, Drizzle), migrations, seeds, query builders
+Focus: schema design, indexes, migrations, query efficiency, N+1 risks, ORM-specific pitfalls
+
+### Domain 4: Testing & CI
+Read: test files (sample), test config (jest/vitest/cypress/playwright), CI workflows, lint config
+Focus: coverage gaps, test quality, CI robustness, flaky patterns
+
+### Domain 5: Security & Reliability
+Read: auth middleware, validators, error handlers, rate limiters, env handling
+Focus: vulnerabilities, error paths, secrets exposure, OWASP top 10
+- Backend: injection, auth bypass, SSRF, timing attacks, error leakage
+- Frontend: XSS, CSP, token storage, route guards, dependency vulnerabilities, CORS
+
+### Domain 6: Documentation & SDD Process
+Read: tickets (sample), product-tracker, api-spec, bugs.md, README
+Focus: spec/code sync, ticket quality, stale docs, process adherence
+
+**After each domain**, write findings and a manifest of reviewed files to disk:
+
+```bash
+REVIEW_DIR=$(cat /tmp/.review-project-dir)
+cat > "$REVIEW_DIR/review-domain-N.md" <<'EOF'
+## Domain N: [Name]
+### Files Reviewed
+- path/to/file1.ts
+- path/to/file2.vue
+### Findings
+[SEVERITY] Category — Description
+File: path:line — Fix
+...
+EOF
+echo "domain-N: DONE (X issues)" >> "$REVIEW_DIR/progress.txt"
+```
+
+## Phase 3: Consolidation
+
+After all Claude domains complete, check external model outputs:
+
+```bash
+REVIEW_DIR=$(cat /tmp/.review-project-dir)
+for model in gemini codex; do
+  DONE="$REVIEW_DIR/$model.done"
+  FILE="$REVIEW_DIR/review-$model.txt"
+  if [ -f "$DONE" ] && [ -s "$FILE" ] && grep -qE "\[CRITICAL\]|\[IMPORTANT\]|\[SUGGESTION\]|VERDICT" "$FILE" 2>/dev/null; then
+    echo "$model: done ($(wc -l < "$FILE") lines, valid)"
+  elif [ -f "$DONE" ]; then
+    echo "$model: finished but output appears malformed — review manually"
+  else
+    echo "$model: still running or not launched"
+  fi
+done
+```
+
+If pending, wait up to 2 minutes. If still pending, proceed with available results.
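One way to implement that bounded wait (a sketch, not part of the shipped command; the `touch` line simulates already-finished externals so the snippet runs standalone):

```bash
# Poll for the .done markers written by the background launches, up to 2 minutes.
REVIEW_DIR="/tmp/review-project-$(basename "$PWD")"
mkdir -p "$REVIEW_DIR"
touch "$REVIEW_DIR/gemini.done" "$REVIEW_DIR/codex.done"  # simulation only; delete this line in real use
deadline=$(( $(date +%s) + 120 ))
while [ "$(date +%s)" -lt "$deadline" ]; do
  [ -f "$REVIEW_DIR/gemini.done" ] && [ -f "$REVIEW_DIR/codex.done" ] && break
  sleep 5
done
for model in gemini codex; do
  [ -f "$REVIEW_DIR/$model.done" ] && echo "$model: done" || echo "$model: proceeding without it"
done
```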
+
+**Consolidation steps** (write to disk progressively per category):
+1. Read Claude domain findings (up to 6 files from `$REVIEW_DIR/`)
+2. Read external model outputs (up to 2 files from `$REVIEW_DIR/`)
+3. For each finding, assign confidence:
+   - **HIGH**: 2+ models flag the same file + same concern category
+   - **MEDIUM**: 1 model, specific file/line cited
+   - **LOW**: suggestion without specific evidence
+4. Categorize: Security, Reliability, Performance, Architecture, Testing, Documentation
+5. Prioritize: CRITICAL > IMPORTANT > SUGGESTION
+6. Discard external model findings that lack severity markers or a VERDICT line
+
+Write the consolidated report to `docs/project_notes/review-project-report.md`:
+
+```markdown
+# Project Review Report
+
+**Date:** YYYY-MM-DD
+**Models:** Claude, Gemini, Codex (or subset)
+**Source files:** N | **Test files:** M | **Doc files:** K
+
+## Summary
+
+| Severity | Count |
+|----------|-------|
+| CRITICAL | N |
+| IMPORTANT | N |
+| SUGGESTION | N |
+
+**Verdict:** HEALTHY | NEEDS_WORK
+
+## CRITICAL
+
+### C1. [Title]
+- **Category:** Security
+- **File:** path/to/file.ts:45
+- **Found by:** Claude, Gemini (HIGH confidence)
+- **Description:** ...
+- **Fix:** ...
+
+## IMPORTANT
+...
+
+## SUGGESTION
+...
+```
+
+Write the action plan to `docs/project_notes/review-project-actions.md`:
+
+```markdown
+# Project Review — Action Plan
+
+**Generated:** YYYY-MM-DD
+**From:** review-project-report.md
+
+## Quick Fixes (single file, localized change)
+- [ ] C1: Description — `path/to/file.ts:45`
+
+## Medium Effort (multi-file refactor, 1-3 hours)
+- [ ] I1: Description
+
+## Large Effort (schema/protocol/security redesign, > 3 hours)
+- [ ] I2: Description
+
+## Suggestions (optional)
+- [ ] S1: Description
+```
+
+Ensure `docs/project_notes/` exists before writing: `mkdir -p docs/project_notes`.
+
+## Notes
+
+- This command is designed for **MVP milestones** — not for every commit
+- External models get project context (audit-context.md) + concatenated source — this produces much better results than raw code alone
+- Claude reads selectively (representative samples per domain), not exhaustively — external models compensate by getting ALL source in the digest
+- For high-risk areas (auth, payments), consider a targeted review instead of this broad sweep
+- Cross-cutting issues (spanning frontend+backend+DB) may need manual correlation across domain findings
+- Each domain output includes a "Files Reviewed" manifest so you can verify coverage
+- Works with any SDD project: new (`create-sdd-project`), existing (`--init`), any supported stack
@@ -0,0 +1,378 @@
|
|
|
1
|
+
## Review Project — Instructions
|
|
2
|
+
|
|
3
|
+
Perform a comprehensive project review using multiple AI models in parallel. This is a 4-phase process designed for MVP milestones.
|
|
4
|
+
|
|
5
|
+
After compaction, re-invoke `/review-project` to resume. Completed work is preserved in /tmp/review-project-{project}/.
|
|
6
|
+
|
|
7
|
+
### Phase 0: Discovery
|
|
8
|
+
|
|
9
|
+
Detect project context without heavy file reading:
|
|
10
|
+
|
|
11
|
+
```bash
|
|
12
|
+
# Project type and SDD version
|
|
13
|
+
cat .sdd-version 2>/dev/null || echo "no .sdd-version"
|
|
14
|
+
head -30 docs/project_notes/key_facts.md 2>/dev/null
|
|
15
|
+
|
|
16
|
+
# Detect dominant source extensions (adapts to any JS/TS framework)
|
|
17
|
+
echo "=== Source extensions found ==="
|
|
18
|
+
find . -type f -not -path "*/node_modules/*" -not -path "*/dist/*" -not -path "*/.next/*" \
|
|
19
|
+
-not -path "*/.nuxt/*" -not -path "*/build/*" -not -path "*/coverage/*" \
|
|
20
|
+
\( -name "*.ts" -o -name "*.js" -o -name "*.tsx" -o -name "*.jsx" \
|
|
21
|
+
-o -name "*.vue" -o -name "*.svelte" -o -name "*.astro" \
|
|
22
|
+
-o -name "*.mjs" -o -name "*.cjs" \) \
|
|
23
|
+
| head -100
|
|
24
|
+
|
|
25
|
+
# Scale
|
|
26
|
+
echo "Source files:" && find . -type f -not -path "*/node_modules/*" -not -path "*/dist/*" \
|
|
27
|
+
-not -path "*/.next/*" -not -path "*/.nuxt/*" -not -path "*/build/*" \
|
|
28
|
+
\( -name "*.ts" -o -name "*.js" -o -name "*.tsx" -o -name "*.jsx" \
|
|
29
|
+
-o -name "*.vue" -o -name "*.svelte" -o -name "*.astro" \) | wc -l
|
|
30
|
+
echo "Test files:" && find . -type f -not -path "*/node_modules/*" \
|
|
31
|
+
\( -name "*.test.*" -o -name "*.spec.*" \) | wc -l
|
|
32
|
+
|
|
33
|
+
# Detect stack signals
|
|
34
|
+
echo "=== Stack signals ==="
|
|
35
|
+
[ -f "package.json" ] && echo "package.json: exists" || echo "package.json: not found"
|
|
36
|
+
[ -d "prisma" ] && echo "prisma/: found"
|
|
37
|
+
find . -maxdepth 3 -name "*.prisma" -not -path "*/node_modules/*" 2>/dev/null | head -3
|
|
38
|
+
find . -maxdepth 3 -type d -name "models" -not -path "*/node_modules/*" 2>/dev/null | head -3
|
|
39
|
+
[ -f "tsconfig.json" ] && echo "tsconfig.json: exists"
|
|
40
|
+
[ -f "next.config.js" ] || [ -f "next.config.mjs" ] || [ -f "next.config.ts" ] && echo "Next.js project"
|
|
41
|
+
[ -f "nuxt.config.ts" ] || [ -f "nuxt.config.js" ] && echo "Nuxt project"
|
|
42
|
+
[ -f "vite.config.ts" ] || [ -f "vite.config.js" ] && echo "Vite project"
|
|
43
|
+
[ -f "angular.json" ] && echo "Angular project"
|
|
44
|
+
[ -f "svelte.config.js" ] && echo "Svelte project"
|
|
45
|
+
[ -f "astro.config.mjs" ] && echo "Astro project"
|
|
46
|
+
|
|
47
|
+
# Detect available CLIs (robust — test real invocation, not just path lookup)
|
|
48
|
+
if command -v claude >/dev/null 2>&1; then
|
|
49
|
+
claude --version >/dev/null 2>&1 && echo "claude: available" || echo "claude: unavailable"
|
|
50
|
+
else
|
|
51
|
+
echo "claude: unavailable"
|
|
52
|
+
fi
|
|
53
|
+
if command -v codex >/dev/null 2>&1; then
|
|
54
|
+
codex --version >/dev/null 2>&1 && echo "codex: available" || echo "codex: unavailable"
|
|
55
|
+
else
|
|
56
|
+
echo "codex: unavailable"
|
|
57
|
+
fi
|
|
58
|
+
```
|
|
59
|
+
|
|
60
|
+
Create project-scoped working directory. Check for resume state:
|
|
61
|
+
|
|
62
|
+
```bash
|
|
63
|
+
REVIEW_DIR="/tmp/review-project-$(basename "$PWD")"
|
|
64
|
+
mkdir -p "$REVIEW_DIR"
|
|
65
|
+
echo "$REVIEW_DIR" > /tmp/.review-project-dir
|
|
66
|
+
cat "$REVIEW_DIR/progress.txt" 2>/dev/null || echo "No previous progress — starting fresh"
|
|
67
|
+
```
|
|
68
|
+
|
|
69
|
+
Use `$REVIEW_DIR` in all subsequent commands (or re-read from `/tmp/.review-project-dir` after compaction).
|
|
70
|
+
|
|
71
|
+
**Adapt domains by project type** (detected from key_facts.md, package.json, and stack signals above):
|
|
72
|
+
- Backend-only → skip frontend-specific checks in domain 2
|
|
73
|
+
- Frontend-only → skip domain 3 (Data Layer); domain 5 focuses on client-side security (XSS, CSP, token storage, route guards)
|
|
74
|
+
- Fullstack → all 6 domains
|
|
75
|
+
|
|
76
|
+
### Phase 1: Prepare Audit Context + External Digest + Launch
|
This phase has two sub-steps. Do NOT read the digest into your own context — assemble it entirely via bash.

#### Step 1a: Generate Audit Context

Read **whichever of these files exist** to understand the project, then write a concise audit context to `$REVIEW_DIR/audit-context.md`:

**SDD project docs** (created by both `create-sdd-project` and `--init`):
- `docs/project_notes/key_facts.md` — stack, architecture, components
- `docs/project_notes/decisions.md` — ADRs and rationale
- `docs/specs/api-spec.yaml` or `docs/specs/api-spec.json` (first 100 lines)

**Standard project files** (any project):
- `package.json` — dependencies, scripts, project name
- `README.md` (first 100 lines) — project description, setup
- `tsconfig.json` — TypeScript config and paths

**Schema/ORM files** (read whichever exists):
- `prisma/schema.prisma` or any `*.prisma` file
- `src/models/` or `models/` directory (Mongoose, Sequelize, TypeORM entities)
- `drizzle/` or `src/db/schema.*` (Drizzle schemas)

**If `key_facts.md` is missing or minimal**, infer the stack from `package.json` dependencies and the directory structure detected in Phase 0.

The audit context should include (aim for 100-200 lines, no more):
1. **Project purpose** — what it does, who it's for (from README or key_facts)
2. **Architecture** — stack, key patterns, data flow, framework conventions
3. **Key decisions** — ADRs summarized in 1 line each (if decisions.md exists)
4. **Known issues** — from decisions.md, bugs.md, or TODO comments
5. **Specific audit focus areas** — based on the detected stack's risk profile:
   - Express/Fastify: middleware ordering, input validation, error handling
   - Next.js/Nuxt: SSR data fetching, API route security, hydration issues
   - Vue/Svelte/Astro: component reactivity, XSS in templates, state management
   - Prisma: raw queries, migration safety, relation loading
   - Mongoose: schema validation gaps, injection in query operators
   - Auth: timing-safe comparison, token storage, session handling

Write this to disk:

```bash
REVIEW_DIR=$(cat /tmp/.review-project-dir)
cat > "$REVIEW_DIR/audit-context.md" <<'EOF'
[Your generated audit context here]
EOF
```
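
As a sketch, the finished audit context might look like this — every project detail below is illustrative, not taken from a real project:

```markdown
# Audit Context

## 1. Project Purpose
Invoice-tracking API for small agencies; single-tenant, REST only (from README).

## 2. Architecture
Express 4 + TypeScript, Prisma/PostgreSQL; layered routes -> services -> repositories.

## 3. Key Decisions
- ADR-003: JWT in httpOnly cookies instead of localStorage (XSS exposure)

## 4. Known Issues
- bugs.md #12: intermittent timeout on bulk invoice export

## 5. Audit Focus Areas
- Express: middleware ordering around auth, input validation on POST routes
- Prisma: raw queries in the reporting service, migration safety
```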

#### Step 1b: Assemble Digest + Launch External Models

**Resume check**: if `$REVIEW_DIR/digest.txt` already exists, skip Step 1b entirely (the digest was built in a previous run).
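
A minimal sketch of that resume check. Here `mktemp -d` stands in for the real review dir, and `-s` is slightly stricter than "exists" — it also requires the digest to be non-empty:

```bash
# Sketch: decide whether Step 1b can be skipped (resume support).
# mktemp -d is a stand-in for $(cat /tmp/.review-project-dir).
REVIEW_DIR=$(mktemp -d)

digest_status() {
  # "skip" when digest.txt exists and is non-empty, otherwise "run"
  if [ -s "$REVIEW_DIR/digest.txt" ]; then echo "skip"; else echo "run"; fi
}

FIRST=$(digest_status)                      # no digest yet -> "run"
printf 'stub digest\n' > "$REVIEW_DIR/digest.txt"
SECOND=$(digest_status)                     # digest present -> "skip"
echo "first: $FIRST, second: $SECOND"
```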

```bash
REVIEW_DIR=$(cat /tmp/.review-project-dir)

# 1. Review prompt header
cat > "$REVIEW_DIR/digest.txt" <<'HEADER'
You are performing a comprehensive review of a software project.
Your job is to find real problems — security, reliability, performance, architecture.
Do NOT manufacture issues. If the code is solid, say so. Note uncertainty rather than flagging it as an issue.

For each issue: [CRITICAL/IMPORTANT/SUGGESTION] Category — Description
File: exact/path (line N if possible) — Proposed fix

Review criteria:
1. Security — injection, secrets, auth bypass, XSS, CSRF
2. Reliability — error handling, edge cases, race conditions, validation gaps
3. Performance — N+1 queries, missing indexes, memory leaks, unnecessary computation
4. Architecture — layer violations, tight coupling, SRP violations, dead code
5. Testing — coverage gaps, test quality, missing edge cases, flaky patterns
6. Documentation — spec/code mismatches, stale docs, missing API contracts

End with: VERDICT: HEALTHY | NEEDS_WORK (NEEDS_WORK if any CRITICAL or 3+ IMPORTANT)
---
HEADER

# 2. Append the audit context (project understanding for the external model)
echo "PROJECT CONTEXT:" >> "$REVIEW_DIR/digest.txt"
cat "$REVIEW_DIR/audit-context.md" >> "$REVIEW_DIR/digest.txt"
printf "\n---\nPROJECT FILES:\n" >> "$REVIEW_DIR/digest.txt"

# 3. Concatenate source files (all supported extensions; exclude tests/generated)
find . -type f -not -path "*/node_modules/*" -not -path "*/dist/*" -not -path "*/.next/*" \
  -not -path "*/.nuxt/*" -not -path "*/coverage/*" -not -path "*/build/*" -not -path "*/.svelte-kit/*" \
  \( -name "*.ts" -o -name "*.js" -o -name "*.tsx" -o -name "*.jsx" \
     -o -name "*.vue" -o -name "*.svelte" -o -name "*.astro" \
     -o -name "*.mjs" -o -name "*.cjs" \) \
  -not -name "*.test.*" -not -name "*.spec.*" -not -name "*.min.*" -not -name "*.d.ts" \
  | sort | while IFS= read -r f; do
    echo "=== FILE: $f ===" >> "$REVIEW_DIR/digest.txt"
    cat "$f" >> "$REVIEW_DIR/digest.txt"
    echo "" >> "$REVIEW_DIR/digest.txt"
done

# 4. Add non-source config and documentation files (*.js/*.ts configs already captured by step 3)
for doc in \
  package.json tsconfig.json angular.json \
  .env.example Dockerfile docker-compose.yml docker-compose.yaml \
  docs/project_notes/key_facts.md docs/project_notes/decisions.md \
  docs/specs/api-spec.yaml docs/specs/api-spec.json \
  .eslintrc .eslintrc.json \
; do
  if [ -f "$doc" ]; then
    echo "=== FILE: $doc ===" >> "$REVIEW_DIR/digest.txt"
    cat "$doc" >> "$REVIEW_DIR/digest.txt"
    echo "" >> "$REVIEW_DIR/digest.txt"
  fi
done

# 5. Add Prisma schema files (*.ts/*.js models already captured by step 3)
find . -type f -name "*.prisma" -not -path "*/node_modules/*" | sort | while IFS= read -r f; do
  echo "=== FILE: $f ===" >> "$REVIEW_DIR/digest.txt"
  cat "$f" >> "$REVIEW_DIR/digest.txt"
  echo "" >> "$REVIEW_DIR/digest.txt"
done

# 6. Test file list (paths only)
echo "=== TEST FILES (paths only) ===" >> "$REVIEW_DIR/digest.txt"
find . -type f -not -path "*/node_modules/*" \( -name "*.test.*" -o -name "*.spec.*" \) \
  | sort >> "$REVIEW_DIR/digest.txt"

# 7. Check the digest size
wc -c "$REVIEW_DIR/digest.txt"
```
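
The `wc -c` line above only reports the size. A sketch of acting on it, assuming the same 600,000-byte threshold that Path A below applies when truncating for Codex (the oversized sample file is illustrative):

```bash
# Sketch: size guard for the digest (600000 bytes assumed as the Codex limit).
# mktemp -d stands in for $(cat /tmp/.review-project-dir).
REVIEW_DIR=$(mktemp -d)
head -c 700000 /dev/zero | tr '\0' 'x' > "$REVIEW_DIR/digest.txt"   # oversized sample

SIZE=$(wc -c < "$REVIEW_DIR/digest.txt" | tr -d ' ')
if [ "$SIZE" -gt 600000 ]; then
  echo "digest: $SIZE bytes, will be truncated for size-limited CLIs"
else
  echo "digest: $SIZE bytes, fits"
fi
```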

Launch external models based on the availability detected in Phase 0:

#### Path A: Both CLIs available

```bash
REVIEW_DIR=$(cat /tmp/.review-project-dir)
export REVIEW_DIR
sh -c 'cat "$REVIEW_DIR/digest.txt" | claude --print > "$REVIEW_DIR/review-claude.txt" 2>&1; touch "$REVIEW_DIR/claude.done"' &
DIGEST_SIZE=$(wc -c < "$REVIEW_DIR/digest.txt" | tr -d ' ')
if [ "$DIGEST_SIZE" -gt 600000 ]; then
  sh -c 'head -c 600000 "$REVIEW_DIR/digest.txt" | codex exec --full-auto - > "$REVIEW_DIR/review-codex.txt" 2>&1; touch "$REVIEW_DIR/codex.done"' &
else
  sh -c 'cat "$REVIEW_DIR/digest.txt" | codex exec --full-auto - > "$REVIEW_DIR/review-codex.txt" 2>&1; touch "$REVIEW_DIR/codex.done"' &
fi
echo "External models launched in background"
```

#### Path B: One CLI available

```bash
REVIEW_DIR=$(cat /tmp/.review-project-dir)
export REVIEW_DIR
# Claude only:
sh -c 'cat "$REVIEW_DIR/digest.txt" | claude --print > "$REVIEW_DIR/review-claude.txt" 2>&1; touch "$REVIEW_DIR/claude.done"' &
# OR Codex only:
sh -c 'cat "$REVIEW_DIR/digest.txt" | codex exec --full-auto - > "$REVIEW_DIR/review-codex.txt" 2>&1; touch "$REVIEW_DIR/codex.done"' &
```

#### Path C: No external CLI available

Skip this phase. The Gemini-only review (Phase 2) still provides all 6 domain reviews.

### Phase 2: Deep Review (domain-by-domain, resumable)

While the external models run, review the project by reading files directly: 6 domains, each written to disk immediately after completion.

**Check progress before each domain** — if `$REVIEW_DIR/progress.txt` shows `domain-N: DONE`, skip it (resume support).
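
That progress check can be sketched mechanically. The sample `progress.txt` entry is illustrative:

```bash
# Sketch: skip any domain already marked DONE in progress.txt (resume support).
# mktemp -d stands in for the real review dir.
REVIEW_DIR=$(mktemp -d)
echo "domain-1: DONE (3 issues)" >> "$REVIEW_DIR/progress.txt"

D1=$(grep -q "^domain-1: DONE" "$REVIEW_DIR/progress.txt" 2>/dev/null && echo skip || echo review)
D2=$(grep -q "^domain-2: DONE" "$REVIEW_DIR/progress.txt" 2>/dev/null && echo skip || echo review)
echo "domain 1: $D1, domain 2: $D2"
```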

**Important**: adapt each domain's focus to the actual stack detected in Phase 0. The descriptions below are guidelines — prioritize reading files that exist in this specific project.

#### Domain 1: Architecture & Config
Read: package.json, tsconfig, framework config (next.config/nuxt.config/vite.config/angular.json), entry points, key_facts.md, decisions.md
Focus: structure, dependencies, config correctness, missing configs, framework best practices

#### Domain 2: Source Code Quality
Read: routes/pages/components, services, models, utils, middleware (sample representative files)
Focus: naming, duplication, complexity, patterns, code smells, framework-specific anti-patterns

#### Domain 3: Data Layer (skip for frontend-only projects)
Read: schema files (Prisma, Mongoose models, Sequelize/TypeORM entities, Drizzle), migrations, seeds, query builders
Focus: schema design, indexes, migrations, query efficiency, N+1 risks, ORM-specific pitfalls

#### Domain 4: Testing & CI
Read: test files (sample), test config (jest/vitest/cypress/playwright), CI workflows, lint config
Focus: coverage gaps, test quality, CI robustness, flaky patterns

#### Domain 5: Security & Reliability
Read: auth middleware, validators, error handlers, rate limiters, env handling
Focus: vulnerabilities, error paths, secrets exposure, OWASP Top 10
- Backend: injection, auth bypass, SSRF, timing attacks, error leakage
- Frontend: XSS, CSP, token storage, route guards, dependency vulnerabilities, CORS

#### Domain 6: Documentation & SDD Process
Read: tickets (sample), product-tracker, api-spec, bugs.md, README
Focus: spec/code sync, ticket quality, stale docs, process adherence

**After each domain**, write the findings and a manifest of reviewed files to disk:

```bash
REVIEW_DIR=$(cat /tmp/.review-project-dir)
cat > "$REVIEW_DIR/review-domain-N.md" <<'EOF'
## Domain N: [Name]
### Files Reviewed
- path/to/file1.ts
- path/to/file2.vue
### Findings
[SEVERITY] Category — Description
File: path:line — Fix
...
EOF
echo "domain-N: DONE (X issues)" >> "$REVIEW_DIR/progress.txt"
```

### Phase 3: Consolidation

After all domains complete, check the external model outputs:

```bash
REVIEW_DIR=$(cat /tmp/.review-project-dir)
for model in claude codex; do
  DONE="$REVIEW_DIR/$model.done"
  FILE="$REVIEW_DIR/review-$model.txt"
  if [ -f "$DONE" ] && [ -s "$FILE" ] && grep -qE "\[CRITICAL\]|\[IMPORTANT\]|\[SUGGESTION\]|VERDICT" "$FILE" 2>/dev/null; then
    echo "$model: done ($(wc -l < "$FILE") lines, valid)"
  elif [ -f "$DONE" ]; then
    echo "$model: finished but output appears malformed — review manually"
  else
    echo "$model: still running or not launched"
  fi
done
```

If a model is still pending, wait up to 2 minutes; if it is still pending after that, proceed with the results that are available.
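
A sketch of that bounded wait, polling for the `.done` marker files. The markers are pre-created here so the loop exits immediately in this illustration:

```bash
# Sketch: wait up to ~2 minutes for the .done marker files.
# mktemp -d stands in for the real review dir.
REVIEW_DIR=$(mktemp -d)
touch "$REVIEW_DIR/claude.done" "$REVIEW_DIR/codex.done"   # pretend both finished

WAITED=0
STATUS="timeout"
while [ "$WAITED" -lt 120 ]; do
  if [ -f "$REVIEW_DIR/claude.done" ] && [ -f "$REVIEW_DIR/codex.done" ]; then
    STATUS="done"
    break
  fi
  sleep 5
  WAITED=$((WAITED + 5))
done
echo "external models: $STATUS after ${WAITED}s"
```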

**Consolidation steps** (write to disk progressively, per category):
1. Read the domain findings (up to 6 files from `$REVIEW_DIR/`)
2. Read the external model outputs (up to 2 files from `$REVIEW_DIR/`)
3. For each finding, assign a confidence level:
   - **HIGH**: 2+ models flag the same file and the same concern category
   - **MEDIUM**: 1 model, with a specific file/line cited
   - **LOW**: suggestion without specific evidence
4. Categorize: Security, Reliability, Performance, Architecture, Testing, Documentation
5. Prioritize: CRITICAL > IMPORTANT > SUGGESTION
6. Discard external model findings that lack severity markers or a VERDICT line
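
The severity tally and the verdict rule stated in the digest header (any CRITICAL, or 3+ IMPORTANT, means NEEDS_WORK) can be sanity-checked mechanically. The sample findings below are illustrative:

```bash
# Sketch: tally severity markers across finding files and derive the verdict.
# mktemp -d and the sample findings are illustrative stand-ins.
REVIEW_DIR=$(mktemp -d)
cat > "$REVIEW_DIR/review-domain-5.md" <<'EOF'
[CRITICAL] Security — hardcoded API token
[IMPORTANT] Reliability — unhandled promise rejection
EOF
cat > "$REVIEW_DIR/review-codex.txt" <<'EOF'
[IMPORTANT] Performance — N+1 query in listing endpoint
VERDICT: NEEDS_WORK
EOF

CRIT=$(cat "$REVIEW_DIR"/review-* | grep -c '\[CRITICAL\]')
IMP=$(cat "$REVIEW_DIR"/review-* | grep -c '\[IMPORTANT\]')
if [ "$CRIT" -gt 0 ] || [ "$IMP" -ge 3 ]; then VERDICT="NEEDS_WORK"; else VERDICT="HEALTHY"; fi
echo "CRITICAL=$CRIT IMPORTANT=$IMP VERDICT=$VERDICT"
```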

Write the consolidated report to `docs/project_notes/review-project-report.md`:

```markdown
# Project Review Report

**Date:** YYYY-MM-DD
**Models:** Gemini, Claude, Codex (or subset)
**Source files:** N | **Test files:** M | **Doc files:** K

## Summary

| Severity | Count |
|----------|-------|
| CRITICAL | N |
| IMPORTANT | N |
| SUGGESTION | N |

**Verdict:** HEALTHY | NEEDS_WORK

## CRITICAL

### C1. [Title]
- **Category:** Security
- **File:** path/to/file.ts:45
- **Found by:** Gemini, Claude (HIGH confidence)
- **Description:** ...
- **Fix:** ...

## IMPORTANT
...

## SUGGESTION
...
```

Write the action plan to `docs/project_notes/review-project-actions.md`:

```markdown
# Project Review — Action Plan

**Generated:** YYYY-MM-DD
**From:** review-project-report.md

## Quick Fixes (single file, localized change)
- [ ] C1: Description — `path/to/file.ts:45`

## Medium Effort (multi-file refactor, 1-3 hours)
- [ ] I1: Description

## Large Effort (schema/protocol/security redesign, > 3 hours)
- [ ] I2: Description

## Suggestions (optional)
- [ ] S1: Description
```

Ensure `docs/project_notes/` exists before writing: `mkdir -p docs/project_notes`.

### Notes

- This command is designed for **MVP milestones** — not for every commit
- External models get the project context (audit-context.md) plus the concatenated source — this produces much better results than raw code alone
- The primary reviewer reads selectively (representative samples per domain), not exhaustively — the external models compensate by receiving ALL source in the digest
- For high-risk areas (auth, payments), consider a targeted review instead of this broad sweep
- Cross-cutting issues (spanning frontend + backend + DB) may need manual correlation across domain findings
- Each domain output includes a "Files Reviewed" manifest so you can verify coverage
- Works with any SDD project: new (`create-sdd-project`), existing (`--init`), any supported stack
|