create-sdd-project 0.10.0 → 0.11.1

package/README.md CHANGED
@@ -303,24 +303,25 @@ SDD DevFlow combines three proven practices:
  | `project-memory` | `set up project memory`, `log a bug fix` | Maintains institutional knowledge |
  | `health-check` | `health check`, `project health` | Quick scan: tests, build, specs sync, secrets, docs freshness |

- ### 3 Custom Commands
+ ### 4 Custom Commands

  | Command | What it does |
  |---------|-------------|
+ | `/review-spec` | Reviews feature specs using external AI models before planning — catches requirement gaps, ambiguity, and architectural inconsistencies |
  | `/review-plan` | Sends Implementation Plan to external AI models (Codex CLI, Gemini CLI) for independent critique |
  | `/context-prompt` | Generates a context recovery prompt after `/compact` with Workflow Recovery to prevent checkpoint skipping |
  | `/review-project` | Comprehensive project-level review using up to 3 AI models in parallel — 6 domains, audit context, consolidated report with action plan |

- ### Plan Quality
+ ### Spec & Plan Quality

- Every Standard/Complex feature plan goes through a **built-in self-review** (Step 2.4) where the agent re-reads its own plan and checks for errors, vague steps, wrong assumptions, and over-engineering before requesting approval.
+ Every Standard/Complex feature spec goes through a **built-in self-review** (Step 0.4) where the agent critically re-reads its own spec checking for completeness, edge cases, API contract clarity, and architectural consistency. For additional confidence, the optional `/review-spec` command sends the spec plus project context (`key_facts.md`, `decisions.md`) to external AI models for independent critique — catching requirement-level blind spots before any planning begins.

- For additional confidence, the optional `/review-plan` command sends the plan to external AI models (Codex CLI and/or Gemini CLI in parallel) for independent critique — catching blind spots that same-model review misses.
+ Every plan then goes through its own **built-in self-review** (Step 2.4) followed by the optional `/review-plan` command for external critique — catching implementation-level blind spots that same-model review misses.

  ### Workflow (Steps 0–6)

  ```
- 0. SPEC → spec-creator drafts specs → Spec Approval
+ 0. SPEC → spec-creator drafts + self-review → Spec Approval
  1. SETUP → Branch, ticket, product tracker → Ticket Approval
  2. PLAN → Planner creates plan + self-review → Plan Approval
  3. IMPLEMENT → Developer agent, TDD
@@ -421,6 +422,7 @@ project/
  │ │ ├── health-check/ # Project health diagnostics
  │ │ └── project-memory/ # Memory system setup
  │ ├── commands/ # Custom slash commands
+ │ │ ├── review-spec.md # Cross-model spec review (pre-plan)
  │ │ ├── review-plan.md # Cross-model plan review
  │ │ ├── context-prompt.md # Post-compact context recovery
  │ │ └── review-project.md # Multi-model project review
package/lib/config.js CHANGED
@@ -106,6 +106,7 @@ const TEMPLATE_AGENTS = [
  // Template-provided command files (for upgrade: detect custom commands)
  const TEMPLATE_COMMANDS = [
  'review-plan.md',
+ 'review-spec.md',
  'context-prompt.md',
  'review-project.md',
  ];
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "create-sdd-project",
- "version": "0.10.0",
+ "version": "0.11.1",
  "description": "Create a new SDD DevFlow project with AI-assisted development workflow",
  "bin": {
  "create-sdd-project": "bin/cli.js"
@@ -12,16 +12,18 @@ Review the Implementation Plan in the current ticket using external models for i
  2. **Detect available reviewers** — Check which external CLIs are installed:

  ```bash
- which gemini 2>/dev/null && echo "gemini: available" || echo "gemini: not found"
- which codex 2>/dev/null && echo "codex: available" || echo "codex: not found"
+ command -v gemini >/dev/null 2>&1 && echo "gemini: available" || echo "gemini: not found"
+ command -v codex >/dev/null 2>&1 && echo "codex: available" || echo "codex: not found"
  ```

  3. **Prepare the review input** — Extract the spec and plan into a temp file with the review prompt. Use the feature ID from the Active Session (e.g., `F023`):

  ```bash
- TICKET=$(ls docs/tickets/F023-*.md) # Use the feature ID from Step 1
+ TICKET="$(echo docs/tickets/F023-*.md)" # Use the feature ID from Step 1; verify exactly one match
+ REVIEW_DIR="/tmp/review-plan-$(basename "$PWD")"
+ mkdir -p "$REVIEW_DIR"

- cat > /tmp/review-prompt.txt <<'CRITERIA'
+ cat > "$REVIEW_DIR/input.txt" <<'CRITERIA'
  You are reviewing an Implementation Plan for a software feature. Your job is to find real problems, not praise. But if the plan is solid, say APPROVED — do not manufacture issues that are not there.

  Below you will find the Spec (what to build) and the Implementation Plan (how to build it). Review the plan and report:
@@ -39,7 +41,7 @@ End with: VERDICT: APPROVED | VERDICT: REVISE (if any CRITICAL or 2+ IMPORTANT i
  SPEC AND PLAN:
  CRITERIA

- sed -n '/## Spec/,/## Acceptance Criteria/p' "$TICKET" >> /tmp/review-prompt.txt
+ sed -n '/^## Spec$/,/^## Acceptance Criteria$/p' "$TICKET" >> "$REVIEW_DIR/input.txt"
  ```

  4. **Send for review** — Execute **only one** of the following paths based on Step 2 results:
@@ -47,24 +49,28 @@ sed -n '/## Spec/,/## Acceptance Criteria/p' "$TICKET" >> /tmp/review-prompt.txt
  ### Path A: Both CLIs available (best — two independent perspectives)

  ```bash
- cat /tmp/review-prompt.txt | gemini > /tmp/review-gemini.txt 2>&1 &
- cat /tmp/review-prompt.txt | codex exec - > /tmp/review-codex.txt 2>&1 &
- wait
+ cat "$REVIEW_DIR/input.txt" | gemini > "$REVIEW_DIR/gemini.txt" 2>&1 &
+ PID_GEMINI=$!
+ cat "$REVIEW_DIR/input.txt" | codex exec - > "$REVIEW_DIR/codex.txt" 2>&1 &
+ PID_CODEX=$!

- echo "=== GEMINI REVIEW ===" && cat /tmp/review-gemini.txt
- echo "=== CODEX REVIEW ===" && cat /tmp/review-codex.txt
+ wait $PID_GEMINI && echo "Gemini: OK" || echo "Gemini: FAILED (exit $?) — check $REVIEW_DIR/gemini.txt"
+ wait $PID_CODEX && echo "Codex: OK" || echo "Codex: FAILED (exit $?) — check $REVIEW_DIR/codex.txt"
+
+ echo "=== GEMINI REVIEW ===" && cat "$REVIEW_DIR/gemini.txt"
+ echo "=== CODEX REVIEW ===" && cat "$REVIEW_DIR/codex.txt"
  ```

- Consolidate findings — issues flagged by both models independently carry higher weight. Deduplicate and prioritize.
+ Consolidate findings — issues flagged by both models independently carry higher weight. Deduplicate and prioritize. Ignore output from any reviewer that failed.

  ### Path B: One CLI available

  ```bash
  # Gemini only
- cat /tmp/review-prompt.txt | gemini
+ cat "$REVIEW_DIR/input.txt" | gemini

  # Codex only
- cat /tmp/review-prompt.txt | codex exec -
+ cat "$REVIEW_DIR/input.txt" | codex exec -
  ```

  ### Path C: No external CLI available (self-review fallback)
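The `verify exactly one match` comment above is left to the reader; a hedged sketch of making that check explicit with a bash array and `nullglob` (the ticket filename and temp directory below are fabricated purely for the demo):

```bash
# Illustrative setup only: fake a project with exactly one matching ticket.
tmp="$(mktemp -d)"
mkdir -p "$tmp/docs/tickets"
touch "$tmp/docs/tickets/F023-example-feature.md"
cd "$tmp"

# With nullglob, an unmatched glob expands to zero words instead of the
# literal pattern, so the array length is a reliable match count.
shopt -s nullglob
matches=(docs/tickets/F023-*.md)
if [ "${#matches[@]}" -ne 1 ]; then
  echo "expected exactly one ticket for F023, found ${#matches[@]}" >&2
  exit 1
fi
TICKET="${matches[0]}"
echo "$TICKET"
```

Unlike `echo docs/tickets/F023-*.md`, which silently yields the literal pattern when nothing matches, this variant fails fast on zero or multiple matches.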
@@ -0,0 +1,101 @@
+ Review the Spec in the current ticket using external models for independent critique before planning.
+
+ ## Prerequisites
+
+ - An active feature with a completed Spec (Step 0)
+ - Ideally, one or more external AI CLIs installed: [Gemini CLI](https://github.com/google-gemini/gemini-cli), [Codex CLI](https://github.com/openai/codex), or similar
+
+ ## What to do
+
+ 1. **Find the current ticket** — Read `docs/project_notes/product-tracker.md` → Active Session → ticket path
+
+ 2. **Detect available reviewers** — Check which external CLIs are installed:
+
+ ```bash
+ command -v gemini >/dev/null 2>&1 && echo "gemini: available" || echo "gemini: not found"
+ command -v codex >/dev/null 2>&1 && echo "codex: available" || echo "codex: not found"
+ ```
+
+ 3. **Prepare the review input** — Extract the spec, acceptance criteria, and project context into a temp file. Use the feature ID from the Active Session (e.g., `F023`):
+
+ ```bash
+ TICKET="$(echo docs/tickets/F023-*.md)" # Use the feature ID from Step 1; verify exactly one match
+ REVIEW_DIR="/tmp/review-spec-$(basename "$PWD")"
+ mkdir -p "$REVIEW_DIR"
+
+ cat > "$REVIEW_DIR/input.txt" <<'CRITERIA'
+ You are reviewing a Feature Specification for a software feature. Your job is to find real problems in the REQUIREMENTS — not the implementation (there is no implementation yet). If the spec is solid, say APPROVED — do not manufacture issues.
+
+ Below you will find the Spec (what to build), the Acceptance Criteria, and project context (architecture, decisions). Review the spec and report:
+ 1. Completeness — Are all user needs covered? Missing requirements?
+ 2. Ambiguity — Are requirements clear enough to plan and implement with TDD?
+ 3. Edge cases — Are failure modes, boundary conditions, and error responses specified?
+ 4. API contract — Are endpoints, fields, types, status codes well-defined? (if applicable)
+ 5. Scope — Is the spec doing too much or too little for one feature?
+ 6. Consistency — Does the spec conflict with existing architecture, patterns, or decisions?
+ 7. Testability — Can each acceptance criterion be verified with an automated test?
+
+ For each issue, state: [CRITICAL/IMPORTANT/SUGGESTION] — description — proposed fix.
+
+ End with: VERDICT: APPROVED | VERDICT: REVISE (if any CRITICAL or 2+ IMPORTANT issues)
+
+ ---
+ SPEC AND ACCEPTANCE CRITERIA:
+ CRITERIA
+
+ sed -n '/^## Spec$/,/^## Definition of Done$/p' "$TICKET" >> "$REVIEW_DIR/input.txt"
+
+ echo -e "\n---\nPROJECT CONTEXT (architecture and decisions):\n" >> "$REVIEW_DIR/input.txt"
+ cat docs/project_notes/key_facts.md >> "$REVIEW_DIR/input.txt" 2>/dev/null
+ echo -e "\n---\n" >> "$REVIEW_DIR/input.txt"
+ cat docs/project_notes/decisions.md >> "$REVIEW_DIR/input.txt" 2>/dev/null
+ ```
+
+ 4. **Send for review** — Execute **only one** of the following paths based on Step 2 results:
+
+ ### Path A: Both CLIs available (best — two independent perspectives)
+
+ ```bash
+ cat "$REVIEW_DIR/input.txt" | gemini > "$REVIEW_DIR/gemini.txt" 2>&1 &
+ PID_GEMINI=$!
+ cat "$REVIEW_DIR/input.txt" | codex exec - > "$REVIEW_DIR/codex.txt" 2>&1 &
+ PID_CODEX=$!
+
+ wait $PID_GEMINI && echo "Gemini: OK" || echo "Gemini: FAILED (exit $?) — check $REVIEW_DIR/gemini.txt"
+ wait $PID_CODEX && echo "Codex: OK" || echo "Codex: FAILED (exit $?) — check $REVIEW_DIR/codex.txt"
+
+ echo "=== GEMINI REVIEW ===" && cat "$REVIEW_DIR/gemini.txt"
+ echo "=== CODEX REVIEW ===" && cat "$REVIEW_DIR/codex.txt"
+ ```
+
+ Consolidate findings — issues flagged by both models independently carry higher weight. Deduplicate and prioritize. Ignore output from any reviewer that failed.
+
+ ### Path B: One CLI available
+
+ ```bash
+ # Gemini only
+ cat "$REVIEW_DIR/input.txt" | gemini
+
+ # Codex only
+ cat "$REVIEW_DIR/input.txt" | codex exec -
+ ```
+
+ ### Path C: No external CLI available (self-review fallback)
+
+ If no external CLI is installed, perform the review yourself. Re-read the full Spec from the ticket, then review it with this mindset:
+
+ > You are an experienced engineer who has NOT seen this spec before. Question every assumption. Look for what is missing, ambiguous, or inconsistent with the project's architecture. Do not be lenient — find problems.
+
+ Apply the same 7 criteria from the prompt above. For each issue, state severity, description, and proposed fix. End with VERDICT.
+
+ 5. **Process the review** — If any VERDICT is REVISE, update the spec addressing CRITICAL and IMPORTANT issues
+ 6. **Optional second round** — Send the revised spec for a final audit if significant changes were made
+ 7. **Log the review** — Add a note in the ticket's Completion Log: "Spec reviewed by [model(s) or self-review] — N issues found, N addressed"
+
+ ## Notes
+
+ - This command is **optional** — the workflow's built-in Spec Self-Review (Step 0.4) always runs automatically
+ - Most valuable for **Standard/Complex** features where a wrong spec leads to wasted planning and implementation effort
+ - External models receive project context (key_facts + decisions) to check architectural consistency
+ - Both CLIs use their latest default model when no `-m` flag is specified — no need to hardcode model names
+ - Path C (self-review) is a last resort — external review gives genuinely independent perspectives that self-review cannot
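A change that recurs throughout this release is replacing `which` with `command -v` for reviewer detection. A minimal sketch of why that substitution is safer (`no-such-reviewer-cli` is a deliberate placeholder that should not exist on any system):

```bash
# `command -v` is specified by POSIX as a shell builtin: it is present in every
# sh-compatible shell, and its exit status cleanly signals presence or absence.
# `which` is an external binary whose output and exit codes vary by platform.
command -v sh >/dev/null 2>&1 && SH_STATE="available" || SH_STATE="not found"
command -v no-such-reviewer-cli >/dev/null 2>&1 && FAKE_STATE="available" || FAKE_STATE="not found"
echo "sh: $SH_STATE"
echo "no-such-reviewer-cli: $FAKE_STATE"
```

The exit status alone drives the `&&`/`||` branches, which is exactly how the detection step consumes it.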
@@ -70,6 +70,8 @@ Ask user to classify complexity before starting. See `references/complexity-guid
  1. Use Task tool with `spec-creator` agent
  2. Agent updates global spec files (`api-spec.yaml`, `ui-components.md`) and Zod schemas in `shared/src/schemas/` if applicable
  3. Agent writes spec summary into the ticket's `## Spec` section
+ 4. **Spec Self-Review:** Re-read the spec critically. Are requirements complete? Edge cases covered? API contract well-defined? Acceptance criteria testable? Does the spec conflict with existing architecture (`key_facts.md`, `decisions.md`)? Update the spec with any fixes found before proceeding.
+ 5. **Optional:** Run `/review-spec` for external model review (recommended for Standard/Complex)

  **→ CHECKPOINT: Spec Approval** — Update tracker (Active Session + Features table): step `0/6 (Spec)`
@@ -14,16 +14,18 @@ Review the Implementation Plan in the current ticket using external models for i
  2. **Detect available reviewers** — Check which external CLIs are installed:

  ```bash
- which claude 2>/dev/null && echo "claude: available" || echo "claude: not found"
- which codex 2>/dev/null && echo "codex: available" || echo "codex: not found"
+ command -v claude >/dev/null 2>&1 && echo "claude: available" || echo "claude: not found"
+ command -v codex >/dev/null 2>&1 && echo "codex: available" || echo "codex: not found"
  ```

  3. **Prepare the review input** — Extract the spec and plan into a temp file with the review prompt. Use the feature ID from the Active Session (e.g., `F023`):

  ```bash
- TICKET=$(ls docs/tickets/F023-*.md) # Use the feature ID from Step 1
+ TICKET="$(echo docs/tickets/F023-*.md)" # Use the feature ID from Step 1; verify exactly one match
+ REVIEW_DIR="/tmp/review-plan-$(basename "$PWD")"
+ mkdir -p "$REVIEW_DIR"

- cat > /tmp/review-prompt.txt <<'CRITERIA'
+ cat > "$REVIEW_DIR/input.txt" <<'CRITERIA'
  You are reviewing an Implementation Plan for a software feature. Your job is to find real problems, not praise. But if the plan is solid, say APPROVED — do not manufacture issues that are not there.

  Below you will find the Spec (what to build) and the Implementation Plan (how to build it). Review the plan and report:
@@ -41,7 +43,7 @@ End with: VERDICT: APPROVED | VERDICT: REVISE (if any CRITICAL or 2+ IMPORTANT i
  SPEC AND PLAN:
  CRITERIA

- sed -n '/## Spec/,/## Acceptance Criteria/p' "$TICKET" >> /tmp/review-prompt.txt
+ sed -n '/^## Spec$/,/^## Acceptance Criteria$/p' "$TICKET" >> "$REVIEW_DIR/input.txt"
  ```

  4. **Send for review** — Execute **only one** of the following paths based on Step 2 results:
@@ -49,24 +51,28 @@ sed -n '/## Spec/,/## Acceptance Criteria/p' "$TICKET" >> /tmp/review-prompt.txt
  #### Path A: Both CLIs available (best — two independent perspectives)

  ```bash
- cat /tmp/review-prompt.txt | claude --print > /tmp/review-claude.txt 2>&1 &
- cat /tmp/review-prompt.txt | codex exec - > /tmp/review-codex.txt 2>&1 &
- wait
+ cat "$REVIEW_DIR/input.txt" | claude --print > "$REVIEW_DIR/claude.txt" 2>&1 &
+ PID_CLAUDE=$!
+ cat "$REVIEW_DIR/input.txt" | codex exec - > "$REVIEW_DIR/codex.txt" 2>&1 &
+ PID_CODEX=$!

- echo "=== CLAUDE REVIEW ===" && cat /tmp/review-claude.txt
- echo "=== CODEX REVIEW ===" && cat /tmp/review-codex.txt
+ wait $PID_CLAUDE && echo "Claude: OK" || echo "Claude: FAILED (exit $?) — check $REVIEW_DIR/claude.txt"
+ wait $PID_CODEX && echo "Codex: OK" || echo "Codex: FAILED (exit $?) — check $REVIEW_DIR/codex.txt"
+
+ echo "=== CLAUDE REVIEW ===" && cat "$REVIEW_DIR/claude.txt"
+ echo "=== CODEX REVIEW ===" && cat "$REVIEW_DIR/codex.txt"
  ```

- Consolidate findings — issues flagged by both models independently carry higher weight. Deduplicate and prioritize.
+ Consolidate findings — issues flagged by both models independently carry higher weight. Deduplicate and prioritize. Ignore output from any reviewer that failed.

  #### Path B: One CLI available

  ```bash
  # Claude only
- cat /tmp/review-prompt.txt | claude --print
+ cat "$REVIEW_DIR/input.txt" | claude --print

  # Codex only
- cat /tmp/review-prompt.txt | codex exec -
+ cat "$REVIEW_DIR/input.txt" | codex exec -
  ```

  #### Path C: No external CLI available (self-review fallback)
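The Path A rewrite above swaps a bare `wait` for per-PID waits so each reviewer's failure is reported separately. A minimal sketch of that mechanic, using stand-in background jobs instead of real CLIs:

```bash
# Two stand-in "reviewers": one succeeds, one exits non-zero.
(exit 0) & PID_A=$!
(exit 3) & PID_B=$!

# `wait <pid>` returns that specific job's exit status, so each reviewer is
# checked independently; in the failure branch, `$?` still holds wait's status.
wait "$PID_A" && A_MSG="A: OK" || A_MSG="A: FAILED (exit $?)"
wait "$PID_B" && B_MSG="B: OK" || B_MSG="B: FAILED (exit $?)"
echo "$A_MSG"
echo "$B_MSG"
```

A bare `wait`, by contrast, always returns the status of the last job reaped, so one failing reviewer could go unnoticed.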
@@ -0,0 +1,103 @@
+ ## Review Spec — Instructions
+
+ Review the Spec in the current ticket using external models for independent critique before planning.
+
+ ### Prerequisites
+
+ - An active feature with a completed Spec (Step 0)
+ - Ideally, one or more external AI CLIs installed: Codex CLI, Claude Code, or similar
+
+ ### Steps
+
+ 1. **Find the current ticket** — Read `docs/project_notes/product-tracker.md` → Active Session → ticket path
+
+ 2. **Detect available reviewers** — Check which external CLIs are installed:
+
+ ```bash
+ command -v claude >/dev/null 2>&1 && echo "claude: available" || echo "claude: not found"
+ command -v codex >/dev/null 2>&1 && echo "codex: available" || echo "codex: not found"
+ ```
+
+ 3. **Prepare the review input** — Extract the spec, acceptance criteria, and project context into a temp file. Use the feature ID from the Active Session (e.g., `F023`):
+
+ ```bash
+ TICKET="$(echo docs/tickets/F023-*.md)" # Use the feature ID from Step 1; verify exactly one match
+ REVIEW_DIR="/tmp/review-spec-$(basename "$PWD")"
+ mkdir -p "$REVIEW_DIR"
+
+ cat > "$REVIEW_DIR/input.txt" <<'CRITERIA'
+ You are reviewing a Feature Specification for a software feature. Your job is to find real problems in the REQUIREMENTS — not the implementation (there is no implementation yet). If the spec is solid, say APPROVED — do not manufacture issues.
+
+ Below you will find the Spec (what to build), the Acceptance Criteria, and project context (architecture, decisions). Review the spec and report:
+ 1. Completeness — Are all user needs covered? Missing requirements?
+ 2. Ambiguity — Are requirements clear enough to plan and implement with TDD?
+ 3. Edge cases — Are failure modes, boundary conditions, and error responses specified?
+ 4. API contract — Are endpoints, fields, types, status codes well-defined? (if applicable)
+ 5. Scope — Is the spec doing too much or too little for one feature?
+ 6. Consistency — Does the spec conflict with existing architecture, patterns, or decisions?
+ 7. Testability — Can each acceptance criterion be verified with an automated test?
+
+ For each issue, state: [CRITICAL/IMPORTANT/SUGGESTION] — description — proposed fix.
+
+ End with: VERDICT: APPROVED | VERDICT: REVISE (if any CRITICAL or 2+ IMPORTANT issues)
+
+ ---
+ SPEC AND ACCEPTANCE CRITERIA:
+ CRITERIA
+
+ sed -n '/^## Spec$/,/^## Definition of Done$/p' "$TICKET" >> "$REVIEW_DIR/input.txt"
+
+ echo -e "\n---\nPROJECT CONTEXT (architecture and decisions):\n" >> "$REVIEW_DIR/input.txt"
+ cat docs/project_notes/key_facts.md >> "$REVIEW_DIR/input.txt" 2>/dev/null
+ echo -e "\n---\n" >> "$REVIEW_DIR/input.txt"
+ cat docs/project_notes/decisions.md >> "$REVIEW_DIR/input.txt" 2>/dev/null
+ ```
+
+ 4. **Send for review** — Execute **only one** of the following paths based on Step 2 results:
+
+ #### Path A: Both CLIs available (best — two independent perspectives)
+
+ ```bash
+ cat "$REVIEW_DIR/input.txt" | claude --print > "$REVIEW_DIR/claude.txt" 2>&1 &
+ PID_CLAUDE=$!
+ cat "$REVIEW_DIR/input.txt" | codex exec - > "$REVIEW_DIR/codex.txt" 2>&1 &
+ PID_CODEX=$!
+
+ wait $PID_CLAUDE && echo "Claude: OK" || echo "Claude: FAILED (exit $?) — check $REVIEW_DIR/claude.txt"
+ wait $PID_CODEX && echo "Codex: OK" || echo "Codex: FAILED (exit $?) — check $REVIEW_DIR/codex.txt"
+
+ echo "=== CLAUDE REVIEW ===" && cat "$REVIEW_DIR/claude.txt"
+ echo "=== CODEX REVIEW ===" && cat "$REVIEW_DIR/codex.txt"
+ ```
+
+ Consolidate findings — issues flagged by both models independently carry higher weight. Deduplicate and prioritize. Ignore output from any reviewer that failed.
+
+ #### Path B: One CLI available
+
+ ```bash
+ # Claude only
+ cat "$REVIEW_DIR/input.txt" | claude --print
+
+ # Codex only
+ cat "$REVIEW_DIR/input.txt" | codex exec -
+ ```
+
+ #### Path C: No external CLI available (self-review fallback)
+
+ If no external CLI is installed, perform the review yourself. Re-read the full Spec from the ticket, then review it with this mindset:
+
+ > You are an experienced engineer who has NOT seen this spec before. Question every assumption. Look for what is missing, ambiguous, or inconsistent with the project's architecture. Do not be lenient — find problems.
+
+ Apply the same 7 criteria from the prompt above. For each issue, state severity, description, and proposed fix. End with VERDICT.
+
+ 5. **Process the review** — If any VERDICT is REVISE, update the spec addressing CRITICAL and IMPORTANT issues
+ 6. **Optional second round** — Send the revised spec for a final audit if significant changes were made
+ 7. **Log the review** — Add a note in the ticket's Completion Log: "Spec reviewed by [model(s) or self-review] — N issues found, N addressed"
+
+ ### Notes
+
+ - This command is **optional** — the workflow's built-in Spec Self-Review (Step 0.4) always runs automatically
+ - Most valuable for Standard/Complex features where a wrong spec leads to wasted planning and implementation effort
+ - External models receive project context (key_facts + decisions) to check architectural consistency
+ - Both CLIs use their latest default model when no `-m` flag is specified — no need to hardcode model names
+ - Path C (self-review) is a last resort — external review gives genuinely independent perspectives that self-review cannot
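The extraction commands above anchor their sed ranges (`/^## Spec$/` rather than `/## Spec/`). A small sketch of what the anchored range actually prints, against a fabricated mini-ticket:

```bash
# Fabricated mini-ticket to exercise the extraction.
demo="$(mktemp)"
cat > "$demo" <<'EOF'
## Spec
Build the export endpoint.
## Acceptance Criteria
- Returns CSV.
## Definition of Done
Tests pass.
EOF

# The range prints from the start-anchor line through the end-anchor line,
# inclusive; the ^...$ anchors keep a heading that merely contains the words
# mid-line from opening or closing the range by accident.
SECTION="$(sed -n '/^## Spec$/,/^## Definition of Done$/p' "$demo")"
echo "$SECTION"
```

Everything up to and including the `## Definition of Done` heading is emitted; the body after the end anchor is not.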
@@ -0,0 +1,2 @@
+ description = "Review the current Spec using an external AI model for independent critique before planning (optional, for Standard/Complex features)"
+ prompt = "Read the file .gemini/commands/review-spec-instructions.md and follow the instructions to review the current Spec with an external model."
@@ -70,6 +70,8 @@ Ask user to classify complexity before starting. See `references/complexity-guid
  1. Follow the instructions in `.gemini/agents/spec-creator.md`
  2. Update global spec files (`api-spec.yaml`, `ui-components.md`) and Zod schemas in `shared/src/schemas/` if applicable
  3. Write spec summary into the ticket's `## Spec` section
+ 4. **Spec Self-Review:** Re-read the spec critically. Are requirements complete? Edge cases covered? API contract well-defined? Acceptance criteria testable? Does the spec conflict with existing architecture (`key_facts.md`, `decisions.md`)? Update the spec with any fixes found before proceeding.
+ 5. **Optional:** Run `/review-spec` for external model review (recommended for Standard/Complex)

  **→ CHECKPOINT: Spec Approval** — Update tracker (Active Session + Features table): step `0/6 (Spec)`