wiggum-cli 0.17.2 → 0.17.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (51)
  1. package/README.md +8 -2
  2. package/dist/agent/orchestrator.d.ts +1 -1
  3. package/dist/agent/orchestrator.js +19 -4
  4. package/dist/agent/tools/backlog.js +8 -4
  5. package/dist/agent/tools/execution.js +1 -1
  6. package/dist/agent/tools/introspection.js +26 -4
  7. package/dist/commands/config.js +96 -2
  8. package/dist/commands/run.d.ts +2 -0
  9. package/dist/commands/run.js +47 -2
  10. package/dist/generator/config.js +13 -2
  11. package/dist/index.js +7 -1
  12. package/dist/repl/command-parser.d.ts +1 -1
  13. package/dist/repl/command-parser.js +1 -1
  14. package/dist/templates/config/ralph.config.cjs.tmpl +9 -2
  15. package/dist/templates/prompts/PROMPT_e2e.md.tmpl +16 -89
  16. package/dist/templates/prompts/PROMPT_e2e_fix.md.tmpl +55 -0
  17. package/dist/templates/prompts/PROMPT_feature.md.tmpl +12 -98
  18. package/dist/templates/prompts/PROMPT_review_auto.md.tmpl +52 -49
  19. package/dist/templates/prompts/PROMPT_review_manual.md.tmpl +30 -2
  20. package/dist/templates/prompts/PROMPT_review_merge.md.tmpl +59 -69
  21. package/dist/templates/prompts/PROMPT_verify.md.tmpl +7 -0
  22. package/dist/templates/root/README.md.tmpl +2 -3
  23. package/dist/templates/scripts/feature-loop.sh.tmpl +777 -90
  24. package/dist/templates/scripts/loop.sh.tmpl +5 -1
  25. package/dist/templates/scripts/ralph-monitor.sh.tmpl +0 -2
  26. package/dist/tui/app.d.ts +5 -1
  27. package/dist/tui/app.js +12 -2
  28. package/dist/tui/hooks/useAgentOrchestrator.js +16 -7
  29. package/dist/tui/hooks/useInit.d.ts +5 -1
  30. package/dist/tui/hooks/useInit.js +20 -2
  31. package/dist/tui/screens/InitScreen.js +12 -1
  32. package/dist/tui/screens/MainShell.js +70 -6
  33. package/dist/tui/screens/RunScreen.d.ts +6 -2
  34. package/dist/tui/screens/RunScreen.js +48 -6
  35. package/dist/tui/utils/loop-status.d.ts +15 -0
  36. package/dist/tui/utils/loop-status.js +89 -27
  37. package/dist/utils/config.d.ts +7 -0
  38. package/dist/utils/config.js +14 -0
  39. package/package.json +1 -1
  40. package/src/templates/config/ralph.config.cjs.tmpl +9 -2
  41. package/src/templates/prompts/PROMPT_e2e.md.tmpl +16 -89
  42. package/src/templates/prompts/PROMPT_e2e_fix.md.tmpl +55 -0
  43. package/src/templates/prompts/PROMPT_feature.md.tmpl +12 -98
  44. package/src/templates/prompts/PROMPT_review_auto.md.tmpl +52 -49
  45. package/src/templates/prompts/PROMPT_review_manual.md.tmpl +30 -2
  46. package/src/templates/prompts/PROMPT_review_merge.md.tmpl +59 -69
  47. package/src/templates/prompts/PROMPT_verify.md.tmpl +7 -0
  48. package/src/templates/root/README.md.tmpl +2 -3
  49. package/src/templates/scripts/feature-loop.sh.tmpl +777 -90
  50. package/src/templates/scripts/loop.sh.tmpl +5 -1
  51. package/src/templates/scripts/ralph-monitor.sh.tmpl +0 -2
@@ -28,8 +28,14 @@ Check the bridge is running:
  ```bash
  curl -s http://localhost:3999/health || (cd {{projectRoot}} && npm run e2e:bridge &)
  sleep 3
+ curl -s http://localhost:3999/health
  ```

+ If bridge health still fails (for example `listen EPERM` / socket permission errors), do not keep retrying commands in this run:
+ - Mark each unchecked `- [ ] E2E:` scenario as `- [ ] E2E: ... - FAILED: bridge unavailable in sandbox`
+ - Add the concrete error text under each failed scenario
+ - Continue to Step 4 and record a clear blocked summary
+
  ### Step 2: Parse E2E Test Scenarios
  Read E2E test scenarios from @.ralph/specs/$FEATURE-implementation-plan.md.
  Each scenario is marked with `- [ ] E2E:` prefix and follows this format:
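The bounded "check health, then stop retrying" policy added above can be sketched in shell. This is a minimal illustration, not code from the package: `bridge_up` and `mark_blocked` are hypothetical helper names, and the blocked-scenario marker copies the format the prompt specifies.

```shell
# Hypothetical helper: annotate an unchecked E2E scenario line as blocked,
# using the exact marker format the prompt asks for.
mark_blocked() {
  # $1: a "- [ ] E2E: ..." line from the implementation plan
  printf '%s - FAILED: bridge unavailable in sandbox\n' "$1"
}

# Bounded health check: try a few times, then give up instead of looping.
bridge_up() {
  # $1: health URL; succeeds only if reachable within 3 attempts
  for _ in 1 2 3; do
    curl -sf "$1" >/dev/null 2>&1 && return 0
    sleep 1
  done
  return 1
}
```

If `bridge_up` fails, each pending scenario would be rewritten via `mark_blocked` and the run proceeds to Step 4 rather than retrying.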
@@ -88,32 +94,27 @@ CONTENT=$(agent-browser eval "document.getElementById('terminal-mirror').textCon
  # Verify CONTENT contains expected strings
  ```

- 6. **Reset between scenarios:**
+ 6. **Reset between scenarios (without ending the browser session):**
  ```bash
- agent-browser close
+ # Re-open the next scenario URL directly in the same session.
+ agent-browser open "http://localhost:3999?cmd=<next-command>&cwd=<next-path>"
  ```

  ### TUI Interaction Cheatsheet

  | Action | Command |
  |--------|---------|
- | Open TUI | `agent-browser open "http://localhost:3999?cmd=init&cwd=/path"` |
+ | Open TUI | `agent-browser open "http://localhost:3999?cmd=<cmd>&cwd=<path>"` |
  | Read screen | `agent-browser eval "document.getElementById('terminal-mirror').textContent"` |
- | Take snapshot | `agent-browser snapshot -i` |
- | Click element | `agent-browser click @ref` |
- | Type text | `agent-browser type @ref "text"` |
- | Press Enter | `agent-browser key Enter` |
- | Arrow down | `agent-browser key ArrowDown` |
- | Escape | `agent-browser key Escape` |
- | Screenshot | `agent-browser screenshot e2e-failure.png` |
- | Wait for text | `agent-browser wait --text "expected" --timeout 10000` |
+ | Type/key press | `agent-browser type @ref "text"` / `agent-browser key Enter` |
  | Close session | `agent-browser close` |

  ### Key Rules
  - Always wait for expected text before asserting (TUI renders async via React)
  - Use `agent-browser eval` with `terminal-mirror` for reliable text reading
  - Take screenshots on failures for debugging
- - Each scenario navigates to a fresh URL (clean state)
+ - Keep one browser session across all scenarios; do not call `agent-browser close` until all scenarios are done
+ - Each scenario should navigate to a fresh URL (clean state)
  - Wait 500ms after key presses before reading (Ink re-render delay)

  ### Step 4: Report Results
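The "wait for expected text, allow for Ink re-render delay" rules above can be expressed as a small polling wrapper. A sketch only — `wait_for_text` is a hypothetical name, and the read command would normally be the `agent-browser eval` call from the cheatsheet:

```shell
# Hypothetical wrapper: re-run a screen-read command until the expected
# text appears, pausing between reads for the Ink/React re-render delay.
wait_for_text() {
  # $1: command printing current screen; $2: expected text; $3: max attempts
  i=0
  while [ "$i" -lt "${3:-10}" ]; do
    if eval "$1" | grep -q "$2"; then
      return 0   # expected text found
    fi
    sleep 0.5    # Ink re-render delay before the next read
    i=$((i + 1))
  done
  return 1       # text never appeared within the attempt budget
}
```

A scenario step would then assert via `wait_for_text 'agent-browser eval "..."' "initialized"` instead of reading the screen once.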
@@ -158,14 +159,10 @@ When all scenarios are executed:
  ```
  5. If all passed: signal ready for PR phase
  6. If any failed: failures documented, loop will retry after fix iteration
+ 7. Close browser once at the end: `agent-browser close`

  ## Learning Capture
- If E2E testing revealed issues worth remembering, append to @.ralph/LEARNINGS.md:
- - Flaky test patterns -> Add under "## Anti-Patterns" > "E2E Pitfalls"
- - TUI timing issues -> Add under "## Anti-Patterns"
- - Useful agent-browser techniques -> Add under "## Tool Usage"
-
- Format: `- [YYYY-MM-DD] [$FEATURE] Brief description`
+ If useful E2E patterns found, append to @.ralph/LEARNINGS.md
  {{else}}
  ## Task
  Execute automated E2E tests for the completed feature using Playwright MCP tools.
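The `- [YYYY-MM-DD] [$FEATURE] Brief description` learnings format removed from this template (it survives in other prompts in the package) is easy to script. A sketch with a hypothetical `append_learning` helper:

```shell
# Hypothetical helper producing the learnings line format
# `- [YYYY-MM-DD] [$FEATURE] Brief description`.
append_learning() {
  # $1: feature name; $2: brief description; $3: learnings file path
  printf -- '- [%s] [%s] %s\n' "$(date +%Y-%m-%d)" "$1" "$2" >> "$3"
}
```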
@@ -282,56 +279,6 @@ Update @.ralph/specs/$FEATURE-implementation-plan.md for each scenario:
  - Fix needed: [suggested action]
  ```

- ## Playwright MCP Tool Reference
-
- | Tool | Purpose | Key Parameters |
- |------|---------|----------------|
- | `browser_navigate` | Go to URL | `url` |
- | `browser_snapshot` | Get page state (use for assertions) | - |
- | `browser_click` | Click element | `element`, `ref` |
- | `browser_type` | Type into element | `element`, `ref`, `text`, `submit` |
- | `browser_fill_form` | Fill multiple fields | `fields[]` with name, type, ref, value |
- | `browser_select_option` | Select dropdown | `element`, `ref`, `values[]` |
- | `browser_wait_for` | Wait for text/time | `text`, `textGone`, `time` |
- | `browser_take_screenshot` | Capture visual state | `filename` (optional) |
- | `browser_console_messages` | Get JS console output | `level` (error/warning/info) |
- | `browser_press_key` | Press keyboard key | `key` (e.g., "Enter", "Tab") |
- | `browser_close` | Close browser/reset state | - |
-
- ## Assertion Patterns
-
- ### Text Verification
- ```
- 1. browser_snapshot -> get page content
- 2. Check snapshot output for expected text strings
- 3. If text not found, scenario fails
- ```
-
- ### Element State
- ```
- 1. browser_snapshot -> accessibility tree shows element states
- 2. Check for: enabled/disabled, checked/unchecked, visible
- ```
-
- ### URL Verification
- ```
- 1. After navigation/action, snapshot shows current URL
- 2. Verify URL contains expected path/params
- ```
-
- {{#if hasSupabase}}### Database State
- ```
- 1. mcp__plugin_supabase_supabase__execute_sql with SELECT query
- 2. Verify row count, column values match expectations
- ```
- {{/if}}
-
- ## Browser State Management
-
- - Use `browser_close` between unrelated scenarios to reset localStorage/cookies
- - Keep browser open for scenarios that test state persistence (e.g., duplicate submission)
- - Fresh browser state = clean localStorage, no prior submissions tracked
-
  ## Error Recovery

  If a scenario fails:
@@ -358,21 +305,6 @@ When all scenarios are executed:
  5. If all passed: signal ready for PR phase
  6. If any failed: failures documented, loop will retry after fix iteration

- ## Troubleshooting
-
- ### UI Changes Not Visible
- If code changes don't appear in the browser:
- 1. Stop the dev server
- 2. Clear cache: `rm -rf {{appDir}}/.next`
- 3. Restart: `cd {{appDir}} && {{devCommand}}`
- 4. Wait for full rebuild before testing
-
- ### Stale Data
- - Clear browser storage: Use `browser_close` between scenarios
- {{#if hasSupabase}}- Check Supabase for stale test data from previous runs
- - Delete test data: `DELETE FROM table WHERE data->>'_test' = 'true'`
- {{/if}}
-
  ## Rules
  - Always get a fresh `browser_snapshot` after actions before making assertions
  - Use `browser_wait_for` when waiting for async operations (form submission, API calls)
@@ -382,10 +314,5 @@ If code changes don't appear in the browser:
  - Document failures clearly so fix iteration knows what to address

  ## Learning Capture
- If E2E testing revealed issues worth remembering, append to @.ralph/LEARNINGS.md:
- - Flaky test patterns -> Add under "## Anti-Patterns" > "E2E Pitfalls"
- - Useful Playwright techniques -> Add under "## Tool Usage"
- - Timing issues or race conditions -> Add under "## Anti-Patterns"
-
- Format: `- [YYYY-MM-DD] [$FEATURE] Brief description`
+ If useful E2E patterns found, append to @.ralph/LEARNINGS.md
  {{/if}}
@@ -0,0 +1,55 @@
+ ## Context
+ If @.ralph/guides/AGENTS.md exists, study it for commands and patterns.
+ Study @.ralph/specs/$FEATURE-implementation-plan.md for E2E failure details.
+ Study @.ralph/specs/$FEATURE.md for behavioral constraints and acceptance criteria.
+ {{#if frameworkVariant}}For detailed architecture, see @{{appDir}}/.claude/CLAUDE.md{{/if}}
+
+ ## Learnings
+ Read @.ralph/LEARNINGS.md — pay attention to the "E2E Pitfalls" section to avoid repeating known issues.
+
+ ## Task
+ Fix all E2E test scenarios marked `- [ ] E2E: ... - FAILED` in @.ralph/specs/$FEATURE-implementation-plan.md.
+
+ 1. Read the implementation plan and identify every line matching `- [ ] E2E: ... - FAILED`.
+ 2. For each failure, read the `Error:` and `Fix needed:` fields to understand what went wrong.
+ 3. Cross-reference @.ralph/specs/$FEATURE.md to ensure fixes respect behavioral constraints.
+ 4. Apply the minimal code change needed to make the failing scenario pass.
+ 5. Do NOT touch passing scenarios or non-E2E implementation code unless required by the fix.
+
+ {{#if isTui}}
+ ### TUI E2E Fix Notes
+ - Fixes typically involve Ink component state, key handling, or terminal output formatting.
+ - Verify fix with the agent-browser bridge: `curl -s http://localhost:3999/health` to confirm it is running.
+ - After fixing, re-run the failed scenario manually via agent-browser to confirm it passes before updating the plan.
+ - If bridge/agent-browser cannot start due to sandbox socket restrictions (e.g. `listen EPERM`, daemon socket startup failure), treat scenarios as blocked infrastructure:
+   - Keep each affected scenario as `- [ ] E2E: ... - FAILED: bridge unavailable in sandbox`
+   - Record exact error output
+   - Do not loop on repeated identical retries in this run
+ {{else}}
+ ### Web E2E Fix Notes
+ - Fixes typically involve DOM structure, async timing, or data state issues.
+ - Verify fix with the dev server running at http://localhost:3000.
+ - After fixing, re-run the failed scenario via Playwright MCP to confirm it passes before updating the plan.
+ {{/if}}
+
+ ## Validation
+ After applying fixes, run ONLY the E2E validation command:
+ ```bash
+ cd {{appDir}} && {{testCommand}}
+ ```
+
+ Unit tests and build are already passing — do not re-run lint, typecheck, or build unless the fix touches non-E2E source files.
+
+ ## Completion
+ When fixes are applied and validation passes:
+ 1. Update @.ralph/specs/$FEATURE-implementation-plan.md — for each fixed scenario, change `- [ ] E2E: ... - FAILED` to `- [x] E2E: ... - PASSED`.
+ 2. `git -C {{appDir}} add -A`
+ 3. `git -C {{appDir}} commit -m "fix($FEATURE): resolve failing E2E scenarios"`
+ 4. `git -C {{appDir}} push origin feat/$FEATURE`
+
+ ## Learning Capture
+ If this fix iteration revealed E2E-specific patterns worth remembering, append to @.ralph/LEARNINGS.md:
+ - Flaky test patterns or timing issues -> Add under "## Anti-Patterns (What to Avoid)" > "E2E Pitfalls"
+ - Useful debugging techniques -> Add under "## Tool Usage"
+
+ Format: `- [YYYY-MM-DD] [$FEATURE] Brief description`
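The new fix prompt's loop — find `- [ ] E2E: ... - FAILED` lines, fix, then flip them to `- [x] ... - PASSED` — can be sketched in shell. The helper names below are hypothetical, and the sed expression assumes the exact marker format shown in the template:

```shell
# Hypothetical extraction of failed scenarios from an implementation plan,
# matching the `- [ ] E2E: ... - FAILED` lines this prompt targets.
failed_scenarios() {
  grep -n '^- \[ \] E2E: .* - FAILED' "$1" || true
}

# Flip every fixed scenario from FAILED to PASSED in place.
mark_passed() {
  # $1: plan file path
  sed -i.bak 's/^- \[ \] E2E: \(.*\) - FAILED.*$/- [x] E2E: \1 - PASSED/' "$1"
}
```

`failed_scenarios` gives the fix iteration its worklist; `mark_passed` performs the Completion step's checkbox update.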
@@ -14,106 +14,20 @@ Study @.ralph/specs/$FEATURE.md for feature specification.
  3. Create @.ralph/specs/$FEATURE-implementation-plan.md with tasks

  ## Implementation Plan Format
- ```markdown
- # $FEATURE Implementation Plan
-
- **Spec:** .ralph/specs/$FEATURE.md
- **Branch:** feat/$FEATURE
- **Status:** Planning | In Progress | PR Review | Completed
-
- ## Tasks
-
- ### Phase 1: Setup
- - [ ] Task 1 - [complexity: S/M/L]
- - [ ] Task 2
-
- ### Phase 2: Core Implementation
- - [ ] Task 3
- - [ ] Task 4
-
- ### Phase 3: Tests (Unit/Integration)
- - [ ] Write tests for [component]
- - [ ] Write tests for [feature]
-
- ### Phase 4: Polish & Design
- - [ ] Run design checklist from @.ralph/guides/FRONTEND.md (if UI changes)
- - [ ] Verify responsive design (mobile/tablet/desktop)
- - [ ] Add loading/empty/error states
- - [ ] Verify hover/focus states on interactive elements
- {{#if styling}}- [ ] For charts: use ChartContainer pattern with tooltips + legends{{/if}}
- - [ ] Task N (additional polish)
-
- ### Phase 5: E2E Testing
- {{#if isTui}}
- TUI E2E tests executed via xterm.js bridge + agent-browser.
- Fixture projects in `e2e/fixtures/`. Bridge at `http://localhost:3999`.
-
- - [ ] E2E: [Scenario name] - [brief description]
-   - **Command:** [wiggum command, e.g., init, new auth-flow]
-   - **CWD:** [working directory, e.g., e2e/fixtures/bare-project]
-   - **Steps:**
-     1. [Action] -> [expected terminal output]
-     2. [Action] -> [expected terminal output]
-   - **Verify:** [text that should appear in terminal]
-
- Example TUI E2E scenario:
- - [ ] E2E: Init in bare project - happy path
-   - **Command:** init
-   - **CWD:** e2e/fixtures/bare-project
-   - **Steps:**
-     1. Open bridge with init command -> Welcome screen renders
-     2. Arrow down to select option -> Option highlighted
-     3. Press Enter -> Next screen appears
-   - **Verify:** "initialized" text visible in terminal
- {{else}}
- Browser-based tests executed via Playwright MCP tools.
-
- - [ ] E2E: [Scenario name] - [brief description]
-   - **URL:** [starting URL, use {placeholders} for dynamic IDs]
-   - **Preconditions:** [setup requirements, e.g., "Survey must be published"]
-   - **Steps:**
-     1. [Action] -> [expected result]
-     2. [Action] -> [expected result]
-   - **Verify:** [final assertion text to check]
-   - **Database check:** [optional SQL query to verify data]
-
- Example E2E scenario:
- - [ ] E2E: Submit survey response - happy path
-   - **URL:** /survey/{surveyId}?utm_source=e2e_test
-   - **Preconditions:** Published survey with required questions
-   - **Steps:**
-     1. Navigate to survey URL -> Form displays with all questions
-     2. Fill required text field -> No validation error
-     3. Select rating 4 -> Button shows selected state
-     4. Click "Submit Survey" -> Loading state appears
-     5. Wait for "Thank You!" -> Success card displays
-   - **Verify:** "successfully submitted" text visible
-   - **Database check:** SELECT * FROM survey_responses WHERE survey_id = '{surveyId}'
- {{/if}}
-
- ## Done
- - [x] Completed task - [commit hash]
- - [x] E2E: Scenario name - PASSED
- ```
+ Create `@.ralph/specs/$FEATURE-implementation-plan.md` with:
+ - **Header:** feature name, spec path (`$FEATURE.md`), branch (`feat/$FEATURE`), status
+ - **Phases:** Setup | Core Implementation | Tests (Unit/Integration) | Polish & Design | E2E Testing
+ - **Tasks:** `- [ ] Task description [S/M/L]` — every task MUST use `- [ ]` checkbox syntax
+ - **E2E tasks:** `- [ ] E2E: Scenario name` with required fields:
+ {{#if isTui}}  - Command, CWD, Steps (action → expected terminal output), Verify text{{else}}  - URL, Preconditions, Steps (action → expected result), Verify text, optional Database check{{/if}}
+ - **Done section:** `- [x] Completed task - [commit hash]`

  ## CRITICAL CONSTRAINT — PLANNING ONLY
- **You are in the PLANNING phase. Your ONLY job is to produce an implementation plan.**
- - Do NOT write any source code, test code, or configuration files
- - Do NOT create, modify, or touch any files outside `.ralph/specs/`
- - Do NOT run build, test, or lint commands
- - Do NOT make git commits
- - If you feel the urge to "just implement a small piece", STOP — that is a phase violation
- - The implementation phase runs AFTER this session ends, in a separate session
- - Violation of this constraint wastes tokens and breaks the harness automation
+ Planning only. Do NOT write code, tests, configs, or run builds/commits.
+ The implementation phase runs in a separate session after planning ends.
+ Violation wastes tokens and breaks harness automation.

  ## Rules
- - You MUST use `- [ ]` checkbox syntax for every task in the plan
- - Do NOT use heading-based task formats (e.g., `#### Task 1:`) for individual tasks
- - The harness parses `- [ ]` lines to track progress — other formats will break automation
- - Use `### Phase N:` headings only for phase grouping, not for individual tasks
- - One task = one commit-sized unit of work (but tasks can be grouped into phases for batch implementation)
+ - MUST use `- [ ]` checkbox syntax for every task; the harness parses this for progress tracking
+ - Use `### Phase N:` headings for phases; one task = one commit-sized unit of work
  - Every implementation task needs a corresponding test task
- - Use Supabase MCP to check existing schema
- - Use PostHog MCP to check existing analytics setup
- - For UI-heavy features (new pages, dashboards, analytics, marketing pages), reference @.ralph/guides/FRONTEND.md
- - Consider `/frontend-design` skill for features needing distinctive aesthetics
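The rule that the harness parses `- [ ]` lines for progress tracking suggests a counter like the sketch below. This is an illustration of the stated parsing contract, not the package's actual implementation; `plan_progress` is a hypothetical name:

```shell
# Hypothetical progress counter over checkbox lines, assuming the harness
# treats `- [ ]` as open and `- [x]` as done.
plan_progress() {
  # $1: implementation plan path; prints "done/total"
  n_done=$(grep -c '^- \[x\]' "$1" || true)
  n_total=$(grep -c '^- \[[ x]\]' "$1" || true)
  printf '%s/%s\n' "$n_done" "$n_total"
}
```

This also shows why heading-based task formats would break automation: `#### Task 1:` matches neither grep pattern and silently drops out of the count.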
@@ -11,6 +11,34 @@ Capture any review feedback patterns for future iterations.
  All implementation and E2E tasks are complete. Create PR and request review.
  Complete ALL steps in a single pass — do not end the session between steps.

+ ### Step 0: Verify Spec Requirements
+ Before creating the PR, verify the implementation meets the spec requirements and update spec files.
+
+ 1. Read @.ralph/specs/$FEATURE.md and for each requirement under "## Requirements":
+    - Check if it was implemented (review implementation plan tasks)
+    - Mark as `[x]` if complete, leave as `[ ]` if not implemented (add a note explaining why)
+
+ 2. For each item under "## Acceptance Criteria":
+    - Verify against implementation and test results
+    - Mark as `[x]` if verified, leave as `[ ]` with a note if not met
+
+ 3. Update spec status:
+    - Change `**Status:** Planned` (or `Draft`) to `**Status:** Completed` if all requirements met, or `**Status:** Partial` if some are missing
+    - Update `**Last Updated:**` to today's date
+
+ 4. Update @.ralph/specs/README.md Active Specs table:
+    - Update ONLY the row for $FEATURE (do NOT remove or modify other rows)
+    - Change status and update Last Updated date
+
+ 5. If any requirements were not fully met, add an "## Implementation Notes" section documenting gaps and reasons.
+
+ 6. Commit spec updates:
+ ```bash
+ git -C {{appDir}} add ../.ralph/specs/
+ git -C {{appDir}} commit -m "docs($FEATURE): verify spec requirements"
+ git -C {{appDir}} push origin feat/$FEATURE
+ ```
+
  ### Step 1: Verify Ready State
  1. Check all tasks are complete in implementation plan (no `- [ ]` items)
  2. Verify tests pass: `cd {{appDir}} && {{testCommand}}`
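Step 0's status and date updates are simple in-place edits. A sketch under the assumption that spec headers use the exact `**Status:**` / `**Last Updated:**` lines quoted above; `set_spec_status` is a hypothetical helper:

```shell
# Hypothetical Step 0 helper: bump the spec's status header and refresh
# the Last Updated date, matching the formats the prompt quotes.
set_spec_status() {
  # $1: spec file path; $2: new status (Completed or Partial)
  sed -i.bak \
    -e "s/^\*\*Status:\*\* Planned/**Status:** $2/" \
    -e "s/^\*\*Status:\*\* Draft/**Status:** $2/" \
    -e "s/^\*\*Last Updated:\*\* .*/**Last Updated:** $(date +%Y-%m-%d)/" \
    "$1"
}
```

Two separate `-e` expressions cover `Planned` and `Draft` without GNU-only alternation, so the sketch stays portable to BSD sed.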
@@ -62,63 +90,37 @@ cd {{appDir}} && gh pr create --base main --head feat/$FEATURE \
  
  Closes #[Read the source issue number from the spec file metadata or context section]
  
- Generated with Claude Code
+ Generated with Wiggum Loop
  EOF
  )"
  ```

- ### Step 4: Request Claude Code Review
-
- Run automated code review using Claude Code CLI:
-
- ```bash
- # Check if Claude Code CLI is installed
- if ! command -v claude &> /dev/null; then
-   echo "WARNING: Claude Code CLI not installed. Manual review needed."
-   cd {{appDir}} && gh pr comment --body "Manual review requested - Claude Code CLI not available. Install: https://docs.anthropic.com/en/docs/claude-code/overview"
- else
-   echo "Running Claude Code review..."
-
-   cd {{appDir}} && claude -p "You are reviewing a PR for the $FEATURE feature.
-
- Review the git diff against main and check:
- - Code quality and patterns consistency
- - Test coverage adequacy
- - Potential bugs or edge cases
- - Security concerns (injection, XSS, etc.)
- - Performance implications
- - Error handling completeness
-
- Run: git diff main
-
- Then:
- 1. Post your complete review as a comment on the PR using:
-    gh pr comment --body '<your review in markdown>'
-    Format the comment with: a summary, specific issues with file:line refs (if any), and the verdict.
- 2. Print your final verdict as the LAST line of stdout. Print exactly one of:
-    VERDICT: APPROVED
-    VERDICT: NOT APPROVED
-    This line is parsed by the automation — do not omit it."
- fi
- ```
-
- After the review completes, check its output:
- - If it contains "VERDICT: APPROVED", echo that line so the automation detects it:
- ```bash
- echo "VERDICT: APPROVED"
- ```
- - If issues were found, echo:
- ```bash
- echo "VERDICT: NOT APPROVED"
- ```
+ ### Step 4: Run Automated Review
+
+ Use the current review CLI session (`$REVIEW_CLI`) to perform a full review:
+
+ 1. Review `git diff main` for:
+    - Code quality and consistency
+    - Test coverage and missing tests
+    - Potential bugs and edge cases
+    - Security risks (injection, auth, data exposure)
+    - Performance and error handling
+ 2. Post review feedback to the PR with `gh pr comment --body ...`
+ 3. Print your final verdict as the **LAST** line of stdout, exactly one of:
+    - `VERDICT: APPROVED`
+    - `VERDICT: NOT APPROVED`
+ 4. Before printing `VERDICT: APPROVED`, wait for CI to complete and pass:
+    - Run `cd {{appDir}} && gh pr checks feat/$FEATURE --watch --interval 10`
+    - If checks are pending, keep waiting
+    - If any check fails, output `VERDICT: NOT APPROVED` and list failing checks

  **Handle review feedback:**
- - If Claude outputs "VERDICT: APPROVED" -> Done. The PR is ready for manual merge by the user.
- - If Claude lists issues:
+ - If verdict is `VERDICT: APPROVED` -> Done. The PR is ready for manual merge by the user.
+ - If verdict is `VERDICT: NOT APPROVED`:
    1. Address each issue with code fixes
    2. Commit: `git -C {{appDir}} add -A && git -C {{appDir}} commit -m "fix($FEATURE): address review feedback"`
    3. Push: `git -C {{appDir}} push origin feat/$FEATURE`
-   4. Re-run the Claude review command above
+   4. Re-run this review step
  - Max 3 review iterations before requiring manual intervention

  ### Step 5: Final Summary
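The "verdict must be the LAST line of stdout" contract implies the harness reads only the final line of the review output. A sketch of that parsing, assuming nothing about the real harness beyond what the prompt states; `last_line_verdict` is a hypothetical name:

```shell
# Hypothetical harness-side parser: only the last line of the captured
# review output counts as the verdict, per the prompt's contract.
last_line_verdict() {
  # $1: captured review output; prints APPROVED / NOT APPROVED / UNKNOWN
  case "$(printf '%s\n' "$1" | tail -n 1)" in
    "VERDICT: APPROVED")     echo "APPROVED" ;;
    "VERDICT: NOT APPROVED") echo "NOT APPROVED" ;;
    *)                       echo "UNKNOWN" ;;
  esac
}
```

This is why the prompt forbids omitting the verdict line: any trailing output after it (a stray log line, a CI summary) would make the run parse as UNKNOWN.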
@@ -133,6 +135,7 @@ no implementation found" so the harness can trigger a new implementation iterati
  
  ## Rules
  - **NEVER approve if any tests are failing.** Output "VERDICT: NOT APPROVED — test failures" if any tests fail.
+ - **NEVER approve while CI checks are pending/failing.** Wait until checks complete and pass.
  - Do NOT merge the PR — auto mode only reviews, the user merges
  - Do NOT implement missing features — only review and fix minor issues
  - Address ALL review comments before marking as approved
@@ -143,7 +146,7 @@ no implementation found" so the harness can trigger a new implementation iterati
  - **gh: command not found** -> Install GitHub CLI: `brew install gh`
  - **gh auth error** -> Run: `gh auth login`
  - **PR already exists** -> Use: `gh pr view` to see status
- - **Claude Code CLI not installed** -> Install: `npm install -g @anthropic-ai/claude-code`
+ - **Review CLI missing** -> Install the configured `$REVIEW_CLI` binary and retry

  ## Learning Capture
  If the review revealed patterns worth remembering, append to @.ralph/LEARNINGS.md:
@@ -11,6 +11,34 @@ Capture any review feedback patterns for future iterations.
  All implementation and E2E tasks are complete. Create PR for manual review.
  Complete ALL steps in a single pass — do not end the session between steps.

+ ### Step 0: Verify Spec Requirements
+ Before creating the PR, verify the implementation meets the spec requirements and update spec files.
+
+ 1. Read @.ralph/specs/$FEATURE.md and for each requirement under "## Requirements":
+    - Check if it was implemented (review implementation plan tasks)
+    - Mark as `[x]` if complete, leave as `[ ]` if not implemented (add a note explaining why)
+
+ 2. For each item under "## Acceptance Criteria":
+    - Verify against implementation and test results
+    - Mark as `[x]` if verified, leave as `[ ]` with a note if not met
+
+ 3. Update spec status:
+    - Change `**Status:** Planned` (or `Draft`) to `**Status:** Completed` if all requirements met, or `**Status:** Partial` if some are missing
+    - Update `**Last Updated:**` to today's date
+
+ 4. Update @.ralph/specs/README.md Active Specs table:
+    - Update ONLY the row for $FEATURE (do NOT remove or modify other rows)
+    - Change status and update Last Updated date
+
+ 5. If any requirements were not fully met, add an "## Implementation Notes" section documenting gaps and reasons.
+
+ 6. Commit spec updates:
+ ```bash
+ git -C {{appDir}} add ../.ralph/specs/
+ git -C {{appDir}} commit -m "docs($FEATURE): verify spec requirements"
+ git -C {{appDir}} push origin feat/$FEATURE
+ ```
+
  ### Step 1: Verify Ready State
  1. Check all tasks are complete in implementation plan (no `- [ ]` items)
  2. Verify tests pass: `cd {{appDir}} && {{testCommand}}`
@@ -62,7 +90,7 @@ cd {{appDir}} && gh pr create --base main --head feat/$FEATURE \
  
  Closes #[Read the source issue number from the spec file metadata or context section]
  
- Generated with Claude Code
+ Generated with Wiggum Loop
  EOF
  )"
  ```
@@ -78,7 +106,7 @@ The feature branch has been prepared and pushed. A pull request has been created
  2. Address any review comments
  3. Merge when approved

- Note: Spec status updates are handled in the Spec Verification phase before PR creation.
+ Note: Spec status updates are handled in Step 0 above, before PR creation.

  ## Rules
  - Manual review mode stops after PR creation