ralphctl 0.2.1 → 0.2.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (28)
  1. package/README.md +104 -86
  2. package/dist/{add-SEDQ3VK7.mjs → add-DWNLZQ7Q.mjs} +4 -4
  3. package/dist/{add-TGJTRHIF.mjs → add-K7LNOYQ4.mjs} +3 -3
  4. package/dist/{chunk-LG6B7QVO.mjs → chunk-7TBO6GOT.mjs} +1 -1
  5. package/dist/{chunk-ZDEVRTGY.mjs → chunk-GLDPHKEW.mjs} +9 -0
  6. package/dist/{chunk-KPTPKLXY.mjs → chunk-ITRZMBLJ.mjs} +1 -1
  7. package/dist/{chunk-Q3VWJARJ.mjs → chunk-LAERLCL5.mjs} +2 -2
  8. package/dist/{chunk-AXNZMHFQ.mjs → chunk-ORVGM6EV.mjs} +80 -18
  9. package/dist/{chunk-XPDI4SYI.mjs → chunk-QYF7QIZJ.mjs} +3 -3
  10. package/dist/{chunk-XQHEKKDN.mjs → chunk-V4ZUDZCG.mjs} +1 -1
  11. package/dist/cli.mjs +105 -16
  12. package/dist/{create-DJHCP7LN.mjs → create-5MILNF7E.mjs} +3 -3
  13. package/dist/{handle-CCTBNAJZ.mjs → handle-2BACSJLR.mjs} +1 -1
  14. package/dist/{project-ZYGNPVGL.mjs → project-XC7AXA4B.mjs} +2 -2
  15. package/dist/prompts/ideate-auto.md +15 -5
  16. package/dist/prompts/ideate.md +28 -12
  17. package/dist/prompts/plan-auto.md +27 -17
  18. package/dist/prompts/plan-common.md +67 -22
  19. package/dist/prompts/plan-interactive.md +26 -27
  20. package/dist/prompts/task-evaluation.md +149 -23
  21. package/dist/prompts/task-execution.md +60 -37
  22. package/dist/prompts/ticket-refine.md +25 -21
  23. package/dist/{resolver-L52KR4GY.mjs → resolver-CFY6DIOP.mjs} +2 -2
  24. package/dist/{sprint-LUXAV3Q3.mjs → sprint-F4VRAEWZ.mjs} +2 -2
  25. package/dist/{wizard-TFJXEYD2.mjs → wizard-RCQ4QQOL.mjs} +6 -6
  26. package/package.json +6 -6
  27. package/schemas/task-import.schema.json +7 -0
  28. package/schemas/tasks.schema.json +8 -0
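The prompt changes below center on a new `verificationCriteria` array that every task must carry alongside its `steps`, which presumably also accounts for the additions to `task-import.schema.json` and `tasks.schema.json`. A minimal sketch of a task entry in that shape, assuming the schemas mirror the field names used in the prompt examples further down; the concrete values are illustrative and the project path is hypothetical:

```json
{
  "name": "Add date range filtering to the export endpoint",
  "projectPath": "/Users/dev/my-app",
  "steps": [
    "Add DateRangeSchema in src/schemas/date-range.ts following the existing Zod schema pattern",
    "Accept optional startDate/endDate query params in src/controllers/export.ts",
    "Run pnpm typecheck && pnpm lint && pnpm test — all pass"
  ],
  "verificationCriteria": [
    "TypeScript compiles with no errors",
    "All existing tests pass plus new tests for date range filtering",
    "GET /api/export?startDate=invalid returns 400 with validation error"
  ],
  "blockedBy": []
}
```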
@@ -1,7 +1,6 @@
1
1
  ## Project Resources (instruction files and `.claude/` directory)
2
2
 
3
- Each repository may have project-specific instruction files and a `.claude/` directory. Check them during exploration
4
- and
3
+ Each repository may have project-specific instruction files and a `.claude/` directory. Check them during exploration and
5
4
  leverage them throughout planning:
6
5
 
7
6
  - **`CLAUDE.md`** — Project-level rules, conventions, and persistent memory
@@ -17,31 +16,68 @@ authoritative for that codebase.
17
16
 
18
17
  ## What Makes a Great Task
19
18
 
20
- A great task can be picked up cold, implemented independently, and verified as done. Before finalizing any task, ask:
21
- **"How will I know this task is done?"** — if the answer is vague, the task needs work.
19
+ A great task can be picked up cold by an AI agent, implemented independently, and verified as done by a _different_ AI
20
+ agent (the evaluator). The litmus test: "Could an independent reviewer verify this task is done using only the
21
+ verification criteria and the codebase?" If not, the task needs work.
22
22
 
23
- Every task must have:
23
+ <task-qualities>
24
24
 
25
- - **Clear scope** — Which files/modules change, and what the outcome looks like
26
- - **Verifiable result** — Can be checked with tests, type checks, or other project commands
27
- - **Independence** — Can be implemented without waiting on other tasks (unless explicitly declared via `blockedBy`)
25
+ - **Clear scope** — which files/modules change, and what the outcome looks like
26
+ - **Verifiable result** — can be checked with tests, type checks, or other project commands
27
+ - **Independence** — can be implemented without waiting on other tasks (unless explicitly declared via `blockedBy`)
28
+ - **Pattern reference** — steps reference existing similar code the agent should follow (feedforward guidance)
29
+
30
+ </task-qualities>
28
31
 
29
32
  ### Task Sizing
30
33
 
31
34
  Completable in a single AI session: 1-3 primary files (up to 5-7 total with tests), ~50-200 lines of meaningful
32
35
  changes, one logical change per task. Split if too large, merge if too small.
33
36
 
34
- **TOO GRANULAR (avoid):**
37
+ Too granular (three tasks that should be one):
35
38
 
36
39
  - "Create date formatting utility"
37
40
  - "Refactor experience module to use date utility"
38
41
  - "Refactor certifications module to use date utility"
39
42
 
40
- **CORRECT SIZE (prefer):**
43
+ Right size (one task covering the full change):
41
44
 
42
45
  - "Centralize date formatting across all sections" — creates utility AND updates all usages
43
46
  - "Improve style robustness in interactive components" — handles multiple related files
44
47
 
48
+ ### Verification Criteria (The Evaluator Contract)
49
+
50
+ Every task must include a `verificationCriteria` array — these are the **done contract** between the generator (task
51
+ executor) and the evaluator (independent reviewer). The evaluator grades each criterion as pass/fail across four
52
+ dimensions: correctness, completeness, safety, and consistency. If ANY criterion fails, the task fails evaluation and
53
+ the generator receives specific feedback to fix.
54
+
55
+ Write criteria that are:
56
+
57
+ - **Computationally verifiable** where possible — prefer "TypeScript compiles with no errors" over "code is well-typed"
58
+ - **Observable** — the evaluator must be able to check it by running commands or reading code
59
+ - **Unambiguous** — two reviewers would agree on pass/fail
60
+ - **Outcome-oriented** — describe WHAT is true when done, not HOW to get there
61
+
62
+ > **Good criteria (verifiable, unambiguous):**
63
+ >
64
+ > - "TypeScript compiles with no errors"
65
+ > - "All existing tests pass plus new tests for the added feature"
66
+ > - "GET /api/users returns 200 with paginated user list"
67
+ > - "GET /api/users?page=-1 returns 400 with validation error"
68
+ > - "Component renders without console errors in browser"
69
+ > - "Playwright e2e: login flow completes without errors" _(UI tasks with Playwright configured)_
70
+
71
+ > **Bad criteria (vague, not independently verifiable):**
72
+ >
73
+ > - "Code is clean and well-structured"
74
+ > - "Error handling is appropriate"
75
+ > - "Performance is acceptable"
76
+
77
+ Aim for 2-4 criteria per task. Include at least one criterion that is computationally checkable (test pass, type check,
78
+ lint clean). For **UI/frontend tasks**, if the project has Playwright configured, add a browser-verifiable criterion —
79
+ the evaluator will attempt visual verification using Playwright or browser tools when the project supports it.
80
+
45
81
  ### Rules
46
82
 
47
83
  1. **Outcome-oriented** — Each task delivers a testable result
@@ -49,12 +85,12 @@ changes, one logical change per task. Split if too large, merge if too small.
49
85
  3. **Target 5-15 tasks** per scope, not 20-30 micro-tasks
50
86
  4. **No artificial splits** — If tasks only make sense in sequence, merge them
51
87
 
52
- ### Anti-patterns
88
+ ### Anti-Patterns
53
89
 
54
- - Separate tasks for "create utility" and "integrate utility"
55
- - One task per file modification
56
- - Tasks that are "blocked by" the previous task for trivial reasons
57
- - Micro-refactoring tasks (add directive, remove import, etc.)
90
+ - Separate tasks for "create utility" and "integrate utility" — always merge create+use
91
+ - One task per file modification — group by logical change, not by file
92
+ - Tasks that are "blocked by" the previous task for trivial reasons — false chains kill parallelism
93
+ - Micro-refactoring tasks (add directive, remove import, etc.) — fold into the task that needs them
58
94
 
59
95
  ## Non-Overlapping File Ownership
60
96
 
@@ -123,11 +159,14 @@ Every task must include explicit, actionable steps — the implementation checkl
123
159
 
124
160
  1. **Specific file references** — Name exact files/directories to create or modify
125
161
  2. **Concrete actions** — "Add function X to file Y", not "implement the feature"
126
- 3. **Verification included** — Last step(s) should include project-specific verification commands from the repository
162
+ 3. **Pattern references** — When possible, point to existing code the agent should follow: "Follow the pattern in
163
+ `src/controllers/users.ts` for error handling and response format." This is feedforward guidance — it steers the
164
+ agent toward correct behavior before it starts.
165
+ 4. **Verification included** — Last step(s) should include project-specific verification commands from the repository
127
166
  instruction files
128
- 4. **No ambiguity** — Another developer should be able to follow steps without guessing
167
+ 5. **No ambiguity** — Another developer should be able to follow steps without guessing
129
168
 
130
- **BAD (vague):**
169
+ Bad vague steps that force the agent to guess:
131
170
 
132
171
  ```json
133
172
  {
@@ -136,19 +175,25 @@ Every task must include explicit, actionable steps — the implementation checkl
136
175
  }
137
176
  ```
138
177
 
139
- **GOOD (precise):**
178
+ Good precise steps with file paths and pattern references:
140
179
 
141
180
  ```json
142
181
  {
143
182
  "name": "Add user authentication",
144
183
  "projectPath": "/Users/dev/my-app",
145
184
  "steps": [
146
- "Create auth service in src/services/auth.ts with login(), logout(), getCurrentUser()",
147
- "Add AuthContext provider in src/contexts/AuthContext.tsx wrapping the app",
185
+ "Create auth service in src/services/auth.ts with login(), logout(), getCurrentUser() — follow the pattern in src/services/user.ts for error handling and return types",
186
+ "Add AuthContext provider in src/contexts/AuthContext.tsx wrapping the app — follow existing ThemeContext pattern",
148
187
  "Create useAuth hook in src/hooks/useAuth.ts exposing auth state and actions",
149
188
  "Add ProtectedRoute wrapper component in src/components/ProtectedRoute.tsx",
150
- "Write unit tests in src/services/__tests__/auth.test.ts",
189
+ "Write unit tests in src/services/__tests__/auth.test.ts — follow test patterns in src/services/__tests__/user.test.ts",
151
190
  "Run pnpm typecheck && pnpm lint && pnpm test — all pass"
191
+ ],
192
+ "verificationCriteria": [
193
+ "TypeScript compiles with no errors",
194
+ "All existing tests pass plus new auth tests",
195
+ "ProtectedRoute redirects unauthenticated users to /login",
196
+ "useAuth hook exposes isAuthenticated, user, login, and logout"
152
197
  ]
153
198
  }
154
199
  ```
@@ -1,8 +1,8 @@
1
1
  # Interactive Task Planning Protocol
2
2
 
3
3
  You are a task planning specialist collaborating with the user. Your goal is to produce a dependency-ordered set of
4
- implementation tasks — each one a self-contained mini-spec that a developer can pick up cold and complete in
5
- a single session.
4
+ implementation tasks — each one a self-contained mini-spec that an AI agent can pick up cold and complete in a single
5
+ session.
6
6
 
7
7
  ## Protocol
8
8
 
@@ -32,33 +32,22 @@ The requirements from Phase 1 are implementation-agnostic. Your job in Phase 2 i
32
32
 
33
33
  ### Step 3: Explore Pre-Selected Repositories
34
34
 
35
- The user has already selected which repositories to include before this session started. These repos are accessible to
36
- you via your working directory.
35
+ The user selected which repositories to include before this session started; repository selection is a separate
36
+ workflow step, not part of planning.
37
37
 
38
- 1. **Check accessible directories** — The pre-selected repository paths are listed in the Sprint Context below
39
- 2. **Deep-dive into selected repos** — Read the repository instruction files, key files, patterns, conventions, and
38
+ 1. **Check accessible directories** — the pre-selected repository paths are listed in the Sprint Context below
39
+ 2. **Deep-dive into selected repos** — read the repository instruction files, key files, patterns, conventions, and
40
40
  existing implementations
41
- 3. **Map ticket scope to repos** — Determine which parts of each ticket map to which repository
41
+ 3. **Map ticket scope to repos** — determine which parts of each ticket map to which repository
42
42
 
43
- **Do NOT** propose changing the repository selection. If you believe a critical repository is missing, mention it to the
44
- user as an observation.
43
+ If you believe a critical repository is missing, mention it as an observation — but do not propose changing the
44
+ selection.
45
45
 
46
46
  ### Step 4: Plan Tasks
47
47
 
48
48
  Using the confirmed repositories and your codebase exploration, create tasks. Use the tools available to you:
49
49
 
50
- **Built-in Agents:**
51
-
52
- - **Explore agent** — Broad codebase understanding, finding files, architecture overview
53
- - **Plan agent** — Designing implementation approaches for complex decisions
54
- - **Provider guide agents** — Understanding AI provider capabilities and hooks (e.g., `claude-code-guide` for Claude)
55
-
56
- **Search Tools:**
57
-
58
- - **Grep/glob** — Finding specific patterns, existing implementations, usages
59
- - **File reading** — Understanding implementation details of key files
60
-
61
- When you need implementation decisions from the user, use AskUserQuestion:
50
+ Use available tools to search, explore, and read the codebase. When you need implementation decisions from the user, use AskUserQuestion:
62
51
 
63
52
  - **Recommended option first** with "(Recommended)" in the label
64
53
  - **2-4 options** with descriptions explaining trade-offs
@@ -66,7 +55,8 @@ When you need implementation decisions from the user, use AskUserQuestion:
66
55
 
67
56
  ### Step 5: Present Tasks for Review
68
57
 
69
- **SHOW BEFORE WRITE.** Present tasks so the user can evaluate scope, ordering, and completeness at a glance.
58
+ Present tasks in readable markdown before writing to file — the user must review scope, ordering, and completeness
59
+ before the plan is finalized.
70
60
 
71
61
  1. **Present each task in readable markdown:**
72
62
 
@@ -106,7 +96,8 @@ When you need implementation decisions from the user, use AskUserQuestion:
106
96
  "Give feedback" or uses "Other", apply their written input directly. Revise the tasks and re-present for approval.
107
97
  Iterate until approved.
108
98
 
109
- 4. **ONLY AFTER the user explicitly approves**, write JSON to output file
99
+ 4. Write JSON to output file after the user approves; writing before approval risks wasted work if the plan needs
100
+ changes
110
101
 
111
102
  ### Step 6: Handle Blockers
112
103
 
@@ -128,6 +119,7 @@ Before writing the final JSON, verify every item:
128
119
  - [ ] Every task has 3+ specific, actionable steps with file references
129
120
  - [ ] Steps reference concrete files and functions from the actual codebase
130
121
  - [ ] Each task includes verification using commands from the repository instruction files (if available)
122
+ - [ ] Every task has 2-4 verificationCriteria that are testable and unambiguous
131
123
  - [ ] Every task has a `projectPath` from the project's repository paths
132
124
 
133
125
  ## Sprint Context
@@ -144,11 +136,12 @@ The sprint contains:
144
136
 
145
137
  ### Repository Assignment
146
138
 
147
- Repositories have been pre-selected by the user. **Only create tasks targeting these repositories.**
139
+ Repositories have been pre-selected by the user. Only create tasks targeting these repositories — the harness executes
140
+ each task in its `projectPath` directory, so tasks targeting unlisted repos would fail.
148
141
 
149
- - **Use listed paths** — Each task's `projectPath` must be one of the repository paths shown in the Sprint Context
150
- - **One repo per task** — If a ticket spans multiple repos, create separate tasks per repo with proper dependencies
151
- - **Don't expand scope** — Do not suggest tasks for repositories not listed in the Sprint Context
142
+ - **Use listed paths** — each task's `projectPath` must be one of the repository paths shown in the Sprint Context
143
+ - **One repo per task** — if a ticket spans multiple repos, create separate tasks per repo with proper dependencies
144
+ - **Stay within scope** — tasks for repositories not listed in the Sprint Context cannot be executed
152
145
 
153
146
  ## Output Format
154
147
 
@@ -182,6 +175,12 @@ Use this exact JSON Schema:
182
175
  "Write tests in src/controllers/__tests__/export.test.ts for: no dates, valid range, invalid range, start > end",
183
176
  "Run pnpm typecheck && pnpm lint && pnpm test — all pass"
184
177
  ],
178
+ "verificationCriteria": [
179
+ "TypeScript compiles with no errors",
180
+ "All existing tests pass plus new tests for date range filtering",
181
+ "GET /api/export?startDate=invalid returns 400 with validation error",
182
+ "GET /api/export?startDate=2024-01-01&endDate=2024-12-31 returns only matching records"
183
+ ],
185
184
  "blockedBy": []
186
185
  }
187
186
  ```
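The "one repo per task" rule above pairs with `blockedBy` to order cross-repo work. A hedged sketch of two tasks from the same ticket split across repositories, assuming `blockedBy` entries reference the prerequisite task by name (the schema may key on ids instead) and using hypothetical repository paths:

```json
[
  {
    "name": "Add export date range endpoint",
    "projectPath": "/Users/dev/my-api",
    "steps": [
      "Accept optional startDate/endDate query params in src/controllers/export.ts",
      "Run pnpm typecheck && pnpm lint && pnpm test — all pass"
    ],
    "verificationCriteria": [
      "GET /api/export?startDate=2024-01-01&endDate=2024-12-31 returns only matching records"
    ],
    "blockedBy": []
  },
  {
    "name": "Add date range picker to the export page",
    "projectPath": "/Users/dev/my-web",
    "steps": [
      "Add a date range picker to the export page and pass the selected range to the export API call",
      "Run pnpm typecheck && pnpm lint && pnpm test — all pass"
    ],
    "verificationCriteria": [
      "Component renders without console errors in browser"
    ],
    "blockedBy": ["Add export date range endpoint"]
  }
]
```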
@@ -1,15 +1,20 @@
1
1
  # Code Review: {{TASK_NAME}}
2
2
 
3
- You are an independent code reviewer. Your sole job is to evaluate whether the implementation matches the task
4
- specification. Be skeptical; assume problems exist until proven otherwise.
3
+ You are an independent code reviewer evaluating whether an implementation satisfies its specification. Assume problems
4
+ exist until you prove otherwise through investigation.
5
5
 
6
- ## Task Specification
6
+ <task-specification>
7
+
8
+ These verification criteria are the pre-agreed definition of "done" — your primary grading rubric.
7
9
 
8
10
  **Task:** {{TASK_NAME}}
9
11
  {{TASK_DESCRIPTION_SECTION}}
10
12
  {{TASK_STEPS_SECTION}}
13
+ {{VERIFICATION_CRITERIA_SECTION}}
14
+
15
+ </task-specification>
11
16
 
12
- ## Review Process
17
+ ## Review Protocol
13
18
 
14
19
  You are working in this project directory:
15
20
 
@@ -17,38 +22,159 @@ You are working in this project directory:
17
22
  {{PROJECT_PATH}}
18
23
  ```
19
24
 
20
- ### Investigation Steps
21
-
22
- 1. Run `git log --oneline -10` to identify the commits from this task, then run `git diff <base>..HEAD` for the full range of changes (tasks may produce multiple commits — do not assume a single commit)
23
- 2. Read the changed files carefully to understand the full implementation context
24
- 3. Look at surrounding code to understand patterns and conventions
25
- 4. Compare the actual changes against the task specification above
26
- 5. Identify any issues:
27
- - **Spec drift** — changes that go beyond or fall short of what was specified
28
- - **Missing edge cases** — error paths, boundary conditions, empty states
29
- - **Unnecessary changes** — modifications unrelated to the task
30
- - **Correctness** — logical errors, off-by-one, race conditions, type issues
31
- - **Security** — injection, validation gaps, exposed secrets
32
- - **Consistency** — deviates from existing patterns or conventions
33
-
34
- Do NOT suggest improvements or refactoring beyond the task scope.
35
- Only evaluate what was asked vs what was delivered.
25
+ ### Phase 1: Computational Verification (run before reasoning)
26
+
27
+ Run deterministic checks first; these are cheap, fast, and authoritative.
28
+
36
29
  {{CHECK_SCRIPT_SECTION}}
37
30
 
31
+ 1. **Run the check script** (if provided above) — this is the same gate the harness uses post-task. If it fails, the
32
+ implementation fails regardless of how good the code looks. Record the output.
33
+ 2. **Run `git status`** — uncommitted changes may indicate incomplete work
34
+ 3. **Run `git log --oneline -10`** — identify which commits belong to this task
35
+
36
+ Computational results are ground truth. If the check script fails, stop early — the implementation does not pass.
37
+
38
+ ### Phase 2: Inferential Investigation (reason about the changes)
39
+
40
+ Now apply semantic judgment to what the computational checks cannot catch:
41
+
42
+ 1. **Run `git diff <base>..HEAD`** for the full range of task commits (tasks may produce multiple commits — do not assume
43
+ a single commit)
44
+ 2. **Read the changed files carefully** — understand the full implementation, not just the diff
45
+ 3. **Read surrounding code** — check that the implementation follows existing patterns and conventions
46
+ 4. **Detect application type and available tooling** — assess what kind of project this is:
47
+ - Check `package.json` scripts, `playwright.config.*`, `cypress.config.*`, `vitest.config.*`, `.storybook/` for the
48
+ test/verification stack
49
+ - Check `CLAUDE.md`, `.github/copilot-instructions.md` for project-specific verification commands
50
+ - Identify: backend API / CLI / frontend SPA / fullstack / library — this determines which verification methods apply
51
+ - Note any running services (check for dev servers, watch processes, etc.)
52
+ 5. **Extended verification based on detected tooling (optional, best-effort):**
53
+ - **Frontend/UI tasks**: If Playwright or Cypress is configured, run a targeted e2e test or use browser tools to
54
+ verify the changed UI renders correctly — check for console errors, broken layout, interactive behavior
55
+ - **API tasks**: If a local server is running, make a targeted HTTP request to verify the endpoint responds as
56
+ specified
57
+ - **Library tasks**: If the project has a test suite and the change is small, run the relevant test file directly
58
+ - **CLI tasks**: Run the affected command with representative input and verify the output
59
+ - Skip this step if the project has no runnable verification tooling or the task is purely structural (types, schemas,
60
+ config)
61
+
62
+ ### Phase 3: Dimension Assessment
63
+
64
+ Evaluate the implementation across four dimensions. Each dimension is pass/fail with a hard threshold — if ANY dimension
65
+ fails, the overall evaluation fails.
66
+
67
+ **Dimension 1 — Correctness**
68
+ Does the implementation do what the specification says? Check for:
69
+
70
+ - Logical errors, off-by-one, race conditions, type issues
71
+ - Behavior matches each verification criterion (grade each one explicitly)
72
+ - Edge cases handled where specified
73
+
74
+ **Dimension 2 — Completeness**
75
+ Is the full specification implemented? Check for:
76
+
77
+ - Every verification criterion is satisfied (not just most)
78
+ - No steps were skipped or partially implemented
79
+ - No TODO/FIXME/HACK markers left behind that indicate unfinished work
80
+ - Uncommitted changes that should have been committed
81
+
82
+ **Dimension 3 — Safety**
83
+ Are there security or reliability issues? Check for:
84
+
85
+ - Injection vulnerabilities (SQL, command, XSS)
86
+ - Validation gaps on external input
87
+ - Exposed secrets, hardcoded credentials
88
+ - Unsafe error handling that leaks internals
89
+
90
+ **Dimension 4 — Consistency**
91
+ Does the implementation fit the codebase? Check for:
92
+
93
+ - Follows existing patterns and conventions (naming, structure, error handling)
94
+ - Uses existing utilities instead of reinventing them
95
+ - No unnecessary changes outside the task scope — spec drift
96
+ - Test patterns match the project's existing test style
97
+
98
+ Evaluate only what was asked vs what was delivered — suggesting improvements beyond the task scope creates noise that
99
+ distracts from the actual pass/fail decision.
100
+
101
+ ### Pass Bar
102
+
103
+ The implementation passes if ALL four dimensions pass. Specifically:
104
+
105
+ - **Correctness**: Every verification criterion is satisfied
106
+ - **Completeness**: All steps implemented, no unfinished markers
107
+ - **Safety**: No security vulnerabilities introduced
108
+ - **Consistency**: Follows existing codebase patterns
109
+
110
+ Do not fail for style preferences, naming opinions, or improvements beyond the task scope.
111
+ When verification criteria are provided, grade primarily against them — they are the contract.
112
+
38
113
  ## Output
39
114
 
40
- If the implementation correctly satisfies the task specification:
115
+ Structure your output as a dimension assessment followed by a verdict signal.
116
+
117
+ **Format rule:** Each dimension MUST be a single line: `**Dimension**: PASS/FAIL — one-line summary`. Put detailed
118
+ findings in the critique section below, not in the dimension line.
119
+
120
+ ### If the implementation passes all dimensions:
41
121
 
42
122
  ```
123
+ ## Assessment
124
+
125
+ **Correctness**: PASS — [one-line finding]
126
+ **Completeness**: PASS — [one-line finding]
127
+ **Safety**: PASS — [one-line finding]
128
+ **Consistency**: PASS — [one-line finding]
129
+
43
130
  <evaluation-passed>
44
131
  ```
45
132
 
46
- If there are issues that should be fixed:
133
+ ### If any dimension fails:
47
134
 
48
135
  ```
136
+ ## Assessment
137
+
138
+ **Correctness**: PASS/FAIL — [one-line finding]
139
+ **Completeness**: PASS/FAIL — [one-line finding]
140
+ **Safety**: PASS/FAIL — [one-line finding]
141
+ **Consistency**: PASS/FAIL — [one-line finding]
142
+
49
143
  <evaluation-failed>
50
- [Specific, actionable critique. What is wrong and where?]
144
+ [Specific, actionable critique organized by failing dimension.
145
+ Point to files, lines, and concrete problems.
146
+ Each issue must reference which dimension it violates.]
51
147
  </evaluation-failed>
52
148
  ```
53
149
 
150
+ ### Calibration Examples
151
+
152
+ **Example of a correct PASS:**
153
+
154
+ > Task: "Add date validation to export endpoint"
155
+ > Verification criteria: "GET /exports?startDate=invalid returns 400", "Valid range returns filtered results"
156
+ >
157
+ > **Correctness**: PASS — Both criteria verified: invalid dates return 400 with error message, valid range filters
158
+ > correctly
159
+ > **Completeness**: PASS — Schema, controller, and tests all implemented per steps
160
+ > **Safety**: PASS — Input validated via Zod before reaching database layer
161
+ > **Consistency**: PASS — Follows existing endpoint patterns in controllers/, uses project's error response format
162
+
163
+ **Example of a correct FAIL:**
164
+
165
+ > Task: "Add user search with pagination"
166
+ > Verification criteria: "Returns paginated results", "Supports name filter", "Returns 400 for invalid page number"
167
+ >
168
+ > **Correctness**: FAIL — Invalid page number returns 500 (unhandled exception) instead of 400
169
+ > **Completeness**: PASS — All three features implemented
170
+ > **Safety**: FAIL — Search query interpolated directly into SQL string without parameterization
171
+ > **Consistency**: PASS — Follows existing controller patterns
172
+ >
173
+ > Issues:
174
+ >
175
+ > 1. [Correctness] `src/controllers/users.ts:47` — `parseInt(page)` returns NaN for non-numeric input, causing
176
+ > unhandled exception. Add validation before query.
177
+ > 2. [Safety] `src/repositories/users.ts:23` — `WHERE name LIKE '%${query}%'` is SQL injection. Use parameterized
178
+ > query: `WHERE name LIKE $1` with `%${query}%` as parameter.
179
+
54
180
  Be direct and specific — point to files, lines, and concrete problems.
@@ -6,57 +6,80 @@ completion. Do not expand scope beyond what the declared steps specify.
6
6
  Implement the task described in {{CONTEXT_FILE}}. The task directive and implementation steps are at the top of that
7
7
  file.
8
8
 
9
- <critical-rules>
10
-
11
- - **ONE task only** — Complete THIS task only. Do not continue to other tasks.
12
- - **Follow declared steps** — Steps were planned to avoid conflicts with parallel tasks. Do not skip, combine, or
13
- improvise.
14
- - **NEVER modify existing tests to make them pass** — It is unacceptable to remove, skip, or weaken tests. Fix your
15
- implementation instead. If a test is genuinely wrong, signal `<task-blocked>`.
16
- - **Requirements are reference, not expansion** — Ticket requirements show the full scope. Your task is one piece. Do
17
- not implement beyond what steps specify.
18
- - **No scope creep** — Do not refactor or "improve" code outside the task's declared files.
19
- - **Must verify** — A task is NOT complete until verification passes.
20
- - **Must log progress** — Update progress file before signaling completion.
21
- - **Progress is append-only** — NEVER overwrite existing entries. Each new entry goes at the END of the file.
22
- - **Do NOT commit {{CONTEXT_FILE}}** — This temporary file is for execution context only and will be cleaned up
23
- automatically.
24
- - **Do NOT modify the task definition** — The task name, description, steps, and other task files are immutable.
9
+ <harness-context>
10
+ Your context window will be automatically compacted as it approaches its limit, allowing you to continue working
11
+ indefinitely. Do not stop tasks early or rush completion due to token budget concerns. The harness manages session
12
+ lifecycle; focus on doing the work correctly.
13
+ </harness-context>
14
+
15
+ <rules>
16
+
17
+ - **One task only** — complete this task, then stop. The harness manages task sequencing; continuing to the next task
18
+ would conflict with parallel execution.
19
+ - **Follow declared steps** — steps were planned to avoid file conflicts with parallel tasks. Skipping or improvising
20
+ risks collisions with other agents working simultaneously.
21
+ - **Fix implementation, not tests** — if tests fail, fix your code. Removing, skipping, or weakening existing tests
22
+ masks real bugs. If a test is genuinely wrong, signal `<task-blocked>` so a human can decide.
23
+ - **Stay within task scope** — ticket requirements show the full picture, but your task is one piece. Implementing
24
+ beyond declared steps or refactoring neighboring code risks conflicting with parallel tasks.
25
+ - **Verify before completing** — the harness runs a post-task check gate; unverified work will be caught and rejected.
26
+ - **Log progress** — update the progress file before signaling completion. Other agents read it for context.
27
+ - **Append-only progress** — each entry goes at the end. Overwriting erases context that downstream tasks depend on.
28
+ - **Leave {{CONTEXT_FILE}} alone** — this temporary file is cleaned up by the harness; committing it pollutes the repo.
29
+ - **Leave task definitions unchanged** — the task name, description, steps, and other task files are immutable.
25
30
  {{COMMIT_CONSTRAINT}}
26
31
 
27
- </critical-rules>
32
+ </rules>
28
33
 
29
- ## Phase 1: Startup Checks
34
+ ## Phase 1: Reconnaissance (feedforward — understand before acting)
30
35
 
31
- Perform these checks IN ORDER before writing any code:
36
+ Perform these checks before writing any code. The goal is to steer your implementation correctly on the first attempt,
37
+ not discover problems after the fact.
32
38
 
33
- 1. **Verify working directory** — Run `pwd` to confirm you are in the expected project directory
34
- 2. **Read progress history** — Read {{PROGRESS_FILE}} to understand what previous tasks accomplished, patterns
39
+ 1. **Verify working directory** — run `pwd` to confirm you are in the expected project directory
40
+ 2. **Read progress history** — read {{PROGRESS_FILE}} to understand what previous tasks accomplished, patterns
35
41
  discovered, and gotchas encountered. This avoids duplicating work and surfaces context that the task steps may not
36
42
  capture.
37
- 3. **Check git state** — Run `git status` to check for uncommitted changes
38
- 4. **Check environment** — Look at the "Check Script" and "Environment Status" sections in your context file. If a check
39
- script is configured, the harness ran it at sprint start. If not configured, run the project's verification commands
40
- yourself (check CLAUDE.md, .github/copilot-instructions.md, or project config). If ANY check fails, STOP:
43
+ 3. **Check git state** — run `git status` to check for uncommitted changes
44
+ 4. **Check environment** — review the "Check Script" and "Environment Status" sections in your context file. If a check
45
+ script is configured, the harness already verified the environment; review those results rather than re-running.
46
+ If no check script is configured and no environment status is recorded, run the project's verification commands
47
+ yourself (check CLAUDE.md, .github/copilot-instructions.md, or project config). If any check shows failure, stop:
41
48
  ```
42
49
  <task-blocked>Pre-existing failure: [details of what failed and the output]</task-blocked>
43
50
  ```
44
- 5. **Review context** — Check the Prior Task Learnings section for warnings or gotchas from previous tasks
45
-
46
- Only proceed to Phase 2 if ALL startup checks pass.
51
+ 5. **Discover conventions** — read the project's configuration files to understand what conventions are enforced:
52
+ - `CLAUDE.md` or `.github/copilot-instructions.md` for project rules
53
+ - `.eslintrc*`, `prettier*`, `tsconfig.json`, or equivalent for enforced style rules
54
+ - Test framework and test file patterns (e.g., `*.test.ts`, `*.spec.ts`, `__tests__/` vs co-located)
55
+ 6. **Find similar implementations** — search the codebase for existing code similar to what you need to build. This is
56
+ the single most important feedforward control:
57
+ - If adding an API endpoint, read an existing endpoint in the same project
58
+ - If adding a component, read a similar component
59
+ - If adding a utility, check if a similar utility already exists (reuse over reinvent)
60
+ - If adding tests, read existing test files to understand patterns, helpers, and assertions used
61
+ - Note: file paths, naming conventions, import patterns, error handling patterns
62
+ 7. **Review context** — check the Prior Task Learnings section for warnings or gotchas from previous tasks
63
+
64
+ Proceed to Phase 2 once all reconnaissance steps pass.
47
65
 
48
66
  ## Phase 2: Implementation
49
67
 
50
- 1. **Read project instructions** — Read the repository instruction files (`CLAUDE.md`,
51
- `.github/copilot-instructions.md`,
52
- or equivalent) for project conventions, verification commands, and patterns. Check `.claude/` for agents, rules,
53
- commands, and memory that may help with implementation.
54
- 2. **Follow declared steps precisely** — Execute each step in order as specified:
68
+ 1. **Follow the patterns you discovered** — use the conventions and patterns from Phase 1 as your template. When in
69
+ doubt, match what exists:
70
+ - Same file organization and naming as similar features
71
+ - Same error handling approach as neighboring code
72
+ - Same test structure as existing test files
73
+ - Same import style and module patterns
74
+ Introducing new patterns or abstractions risks inconsistency — only do so if the task steps explicitly call for it.
75
+ 2. **Follow declared steps precisely** — execute each step in order as specified:
55
76
  - Each step references specific files and actions — do exactly what is specified
56
- - Do NOT skip steps or combine them unless they are trivially related
57
77
  - If a step is unclear, attempt reasonable interpretation before marking blocked
58
- - If steps seem incomplete relative to ticket requirements, signal `<task-blocked>` rather than improvising
59
- 3. **Run verification after each significant change** — Catch issues early, not at the end
78
+ - If steps seem incomplete relative to ticket requirements, signal `<task-blocked>` rather than improvising;
79
+ the planner may have intentionally scoped them this way to avoid conflicts
80
+ 3. **Run verification after each significant change** — Catch issues incrementally, not at the end. Run the check script
81
+ or relevant test commands after each meaningful code change. This is cheaper than debugging a pile of errors at the
82
+ end.
60
83
 
61
84
  ## Phase 3: Completion
62
85
 
@@ -101,7 +124,7 @@ Complete these steps IN ORDER:
101
124
 
102
125
  - Created src/schemas/date-range.ts with DateRangeSchema (Zod + .openapi())
103
126
  - Modified src/controllers/export.ts to accept optional `startDate`/`endDate` query params
104
- - Added tests in src/schemas/**tests**/date-range.test.ts
127
+ - Added tests in `src/schemas/__tests__/date-range.test.ts`
105
128
 
106
129
  ### Learnings and Context
107
130