ralphctl 0.2.2 → 0.2.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +3 -3
- package/dist/{add-SEDQ3VK7.mjs → add-DWNLZQ7Q.mjs} +4 -4
- package/dist/{add-TGJTRHIF.mjs → add-K7LNOYQ4.mjs} +3 -3
- package/dist/{chunk-LG6B7QVO.mjs → chunk-7TBO6GOT.mjs} +1 -1
- package/dist/{chunk-ZDEVRTGY.mjs → chunk-GLDPHKEW.mjs} +9 -0
- package/dist/{chunk-KPTPKLXY.mjs → chunk-ITRZMBLJ.mjs} +1 -1
- package/dist/{chunk-Q3VWJARJ.mjs → chunk-LAERLCL5.mjs} +2 -2
- package/dist/{chunk-XXIHDQOH.mjs → chunk-ORVGM6EV.mjs} +74 -16
- package/dist/{chunk-XPDI4SYI.mjs → chunk-QYF7QIZJ.mjs} +3 -3
- package/dist/{chunk-XQHEKKDN.mjs → chunk-V4ZUDZCG.mjs} +1 -1
- package/dist/cli.mjs +105 -16
- package/dist/{create-DJHCP7LN.mjs → create-5MILNF7E.mjs} +3 -3
- package/dist/{handle-CCTBNAJZ.mjs → handle-2BACSJLR.mjs} +1 -1
- package/dist/{project-ZYGNPVGL.mjs → project-XC7AXA4B.mjs} +2 -2
- package/dist/prompts/ideate-auto.md +9 -5
- package/dist/prompts/ideate.md +28 -12
- package/dist/prompts/plan-auto.md +26 -16
- package/dist/prompts/plan-common.md +67 -22
- package/dist/prompts/plan-interactive.md +26 -27
- package/dist/prompts/task-evaluation.md +144 -24
- package/dist/prompts/task-execution.md +58 -36
- package/dist/prompts/ticket-refine.md +24 -20
- package/dist/{resolver-L52KR4GY.mjs → resolver-CFY6DIOP.mjs} +2 -2
- package/dist/{sprint-LUXAV3Q3.mjs → sprint-F4VRAEWZ.mjs} +2 -2
- package/dist/{wizard-D7N5WZ5H.mjs → wizard-RCQ4QQOL.mjs} +6 -6
- package/package.json +6 -6
- package/schemas/task-import.schema.json +7 -0
- package/schemas/tasks.schema.json +8 -0
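The two schema files each gain a handful of lines, and the prompt diffs below repeatedly add a `verificationCriteria` array to every task example, so the schema addition is presumably that field. A minimal sketch of what the added property definition might look like — an assumption, since this view does not show the schema diffs themselves; the `minItems`/`maxItems` bounds simply mirror the "2-4 criteria" guidance in the prompts:

```json
{
  "verificationCriteria": {
    "type": "array",
    "description": "Testable, unambiguous pass/fail criteria the evaluator checks",
    "items": { "type": "string", "minLength": 1 },
    "minItems": 2,
    "maxItems": 4
  }
}
```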
@@ -9,8 +9,8 @@ import {
   removeProject,
   removeProjectRepo,
   updateProject
-} from "./chunk-
-import "./chunk-
+} from "./chunk-7TBO6GOT.mjs";
+import "./chunk-GLDPHKEW.mjs";
 import {
   ProjectExistsError,
   ProjectNotFoundError
package/dist/prompts/ideate-auto.md
CHANGED

@@ -1,8 +1,8 @@
 # Autonomous Ideation to Implementation
 
-You are a combined requirements analyst and task planner working autonomously.
-
-
+You are a combined requirements analyst and task planner working autonomously. Turn a rough idea into refined
+requirements and a dependency-ordered set of implementation tasks. Make all decisions based on the idea description and
+codebase analysis — there is no user to interact with.
 
 ## Two-Phase Protocol
 

@@ -96,8 +96,6 @@ Before outputting JSON, verify:
 6. **Verification steps** — Every task ends with project-appropriate verification commands
 7. **projectPath assigned** — Every task uses a path from the Selected Repositories
 
-If you cannot produce a valid plan, signal: `<planning-blocked>reason</planning-blocked>`
-
 ## Output Format
 
 Output a single JSON object with both requirements and tasks.

@@ -139,6 +137,12 @@ If you cannot produce a valid plan, output `<planning-blocked>reason</planning-blocked>`
       "Add integration test in src/controllers/__tests__/export.test.ts for filtered and unfiltered queries",
       "Run pnpm typecheck && pnpm lint && pnpm test — all pass"
     ],
+    "verificationCriteria": [
+      "TypeScript compiles with no errors",
+      "All existing tests pass plus new tests for date range filtering",
+      "GET /exports?startDate=invalid returns 400 with validation error",
+      "Filtered query returns only records within the specified date range"
+    ],
     "blockedBy": []
   }
 ]
package/dist/prompts/ideate.md
CHANGED
@@ -9,12 +9,13 @@ requirements and a dependency-ordered set of implementation tasks in a single se
 
 Focus: Clarify WHAT needs to be built (implementation-agnostic)
 
-
+<constraints>
 
--
--
--
-
+- Focus exclusively on requirements, acceptance criteria, and scope — codebase exploration happens in Phase 2
+- Frame requirements as observable behavior, not implementation details — this keeps Phase 2 flexible
+- Repositories are already selected; repository selection is not part of this phase
+
+</constraints>
 
 **Steps:**
 

@@ -77,6 +78,14 @@ Focus: Determine HOW to implement the approved requirements
 
 **After requirements are approved, proceed to implementation planning.**
 
+<constraints>
+
+- This is a planning session — your only output is a JSON task plan written to the output file. Use tools for reading
+  and analysis only (search, read, explore). Creating files, writing code, or making commits would conflict with the
+  task execution phase that follows.
+
+</constraints>
+
 **Steps:**
 
 1. **Explore the codebase** — Read the repository instruction files (`CLAUDE.md`, `.github/copilot-instructions.md`,

@@ -84,17 +93,15 @@ Focus: Determine HOW to implement the approved requirements
 2. **Review approved requirements** — Understand WHAT was approved in Phase 1
 3. **Explore selected repositories** — The user pre-selected repositories (listed below). Deep-dive to understand
    patterns, conventions, and existing code
-4. **Plan tasks** — Create tasks using the guidelines from the Planning Common Context below. Use tools
-
-   - **Grep/glob** — Find specific patterns, existing implementations
-   - **File reading** — Understand implementation details
+4. **Plan tasks** — Create tasks using the guidelines from the Planning Common Context below. Use available tools to
+   search, explore, and read the codebase.
 5. **Ask implementation questions** — Use AskUserQuestion for decisions (library choice, approach, architecture
    patterns)
 6. **Present task breakdown** — SHOW BEFORE WRITE. Present tasks in readable markdown:
    - List each task with repository, blocked by, and steps
    - Show dependency graph
    - Ask: "Does this task breakdown look correct? Any changes needed?"
-7. **Wait for confirmation** —
+7. **Wait for confirmation** — write the JSON to the output file after the user confirms
 
 ## Idea to Refine and Plan
 

@@ -112,7 +119,8 @@ The user pre-selected these repositories for exploration:
 
 {{REPOSITORIES}}
 
-
+These paths are fixed — repository selection is a separate workflow step. If a critical repository seems missing,
+mention it as an observation.
 
 ## Planning Common Context
 

@@ -120,7 +128,9 @@ The user pre-selected these repositories for exploration:
 
 ## Output Format
 
-When BOTH phases are approved by the user, write to: {{OUTPUT_FILE}}
+When BOTH phases are approved by the user, write the JSON to: {{OUTPUT_FILE}}
+
+Write only this single output file — no code, no implementation. The harness feeds this plan to task executors.
 
 Use this exact JSON Schema:
 

@@ -146,6 +156,12 @@ Use this exact JSON Schema:
       "Write tests in src/controllers/__tests__/export.test.ts for: no dates, valid range, invalid range, start > end",
       "Run pnpm typecheck && pnpm lint && pnpm test — all pass"
     ],
+    "verificationCriteria": [
+      "TypeScript compiles with no errors",
+      "All existing tests pass plus new tests for date range filtering",
+      "GET /api/export?startDate=invalid returns 400 with validation error",
+      "GET /api/export?startDate=2024-01-01&endDate=2024-12-31 returns only matching records"
+    ],
     "blockedBy": []
   }
 ]
package/dist/prompts/plan-auto.md
CHANGED

@@ -1,8 +1,7 @@
 # Headless Task Planning Protocol
 
-You are a task planning specialist. Your goal is to produce a dependency-ordered set of implementation tasks — each one
-a
-self-contained mini-spec that can be picked up cold and completed in a single AI session. Make all decisions
+You are a task planning specialist. Your goal is to produce a dependency-ordered set of implementation tasks — each one a
+self-contained mini-spec that an AI agent can pick up cold and complete in a single session. Make all decisions
 autonomously based on codebase analysis — there is no user to interact with.
 
 ## Protocol

@@ -11,20 +10,18 @@ autonomously based on codebase analysis — there is no user to interact with.
 
 Explore efficiently — read what matters, skip what does not:
 
-1. **Read project instructions first** —
-   such
-   as `.github/copilot-instructions.md` when present. Follow any links to other documentation. Check `.claude/`
+1. **Read project instructions first** — start with `CLAUDE.md` if it exists, and also check provider-specific files
+   such as `.github/copilot-instructions.md` when present. Follow any links to other documentation. Check `.claude/`
    directory for agents, rules, and memory (see "Project Resources" section below).
 2. **Read manifest files** — package.json, pyproject.toml, Cargo.toml, go.mod, pom.xml, etc. for dependencies and
    scripts
-3. **Read README** —
-4. **Scan directory structure** —
-5. **Find similar implementations** —
-
-6. **Extract verification commands** — Find the exact build, test, lint, and typecheck commands
+3. **Read README** — project overview, setup, and architecture
+4. **Scan directory structure** — understand the layout before diving into files
+5. **Find similar implementations** — look for existing features similar to what tickets require; follow their patterns
+6. **Extract verification commands** — find the exact build, test, lint, and typecheck commands
 
-
-
+Read project instruction files and README first, then only the specific files needed to understand patterns and plan
+tasks — broad exploration wastes context budget without improving task quality.
 
 ### Step 2: Review Ticket Requirements
 

@@ -78,13 +75,14 @@ Before outputting JSON, verify EVERY item on this checklist:
 6. **Verification steps** — Every task ends with project-appropriate verification commands from the repository
    instructions
 7. **projectPath assigned** — Every task has a `projectPath` from the project's repository paths
-8. **
+8. **Verification criteria** — Every task has 2-4 verificationCriteria that are testable and unambiguous
 9. **Valid JSON** — The output parses as valid JSON matching the schema
 
 ## Output
 
-
-If you cannot produce tasks, output a
+Output only valid JSON matching the schema below — no markdown, no explanation, no commentary. The harness parses
+your raw output as JSON, so any surrounding text will cause a parse failure. If you cannot produce tasks, output a
+`<planning-blocked>` signal instead.
 
 JSON Schema:
 

@@ -113,6 +111,12 @@ JSON Schema:
       "Add corresponding unit tests in src/utils/__tests__/validation.test.ts covering valid inputs, invalid inputs, and edge cases (empty strings, unicode)",
       "Run pnpm typecheck && pnpm lint && pnpm test — all pass"
     ],
+    "verificationCriteria": [
+      "TypeScript compiles with no errors",
+      "All existing tests pass plus new validation utility tests",
+      "validateEmail rejects invalid formats and accepts valid ones",
+      "validateDateRange rejects reversed date ranges"
+    ],
     "blockedBy": []
   },
   {

@@ -128,6 +132,12 @@ JSON Schema:
       "Write component tests in src/components/__tests__/RegistrationForm.test.ts for valid submission, validation errors, and API failure",
       "Run pnpm typecheck && pnpm lint && pnpm test — all pass"
     ],
+    "verificationCriteria": [
+      "TypeScript compiles with no errors",
+      "All existing tests pass plus new component tests",
+      "Form displays inline error messages for invalid email and phone",
+      "Successful submission calls POST /api/users with form data"
+    ],
     "blockedBy": ["1"]
   }
 ]
package/dist/prompts/plan-common.md
CHANGED

@@ -1,7 +1,6 @@
 ## Project Resources (instruction files and `.claude/` directory)
 
-Each repository may have project-specific instruction files and a `.claude/` directory. Check them during exploration
-and
+Each repository may have project-specific instruction files and a `.claude/` directory. Check them during exploration and
 leverage them throughout planning:
 
 - **`CLAUDE.md`** — Project-level rules, conventions, and persistent memory

@@ -17,31 +16,68 @@ authoritative for that codebase.
 
 ## What Makes a Great Task
 
-A great task can be picked up cold, implemented independently, and verified as done
-
+A great task can be picked up cold by an AI agent, implemented independently, and verified as done — by a _different_ AI
+agent (the evaluator). The litmus test: "Could an independent reviewer verify this task is done using only the
+verification criteria and the codebase?" If not, the task needs work.
 
-
+<task-qualities>
 
-- **Clear scope** —
-- **Verifiable result** —
-- **Independence** —
+- **Clear scope** — which files/modules change, and what the outcome looks like
+- **Verifiable result** — can be checked with tests, type checks, or other project commands
+- **Independence** — can be implemented without waiting on other tasks (unless explicitly declared via `blockedBy`)
+- **Pattern reference** — steps reference existing similar code the agent should follow (feedforward guidance)
+
+</task-qualities>
 
 ### Task Sizing
 
 Completable in a single AI session: 1-3 primary files (up to 5-7 total with tests), ~50-200 lines of meaningful
 changes, one logical change per task. Split if too large, merge if too small.
 
-
+Too granular (three tasks that should be one):
 
 - "Create date formatting utility"
 - "Refactor experience module to use date utility"
 - "Refactor certifications module to use date utility"
 
-
+Right size (one task covering the full change):
 
 - "Centralize date formatting across all sections" — creates utility AND updates all usages
 - "Improve style robustness in interactive components" — handles multiple related files
 
+### Verification Criteria (The Evaluator Contract)
+
+Every task must include a `verificationCriteria` array — these are the **done contract** between the generator (task
+executor) and the evaluator (independent reviewer). The evaluator grades each criterion as pass/fail across four
+dimensions: correctness, completeness, safety, and consistency. If ANY criterion fails, the task fails evaluation and
+the generator receives specific feedback to fix.
+
+Write criteria that are:
+
+- **Computationally verifiable** where possible — prefer "TypeScript compiles with no errors" over "code is well-typed"
+- **Observable** — the evaluator must be able to check it by running commands or reading code
+- **Unambiguous** — two reviewers would agree on pass/fail
+- **Outcome-oriented** — describe WHAT is true when done, not HOW to get there
+
+> **Good criteria (verifiable, unambiguous):**
+>
+> - "TypeScript compiles with no errors"
+> - "All existing tests pass plus new tests for the added feature"
+> - "GET /api/users returns 200 with paginated user list"
+> - "GET /api/users?page=-1 returns 400 with validation error"
+> - "Component renders without console errors in browser"
+> - "Playwright e2e: login flow completes without errors" _(UI tasks with Playwright configured)_
+
+> **Bad criteria (vague, not independently verifiable):**
+>
+> - "Code is clean and well-structured"
+> - "Error handling is appropriate"
+> - "Performance is acceptable"
+
+Aim for 2-4 criteria per task. Include at least one criterion that is computationally checkable (test pass, type check,
+lint clean). For **UI/frontend tasks**, if the project has Playwright configured, add a browser-verifiable criterion —
+the evaluator will attempt visual verification using Playwright or browser tools when the project supports it.
+
 ### Rules
 
 1. **Outcome-oriented** — Each task delivers a testable result

@@ -49,12 +85,12 @@ changes, one logical change per task. Split if too large, merge if too small.
 3. **Target 5-15 tasks** per scope, not 20-30 micro-tasks
 4. **No artificial splits** — If tasks only make sense in sequence, merge them
 
-### Anti-
+### Anti-Patterns
 
-- Separate tasks for "create utility" and "integrate utility"
-- One task per file modification
-- Tasks that are "blocked by" the previous task for trivial reasons
-- Micro-refactoring tasks (add directive, remove import, etc.)
+- Separate tasks for "create utility" and "integrate utility" — always merge create+use
+- One task per file modification — group by logical change, not by file
+- Tasks that are "blocked by" the previous task for trivial reasons — false chains kill parallelism
+- Micro-refactoring tasks (add directive, remove import, etc.) — fold into the task that needs them
 
 ## Non-Overlapping File Ownership
 

@@ -123,11 +159,14 @@ Every task must include explicit, actionable steps — the implementation checkl
 
 1. **Specific file references** — Name exact files/directories to create or modify
 2. **Concrete actions** — "Add function X to file Y", not "implement the feature"
-3. **
+3. **Pattern references** — When possible, point to existing code the agent should follow: "Follow the pattern in
+   `src/controllers/users.ts` for error handling and response format." This is feedforward guidance — it steers the
+   agent toward correct behavior before it starts.
+4. **Verification included** — Last step(s) should include project-specific verification commands from the repository
    instruction files
-
+5. **No ambiguity** — Another developer should be able to follow steps without guessing
 
-
+Bad — vague steps that force the agent to guess:
 
 ```json
 {

@@ -136,19 +175,25 @@ Every task must include explicit, actionable steps — the implementation checkl
 }
 ```
 
-
+Good — precise steps with file paths and pattern references:
 
 ```json
 {
   "name": "Add user authentication",
   "projectPath": "/Users/dev/my-app",
   "steps": [
-    "Create auth service in src/services/auth.ts with login(), logout(), getCurrentUser()",
-    "Add AuthContext provider in src/contexts/AuthContext.tsx wrapping the app",
+    "Create auth service in src/services/auth.ts with login(), logout(), getCurrentUser() — follow the pattern in src/services/user.ts for error handling and return types",
+    "Add AuthContext provider in src/contexts/AuthContext.tsx wrapping the app — follow existing ThemeContext pattern",
     "Create useAuth hook in src/hooks/useAuth.ts exposing auth state and actions",
     "Add ProtectedRoute wrapper component in src/components/ProtectedRoute.tsx",
-    "Write unit tests in src/services/__tests__/auth.test.ts",
+    "Write unit tests in src/services/__tests__/auth.test.ts — follow test patterns in src/services/__tests__/user.test.ts",
     "Run pnpm typecheck && pnpm lint && pnpm test — all pass"
+  ],
+  "verificationCriteria": [
+    "TypeScript compiles with no errors",
+    "All existing tests pass plus new auth tests",
+    "ProtectedRoute redirects unauthenticated users to /login",
+    "useAuth hook exposes isAuthenticated, user, login, and logout"
   ]
 }
 ```
package/dist/prompts/plan-interactive.md
CHANGED

@@ -1,8 +1,8 @@
 # Interactive Task Planning Protocol
 
 You are a task planning specialist collaborating with the user. Your goal is to produce a dependency-ordered set of
-implementation tasks — each one a self-contained mini-spec that
-
+implementation tasks — each one a self-contained mini-spec that an AI agent can pick up cold and complete in a single
+session.
 
 ## Protocol
 

@@ -32,33 +32,22 @@ The requirements from Phase 1 are implementation-agnostic. Your job in Phase 2 i
 
 ### Step 3: Explore Pre-Selected Repositories
 
-The user
-
+The user selected which repositories to include before this session started — repository selection is a separate
+workflow step, not part of planning.
 
-1. **Check accessible directories** —
-2. **Deep-dive into selected repos** —
+1. **Check accessible directories** — the pre-selected repository paths are listed in the Sprint Context below
+2. **Deep-dive into selected repos** — read the repository instruction files, key files, patterns, conventions, and
    existing implementations
-3. **Map ticket scope to repos** —
+3. **Map ticket scope to repos** — determine which parts of each ticket map to which repository
 
-
-
+If you believe a critical repository is missing, mention it as an observation — but do not propose changing the
+selection.
 
 ### Step 4: Plan Tasks
 
 Using the confirmed repositories and your codebase exploration, create tasks. Use the tools available to you:
 
-
-
-- **Explore agent** — Broad codebase understanding, finding files, architecture overview
-- **Plan agent** — Designing implementation approaches for complex decisions
-- **Provider guide agents** — Understanding AI provider capabilities and hooks (e.g., `claude-code-guide` for Claude)
-
-**Search Tools:**
-
-- **Grep/glob** — Finding specific patterns, existing implementations, usages
-- **File reading** — Understanding implementation details of key files
-
-When you need implementation decisions from the user, use AskUserQuestion:
+Use available tools to search, explore, and read the codebase. When you need implementation decisions from the user, use AskUserQuestion:
 
 - **Recommended option first** with "(Recommended)" in the label
 - **2-4 options** with descriptions explaining trade-offs

@@ -66,7 +55,8 @@ When you need implementation decisions from the user, use AskUserQuestion:
 
 ### Step 5: Present Tasks for Review
 
-
+Present tasks in readable markdown before writing to file — the user must review scope, ordering, and completeness
+before the plan is finalized.
 
 1. **Present each task in readable markdown:**
 

@@ -106,7 +96,8 @@ When you need implementation decisions from the user, use AskUserQuestion:
    "Give feedback" or uses "Other", apply their written input directly. Revise the tasks and re-present for approval.
    Iterate until approved.
 
-4.
+4. Write JSON to output file after the user approves — writing before approval risks wasted work if the plan needs
+   changes
 
 ### Step 6: Handle Blockers
 

@@ -128,6 +119,7 @@ Before writing the final JSON, verify every item:
 - [ ] Every task has 3+ specific, actionable steps with file references
 - [ ] Steps reference concrete files and functions from the actual codebase
 - [ ] Each task includes verification using commands from the repository instruction files (if available)
+- [ ] Every task has 2-4 verificationCriteria that are testable and unambiguous
 - [ ] Every task has a `projectPath` from the project's repository paths
 
 ## Sprint Context

@@ -144,11 +136,12 @@ The sprint contains:
 
 ### Repository Assignment
 
-Repositories have been pre-selected by the user.
+Repositories have been pre-selected by the user. Only create tasks targeting these repositories — the harness executes
+each task in its `projectPath` directory, so tasks targeting unlisted repos would fail.
 
-- **Use listed paths** —
-- **One repo per task** —
-- **
+- **Use listed paths** — each task's `projectPath` must be one of the repository paths shown in the Sprint Context
+- **One repo per task** — if a ticket spans multiple repos, create separate tasks per repo with proper dependencies
+- **Stay within scope** — tasks for repositories not listed in the Sprint Context cannot be executed
 
 ## Output Format
 

@@ -182,6 +175,12 @@ Use this exact JSON Schema:
       "Write tests in src/controllers/__tests__/export.test.ts for: no dates, valid range, invalid range, start > end",
       "Run pnpm typecheck && pnpm lint && pnpm test — all pass"
     ],
+    "verificationCriteria": [
+      "TypeScript compiles with no errors",
+      "All existing tests pass plus new tests for date range filtering",
+      "GET /api/export?startDate=invalid returns 400 with validation error",
+      "GET /api/export?startDate=2024-01-01&endDate=2024-12-31 returns only matching records"
+    ],
     "blockedBy": []
   }
 ```
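A recurring point across these prompt changes is that the harness parses the planner's raw output as JSON and expects each task to carry 2-4 `verificationCriteria`. As a rough illustration only — this is not ralphctl's actual code; only the field names (`projectPath`, `steps`, `verificationCriteria`, `blockedBy`) come from the schema examples above, and the function itself is hypothetical — a structural validator for those invariants could look like:

```javascript
// Illustrative sketch — not ralphctl's implementation. Checks the task
// invariants the prompts above ask the planner to uphold.
function validateTask(task) {
  const errors = [];
  if (typeof task.projectPath !== "string" || task.projectPath === "") {
    errors.push("projectPath must be a non-empty repository path");
  }
  if (!Array.isArray(task.steps) || task.steps.length < 3) {
    errors.push("steps must list at least 3 actionable items");
  }
  const vc = task.verificationCriteria;
  if (!Array.isArray(vc) || vc.length < 2 || vc.length > 4) {
    errors.push("verificationCriteria must contain 2-4 entries");
  } else if (!vc.every((c) => typeof c === "string" && c.length > 0)) {
    errors.push("every verification criterion must be a non-empty string");
  }
  if (!Array.isArray(task.blockedBy)) {
    errors.push("blockedBy must be an array (empty when unblocked)");
  }
  return errors; // empty array means the task passes the structural checks
}
```

An empty return value means the task object is structurally valid; any entries are the kind of feedback an evaluator could hand back to the generator.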