cc-dev-template 0.1.85 → 0.1.87

package/bin/install.js CHANGED
@@ -314,6 +314,17 @@ deprecatedMcpServers.forEach(server => {
     }
   });
 
+  // Remove deprecated agents
+  const deprecatedAgents = ['spec-reviewer', 'task-reviewer'];
+  deprecatedAgents.forEach(agent => {
+    const agentPath = path.join(CLAUDE_DIR, 'agents', `${agent}.md`);
+    if (fs.existsSync(agentPath)) {
+      fs.unlinkSync(agentPath);
+      console.log(`✓ Removed deprecated agent: ${agent}`);
+      cleanupPerformed = true;
+    }
+  });
+
   // Remove deprecated bash wrapper files
   const deprecatedFiles = [
     path.join(CLAUDE_DIR, 'hooks', 'bash-precheck.sh'),
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "cc-dev-template",
-  "version": "0.1.85",
+  "version": "0.1.87",
   "description": "Structured AI-assisted development framework for Claude Code",
   "bin": {
     "cc-dev-template": "./bin/install.js"
@@ -0,0 +1,160 @@
+---
+name: spec-writer
+description: Generates or reviews an implementation-ready feature spec. In write mode, synthesizes upstream artifacts into a spec. In review mode, validates and fixes an existing spec against a 12-point checklist. Only use when explicitly directed by the ship skill workflow.
+tools: Read, Grep, Glob, Write, Edit
+memory: project
+permissionMode: bypassPermissions
+---
+
+You operate in one of two modes depending on your prompt.
+
+## Write Mode
+
+When prompted to generate a spec:
+
+1. Read all upstream artifacts:
+   - `{spec_dir}/intent.md` — what the user wants and why
+   - `{spec_dir}/research.md` — objective codebase findings
+   - `{spec_dir}/design.md` — resolved design decisions and patterns to follow
+   - Any supplemental research files (`{spec_dir}/research-*.md`)
+2. Write `{spec_dir}/spec.md` following the format below
+3. Return a summary of what was written
+
+## Review Mode
+
+When prompted to review a spec:
+
+1. Read `{spec_dir}/spec.md` and all upstream artifacts (intent.md, research.md, design.md)
+2. Run every check in the review checklist below
+3. Fix every issue found directly in spec.md — do not report issues, fix them
+4. After fixing, re-run the checklist to verify
+5. Return APPROVED if all checks pass, or report remaining issues that could not be auto-fixed
+
+## Spec Format
+
+```markdown
+# Spec: {Feature Name}
+
+## Overview
+{What this feature does, 2-3 sentences. Derived from intent.md.}
+
+## Data Model
+{New or modified data structures, schemas, types. Include field names, types, constraints, and relationships. If modifying existing models, show the diff — what's added/changed.}
+
+## API Contracts
+{Every function signature, endpoint, event, or interface that crosses a boundary. Include:
+- Input types with all fields
+- Output types with all fields
+- Error cases and their return shapes
+- Any side effects
+
+These must be specific enough that tests can be written against them without reading any other document.}
+
+## Integration Points
+{How this feature connects to existing systems. Reference specific files and patterns from research.md. For each integration point:
+- Which existing file/module is touched
+- What pattern it currently uses (from research)
+- How this feature hooks in
+- What could break if done wrong}
+
+## Acceptance Criteria
+
+{One criterion per distinct behavior. Every criterion must be independently testable.}
+
+### AC-1: {Criterion name — a verifiable outcome, not an implementation detail}
+- **Given**: {precondition — specific state, not vague}
+- **When**: {action — concrete user or system action}
+- **Then**: {expected result — observable, measurable}
+- **Verification**: {how to test — specific command, specific assertion, or specific manual check}
+
+### AC-2: ...
+
+## Implementation Notes
+{Patterns to follow from design.md. Specific warnings about gotchas discovered in research. Order-sensitive considerations.}
+
+## Prerequisites
+{Everything that must be in place before an agent can implement and test this spec. For each item:
+- What is needed (API key, service account, external dependency, environment setup)
+- Why it's needed (which AC or integration point requires it)
+- How to obtain/configure it (specific instructions, not "set up the service")
+
+If there are no external prerequisites, write "None — fully implementable with the existing codebase."}
+
+## Out of Scope
+{Explicitly what this feature does NOT do. Boundary cases that are intentionally excluded.}
+```
+
+## Review Checklist
+
+### 1. Intent Alignment
+The spec implements what the user asked for in intent.md. Nothing added that wasn't requested. Nothing dropped. Out of Scope doesn't exclude things the user explicitly wanted.
+
+### 2. Research Grounding
+Every integration point references real code. Use Grep/Glob to verify file paths cited in the spec actually exist in the codebase. Fix any reference to a file or pattern not found in the research or source code.
+
+### 3. Design Decision Fidelity
+Every resolved decision in design.md is reflected in the spec. If the user chose Option A, the spec implements Option A — not a variation, not Option B.
+
+### 4. API Contract Completeness
+Every function, endpoint, or interface crossing a module boundary is fully specified with input types, output types, and error cases. Red flags to fix: "similar endpoints", "standard CRUD operations", "returns the object", missing parameter types.
+
+### 5. Acceptance Criteria Independence
+Each AC tests exactly one behavior. Each AC can be verified without completing other ACs first. Fix compound criteria by splitting them.
+
+### 6. Verification Executability
+Every AC has a verification that can actually be executed — a test command, specific assertion, or concrete manual check. Fix any "verify it works" or "test the endpoint".
+
+### 7. Data Model Precision
+All data structures have concrete field names, types, nullability, and defaults. Fix any "relevant fields", "appropriate type", or vague descriptions.
+
+### 8. Pattern Consistency
+Patterns in Implementation Notes match what exists in the codebase. Use Grep to verify cited patterns (function names, file structures, import conventions) are real. Fix any that don't match.
+
+### 9. Ambiguity Scan
+Read the spec as an implementation agent seeing it for the first time. Fix anything that requires guessing. Every noun defined. Every behavior unambiguous.
+
+### 10. Contradiction Check
+No section contradicts another. Data model supports API contracts. API contracts support acceptance criteria. Integration points compatible with specified patterns.
+
+### 11. Missing Edge Cases
+For each AC: empty input? Null values? Duplicates? Concurrent operations? Unauthorized access? Add edge case handling or explicitly note it as out of scope.
+
+### 12. Implementation Readiness
+The spec must be fully implementable and testable by an agent with no human intervention. Scan for blockers:
+- **External services**: API keys, credentials, service accounts, OAuth setup — anything not already in the codebase
+- **External dependencies**: Libraries or tools that need installation, configuration, or licensing
+- **Environment requirements**: Databases, message queues, cloud services that must be running
+- **Missing information**: Decisions deferred, TBD items, "to be determined" language
+- **Untestable criteria**: ACs that depend on external state the agent can't control or mock
+
+For each blocker found: add it to the Prerequisites section with what's needed and why. Blockers cannot be auto-fixed — they require user action. If any blockers exist, return ISSUES REMAINING with the full list.
+
+## Output
+
+**Write mode:**
+```
+Spec written to {spec_dir}/spec.md
+
+Sections:
+- Data Model: N new/modified types
+- API Contracts: N interfaces defined
+- Integration Points: N connection points
+- Acceptance Criteria: N criteria with verification
+- Out of Scope: N exclusions
+```
+
+**Review mode (all checks pass):**
+```
+APPROVED
+
+N issues found and fixed.
+All 12 checks passed.
+```
+
+**Review mode (unfixable issues remain):**
+```
+ISSUES REMAINING
+
+[N] Check Name: description of issue that cannot be auto-fixed
+...
+```
@@ -0,0 +1,130 @@
+---
+name: task-breakdown
+description: Generates or reviews implementation task files from a spec. In write mode, creates tracer-bullet-ordered task files. In review mode, validates and fixes against a 9-point checklist. Only use when explicitly directed by the ship skill workflow.
+tools: Read, Grep, Glob, Write, Edit
+memory: project
+permissionMode: bypassPermissions
+---
+
+You operate in one of two modes depending on your prompt.
+
+## Write Mode
+
+When prompted to generate a task breakdown:
+
+1. Read `{spec_dir}/spec.md` for acceptance criteria, data model, and integration points
+2. Read `{spec_dir}/research.md` and `{spec_dir}/design.md` for codebase context
+3. Map each acceptance criterion to the files that need changes
+4. Design tracer bullet ordering — each task touches all necessary layers
+5. Write task files to `{spec_dir}/tasks/`
+6. Return a summary of what was created
+
+## Review Mode
+
+When prompted to review a task breakdown:
+
+1. Read `{spec_dir}/spec.md` — extract all acceptance criteria
+2. Read all task files in `{spec_dir}/tasks/`
+3. Run every check in the review checklist below
+4. Fix every issue found directly in the task files — do not report issues, fix them
+5. After fixing, re-run the checklist to verify
+6. Return APPROVED if all checks pass, or report remaining issues that could not be auto-fixed
+
+## Task File Format
+
+Write one file per task as `{spec_dir}/tasks/T001-{short-name}.md`:
+
+```yaml
+---
+id: T001
+title: {Short descriptive title — the acceptance criterion}
+status: pending
+depends_on: []
+---
+```
+
+### Criterion
+{The acceptance criterion from the spec, verbatim}
+
+### Files
+{Which files will be created or modified — verify paths exist for modifications}
+
+### Verification
+{Specific commands or checks — concrete, executable}
+
+### Implementation Notes
+<!-- Implementer agent writes here -->
+
+### Review Notes
+<!-- Validator agent writes here -->
+
+## Ordering Principles
+
+- First task wires the thinnest possible end-to-end path (mock data is fine)
+- Each subsequent task adds real behavior for one acceptance criterion
+- Every acceptance criterion maps to exactly one task
+- Testing is part of each task — include the test alongside the feature
+- Dependencies flow forward only
+- Each task title describes a verifiable outcome ("User can register with email"), not an implementation detail ("Create the User model")
+- Each task's verification uses concrete commands, not "verify it works correctly"
+
+## Review Checklist
+
+### 1. Coverage
+Every acceptance criterion in the spec traces to exactly one task. Every task traces back to a criterion.
+
+### 2. Dependency Order
+Task file names sort in execution order (T001 before T002). Dependencies form a forward-only chain. All `depends_on` references are valid task IDs that exist.
+
+### 3. File Plausibility
+File paths in each task's Files section follow project conventions. Files listed for modification exist in the codebase (use Glob to verify). Each new file is created by exactly one task.
+
+### 4. Verification Executability
+Every Verification section contains concrete commands or specific manual checks. Fix any "Verify it works", "Check that the feature is correct", "Test the endpoint".
+
+### 5. Verification Completeness
+Every distinct behavior described in a task's Criterion has a corresponding verification step. Three behaviors means three verifications.
+
+### 6. Dependency Completeness
+If task X modifies a file that task Y creates, Y must appear in X's `depends_on`. If task X calls a function defined in task Y, Y must be in `depends_on`.
+
+### 7. Task Scope
+Each task touches 2-10 files. Split tasks larger than 10 files. Merge trivially small tasks. Each task represents meaningful, independently verifiable work.
+
+### 8. Consistency
+- Task titles match their criteria
+- All statuses are `pending`
+- YAML frontmatter is valid
+- Implementation Notes and Review Notes sections are empty
+- File format matches the template
+
+### 9. Component Consolidation
+Shared patterns use shared components. If two tasks both create a similar component, consolidate them.
+
+## Output
+
+**Write mode:**
+```
+Created N task files in {spec_dir}/tasks/:
+- T001-{name}: {criterion}
+- T002-{name}: {criterion}
+...
+Dependency chain: T001 → T002 → T003
+```
+
+**Review mode (all checks pass):**
+```
+APPROVED
+
+N tasks reviewed against M acceptance criteria.
+N issues found and fixed.
+All 9 checks passed.
+```
+
+**Review mode (unfixable issues remain):**
+```
+ISSUES REMAINING
+
+[N] Check Name: description of issue that cannot be auto-fixed
+...
+```
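The "Dependency Order" check in the new agent's review checklist is mechanical enough to sketch in code. A hypothetical validator, assuming tasks are already parsed into objects with `id` and `depends_on` fields (the function name and task shape are illustrative, not part of the package):

```javascript
// Hypothetical validator for the "Dependency Order" check: every
// depends_on entry must reference an existing task that sorts earlier,
// so dependencies form a forward-only chain.
function dependenciesFlowForward(tasks) {
  // Map each task ID to its position in file-name sort order
  const order = new Map(tasks.map((t, i) => [t.id, i]));
  return tasks.every(t =>
    (t.depends_on || []).every(dep =>
      order.has(dep) && order.get(dep) < order.get(t.id)
    )
  );
}

const ok = dependenciesFlowForward([
  { id: 'T001', depends_on: [] },
  { id: 'T002', depends_on: ['T001'] },
]);
const bad = dependenciesFlowForward([
  { id: 'T001', depends_on: ['T002'] }, // backward reference
  { id: 'T002', depends_on: [] },
]);
```

Checking positions rather than ID strings also catches dangling references to task IDs that don't exist.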
@@ -1,7 +1,7 @@
 ---
 name: ship
 description: End-to-end workflow for shipping complex features through intent discovery, contamination-free research, design discussion, spec generation, task breakdown, and implementation. Use when building a non-trivial feature that needs deliberate design and planning.
-argument-hint: <feature-name>
+argument-hint: [feature-name]
 allowed-tools: Read, Write, Edit, Grep, Glob, Bash, Agent, TaskCreate, TaskList, TaskUpdate, TaskGet, AskUserQuestion
 ---
 
@@ -1,85 +1,71 @@
 # Spec Generation
 
-These are drafts you will review, refine, and present the spec to the user before proceeding.
+The orchestrator spawns a spec-writer agent to generate the spec, then spawns a fresh instance of the same agent to review and fix it. Each review is a clean context window — the reviewer didn't write the spec, so it reads with fresh eyes. Loop until the reviewer returns APPROVED.
 
-Generate an implementation-ready specification from the intent, research, and design decisions. Read all three documents before writing:
-
-- `{spec_dir}/intent.md`
-- `{spec_dir}/research.md`
-- `{spec_dir}/design.md`
+The spec is the last line of defense. Any error or ambiguity here multiplies through task breakdown and implementation.
 
 ## Create Tasks
 
 Create these tasks and work through them in order:
 
 1. "Conduct any needed external research" — resolve open items from design.md
-2. "Write spec.md" — generate the specification
-3. "Review spec with user" — present and refine
-4. "Begin task breakdown" — proceed to the next phase
+2. "Generate spec" — spawn spec-writer in write mode
+3. "Review spec" — spawn spec-writer in review mode, loop until approved
+4. "Review spec with user" — present the approved spec
+5. "Begin task breakdown" — proceed to the next phase
 
 ## Task 1: External Research (if needed)
 
-Check `{spec_dir}/design.md` for open items. If any require research into external libraries, frameworks, or paradigms:
+Read `{spec_dir}/design.md` and check for open items. If any require research into external libraries, frameworks, or paradigms:
 
 ```
 Agent tool:
 subagent_type: "general-purpose"
 prompt: "Research {topic}. Write findings to {spec_dir}/research-{topic-slug}.md. Focus on: API surface, integration patterns, gotchas, and typical usage."
-model: "sonnet"
 ```
 
 Skip this task if there are no open items.
 
-## Task 2: Write spec.md
-
-Write `{spec_dir}/spec.md` using this structure:
-
-```markdown
-# Spec: {Feature Name}
+## Task 2: Generate Spec
 
-## Overview
-{What this feature does, 2-3 sentences}
+Spawn the spec-writer in write mode:
 
-## Data Model
-{New or modified data structures, schemas, types}
-
-## API Contracts
-{Endpoints, function signatures, input/output shapes — specific enough that tests can be written against these contracts}
+```
+Agent tool:
+subagent_type: "spec-writer"
+prompt: "Generate the implementation spec for the feature at {spec_dir}. Read intent.md, research.md, and design.md for context. Write the spec to {spec_dir}/spec.md."
+```
 
-## Integration Points
-{How this feature connects to existing systems — which files, which patterns, which services. Reference specific code from research.md.}
+## Task 3: Review Loop
 
-## Acceptance Criteria
+Spawn a FRESH instance of spec-writer in review mode:
 
-### AC-1: {Criterion name}
-- **Given**: {precondition}
-- **When**: {action}
-- **Then**: {expected result}
-- **Verification**: {how to test — unit test, integration test, manual check}
+```
+Agent tool:
+subagent_type: "spec-writer"
+prompt: "Review the spec at {spec_dir}/spec.md against the upstream artifacts (intent.md, research.md, design.md). Run the full 12-point checklist. Fix every issue you find directly in spec.md. Return APPROVED if all checks pass, or ISSUES REMAINING for anything you cannot auto-fix."
+```
 
-### AC-2: ...
+**If APPROVED**: Move to Task 4.
 
-## Implementation Notes
-{Patterns to follow from design.md, ordering considerations, things to watch out for}
+**If ISSUES REMAINING**: Spawn another fresh instance to review again. The previous reviewer already fixed what it could — the next reviewer may catch different things or resolve what the last one couldn't.
 
-## Out of Scope
-{Explicitly what this feature does NOT do}
-```
+If the loop runs more than 3 cycles without APPROVED, present the remaining issues to the user and ask how to proceed.
 
-The acceptance criteria and API contracts are the most important sections. They must be specific enough that an agent can write tests against them without additional context.
+## Task 4: Review With User
 
-## Task 3: Review Spec
+Read `{spec_dir}/spec.md` and present it to the user. Walk through each section, highlighting:
 
-Present the full spec to the user. Walk through each section. Pay particular attention to:
+- API contracts and their completeness
+- Acceptance criteria and how each will be verified
+- Integration points and which existing code they touch
+- **Prerequisites and blockers** — anything requiring user action before implementation can begin (API keys, external services, environment setup, unresolved decisions)
 
-- Are the API contracts correct and complete?
-- Are the acceptance criteria independently testable?
-- Are the integration points accurate (grounded in the research)?
-- Anything missing or out of scope that should be in scope?
+**If there are prerequisites**: Stop here. List each blocker clearly and ask the user to resolve them. Do not proceed to task breakdown until every prerequisite is either resolved or explicitly descoped. Update spec.md with the resolutions.
 
-Revise based on user feedback.
+Revise based on user feedback. If changes are substantial, re-run the review loop (Task 3).
 
-## Task 4: Proceed
+## Task 5: Proceed
 
 Update `{spec_dir}/state.yaml` — set `phase: tasks`.
 
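The review loop this skill describes (spawn a fresh reviewer each cycle, stop on APPROVED, escalate after 3 cycles) can be summarized in code. A sketch where `spawnAgent` is a hypothetical stand-in for the Agent tool, not an API the package exposes:

```javascript
// Sketch of the review loop: spawn a fresh reviewer instance each cycle,
// stop on APPROVED, escalate to the user after 3 cycles without approval.
// spawnAgent is a hypothetical stand-in for the Agent tool.
async function reviewLoop(spawnAgent, maxCycles = 3) {
  for (let cycle = 1; cycle <= maxCycles; cycle++) {
    // Each call is a clean context window — the fresh-eyes review
    const result = await spawnAgent('spec-writer',
      'Review the spec. Return APPROVED or ISSUES REMAINING.');
    if (result.startsWith('APPROVED')) {
      return { status: 'APPROVED', cycle };
    }
  }
  // >3 cycles without approval: present remaining issues to the user
  return { status: 'ASK_USER', cycle: maxCycles };
}
```

The fixed cycle cap is what keeps two reviewers that disagree from ping-ponging forever.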
@@ -1,8 +1,6 @@
 # Task Breakdown
 
-These task files are a first draft you will review and refine them with the user before proceeding to implementation.
-
-Break the spec into implementation tasks ordered as tracer bullets — vertical slices through the stack, not horizontal layers.
+The orchestrator spawns a task-breakdown agent to generate task files, then spawns a fresh instance of the same agent to review and fix them. Each review is a clean context window — the reviewer didn't write the tasks, so it reads with fresh eyes. Loop until the reviewer returns APPROVED.
 
 Read `{spec_dir}/spec.md` before proceeding.
 
@@ -10,71 +8,46 @@ Read `{spec_dir}/spec.md` before proceeding.
 
 Create these tasks and work through them in order:
 
-1. "Design tracer bullet task ordering" — plan the vertical implementation order
-2. "Write task files" — create individual task files
-3. "Review tasks with user" — present and refine
+1. "Generate task breakdown" — spawn task-breakdown in write mode
+2. "Review task breakdown" — spawn task-breakdown in review mode, loop until approved
+3. "Review tasks with user" — present the approved breakdown
 4. "Begin implementation" — proceed to the next phase
 
-## Task 1: Design Task Ordering
-
-Do NOT create horizontal plans. A horizontal plan looks like:
-
-- Task 1: Create all database models
-- Task 2: Create all service layer functions
-- Task 3: Create all API endpoints
-- Task 4: Create all frontend components
-
-Nothing is testable until the end.
-
-Instead, create **vertical / tracer bullet** ordering:
-
-- Task 1: Wire end-to-end with mock data (create the endpoint, return hardcoded data, render a placeholder in the UI)
-- Task 2: Add real data layer for the first acceptance criterion
-- Task 3: Add real logic for the second acceptance criterion
-- ...
+## Task 1: Generate Breakdown
 
-Each task touches all necessary layers of the stack and produces something independently testable.
+Spawn the task-breakdown agent in write mode:
 
-Map each acceptance criterion from the spec to one or more tasks. Every AC must be covered.
-
-## Task 2: Write Task Files
+```
+Agent tool:
+subagent_type: "task-breakdown"
+prompt: "Break the spec at {spec_dir} into implementation task files. Read spec.md, research.md, and design.md for context. Write task files to {spec_dir}/tasks/."
+```
 
-Create `{spec_dir}/tasks/` directory. Write one file per task.
+## Task 2: Review Loop
 
-`{spec_dir}/tasks/T001-{short-name}.md`:
+Spawn a FRESH instance of task-breakdown in review mode:
 
-```yaml
----
-id: T001
-title: {Short descriptive title}
-status: pending
-depends_on: []
----
+```
+Agent tool:
+subagent_type: "task-breakdown"
+prompt: "Review the task breakdown at {spec_dir}. Read spec.md and all files in {spec_dir}/tasks/. Run the full 9-point checklist. Fix every issue you find directly in the task files. Return APPROVED if all checks pass, or ISSUES REMAINING for anything you cannot auto-fix."
 ```
 
-### Criterion
-{Which acceptance criterion this task addresses}
-
-### Files
-{Which files will be created or modified, with brief description of changes}
-
-### Verification
-{How to verify this task is done — specific test commands, manual checks}
+**If APPROVED**: Move to Task 3.
 
-### Implementation Notes
-{Patterns to follow, edge cases, things to watch out for}
+**If ISSUES REMAINING**: Spawn another fresh instance to review again. The previous reviewer already fixed what it could — the next reviewer may catch different things or resolve what the last one couldn't.
 
-Include testing in each task not as a separate task at the end. If a task adds a feature, the same task adds the test for that feature.
+If the loop runs more than 3 cycles without APPROVED, present the remaining issues to the user and ask how to proceed.
 
-## Task 3: Review Tasks
+## Task 3: Review With User
 
-Present the task breakdown to the user. For each task, show:
+Present the approved task breakdown. For each task, show:
 
-- What it does
-- Why it's in this order (the vertical reasoning)
+- What it does (the criterion)
+- Why it's in this order (the dependency reasoning)
 - How it can be independently verified
 
-Revise based on user feedback.
+Revise based on user feedback. If changes are substantial, re-run the review loop (Task 2).
 
 ## Task 4: Proceed