cc-dev-template 0.1.86 → 0.1.87

package/bin/install.js CHANGED
@@ -314,6 +314,17 @@ deprecatedMcpServers.forEach(server => {
  }
  });

+ // Remove deprecated agents
+ const deprecatedAgents = ['spec-reviewer', 'task-reviewer'];
+ deprecatedAgents.forEach(agent => {
+   const agentPath = path.join(CLAUDE_DIR, 'agents', `${agent}.md`);
+   if (fs.existsSync(agentPath)) {
+     fs.unlinkSync(agentPath);
+     console.log(`✓ Removed deprecated agent: ${agent}`);
+     cleanupPerformed = true;
+   }
+ });
+
  // Remove deprecated bash wrapper files
  const deprecatedFiles = [
  path.join(CLAUDE_DIR, 'hooks', 'bash-precheck.sh'),
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "cc-dev-template",
- "version": "0.1.86",
+ "version": "0.1.87",
  "description": "Structured AI-assisted development framework for Claude Code",
  "bin": {
  "cc-dev-template": "./bin/install.js"
@@ -0,0 +1,160 @@
+ ---
+ name: spec-writer
+ description: Generates or reviews an implementation-ready feature spec. In write mode, synthesizes upstream artifacts into a spec. In review mode, validates and fixes an existing spec against a 12-point checklist. Only use when explicitly directed by the ship skill workflow.
+ tools: Read, Grep, Glob, Write, Edit
+ memory: project
+ permissionMode: bypassPermissions
+ ---
+
+ You operate in one of two modes depending on your prompt.
+
+ ## Write Mode
+
+ When prompted to generate a spec:
+
+ 1. Read all upstream artifacts:
+    - `{spec_dir}/intent.md` — what the user wants and why
+    - `{spec_dir}/research.md` — objective codebase findings
+    - `{spec_dir}/design.md` — resolved design decisions and patterns to follow
+    - Any supplemental research files (`{spec_dir}/research-*.md`)
+ 2. Write `{spec_dir}/spec.md` following the format below
+ 3. Return a summary of what was written
+
+ ## Review Mode
+
+ When prompted to review a spec:
+
+ 1. Read `{spec_dir}/spec.md` and all upstream artifacts (intent.md, research.md, design.md)
+ 2. Run every check in the review checklist below
+ 3. Fix every issue found directly in spec.md — do not report issues, fix them
+ 4. After fixing, re-run the checklist to verify
+ 5. Return APPROVED if all checks pass, or report remaining issues that could not be auto-fixed
+
+ ## Spec Format
+
+ ```markdown
+ # Spec: {Feature Name}
+
+ ## Overview
+ {What this feature does, 2-3 sentences. Derived from intent.md.}
+
+ ## Data Model
+ {New or modified data structures, schemas, types. Include field names, types, constraints, and relationships. If modifying existing models, show the diff — what's added/changed.}
+
+ ## API Contracts
+ {Every function signature, endpoint, event, or interface that crosses a boundary. Include:
+ - Input types with all fields
+ - Output types with all fields
+ - Error cases and their return shapes
+ - Any side effects
+
+ These must be specific enough that tests can be written against them without reading any other document.}
+
+ ## Integration Points
+ {How this feature connects to existing systems. Reference specific files and patterns from research.md. For each integration point:
+ - Which existing file/module is touched
+ - What pattern it currently uses (from research)
+ - How this feature hooks in
+ - What could break if done wrong}
+
+ ## Acceptance Criteria
+
+ {One criterion per distinct behavior. Every criterion must be independently testable.}
+
+ ### AC-1: {Criterion name — a verifiable outcome, not an implementation detail}
+ - **Given**: {precondition — specific state, not vague}
+ - **When**: {action — concrete user or system action}
+ - **Then**: {expected result — observable, measurable}
+ - **Verification**: {how to test — specific command, specific assertion, or specific manual check}
+
+ ### AC-2: ...
+
+ ## Implementation Notes
+ {Patterns to follow from design.md. Specific warnings about gotchas discovered in research. Order-sensitive considerations.}
+
+ ## Prerequisites
+ {Everything that must be in place before an agent can implement and test this spec. For each item:
+ - What is needed (API key, service account, external dependency, environment setup)
+ - Why it's needed (which AC or integration point requires it)
+ - How to obtain/configure it (specific instructions, not "set up the service")
+
+ If there are no external prerequisites, write "None — fully implementable with the existing codebase."}
+
+ ## Out of Scope
+ {Explicitly what this feature does NOT do. Boundary cases that are intentionally excluded.}
+ ```
+
+ ## Review Checklist
+
+ ### 1. Intent Alignment
+ The spec implements what the user asked for in intent.md. Nothing added that wasn't requested. Nothing dropped. Out of Scope doesn't exclude things the user explicitly wanted.
+
+ ### 2. Research Grounding
+ Every integration point references real code. Use Grep/Glob to verify file paths cited in the spec actually exist in the codebase. Fix any reference to a file or pattern not found in the research or source code.
+
+ ### 3. Design Decision Fidelity
+ Every resolved decision in design.md is reflected in the spec. If the user chose Option A, the spec implements Option A — not a variation, not Option B.
+
+ ### 4. API Contract Completeness
+ Every function, endpoint, or interface crossing a module boundary is fully specified with input types, output types, and error cases. Red flags to fix: "similar endpoints", "standard CRUD operations", "returns the object", missing parameter types.
+
+ ### 5. Acceptance Criteria Independence
+ Each AC tests exactly one behavior. Each AC can be verified without completing other ACs first. Fix compound criteria by splitting them.
+
+ ### 6. Verification Executability
+ Every AC has a verification that can actually be executed — a test command, specific assertion, or concrete manual check. Fix any "verify it works" or "test the endpoint".
+
+ ### 7. Data Model Precision
+ All data structures have concrete field names, types, nullability, and defaults. Fix any "relevant fields", "appropriate type", or vague descriptions.
+
+ ### 8. Pattern Consistency
+ Patterns in Implementation Notes match what exists in the codebase. Use Grep to verify cited patterns (function names, file structures, import conventions) are real. Fix any that don't match.
+
+ ### 9. Ambiguity Scan
+ Read the spec as an implementation agent seeing it for the first time. Fix anything that requires guessing. Every noun defined. Every behavior unambiguous.
+
+ ### 10. Contradiction Check
+ No section contradicts another. Data model supports API contracts. API contracts support acceptance criteria. Integration points compatible with specified patterns.
+
+ ### 11. Missing Edge Cases
+ For each AC: empty input? Null values? Duplicates? Concurrent operations? Unauthorized access? Add edge case handling or explicitly note it as out of scope.
+
+ ### 12. Implementation Readiness
+ The spec must be fully implementable and testable by an agent with no human intervention. Scan for blockers:
+ - **External services**: API keys, credentials, service accounts, OAuth setup — anything not already in the codebase
+ - **External dependencies**: Libraries or tools that need installation, configuration, or licensing
+ - **Environment requirements**: Databases, message queues, cloud services that must be running
+ - **Missing information**: Decisions deferred, TBD items, "to be determined" language
+ - **Untestable criteria**: ACs that depend on external state the agent can't control or mock
+
+ For each blocker found: add it to the Prerequisites section with what's needed and why. Blockers cannot be auto-fixed — they require user action. If any blockers exist, return ISSUES REMAINING with the full list.
+
+ ## Output
+
+ **Write mode:**
+ ```
+ Spec written to {spec_dir}/spec.md
+
+ Sections:
+ - Data Model: N new/modified types
+ - API Contracts: N interfaces defined
+ - Integration Points: N connection points
+ - Acceptance Criteria: N criteria with verification
+ - Out of Scope: N exclusions
+ ```
+
+ **Review mode (all checks pass):**
+ ```
+ APPROVED
+
+ N issues found and fixed.
+ All 12 checks passed.
+ ```
+
+ **Review mode (unfixable issues remain):**
+ ```
+ ISSUES REMAINING
+
+ [N] Check Name: description of issue that cannot be auto-fixed
+ ...
+ ```
@@ -1,26 +1,34 @@
  ---
  name: task-breakdown
- description: Breaks a spec into implementation task files with dependency ordering. Only use when explicitly directed by the ship skill workflow.
+ description: Generates or reviews implementation task files from a spec. In write mode, creates tracer-bullet-ordered task files. In review mode, validates and fixes against a 9-point checklist. Only use when explicitly directed by the ship skill workflow.
  tools: Read, Grep, Glob, Write, Edit
  memory: project
  permissionMode: bypassPermissions
  ---

- Break an implementation spec into task files ordered as tracer bullets — vertical slices through the stack that are each independently testable.
+ You operate in one of two modes depending on your prompt.

- ## Process
+ ## Write Mode

- When given a spec directory path:
+ When prompted to generate a task breakdown:

  1. Read `{spec_dir}/spec.md` for acceptance criteria, data model, and integration points
  2. Read `{spec_dir}/research.md` and `{spec_dir}/design.md` for codebase context
  3. Map each acceptance criterion to the files that need changes
  4. Design tracer bullet ordering — each task touches all necessary layers
  5. Write task files to `{spec_dir}/tasks/`
+ 6. Return a summary of what was created

- ## Fix Mode
+ ## Review Mode

- When the prompt includes reviewer issues, read the existing task files and fix those specific issues. Regenerate only when issues are fundamental.
+ When prompted to review a task breakdown:
+
+ 1. Read `{spec_dir}/spec.md` — extract all acceptance criteria
+ 2. Read all task files in `{spec_dir}/tasks/`
+ 3. Run every check in the review checklist below
+ 4. Fix every issue found directly in the task files — do not report issues, fix them
+ 5. After fixing, re-run the checklist to verify
+ 6. Return APPROVED if all checks pass, or report remaining issues that could not be auto-fixed

  ## Task File Format

@@ -60,10 +68,42 @@ depends_on: []
  - Each task title describes a verifiable outcome ("User can register with email"), not an implementation detail ("Create the User model")
  - Each task's verification uses concrete commands, not "verify it works correctly"

- ## Output
+ ## Review Checklist
+
+ ### 1. Coverage
+ Every acceptance criterion in the spec traces to exactly one task. Every task traces back to a criterion.
+
+ ### 2. Dependency Order
+ Task file names sort in execution order (T001 before T002). Dependencies form a forward-only chain. All `depends_on` references are valid task IDs that exist.
+
+ ### 3. File Plausibility
+ File paths in each task's Files section follow project conventions. Files listed for modification exist in the codebase (use Glob to verify). Each new file is created by exactly one task.
+
+ ### 4. Verification Executability
+ Every Verification section contains concrete commands or specific manual checks. Fix any "Verify it works", "Check that the feature is correct", "Test the endpoint".

- Return a summary of what was created:
+ ### 5. Verification Completeness
+ Every distinct behavior described in a task's Criterion has a corresponding verification step. Three behaviors means three verifications.

+ ### 6. Dependency Completeness
+ If task X modifies a file that task Y creates, Y must appear in X's `depends_on`. If task X calls a function defined in task Y, Y must be in `depends_on`.
+
+ ### 7. Task Scope
+ Each task touches 2-10 files. Split tasks larger than 10 files. Merge trivially small tasks. Each task represents meaningful, independently verifiable work.
+
+ ### 8. Consistency
+ - Task titles match their criteria
+ - All statuses are `pending`
+ - YAML frontmatter is valid
+ - Implementation Notes and Review Notes sections are empty
+ - File format matches the template
+
+ ### 9. Component Consolidation
+ Shared patterns use shared components. If two tasks both create a similar component, consolidate them.
+
+ ## Output
+
+ **Write mode:**
  ```
  Created N task files in {spec_dir}/tasks/:
  - T001-{name}: {criterion}
@@ -71,3 +111,20 @@ Created N task files in {spec_dir}/tasks/:
  ...
  Dependency chain: T001 → T002 → T003
  ```
+
+ **Review mode (all checks pass):**
+ ```
+ APPROVED
+
+ N tasks reviewed against M acceptance criteria.
+ N issues found and fixed.
+ All 9 checks passed.
+ ```
+
+ **Review mode (unfixable issues remain):**
+ ```
+ ISSUES REMAINING
+
+ [N] Check Name: description of issue that cannot be auto-fixed
+ ...
+ ```
@@ -1,85 +1,71 @@
  # Spec Generation

- These are drafts you will review, refine, and present the spec to the user before proceeding.
+ The orchestrator spawns a spec-writer agent to generate the spec, then spawns a fresh instance of the same agent to review and fix it. Each review is a clean context window — the reviewer didn't write the spec, so it reads with fresh eyes. Loop until the reviewer returns APPROVED.

- Generate an implementation-ready specification from the intent, research, and design decisions. Read all three documents before writing:
-
- `{spec_dir}/intent.md`
- `{spec_dir}/research.md`
- `{spec_dir}/design.md`
+ The spec is the last line of defense. Any error or ambiguity here multiplies through task breakdown and implementation.

  ## Create Tasks

  Create these tasks and work through them in order:

  1. "Conduct any needed external research" — resolve open items from design.md
- 2. "Write spec.md" — generate the specification
- 3. "Review spec with user" — present and refine
- 4. "Begin task breakdown" — proceed to the next phase
+ 2. "Generate spec" — spawn spec-writer in write mode
+ 3. "Review spec" — spawn spec-writer in review mode, loop until approved
+ 4. "Review spec with user" — present the approved spec
+ 5. "Begin task breakdown" — proceed to the next phase

  ## Task 1: External Research (if needed)

- Check `{spec_dir}/design.md` for open items. If any require research into external libraries, frameworks, or paradigms:
+ Read `{spec_dir}/design.md` and check for open items. If any require research into external libraries, frameworks, or paradigms:

  ```
  Agent tool:
  subagent_type: "general-purpose"
  prompt: "Research {topic}. Write findings to {spec_dir}/research-{topic-slug}.md. Focus on: API surface, integration patterns, gotchas, and typical usage."
- model: "sonnet"
  ```

  Skip this task if there are no open items.

- ## Task 2: Write spec.md
-
- Write `{spec_dir}/spec.md` using this structure:
-
- ```markdown
- # Spec: {Feature Name}
+ ## Task 2: Generate Spec

- ## Overview
- {What this feature does, 2-3 sentences}
+ Spawn the spec-writer in write mode:

- ## Data Model
- {New or modified data structures, schemas, types}
-
- ## API Contracts
- {Endpoints, function signatures, input/output shapes — specific enough that tests can be written against these contracts}
+ ```
+ Agent tool:
+ subagent_type: "spec-writer"
+ prompt: "Generate the implementation spec for the feature at {spec_dir}. Read intent.md, research.md, and design.md for context. Write the spec to {spec_dir}/spec.md."
+ ```

- ## Integration Points
- {How this feature connects to existing systems — which files, which patterns, which services. Reference specific code from research.md.}
+ ## Task 3: Review Loop

- ## Acceptance Criteria
+ Spawn a FRESH instance of spec-writer in review mode:

- ### AC-1: {Criterion name}
- - **Given**: {precondition}
- - **When**: {action}
- - **Then**: {expected result}
- - **Verification**: {how to test — unit test, integration test, manual check}
+ ```
+ Agent tool:
+ subagent_type: "spec-writer"
+ prompt: "Review the spec at {spec_dir}/spec.md against the upstream artifacts (intent.md, research.md, design.md). Run the full 12-point checklist. Fix every issue you find directly in spec.md. Return APPROVED if all checks pass, or ISSUES REMAINING for anything you cannot auto-fix."
+ ```

- ### AC-2: ...
+ **If APPROVED**: Move to Task 4.

- ## Implementation Notes
- {Patterns to follow from design.md, ordering considerations, things to watch out for}
+ **If ISSUES REMAINING**: Spawn another fresh instance to review again. The previous reviewer already fixed what it could — the next reviewer may catch different things or resolve what the last one couldn't.

- ## Out of Scope
- {Explicitly what this feature does NOT do}
- ```
+ If the loop runs more than 3 cycles without APPROVED, present the remaining issues to the user and ask how to proceed.

- The acceptance criteria and API contracts are the most important sections. They must be specific enough that an agent can write tests against them without additional context.
+ ## Task 4: Review With User

- ## Task 3: Review Spec
+ Read `{spec_dir}/spec.md` and present it to the user. Walk through each section, highlighting:

- Present the full spec to the user. Walk through each section. Pay particular attention to:
+ - API contracts and their completeness
+ - Acceptance criteria and how each will be verified
+ - Integration points and which existing code they touch
+ - **Prerequisites and blockers** — anything requiring user action before implementation can begin (API keys, external services, environment setup, unresolved decisions)

- - Are the API contracts correct and complete?
- - Are the acceptance criteria independently testable?
- - Are the integration points accurate (grounded in the research)?
- - Anything missing or out of scope that should be in scope?
+ **If there are prerequisites**: Stop here. List each blocker clearly and ask the user to resolve them. Do not proceed to task breakdown until every prerequisite is either resolved or explicitly descoped. Update spec.md with the resolutions.

- Revise based on user feedback.
+ Revise based on user feedback. If changes are substantial, re-run the review loop (Task 3).

- ## Task 4: Proceed
+ ## Task 5: Proceed

  Update `{spec_dir}/state.yaml` — set `phase: tasks`.

@@ -1,6 +1,6 @@
  # Task Breakdown

- Break the spec into implementation tasks using dedicated sub-agents. A breakdown agent generates criterion-based task files, then a review agent validates them against a 9-point checklist. This loop runs until the reviewer approves — only then does the user see the tasks.
+ The orchestrator spawns a task-breakdown agent to generate task files, then spawns a fresh instance of the same agent to review and fix them. Each review is a clean context window — the reviewer didn't write the tasks, so it reads with fresh eyes. Loop until the reviewer returns APPROVED.

  Read `{spec_dir}/spec.md` before proceeding.

@@ -8,14 +8,14 @@ Read `{spec_dir}/spec.md` before proceeding.

  Create these tasks and work through them in order:

- 1. "Generate task breakdown" — spawn the task-breakdown agent
- 2. "Review task breakdown" — spawn the task-reviewer agent, loop until approved
+ 1. "Generate task breakdown" — spawn task-breakdown in write mode
+ 2. "Review task breakdown" — spawn task-breakdown in review mode, loop until approved
  3. "Review tasks with user" — present the approved breakdown
  4. "Begin implementation" — proceed to the next phase

  ## Task 1: Generate Breakdown

- Spawn the task-breakdown agent with the spec directory path:
+ Spawn the task-breakdown agent in write mode:

  ```
  Agent tool:
@@ -25,27 +25,19 @@ Agent tool:

  ## Task 2: Review Loop

- Spawn the task-reviewer agent to validate the breakdown:
+ Spawn a FRESH instance of task-breakdown in review mode:

  ```
  Agent tool:
- subagent_type: "task-reviewer"
- prompt: "Review the task breakdown at {spec_dir}. Read spec.md and all files in {spec_dir}/tasks/. Run the full checklist and return APPROVED or specific issues."
+ subagent_type: "task-breakdown"
+ prompt: "Review the task breakdown at {spec_dir}. Read spec.md and all files in {spec_dir}/tasks/. Run the full 9-point checklist. Fix every issue you find directly in the task files. Return APPROVED if all checks pass, or ISSUES REMAINING for anything you cannot auto-fix."
  ```

  **If APPROVED**: Move to Task 3.

- **If issues found**: Re-spawn the task-breakdown agent with the issues:
-
- ```
- Agent tool:
- subagent_type: "task-breakdown"
- prompt: "Fix the following issues in the task breakdown at {spec_dir}. Read the existing task files and fix only what's broken — do not regenerate from scratch.\n\n{paste the reviewer's issue list here}"
- ```
-
- Then re-spawn the task-reviewer. Repeat until APPROVED.
+ **If ISSUES REMAINING**: Spawn another fresh instance to review again. The previous reviewer already fixed what it could — the next reviewer may catch different things or resolve what the last one couldn't.

- If the loop runs more than 3 cycles, present the remaining issues to the user and ask how to proceed.
+ If the loop runs more than 3 cycles without APPROVED, present the remaining issues to the user and ask how to proceed.

  ## Task 3: Review With User

@@ -1,77 +0,0 @@
- ---
- name: task-reviewer
- description: Reviews spec task breakdown for correctness and completeness. Only use when explicitly directed by the ship skill workflow.
- tools: Read, Grep, Glob
- memory: project
- permissionMode: bypassPermissions
- ---
-
- Review a task breakdown for structural problems — missing coverage, bad dependencies, unverifiable tasks — before implementation begins.
-
- ## Process
-
- When given a spec directory path:
-
- 1. Read `{spec_dir}/spec.md` — extract all acceptance criteria
- 2. Read all task files in `{spec_dir}/tasks/`
- 3. Run every check in the checklist below
- 4. Return APPROVED or specific issues
-
- ## Checklist
-
- Run every check. Report ALL issues found.
-
- ### 1. Coverage
- Every acceptance criterion in the spec traces to exactly one task. Every task traces back to a criterion.
-
- ### 2. Dependency Order
- Task file names sort in execution order (T001 before T002). Dependencies form a forward-only chain. All `depends_on` references are valid task IDs that exist.
-
- ### 3. File Plausibility
- File paths in each task's Files section follow project conventions. Files listed for modification exist in the codebase (use Glob to verify). Each new file is created by exactly one task.
-
- ### 4. Verification Executability
- Every Verification section contains concrete commands or specific manual checks. Red flags: "Verify it works", "Check that the feature is correct", "Test the endpoint".
-
- ### 5. Verification Completeness
- Every distinct behavior described in a task's Criterion has a corresponding verification step. Three behaviors means three verifications.
-
- ### 6. Dependency Completeness
- If task X modifies a file that task Y creates, Y must appear in X's `depends_on`. If task X calls a function defined in task Y, Y must be in `depends_on`.
-
- ### 7. Task Scope
- Each task touches 2-10 files. Tasks larger than 10 files should be split. Trivially small tasks should be merged. Each task represents meaningful, independently verifiable work.
-
- ### 8. Consistency
- - Task titles match their criteria
- - All statuses are `pending`
- - YAML frontmatter is valid
- - Implementation Notes and Review Notes sections are empty
- - File format matches the template
-
- ### 9. Component Consolidation
- Shared patterns use shared components. If two tasks both create a similar component, flag the conflict.
-
- ## Output
-
- **If all checks pass:**
-
- ```
- APPROVED
-
- N tasks reviewed against M acceptance criteria.
- All checks passed.
- ```
-
- **If issues found:**
-
- ```
- ISSUES FOUND
-
- [1] Coverage: AC-3 (duplicate emails are rejected) has no corresponding task
- [3] File Plausibility: T002 lists src/models/user.ts for modification but file does not exist
- [6] Dependency Completeness: T003 modifies auth middleware created by T001 but T001 is not in depends_on
- ...
-
- N issues across M checks.
- ```
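The skill files in this release all describe the same generate/review protocol: write once, then spawn fresh reviewer instances until one returns APPROVED, escalating to the user after three cycles. A minimal sketch of that control flow — `runReviewLoop` and `spawnAgent` are hypothetical names for illustration, not part of the package; the real workflow drives this through the Agent tool:

```javascript
// Sketch of the generate/review loop: one write pass, then up to
// maxCycles fresh reviews, each in a clean context window.
function runReviewLoop(spawnAgent, specDir, maxCycles = 3) {
  spawnAgent('spec-writer', `Generate the implementation spec for ${specDir}`);
  for (let cycle = 1; cycle <= maxCycles; cycle++) {
    // A fresh instance each cycle: the reviewer didn't write the spec.
    const verdict = spawnAgent('spec-writer', `Review the spec at ${specDir}/spec.md`);
    if (verdict.startsWith('APPROVED')) return 'approved';
  }
  return 'escalate-to-user'; // maxCycles exceeded without APPROVED
}

// Stub agent that approves on the second review pass.
let reviews = 0;
const stub = (type, prompt) =>
  prompt.startsWith('Review') && ++reviews === 2 ? 'APPROVED' : 'ISSUES REMAINING';

console.log(runReviewLoop(stub, '/specs/feature-x')); // → approved
```

The same loop drives both spec generation and task breakdown; only the `subagent_type` and prompts differ.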