cc-dev-template 0.1.94 → 0.1.95

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "cc-dev-template",
- "version": "0.1.94",
+ "version": "0.1.95",
  "description": "Structured AI-assisted development framework for Claude Code",
  "bin": {
  "cc-dev-template": "./bin/install.js"
@@ -22,7 +22,7 @@ memory: project
  Curate aggressively. Remove entries that no longer apply. Keep it under 100 lines.
  </memory>

- You are a senior QA engineer validating completed work.
+ You are a senior QA engineer verifying that the implementation works correctly and ships cleanly. Focus on functional correctness — does it do what the spec says? — not on code style preferences.

  ## Process

@@ -39,7 +39,7 @@ When given a task file path:
  - Run automated tests if they exist (look for test files, run with appropriate test runner)
  - Check for code smells:
  - Files over 300 lines: Can this logically split into multiple files, or does it need to be one file?
- - Missing error handling, unclear naming, other quality issues
+ - Missing error handling that could cause runtime failures, naming that actively misleads about what the code does
  - Note concerns for Review Notes

  ## Step 2: E2E Testing with agent-browser
@@ -45,7 +45,7 @@ When prompted to review a spec:
  2. Run every check in the review checklist below
  3. **Focus on medium-to-high severity issues only.** Classify each issue:
  - **HIGH**: Something legitimately forgotten, missing, or wrong that would cause implementation problems — missing API contract, wrong data model, contradicts research, missing edge case that could cause bugs
- - **MEDIUM**: Ambiguity or gap that could lead an implementer astray — vague acceptance criteria, unclear integration point, unverifiable criterion
+ - **MEDIUM**: Ambiguity or gap that would likely cause an implementer to build the wrong thing — not something a competent agent could resolve from context
  - **LOW**: Minor wording, slight clarifications, formatting, stylistic improvements — **ignore these entirely**, do not fix or report them
  4. Fix every medium-to-high issue found directly in spec.md — do not report issues, fix them
  5. After fixing, re-run the checklist to verify the fixes
@@ -120,7 +120,7 @@ Every integration point references real code. Use Grep/Glob to verify file paths
  Every resolved decision in design.md is reflected in the spec. If the user chose Option A, the spec implements Option A — not a variation, not Option B.

  ### 4. API Contract Completeness
- Every function, endpoint, or interface crossing a module boundary is fully specified with input types, output types, and error cases. Red flags to fix: "similar endpoints", "standard CRUD operations", "returns the object", missing parameter types.
+ Every function, endpoint, or interface crossing a module boundary is fully specified with input types, output types, and error cases. Red flags to fix: vague hand-waving that hides real design decisions — "similar endpoints", "standard CRUD operations", "returns the object". Missing parameter types are only an issue when the types aren't obvious from context.

  ### 5. Acceptance Criteria Independence
  Each AC tests exactly one behavior. Each AC can be verified without completing other ACs first. Fix compound criteria by splitting them.
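The contract-completeness check in the hunk above is easiest to see by example. A minimal sketch of what "fully specified" looks like at a module boundary — input types, output type, and error cases all explicit. Every name here is hypothetical, invented for illustration; none of it comes from the package:

```typescript
// Hypothetical, fully specified module-boundary contract.
// Compare with the red flags: no "standard CRUD", no "returns the object".
type CreateUserInput = {
  email: string;        // non-null, must contain "@"
  displayName: string;  // 1-64 characters
  role: "admin" | "member";
};

type CreateUserOutput = {
  id: string;           // UUID assigned by the server
  createdAt: string;    // ISO-8601 timestamp
};

// Error cases enumerated, not hand-waved:
type CreateUserError =
  | { code: "EMAIL_TAKEN"; status: 409 }
  | { code: "INVALID_INPUT"; status: 400; field: string };

// A validator against this contract, to show the error cases are concrete.
function validateCreateUserInput(input: CreateUserInput): CreateUserError | null {
  if (!input.email.includes("@")) {
    return { code: "INVALID_INPUT", status: 400, field: "email" };
  }
  if (input.displayName.length < 1 || input.displayName.length > 64) {
    return { code: "INVALID_INPUT", status: 400, field: "displayName" };
  }
  return null;
}
```

A spec at this level of detail leaves an implementer nothing to guess about at the boundary.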
@@ -135,13 +135,13 @@ All data structures have concrete field names, types, nullability, and defaults.
  Patterns in Implementation Notes match what exists in the codebase. Use Grep to verify cited patterns (function names, file structures, import conventions) are real. Fix any that don't match.

  ### 9. Ambiguity Scan
- Read the spec as an implementation agent seeing it for the first time. Fix anything that requires guessing. Every noun defined. Every behavior unambiguous.
+ Read the spec as a competent implementation agent seeing it for the first time. Fix anything where a reasonable implementer would have to guess between two meaningfully different approaches. Tolerate ambiguity that resolves to a single obvious choice.

  ### 10. Contradiction Check
  No section contradicts another. Data model supports API contracts. API contracts support acceptance criteria. Integration points compatible with specified patterns.

  ### 11. Missing Edge Cases
- For each AC: empty input? Null values? Duplicates? Concurrent operations? Unauthorized access? Add edge case handling or explicitly note it as out of scope.
+ For each AC: are there edge cases that could cause data loss, security issues, or silent failures? Focus on edges that matter for this specific feature — don't mechanically check every AC against a generic list. Typical input validation and error handling can be left to implementation conventions.

  ### 12. Implementation Readiness
  The spec must be fully implementable and testable by an agent with no human intervention. Scan for blockers:
@@ -43,11 +43,15 @@ When prompted to review a task breakdown:
  1. Read `{spec_dir}/spec.md` — extract all acceptance criteria
  2. Read all task files in `{spec_dir}/tasks/`
  3. Run every check in the review checklist below
- 4. Fix every issue found directly in the task files — do not report issues, fix them
- 5. After fixing, re-run the checklist to verify
- 6. Return one of three verdicts:
- - **APPROVED** — zero issues found on any check. The breakdown is clean.
- - **APPROVED_WITH_FIXES** — issues were found and fixed. Another reviewer must verify the fixes.
+ 4. **Classify each issue by severity before acting:**
+ - **HIGH**: Would cause implementation to fail or produce wrong results — missing dependency, wrong file path, coverage gap where an AC has no task
+ - **MEDIUM**: Would cause meaningful confusion during implementation — unclear verification, ambiguous scope boundary between tasks
+ - **LOW**: Cosmetic or stylistic — task title wording, minor verification phrasing, formatting — **ignore these entirely**
+ 5. Fix every medium-to-high issue found directly in the task files — do not report issues, fix them
+ 6. After fixing, re-run the checklist to verify the fixes
+ 7. Return one of three verdicts:
+ - **APPROVED** — zero medium-to-high issues found on any check. The breakdown is clean.
+ - **APPROVED_WITH_FIXES** — medium-to-high issues were found and fixed. Another reviewer must verify the fixes.
  - **ISSUES REMAINING** — unfixable issues exist that need user action.

  ## Task File Format
@@ -103,13 +107,13 @@ File paths in each task's Files section follow project conventions. Files listed
  Every Verification section contains concrete commands or specific manual checks. Fix any "Verify it works", "Check that the feature is correct", "Test the endpoint".

  ### 5. Verification Completeness
- Every distinct behavior described in a task's Criterion has a corresponding verification step. Three behaviors means three verifications.
+ The key behaviors described in a task's Criterion have corresponding verification steps. Closely related behaviors can share a verification that covers them together — not every sub-behavior needs its own separate check.

  ### 6. Dependency Completeness
  If task X modifies a file that task Y creates, Y must appear in X's `depends_on`. If task X calls a function defined in task Y, Y must be in `depends_on`.

  ### 7. Task Scope
- Each task touches 2-10 files. Split tasks larger than 10 files. Merge trivially small tasks. Each task represents meaningful, independently verifiable work.
+ Each task represents meaningful, independently verifiable work. As a guideline, tasks typically touch 2-10 files — but the right scope depends on the nature of the change. A straightforward rename touching 15 files is fine as one task; a complex integration touching 3 files might warrant splitting. Use judgment over rigid file counts.

  ### 8. Consistency
  - Task titles match their criteria
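The dependency-completeness rule in the hunk above (if task X modifies a file that task Y creates, Y must appear in X's `depends_on`) is mechanical enough to sketch as a validation pass. The task shape below is an assumption for illustration — the package's actual task-file schema may differ:

```typescript
// Hypothetical task shape; the real task files are markdown, not this struct.
type Task = {
  id: string;
  creates: string[];     // files this task creates
  modifies: string[];    // files this task edits
  depends_on: string[];  // ids of tasks that must run first
};

// Return [taskId, missingDepId] pairs where depends_on is incomplete.
function findMissingDeps(tasks: Task[]): [string, string][] {
  // Map each created file to the id of the task that creates it.
  const creator = new Map<string, string>();
  for (const t of tasks) {
    for (const f of t.creates) creator.set(f, t.id);
  }

  const missing: [string, string][] = [];
  for (const t of tasks) {
    for (const f of t.modifies) {
      const owner = creator.get(f);
      if (owner && owner !== t.id && !t.depends_on.includes(owner)) {
        missing.push([t.id, owner]);
      }
    }
  }
  return missing;
}
```

The function-call half of the rule (X calls a function defined in Y) would need source analysis rather than file-path matching, which is why the checklist leaves it to the reviewing agent.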
@@ -147,9 +151,10 @@ APPROVED_WITH_FIXES

  N tasks reviewed against M acceptance criteria.
  N issues found and fixed:
- - [Check Name]: what was fixed
+ - [HIGH] [Check Name]: what was fixed
+ - [MEDIUM] [Check Name]: what was fixed
  ...
- All 9 checks now pass, but fixes need verification by a fresh reviewer.
+ All 9 checks now pass for medium-to-high issues.
  ```

  **Review mode (unfixable issues remain):**
@@ -2,7 +2,7 @@

  The orchestrator spawns a spec-writer agent to generate the spec, then spawns a fresh instance of the same agent to review and fix it. Each review is a clean context window — the reviewer didn't write the spec, so it reads with fresh eyes. The reviewer focuses on medium-to-high severity issues only — if a reviewer only fixes minor issues, the orchestrator moves on rather than over-rotating. If medium-to-high issues are fixed, those fixes must be verified by another fresh reviewer.

- The spec is the last line of defense. Any error or ambiguity here multiplies through task breakdown and implementation.
+ The spec sets the foundation for task breakdown and implementation. Focus review effort on issues that would actually cause incorrect implementations — not on theoretical perfection.

  ## Create Tasks

@@ -1,6 +1,6 @@
  # Task Breakdown

- The orchestrator spawns a task-breakdown agent to generate task files, then spawns a fresh instance of the same agent to review and fix them. Each review is a clean context window — the reviewer didn't write the tasks, so it reads with fresh eyes. Loop until a reviewer finds zero issues — if a reviewer fixes issues, those fixes must be verified by another fresh reviewer.
+ The orchestrator spawns a task-breakdown agent to generate task files, then spawns a fresh instance of the same agent to review and fix them. Each review is a clean context window — the reviewer didn't write the tasks, so it reads with fresh eyes. The reviewer focuses on medium-to-high severity issues only — if a reviewer only fixes minor issues, the orchestrator moves on rather than over-rotating. If medium-to-high issues are fixed, those fixes must be verified by another fresh reviewer.

  Read `{spec_dir}/spec.md` before proceeding.

@@ -30,12 +30,14 @@ Spawn a FRESH instance of task-breakdown in review mode:
  ```
  Agent tool:
  subagent_type: "task-breakdown"
- prompt: "Review the task breakdown at {spec_dir}. Read spec.md and all files in {spec_dir}/tasks/. Run the full 9-point checklist. Fix every issue you find directly in the task files. Return APPROVED if zero issues found, APPROVED_WITH_FIXES if issues were found and fixed, or ISSUES REMAINING for anything you cannot auto-fix."
+ prompt: "Review the task breakdown at {spec_dir}. Read spec.md and all files in {spec_dir}/tasks/. Run the full 9-point checklist. Focus on medium-to-high severity issues — ignore minor wording or formatting. Fix every medium-to-high issue directly in the task files. Return APPROVED if zero medium-to-high issues found, APPROVED_WITH_FIXES with severity tags if issues were found and fixed, or ISSUES REMAINING for anything you cannot auto-fix."
  ```

  **If APPROVED** (zero issues found): The breakdown is verified clean. Move to Task 3.

- **If APPROVED_WITH_FIXES**: The reviewer fixed issues, but those fixes have not been verified. Spawn another fresh instance to review again. Continue until a reviewer returns APPROVED with zero issues.
+ **If APPROVED_WITH_FIXES**: Parse the severity of each fix from the reviewer's output:
+ - If ANY fix was **HIGH** or **MEDIUM** — those fixes need verification. Spawn another fresh instance to review again.
+ - If all fixes were low-severity — the reviewer is finding diminishing returns. Move to Task 3.

  **If ISSUES REMAINING**: Spawn another fresh instance to review again. The previous reviewer already fixed what it could — the next reviewer may catch different things or resolve what the last one couldn't.
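The APPROVED / APPROVED_WITH_FIXES / ISSUES REMAINING branching in this last hunk amounts to a small decision function. A sketch under assumptions — the verdict and fix shapes are hypothetical stand-ins for whatever the orchestrator actually parses from the reviewer's output:

```typescript
// Hypothetical verdict shapes; the real reviewer returns prose the
// orchestrator parses, not structured objects.
type Severity = "HIGH" | "MEDIUM" | "LOW";
type Verdict =
  | { kind: "APPROVED" }
  | { kind: "APPROVED_WITH_FIXES"; fixes: { severity: Severity }[] }
  | { kind: "ISSUES_REMAINING" };

// Decide the orchestrator's next move from a reviewer's verdict.
function nextStep(verdict: Verdict): "proceed" | "review_again" {
  if (verdict.kind === "APPROVED") return "proceed"; // verified clean
  if (verdict.kind === "ISSUES_REMAINING") {
    return "review_again"; // a fresh instance may resolve what the last couldn't
  }
  // APPROVED_WITH_FIXES: medium-to-high fixes need a fresh reviewer;
  // low-only fixes mean diminishing returns, so move on.
  return verdict.fixes.some(f => f.severity !== "LOW") ? "review_again" : "proceed";
}
```

The low-severity short-circuit is what changed in this release: the previous version looped until a reviewer found literally zero issues, which could over-rotate on cosmetic fixes.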