cc-dev-template 0.1.55 → 0.1.58

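This diff should be reproducible locally with npm's built-in diff command: `npm diff --diff=cc-dev-template@0.1.55 --diff=cc-dev-template@0.1.58`.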
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "cc-dev-template",
- "version": "0.1.55",
+ "version": "0.1.58",
  "description": "Structured AI-assisted development framework for Claude Code",
  "bin": {
  "cc-dev-template": "./bin/install.js"
@@ -16,6 +16,6 @@ Then explore:

  ## When to Move On

- Move to `references/step-2-ui-ux.md` when:
+ Move to `references/step-2-ideation.md` when:
  - The core problem and user goal are clear
  - Success criteria are understood at a high level
@@ -0,0 +1,59 @@
+ # Step 2: Ideation
+
+ Before designing a solution, explore the solution space. This step prevents premature convergence on the first idea that comes to mind.
+
+ ## Determine Mode
+
+ Use AskUserQuestion to ask:
+
+ > "Do you already have a clear approach in mind, or would you like to explore different options first?"
+
+ **Options:**
+ - **I know my approach** → Skip to `references/step-3-ui-ux.md` (or step-4-deep-dive.md if no UI)
+ - **Let's explore options** → Continue with brainstorming below
+
+ ## Hybrid Brainstorming
+
+ Research shows that combining human and AI ideas produces more original solutions than either alone. The key: get human ideas first, before AI suggestions anchor their thinking.
+
+ ### 1. Collect User Ideas First
+
+ Use AskUserQuestion:
+
+ > "Before I suggest anything - what approaches have you been considering? Even rough or half-formed ideas are valuable."
+
+ Let them share freely. Don't evaluate yet. The goal is to capture their independent thinking before AI ideas influence it.
+
+ ### 2. Generate AI Alternatives
+
+ Now generate 3-4 different approaches to the same problem. These should:
+ - Include options the user didn't mention
+ - Vary meaningfully in architecture, complexity, or tradeoffs
+ - Not just be variations on the user's ideas
+
+ Frame it as: "Let me add some alternatives you might not have considered..."
+
+ ### 3. Diversity Check
+
+ Review all ideas (user's and yours). Ask yourself:
+ - Are these actually different, or variations of the same approach?
+ - What's the boldest option here?
+ - Can any ideas be combined into something better?
+
+ If the options feel too similar, push for a more divergent alternative.
+
+ ### 4. Select or Combine
+
+ Present all approaches with tradeoffs. Use AskUserQuestion:
+
+ > "Looking at these together, which direction feels right? Or should we combine elements from multiple approaches?"
+
+ Document the chosen approach and why before proceeding.
+
+ ## When to Move On
+
+ Proceed to `references/step-3-ui-ux.md` when:
+ - An approach has been selected (or user chose to skip brainstorming)
+ - The rationale for the choice is understood
+
+ If the feature has no user interface, skip to `references/step-4-deep-dive.md`.
@@ -1,6 +1,6 @@
- # Step 2: UI/UX Design
+ # Step 3: UI/UX Design

- If the feature has no user interface, skip to `references/step-3-deep-dive.md`.
+ If the feature has no user interface, skip to `references/step-4-deep-dive.md`.

  ## Determine Design Direction

@@ -67,7 +67,7 @@ Format as simple numbered steps under each flow name.

  ## When to Move On

- Proceed to `references/step-3-deep-dive.md` when:
+ Proceed to `references/step-4-deep-dive.md` when:
  - Design direction is agreed upon
  - Wireframes exist for primary screens
  - User has confirmed the layout approach
@@ -1,4 +1,4 @@
- # Step 3: Deep Dive
+ # Step 4: Deep Dive

  Cover all specification areas through conversation. Update `docs/specs/<name>/spec.md` incrementally as information emerges.

@@ -27,6 +27,19 @@ Example: To understand auth integration:

  No assumptions. If you don't know how something works, send an Explorer to find out.

+ ### File Landscape
+
+ Once the feature is understood, identify concrete file paths. Ask an Explorer:
+
+ > "To implement [this feature], what files would need to be created or modified? Give me concrete file paths."
+
+ Capture:
+ - **Files to create**: New files with full paths (e.g., `src/models/notification.ts`)
+ - **Files to modify**: Existing files that need changes (e.g., `src/routes/index.ts`)
+ - **Directory conventions**: Where each type of code lives in this project
+
+ This becomes the File Landscape section of the spec, which spec-to-tasks uses directly.
+
  ### Data Model
  - Entities and relationships
  - Constraints (required fields, validations, limits)
@@ -67,6 +80,14 @@ Write to `docs/specs/<name>/spec.md` with this structure:
  - External: [APIs, services, libraries]
  - Data flows: [in/out]

+ ## File Landscape
+
+ ### Files to Create
+ - [path/to/new-file.ts]: [purpose]
+
+ ### Files to Modify
+ - [path/to/existing-file.ts]: [what changes]
+
  ## Data Model
  [Entities, relationships, constraints]

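A hypothetical filled-in File Landscape, continuing the notification example used earlier in this file:

```
## File Landscape

### Files to Create
- src/models/notification.ts: notification entity and persistence
- src/services/notificationService.ts: create/read notifications

### Files to Modify
- src/routes/index.ts: register the notifications route
```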
@@ -79,6 +100,7 @@ Write to `docs/specs/<name>/spec.md` with this structure:

  ## Acceptance Criteria
  - [ ] [Testable requirement]
+ **Verify:** [verification method]

  ## Blockers
  - [ ] [Blocker]: [what's needed]
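For example, a filled-in entry pairing a criterion with its verification method might look like this (the criterion and test command are hypothetical):

```
## Acceptance Criteria
- [ ] User can register with email
  **Verify:** npm test -- src/routes/register.test.ts
```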
@@ -86,4 +108,4 @@ Write to `docs/specs/<name>/spec.md` with this structure:

  ## When to Move On

- Move to `references/step-4-research-needs.md` when all areas have been covered and the spec document is substantially complete.
+ Move to `references/step-5-research-needs.md` when all areas have been covered and the spec document is substantially complete.
@@ -1,4 +1,4 @@
- # Step 4: Identify Research Needs
+ # Step 5: Identify Research Needs

  Before finalizing, determine if implementation requires unfamiliar paradigms.

@@ -40,11 +40,11 @@ Wait for research to complete before continuing. The research output goes to `do

  ## If No Research Needed

- State that all paradigms have existing examples in the codebase. Proceed to `references/step-5-verification.md`.
+ State that all paradigms have existing examples in the codebase. Proceed to `references/step-6-verification.md`.

  ## When to Move On

- Proceed to `references/step-5-verification.md` when:
+ Proceed to `references/step-6-verification.md` when:
  - All new paradigms have been researched, OR
  - User confirmed no research is needed, OR
  - All patterns have existing codebase examples
@@ -1,4 +1,4 @@
- # Step 5: Verification Planning
+ # Step 6: Verification Planning

  Every acceptance criterion needs a specific, executable verification method. The goal: autonomous implementation with zero ambiguity about whether something works.

@@ -71,4 +71,4 @@ The standard: if the agent executes the verification and it passes, the feature

  ## When to Move On

- Proceed to `references/step-6-finalize.md` when every acceptance criterion has a verification method and the user agrees each method proves the criterion works.
+ Proceed to `references/step-7-finalize.md` when every acceptance criterion has a verification method and the user agrees each method proves the criterion works.
@@ -1,4 +1,4 @@
- # Step 6: Finalize
+ # Step 7: Finalize

  Review the spec for completeness and soundness, then hand off.

@@ -19,11 +19,20 @@ If no specs found in git history, check `docs/specs/` for any spec files and ask
  ## Verify the Spec

  Read the spec file. Confirm it contains enough detail to generate implementation tasks:
- - Clear requirements or user stories
- - Technical approach or architecture decisions
- - Defined scope

- If the spec is incomplete or unclear, inform the user and ask if they want to complete the spec first.
+ **Required:**
+ - Acceptance Criteria with verification methods (each criterion becomes a task)
+ - Clear behavior descriptions
+
+ **Expected (from spec-interview):**
+ - File Landscape section listing files to create/modify
+ - Integration points and data model
+
+ **If missing acceptance criteria or verification methods:**
+ Inform the user: "This spec doesn't have acceptance criteria with verification methods. Each task needs a clear pass/fail test. Would you like to add them now, or run spec-interview to complete the spec?"
+
+ **If missing file landscape:**
+ Proceed to step 2 where we'll discover file paths via exploration.

  ## Next Step

@@ -1,25 +1,43 @@
- # Step 2: Explore the Codebase
+ # Step 2: Verify File Landscape

- Before generating tasks, understand the codebase structure to determine accurate file paths.
+ The spec should already contain a File Landscape section from the interview process. This step verifies and supplements it.

- ## Launch Exploration Subagent
+ ## Check the Spec

- Use a subagent to explore the codebase. The subagent should identify:
+ Read the spec's File Landscape section. It should list:
+ - **Files to create**: New files with paths and purposes
+ - **Files to modify**: Existing files that need changes

- - **Existing patterns**: Where do models, services, routes, and components live?
- - **Naming conventions**: How are files and directories named?
- - **Files to modify**: Which existing files need changes for this feature?
- - **Files to create**: What new files are needed and where should they go?
+ If the File Landscape section is missing or incomplete, use an Explorer to fill the gaps:

- Provide the subagent with the spec content so it understands what is being implemented.
+ > "To implement [feature from spec], what files would need to be created or modified? Give me concrete file paths."

- ## Capture Findings
+ ## Map Files to Acceptance Criteria

- The subagent should return:
- - A list of directories and their purposes
- - Specific file paths for each aspect of the feature
- - Any existing code that the new feature should integrate with
+ For each acceptance criterion in the spec, identify which files are involved. This mapping drives task generation.
+
+ Example:
+ ```
+ "User can receive notifications"
+ → src/models/notification.ts
+ → src/services/notificationService.ts
+ → src/services/notificationService.test.ts
+
+ "User can view notification list"
+ → src/routes/notifications.ts
+ → src/components/NotificationList.tsx
+ ```
+
+ If a criterion's files aren't clear from the spec, ask an Explorer:
+
+ > "What files would be involved in making this criterion pass: [criterion]?"
+
+ ## Output
+
+ You should now have:
+ 1. Complete list of files to create and modify
+ 2. Each acceptance criterion mapped to its files

  ## Next Step

- Once exploration is complete and file paths are known, read `references/step-3-generate.md`.
+ Once files are mapped to criteria, read `references/step-3-generate.md`.
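As a minimal sketch, the mapping this step produces could be represented as follows; the TypeScript shape and all values below are illustrative, not part of the package, and mirror the notification example above:

```
// Hypothetical shape for the criterion-to-files mapping built in this step.
interface CriterionMapping {
  criterion: string;   // acceptance criterion, verbatim from the spec
  files: string[];     // every file that must exist or change for it to pass
  dependsOn: string[]; // criteria that must pass first
}

const mapping: CriterionMapping[] = [
  {
    criterion: "User can receive notifications",
    files: [
      "src/models/notification.ts",
      "src/services/notificationService.ts",
      "src/services/notificationService.test.ts",
    ],
    dependsOn: [],
  },
  {
    criterion: "User can view notification list",
    files: ["src/routes/notifications.ts", "src/components/NotificationList.tsx"],
    dependsOn: ["User can receive notifications"],
  },
];
```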
@@ -1,31 +1,65 @@
- # Step 3: Generate Task Manifest
+ # Step 3: Generate Task Files

- Create the task manifest based on the spec and codebase exploration.
+ Create individual task files based on the spec and codebase exploration.

  ## Task Principles

- **Atomic**: Each task is a single, focused change — one model, one endpoint, one component, or one test file. A task should take an AI agent one focused session to complete.
+ **Criterion-based**: Each task corresponds to one acceptance criterion from the spec. A task includes all files needed to make that criterion pass. Do NOT split by file or architectural layer.

- **Ordered**: Sequence tasks so each can be completed without blocking. Tasks with no dependencies can share the same `depends_on` value to signal parallel execution.
+ **Verifiable**: Every task has a verification method from the spec. A coder implements, a QA agent verifies, and the loop continues until it passes.

- **Concrete file paths**: Use the file paths discovered in Step 2. Every task specifies which files it touches.
+ **Ordered**: Name files so they sort in dependency order (T001, T002, etc.). Tasks with no dependencies on each other can be worked in parallel.

- **Declarative**: Tasks describe WHAT to implement, not HOW. Implementation details live in the spec.
+ **Concrete file paths**: Use the file paths discovered in Step 2. Every task lists all files it touches.

- ## Generate the Manifest
+ ## Deriving Tasks from Acceptance Criteria

- Write to `docs/specs/<name>/tasks.yaml` using the template in `templates/tasks.yaml`.
+ Each acceptance criterion in the spec becomes one task.

- Derive tasks from the spec:
- - Break each requirement or feature section into atomic units
- - Sequence by dependency (data layer, then business logic, then API, then integration)
- - Reference specific spec sections in task descriptions where helpful
+ **For each criterion, determine:**
+ 1. What files must exist or change for this to pass?
+ 2. What's the verification method from the spec?
+ 3. What other criteria must pass first? (dependencies)

- ## Review with User
+ **Grouping rules:**
+ - If two criteria share foundational work (e.g., both need a model), the first task creates the foundation, later tasks build on it
+ - If a criterion is too large (touches 10+ files), flag it — the spec may need refinement
+ - Small tasks are fine; artificial splits are not

- Present:
- 1. Number of tasks and estimated scope
- 2. Task titles in dependency order
- 3. Offer to show full YAML or proceed to implementation
+ **Anti-patterns to avoid:**
+ - "Create the User model" — no verifiable outcome
+ - "Add service layer" — implementation detail, not behavior
+ - "Set up database schema" — means to an end, not the end

- Save the manifest to `docs/specs/<name>/tasks.yaml`.
+ **Good task boundaries:**
+ - "User can register with email" — verifiable, coherent
+ - "Duplicate emails are rejected" — verifiable, coherent
+ - "Dashboard shows notification count" — verifiable, coherent
+
+ ## Validate Criteria Quality
+
+ Before generating tasks, verify each acceptance criterion has:
+ - A specific, testable condition
+ - A verification method (test command, agent-browser script, or query)
+
+ If criteria are vague or missing verification, stop and ask:
+ > "The criterion '[X]' doesn't have a clear verification method. Should I suggest one, or would you like to refine the spec first?"
+
+ ## Generate Task Files
+
+ Create a `tasks/` directory inside the spec folder:
+
+ ```
+ docs/specs/<name>/
+ ├── spec.md
+ └── tasks/
+     ├── T001-<slug>.md
+     ├── T002-<slug>.md
+     └── T003-<slug>.md
+ ```
+
+ Use the template in `templates/task.md` for each file. Name files in dependency order so alphabetical sorting reflects execution order.
+
+ ## Next Step
+
+ Once task files are generated, read `references/step-4-review.md` to run the review before presenting to the user.
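For concreteness, a minimal TypeScript sketch of this generation step, assuming the criterion-to-files mapping from Step 2 with `depends_on` already resolved to task IDs; the helper names and the inlined template are illustrative (the real template lives in `templates/task.md`):

```
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// Hypothetical task input: one entry per acceptance criterion, in dependency order.
interface TaskInput {
  criterion: string;
  files: string[];
  verification: string;
  dependsOn: string[]; // task IDs, e.g. ["T001"]
}

// Derive a short file-name slug from the criterion text.
function slugify(text: string): string {
  return text.toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/(^-|-$)/g, "").slice(0, 40);
}

function writeTaskFiles(specDir: string, tasks: TaskInput[]): void {
  const tasksDir = join(specDir, "tasks");
  mkdirSync(tasksDir, { recursive: true });

  tasks.forEach((task, i) => {
    const id = `T${String(i + 1).padStart(3, "0")}`; // T001, T002, ... in dependency order
    const body = [
      "---",
      `id: ${id}`,
      `title: ${task.criterion}`,
      "status: pending",
      `depends_on: [${task.dependsOn.join(", ")}]`,
      "---",
      "",
      "## Criterion",
      "",
      task.criterion,
      "",
      "## Files",
      "",
      ...task.files.map((f) => `- ${f}`),
      "",
      "## Verification",
      "",
      task.verification,
      "",
      "---",
      "",
      "## Implementation Notes",
      "",
      "## Review Notes",
      "",
    ].join("\n");
    writeFileSync(join(tasksDir, `${id}-${slugify(task.criterion)}.md`), body);
  });
}
```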
@@ -0,0 +1,44 @@
+ # Step 4: Review Task Breakdown
+
+ Before presenting to the user, run a review to catch issues.
+
+ ## Run Task Review
+
+ Invoke the `task-review` skill, specifying the spec name:
+
+ ```
+ Skill(skill: "task-review", args: "<spec-name>")
+ ```
+
+ The review will check:
+ - Coverage (all criteria have tasks)
+ - Dependency order (tasks properly sequenced)
+ - File plausibility (paths make sense)
+ - Verification executability (concrete commands)
+ - Task scope (appropriately sized)
+ - Consistency (format, frontmatter)
+
+ ## Handle Findings
+
+ The review returns findings categorized as Critical, Warning, or Note.
+
+ **Critical issues**: Fix before proceeding. Update the affected task files.
+
+ **Warnings**: Present to user with your recommendation (fix or skip).
+
+ **Notes**: Mention briefly, no action required.
+
+ ## Present to User
+
+ After addressing critical issues, present:
+
+ 1. Number of tasks generated
+ 2. Task titles in dependency order
+ 3. Any warnings from the review (with recommendations)
+ 4. Offer to show task files or proceed to implementation
+
+ If the user wants changes, update the task files and re-run the review.
+
+ ## Complete
+
+ Once the user approves the task breakdown, the skill is complete. The tasks are ready for implementation.
@@ -0,0 +1,30 @@
+ ---
+ id: T00X
+ title: <Short descriptive title — the acceptance criterion>
+ status: pending
+ depends_on: []
+ ---
+
+ ## Criterion
+
+ <The acceptance criterion from the spec, verbatim or lightly edited for clarity>
+
+ ## Files
+
+ - <path/to/file.ts>
+ - <path/to/another-file.ts>
+ - <path/to/test-file.test.ts>
+
+ ## Verification
+
+ <The verification method from the spec — test command, agent-browser script, or manual steps>
+
+ ---
+
+ ## Implementation Notes
+
+ <!-- Coder agent writes here after each implementation attempt -->
+
+ ## Review Notes
+
+ <!-- QA agent writes here after each review pass -->
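A hypothetical instance of this template, with all names, paths, and the test command invented for illustration:

```
---
id: T001
title: User can register with email
status: pending
depends_on: []
---

## Criterion

User can register with an email address and is redirected to the dashboard on success.

## Files

- src/models/user.ts
- src/routes/register.ts
- src/routes/register.test.ts

## Verification

npm test -- src/routes/register.test.ts

---

## Implementation Notes

## Review Notes
```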
@@ -0,0 +1,17 @@
+ ---
+ name: task-review
+ description: Reviews task breakdown for completeness, correct ordering, and implementation readiness. Use after spec-to-tasks generates task files.
+ argument-hint: <spec-name>
+ ---
+
+ # Task Review
+
+ Review the task breakdown to catch issues before implementation begins.
+
+ ## What To Do Now
+
+ If an argument was provided, use it as the spec name. Otherwise, find the most recent spec with a `tasks/` directory.
+
+ Read the spec file and all task files in the `tasks/` directory.
+
+ Then read `references/checklist.md` and evaluate each item.
@@ -0,0 +1,101 @@
+ # Task Review Checklist
+
+ Evaluate each area. For each issue found, note the severity:
+ - **Critical**: Must fix before implementation
+ - **Warning**: Should fix, but could proceed
+ - **Note**: Minor suggestion
+
+ ## 1. Coverage
+
+ Compare acceptance criteria in the spec to tasks generated.
+
+ **Check:**
+ - [ ] Every acceptance criterion has exactly one corresponding task
+ - [ ] No criteria were skipped or forgotten
+ - [ ] No phantom tasks that don't map to a criterion
+
+ **How to verify:**
+ List each criterion from the spec's Acceptance Criteria section. For each, find the matching task file. Flag any orphans in either direction.
+
+ ## 2. Dependency Order
+
+ Tasks should be sequenced so each can be completed without waiting on later tasks.
+
+ **Check:**
+ - [ ] Task file names sort in valid execution order (T001, T002, etc.)
+ - [ ] Each task's `depends_on` references only earlier tasks
+ - [ ] No circular dependencies
+ - [ ] Foundation work comes before features that use it
+
+ **Common issues:**
+ - API route task before the service it calls
+ - UI component before the API it fetches from
+ - Test file before the code it tests (tests should be in same task as code)
+
+ ## 3. File Plausibility
+
+ Files listed in each task should make sense for the project.
+
+ **Check:**
+ - [ ] File paths follow project conventions (use Explorer if unsure)
+ - [ ] Files to modify actually exist in the codebase
+ - [ ] Files to create are in appropriate directories
+ - [ ] No duplicate files across tasks (each file appears in exactly one task)
+
+ **How to verify:**
+ For files to modify, confirm they exist. For files to create, confirm the parent directory exists and the naming follows conventions.
+
+ ## 4. Verification Executability
+
+ Each task's verification method must be concrete and runnable.
+
+ **Check:**
+ - [ ] Verification is a specific command or script, not vague prose
+ - [ ] Test file paths exist or will be created by the task
+ - [ ] agent-browser commands reference real routes/elements
+ - [ ] No "manually verify" without clear steps
+
+ **Red flags:**
+ - "Verify it works correctly"
+ - "Check that the feature functions"
+ - Test commands for files not listed in the task
+
+ ## 5. Task Scope
+
+ Each task should be appropriately sized for the coder→QA loop.
+
+ **Check:**
+ - [ ] No task touches more than ~10 files (consider splitting)
+ - [ ] No trivially small tasks that could merge with related work
+ - [ ] Each task produces a verifiable outcome, not just "creates a file"
+
+ ## 6. Consistency
+
+ Cross-check task files against each other and the spec.
+
+ **Check:**
+ - [ ] Task titles match or closely reflect the acceptance criterion
+ - [ ] Status is `pending` for all new tasks
+ - [ ] Frontmatter format is consistent across all task files
+ - [ ] Implementation Notes and Review Notes sections exist (empty is fine)
+
+ ---
+
+ ## Output Format
+
+ Return findings as a structured list:
+
+ ```
+ ## Critical Issues
+ - [T002] Depends on T005 which comes later - wrong order
+ - [T003] Missing verification method
+
+ ## Warnings
+ - [T001] touches 12 files - consider splitting
+ - Criterion "User can delete account" has no corresponding task
+
+ ## Notes
+ - [T004] Could merge with T005 since they share the same files
+ ```
+
+ If no issues found, state: "Task breakdown looks good. All criteria covered, dependencies valid, verification methods concrete."
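The first two checks are mechanical enough to sketch in TypeScript. This is illustrative only (the skill performs these checks by reading the files directly), and it simplifies check 1 by requiring exact title matches rather than "match or closely reflect":

```
// Task shape mirrors the frontmatter in templates/task.md.
interface Task {
  id: string;          // e.g. "T002"
  title: string;       // should echo an acceptance criterion
  dependsOn: string[]; // e.g. ["T001"]
}

// Returns findings for the Coverage and Dependency Order checks.
function reviewTasks(criteria: string[], tasks: Task[]): string[] {
  const findings: string[] = [];

  // 1. Coverage: every criterion has a task, and no phantom tasks exist.
  for (const c of criteria) {
    if (!tasks.some((t) => t.title === c)) {
      findings.push(`Criterion "${c}" has no corresponding task`);
    }
  }
  for (const t of tasks) {
    if (!criteria.includes(t.title)) {
      findings.push(`[${t.id}] does not map to any acceptance criterion`);
    }
  }

  // 2. Dependency order: depends_on may only reference earlier tasks,
  // which also rules out circular dependencies.
  const order = new Map(tasks.map((t, i) => [t.id, i] as const));
  for (const [index, t] of tasks.entries()) {
    for (const dep of t.dependsOn) {
      const depIndex = order.get(dep);
      if (depIndex === undefined) {
        findings.push(`[${t.id}] depends on unknown task ${dep}`);
      } else if (depIndex >= index) {
        findings.push(`[${t.id}] depends on ${dep} which comes later - wrong order`);
      }
    }
  }

  return findings;
}
```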
@@ -1,25 +0,0 @@
- spec: <name>
- spec_path: docs/specs/<name>/spec.md
- generated: <ISO timestamp>
-
- tasks:
-   - id: T001
-     title: <Short descriptive title>
-     description: |
-       <What to implement>
-       <Reference to spec section if applicable>
-     files:
-       - <path/to/file.ts>
-     depends_on: []
-     acceptance: |
-       <How to verify this task is complete>
-
-   - id: T002
-     title: <Next task>
-     description: |
-       <What to implement>
-     files:
-       - <path/to/file.ts>
-     depends_on: [T001]
-     acceptance: |
-       <Verification criteria>