cc-dev-template 0.1.56 → 0.1.61

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/bin/install.js CHANGED
@@ -20,7 +20,7 @@ console.log('='.repeat(50));
  console.log(`Installing to ${CLAUDE_DIR}...`);

  // Create directories
- const dirs = ['commands', 'scripts', 'skills', 'hooks', 'mcp-servers'];
+ const dirs = ['commands', 'scripts', 'skills', 'hooks', 'mcp-servers', 'agents'];
  dirs.forEach(dir => {
    fs.mkdirSync(path.join(CLAUDE_DIR, dir), { recursive: true });
  });
@@ -62,6 +62,11 @@ console.log('\nCommands:');
  const cmdCount = copyFiles('commands', 'commands', '.md');
  console.log(cmdCount ? `✓ ${cmdCount} commands installed` : ' No commands to install');

+ // Copy agents
+ console.log('\nAgents:');
+ const agentCount = copyFiles('agents', 'agents', '.md');
+ console.log(agentCount ? `✓ ${agentCount} agents installed` : ' No agents to install');
+
  // Copy scripts
  console.log('\nScripts:');
  const scriptCount = copyFiles('scripts', 'scripts', '.js');
@@ -348,6 +353,7 @@ console.log('='.repeat(50));
  console.log(`
  Installed to:
    Commands: ${CLAUDE_DIR}/commands/
+   Agents: ${CLAUDE_DIR}/agents/
    Scripts: ${CLAUDE_DIR}/scripts/
    Skills: ${CLAUDE_DIR}/skills/
    Hooks: ${CLAUDE_DIR}/hooks/
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "cc-dev-template",
-   "version": "0.1.56",
+   "version": "0.1.61",
    "description": "Structured AI-assisted development framework for Claude Code",
    "bin": {
      "cc-dev-template": "./bin/install.js"
@@ -0,0 +1,16 @@
+ ---
+ name: spec-implementer
+ description: Implements a single criterion from a spec task file. Only use when explicitly assigned a task file path from the execute-spec workflow.
+ tools: Read, Grep, Glob, Edit, Write, Bash, LSP
+ ---
+
+ You implement one task from a spec breakdown.
+
+ When given a task file path:
+
+ 1. Read the task file at that path
+ 2. Read the spec file in the parent directory (`../spec.md`)
+ 3. Understand the **Criterion** section: this defines success
+ 4. Implement the criterion, touching only files listed in the **Files** section
+ 5. Write a brief summary of what you did to the **Implementation Notes** section of the task file
+ 6. Mark the task complete
@@ -0,0 +1,42 @@
+ ---
+ name: spec-validator
+ description: Validates a completed task through code review and E2E testing. Only use when explicitly assigned a task file path from the execute-spec workflow.
+ tools: Read, Grep, Glob, Bash
+ ---
+
+ You are a senior QA engineer validating completed work.
+
+ When given a task file path:
+
+ 1. Read the task file and parent spec (`../spec.md`)
+ 2. Read the **Implementation Notes** to understand what was built
+
+ ## Step 1: Code Review + Automated Tests
+
+ - Run automated tests if they exist (look for test files, run with the appropriate test runner)
+ - Check for code smells:
+   - Files over 300 lines: can this logically be split into multiple files, or does it need to be one file? Note your assessment.
+   - Missing error handling, unclear naming, other quality issues
+ - Note any concerns
+
+ ## Step 2: E2E Testing with agent-browser
+
+ Run `agent-browser --help` if you need to understand its capabilities.
+
+ - Create your own session to avoid conflicts: `--session validator-{task-id}`
+ - The dev server runs via `make dev` (check its output for the port if it isn't already running)
+ - Pretend you are a user testing the criterion:
+   - Use `agent-browser snapshot -i` to see interactive elements
+   - Click buttons, fill forms, navigate flows
+   - Does the UI look right? Are elements interactable?
+   - Does the feature work as a user would expect?
+ - Close your session when finished: `agent-browser close --session validator-{task-id}`
+
+ ## Output
+
+ Write findings to the **Review Notes** section of the task file:
+
+ - Issues found (with severity: critical, warning, suggestion)
+ - Files that may need refactoring (with reasoning)
+ - E2E test results (what worked, what didn't)
+ - Overall pass/fail assessment
@@ -0,0 +1,35 @@
+ ---
+ allowed-tools: Read, Grep, Glob, Task, TaskCreate, TaskList, TaskUpdate, TaskGet, AskUserQuestion, Bash
+ ---
+
+ # Execute Spec
+
+ Orchestrates the implementation and validation of a spec's task breakdown.
+
+ **Important**: This skill is an orchestrator. It reads task files and dispatches agents to do the work. It does NOT edit files directly - that's the job of the spec-implementer agents it spawns.
+
+ ## When to Use
+
+ Invoke when you have a complete spec with a `tasks/` folder containing task files (T001-*.md, T002-*.md, etc.) ready for implementation.
+
+ ## Arguments
+
+ This skill takes a spec path as an argument:
+ - `docs/specs/my-feature` - path to the spec folder containing `spec.md` and `tasks/`
+
+ ## Workflow
+
+ Read `references/workflow.md` for the full orchestration flow.
+
+ ## Phases
+
+ 1. **Hydrate** - Load task files into the task system with dependencies
+ 2. **Build** - Dispatch spec-implementer agents for each task (parallel, respecting dependencies)
+ 3. **Validate** - Dispatch spec-validator agents for each completed task
+ 4. **Triage** - Collect feedback, dispatch fixes or escalate to the user
+
+ ## Requirements
+
+ - Spec folder must contain `spec.md` and a `tasks/` directory
+ - Task files must have YAML frontmatter with `id`, `title`, `status`, `depends_on`
+ - The `spec-implementer` and `spec-validator` agents must be installed
@@ -0,0 +1,65 @@
+ # Phase 1: Hydrate Tasks
+
+ Load task files from the spec into the Claude Code task system.
+
+ ## Input
+
+ Spec path argument, e.g., `docs/specs/kiosk-storefront`
+
+ ## Process
+
+ ```
+ 1. Validate spec structure:
+    - {spec-path}/spec.md exists
+    - {spec-path}/tasks/ directory exists
+    - tasks/ contains T*.md files
+
+ 2. For each task file (sorted by ID):
+    - Read file content
+    - Parse YAML frontmatter:
+      - id: T001, T002, etc.
+      - title: Human-readable title
+      - status: pending (should be pending at start)
+      - depends_on: [T001, T002] array of task IDs
+
+ 3. Create tasks in Claude Code task system:
+    TaskCreate(
+      subject: "{id}: {title}",
+      description: "Implement task file: {full-path-to-task-file}",
+      activeForm: "Implementing {title}"
+    )
+
+ 4. After all tasks created, set up dependencies:
+    For each task with depends_on:
+      TaskUpdate(
+        taskId: {claude-task-id},
+        addBlockedBy: [mapped-claude-task-ids]
+      )
+ ```
+
+ ## Mapping Task IDs
+
+ The task files use IDs like T001, T002. The Claude Code task system assigns its own IDs.
+
+ Maintain a mapping:
+ ```
+ {
+   "T001": "claude-task-id-1",
+   "T002": "claude-task-id-2",
+   ...
+ }
+ ```
+
+ Use this mapping when setting up blockedBy relationships.
+
+ ## Output
+
+ - All tasks loaded into Claude Code task system
+ - Dependencies correctly configured
+ - Ready for Phase 2: Build
+
+ ## Error Handling
+
+ - If spec.md missing: Stop and report error
+ - If tasks/ empty: Stop and report "No tasks to execute"
+ - If task file has invalid frontmatter: Report which file and what's wrong
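The frontmatter parsing in step 2 can be sketched without a YAML library, assuming the simple `key: value` pairs and bracketed `depends_on` list the task files use (`parseFrontmatter` is a hypothetical helper name, not part of the package):

```javascript
// Hypothetical sketch: parse a task file's YAML frontmatter.
// Handles only flat `key: value` pairs plus a bracketed list like
// `depends_on: [T001, T002]`; a real implementation might use a YAML library.
function parseFrontmatter(content) {
  const match = content.match(/^---\n([\s\S]*?)\n---/);
  if (!match) throw new Error('missing frontmatter');
  const meta = {};
  for (const line of match[1].split('\n')) {
    const idx = line.indexOf(':');
    if (idx === -1) continue; // skip malformed lines
    const key = line.slice(0, idx).trim();
    let value = line.slice(idx + 1).trim();
    if (value.startsWith('[')) {
      // "[T001, T002]" -> ["T001", "T002"]; "[]" -> []
      value = value.slice(1, -1).split(',').map(s => s.trim()).filter(Boolean);
    }
    meta[key] = value;
  }
  return meta;
}
```

A file with invalid frontmatter throws here, which maps onto the error-handling rule above: report which file and what's wrong.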
@@ -0,0 +1,64 @@
+ # Phase 2: Build
+
+ Dispatch spec-implementer agents for each task, respecting dependencies.
+
+ ## Process
+
+ ```
+ Loop until all tasks complete:
+
+ 1. TaskList() to get current state
+
+ 2. Find ready tasks:
+    - status: pending
+    - blockedBy: empty (no unfinished dependencies)
+
+ 3. For each ready task:
+    - Extract task file path from description
+    - Mark as in_progress: TaskUpdate(taskId, status: "in_progress")
+    - Dispatch implementer:
+      Task(
+        subagent_type: "spec-implementer",
+        prompt: "{task-file-path}",
+        run_in_background: true,
+        description: "Implement {task-id}"
+      )
+
+ 4. Wait for completions:
+    - Agents mark tasks complete when done
+    - Poll TaskList periodically to check status
+    - As tasks complete, newly unblocked tasks become ready
+
+ 5. Repeat until no pending tasks remain
+ ```
+
+ ## Parallelism Strategy
+
+ - Dispatch ALL ready tasks simultaneously
+ - Don't wait for one to finish before starting another
+ - The dependency graph controls what can run in parallel
+ - Example: If T002, T003, T004 all depend only on T001, they all start when T001 completes
+
+ ## Monitoring Progress
+
+ Report progress as tasks complete:
+ ```
+ Build Progress:
+ [x] T001: Public API endpoints (complete)
+ [~] T002: Kiosk routing (in progress)
+ [~] T003: Entity chain validation (in progress)
+ [ ] T007: Cart persistence (blocked by T005, T006)
+ ...
+ ```
+
+ ## Error Handling
+
+ - If an implementer fails: Note the error, continue with other tasks
+ - If a task stays in_progress too long: May need manual intervention
+ - Failed tasks block their dependents
+
+ ## Output
+
+ - All tasks implemented (or failed with notes)
+ - Implementation Notes written to each task file
+ - Ready for Phase 3: Validate
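The ready-task selection in steps 1 and 2 reduces to filtering pending tasks whose blockers have all completed. A hypothetical sketch of that filter (the task shape `{ id, status, blockedBy }` is illustrative, not the actual task-system API):

```javascript
// Hypothetical task shape: { id, status, blockedBy: [ids] }.
// A task is ready to dispatch when it is pending and every
// task it is blocked by has already completed.
function findReadyTasks(tasks) {
  const done = new Set(
    tasks.filter(t => t.status === 'complete').map(t => t.id)
  );
  return tasks.filter(
    t => t.status === 'pending' && t.blockedBy.every(id => done.has(id))
  );
}
```

Running this after each completion naturally yields the parallelism described above: once T001 completes, T002, T003, and T004 all become ready in the same pass.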
@@ -0,0 +1,76 @@
+ # Phase 3: Validate
+
+ Dispatch spec-validator agents for each completed task.
+
+ ## Prerequisites
+
+ - All build tasks complete
+ - Code is stable (no more modifications happening)
+
+ ## Process
+
+ ```
+ 1. Get list of all tasks from TaskList()
+
+ 2. For each completed task:
+    - Extract task file path from description
+    - Dispatch validator:
+      Task(
+        subagent_type: "spec-validator",
+        prompt: "{task-file-path}",
+        run_in_background: true,
+        description: "Validate {task-id}"
+      )
+
+ 3. All validators run in parallel:
+    - Each creates its own browser session
+    - No dependencies between validators
+    - They don't modify code, just read and test
+
+ 4. Wait for all validators to complete
+
+ 5. Collect results:
+    - Read Review Notes from each task file
+    - Aggregate issues by severity
+ ```
+
+ ## Validator Behavior
+
+ Each validator:
+ 1. Reviews code changes for the task
+ 2. Runs automated tests if available
+ 3. Performs E2E testing with agent-browser
+ 4. Writes findings to the Review Notes section
+
+ ## Browser Session Isolation
+
+ Validators use isolated sessions:
+ ```
+ --session validator-T001
+ --session validator-T002
+ ...
+ ```
+
+ This prevents conflicts when multiple validators test simultaneously.
+
+ ## Collecting Results
+
+ After all validators complete, read each task file's Review Notes section.
+
+ Structure findings:
+ ```
+ Validation Results:
+ T001: PASS
+ T002: PASS
+ T003: FAIL
+   - [critical] Button not clickable at /kiosk/:id/product
+   - [warning] ProductCard.tsx is 342 lines, consider splitting
+ T004: PASS
+ ...
+ ```
+
+ ## Output
+
+ - Validation complete for all tasks
+ - Issues collected and categorized
+ - Ready for Phase 4: Triage
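Aggregating findings by severity can be sketched as a scan over Review Notes bullets, assuming the `- [severity] message` convention shown in the example above (`collectIssues` is a hypothetical helper, not part of the package):

```javascript
// Hypothetical: extract "[severity] message" bullets from a task's
// Review Notes text, tagging each issue with the task it came from.
function collectIssues(taskId, reviewNotes) {
  const issues = [];
  for (const line of reviewNotes.split('\n')) {
    const m = line.match(/^\s*-\s*\[(critical|warning|suggestion)\]\s*(.+)$/);
    if (m) issues.push({ taskId, severity: m[1], message: m[2] });
  }
  return issues;
}
```

Concatenating the results across all task files gives the categorized issue list that Phase 4 triages.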
@@ -0,0 +1,103 @@
+ # Phase 4: Triage
+
+ Process validation findings, fix issues, and iterate until clean.
+
+ ## Process
+
+ ```
+ 1. Collect all issues from validator Review Notes
+
+ 2. Categorize issues:
+
+    EASY FIXES (dispatch automatically):
+    - Missing import
+    - Typo in text
+    - Small styling fix
+    - Test assertion needs update
+    - File exists but missing export
+
+    COMPLEX ISSUES (ask user):
+    - Architectural concerns
+    - "Should we refactor X into Y?"
+    - Unclear requirements
+    - Trade-off decisions
+    - Performance concerns with multiple valid approaches
+
+ 3. For easy fixes:
+    - Dispatch fix agent with specific instructions
+    - Agent makes the fix
+    - Re-run validator for affected task
+
+ 4. For complex issues:
+    - Use AskUserQuestion to discuss with user
+    - Present the issue and options
+    - Implement based on user decision
+    - Or user may defer ("not now, add to backlog")
+
+ 5. Repeat until:
+    - No issues remain, OR
+    - All remaining issues are deferred by user
+ ```
+
+ ## Issue Severity Guide
+
+ **Critical** - Must fix before considering complete
+ - Feature doesn't work
+ - UI is broken
+ - Test failures
+
+ **Warning** - Should fix, but not blocking
+ - Code smells (large files, unclear naming)
+ - Minor UI issues
+ - Missing edge case handling
+
+ **Suggestion** - Nice to have
+ - Refactoring opportunities
+ - Performance optimizations
+ - Style improvements
+
+ ## Dispatching Fix Agents
+
+ For easy fixes, dispatch a general-purpose agent:
+ ```
+ Task(
+   subagent_type: "general-purpose",
+   prompt: "Fix this issue in {file}:
+     Problem: {issue description}
+     Expected: {what it should be}
+
+     Make the minimal fix needed.",
+   run_in_background: true
+ )
+ ```
+
+ ## Asking User About Complex Issues
+
+ ```
+ AskUserQuestion(
+   questions: [{
+     header: "Refactor?",
+     question: "ProductCard.tsx is 342 lines. Should we split it into smaller components?",
+     options: [
+       { label: "Yes, split it", description: "Create ProductImage, ProductInfo, ProductActions components" },
+       { label: "No, keep as-is", description: "It's complex but cohesive, splitting would add indirection" },
+       { label: "Defer", description: "Add to backlog, not blocking for this milestone" }
+     ]
+   }]
+ )
+ ```
+
+ ## Re-validation Loop
+
+ After fixes:
+ 1. Re-run validators ONLY on affected tasks
+ 2. Check if new issues introduced
+ 3. Repeat triage for new issues
+ 4. Continue until stable
+
+ ## Output
+
+ - All critical issues resolved
+ - Warnings addressed or deferred
+ - Final validation passes
+ - Spec implementation complete
@@ -0,0 +1,92 @@
+ # Execute Spec Workflow
+
+ ## Overview
+
+ ```
+ PHASE 1: HYDRATE
+   Read task files → TaskCreate with dependencies
+
+ PHASE 2: BUILD
+   Loop: find unblocked tasks → dispatch spec-implementer → wait for completion
+   Continue until all tasks built
+
+ PHASE 3: VALIDATE
+   Dispatch spec-validator for each task (all in parallel)
+   Collect findings
+
+ PHASE 4: TRIAGE
+   Group issues: easy fixes vs complex
+   Easy → dispatch fix agents
+   Complex → AskUserQuestion, discuss with user
+   Re-validate affected tasks
+   Loop until clean or user defers
+ ```
+
+ ## Phase 1: Hydrate
+
+ Read `phase-1-hydrate.md` for details.
+
+ 1. Parse the spec path argument to find the tasks directory
+ 2. Read all `T*.md` files from `{spec-path}/tasks/`
+ 3. For each task file:
+    - Parse YAML frontmatter (id, title, status, depends_on)
+    - Create a task in the Claude Code task system using TaskCreate
+    - Set dependencies using the `depends_on` array (map to task IDs)
+ 4. Output: All tasks loaded into the task system with proper dependency graph
+
+ ## Phase 2: Build
+
+ Read `phase-2-build.md` for details.
+
+ 1. Call TaskList to find pending tasks with no blockers
+ 2. For each unblocked task:
+    - Dispatch a `spec-implementer` agent in background:
+      ```
+      Task(
+        subagent_type: "spec-implementer",
+        prompt: "{full-path-to-task-file}",
+        run_in_background: true
+      )
+      ```
+ 3. Monitor for completions (agents will mark tasks complete)
+ 4. As tasks complete, check for newly unblocked tasks
+ 5. Repeat until all build tasks are complete
+
+ ## Phase 3: Validate
+
+ Read `phase-3-validate.md` for details.
+
+ 1. Once all build tasks complete, dispatch validators
+ 2. For each task:
+    - Dispatch a `spec-validator` agent in background:
+      ```
+      Task(
+        subagent_type: "spec-validator",
+        prompt: "{full-path-to-task-file}",
+        run_in_background: true
+      )
+      ```
+ 3. Validators run in parallel (they create isolated browser sessions)
+ 4. Wait for all validators to complete
+ 5. Collect findings from Review Notes sections
+
+ ## Phase 4: Triage
+
+ Read `phase-4-triage.md` for details.
+
+ 1. Read all task files and collect Review Notes
+ 2. Parse issues by severity (critical, warning, suggestion)
+ 3. Group issues:
+    - **Easy fixes**: Clear problem, obvious solution → dispatch fix agent
+    - **Complex issues**: Ambiguous, architectural, needs discussion → AskUserQuestion
+ 4. After fixes, re-run validation on affected tasks
+ 5. Loop until:
+    - No issues remain, OR
+    - User explicitly defers remaining issues
+
+ ## Key Principles
+
+ - **Parallelism**: Use `run_in_background: true` to maximize throughput
+ - **Dependency respect**: Only dispatch tasks whose dependencies are complete
+ - **Isolation**: Each validator gets its own browser session
+ - **Full paths**: Always use full relative paths to task files so agents know exactly which spec they're implementing
@@ -27,6 +27,19 @@ Example: To understand auth integration:
 
  No assumptions. If you don't know how something works, send an Explorer to find out.
 
+ ### File Landscape
+
+ Once the feature is understood, identify concrete file paths. Ask an Explorer:
+
+ > "To implement [this feature], what files would need to be created or modified? Give me concrete file paths."
+
+ Capture:
+ - **Files to create**: New files with full paths (e.g., `src/models/notification.ts`)
+ - **Files to modify**: Existing files that need changes (e.g., `src/routes/index.ts`)
+ - **Directory conventions**: Where each type of code lives in this project
+
+ This becomes the File Landscape section of the spec, which spec-to-tasks uses directly.
+
  ### Data Model
  - Entities and relationships
  - Constraints (required fields, validations, limits)
@@ -67,6 +80,14 @@ Write to `docs/specs/<name>/spec.md` with this structure:
  - External: [APIs, services, libraries]
  - Data flows: [in/out]
 
+ ## File Landscape
+
+ ### Files to Create
+ - [path/to/new-file.ts]: [purpose]
+
+ ### Files to Modify
+ - [path/to/existing-file.ts]: [what changes]
+
  ## Data Model
  [Entities, relationships, constraints]
 
@@ -79,6 +100,7 @@ Write to `docs/specs/<name>/spec.md` with this structure:
 
  ## Acceptance Criteria
  - [ ] [Testable requirement]
+   **Verify:** [verification method]
 
  ## Blockers
  - [ ] [Blocker]: [what's needed]
@@ -7,6 +7,17 @@ context: fork
 
  # Spec to Tasks
 
+ ## Workflow Overview
+
+ This skill has 4 steps. **You must complete ALL steps before presenting to the user.**
+
+ 1. **Identify Spec** - Find and verify the spec file
+ 2. **Verify File Landscape** - Map files to acceptance criteria
+ 3. **Generate Tasks** - Create task files in the `tasks/` directory
+ 4. **Review Tasks** - Invoke the `task-review` skill to validate, then fix issues
+
+ Do NOT skip step 4. The review catches dependency errors and coverage gaps.
+
  ## What To Do Now
 
  Read `references/step-1-identify-spec.md` and begin.
@@ -19,11 +19,20 @@ If no specs found in git history, check `docs/specs/` for any spec files and ask
  ## Verify the Spec
 
  Read the spec file. Confirm it contains enough detail to generate implementation tasks:
- - Clear requirements or user stories
- - Technical approach or architecture decisions
- - Defined scope
 
- If the spec is incomplete or unclear, inform the user and ask if they want to complete the spec first.
+ **Required:**
+ - Acceptance Criteria with verification methods (each criterion becomes a task)
+ - Clear behavior descriptions
+
+ **Expected (from spec-interview):**
+ - File Landscape section listing files to create/modify
+ - Integration points and data model
+
+ **If missing acceptance criteria or verification methods:**
+ Inform the user: "This spec doesn't have acceptance criteria with verification methods. Each task needs a clear pass/fail test. Would you like to add them now, or run spec-interview to complete the spec?"
+
+ **If missing the file landscape:**
+ Proceed to step 2, where we'll discover file paths via exploration.
 
  ## Next Step
 
@@ -1,25 +1,43 @@
- # Step 2: Explore the Codebase
+ # Step 2: Verify File Landscape
 
- Before generating tasks, understand the codebase structure to determine accurate file paths.
+ The spec should already contain a File Landscape section from the interview process. This step verifies and supplements it.
 
- ## Launch Exploration Subagent
+ ## Check the Spec
 
- Use a subagent to explore the codebase. The subagent should identify:
+ Read the spec's File Landscape section. It should list:
+ - **Files to create**: New files with paths and purposes
+ - **Files to modify**: Existing files that need changes
 
- - **Existing patterns**: Where do models, services, routes, and components live?
- - **Naming conventions**: How are files and directories named?
- - **Files to modify**: Which existing files need changes for this feature?
- - **Files to create**: What new files are needed and where should they go?
+ If the File Landscape section is missing or incomplete, use an Explorer to fill the gaps:
 
- Provide the subagent with the spec content so it understands what is being implemented.
+ > "To implement [feature from spec], what files would need to be created or modified? Give me concrete file paths."
 
- ## Capture Findings
+ ## Map Files to Acceptance Criteria
 
- The subagent should return:
- - A list of directories and their purposes
- - Specific file paths for each aspect of the feature
- - Any existing code that the new feature should integrate with
+ For each acceptance criterion in the spec, identify which files are involved. This mapping drives task generation.
+
+ Example:
+ ```
+ "User can receive notifications"
+ → src/models/notification.ts
+ → src/services/notificationService.ts
+ → src/services/notificationService.test.ts
+
+ "User can view notification list"
+ → src/routes/notifications.ts
+ → src/components/NotificationList.tsx
+ ```
+
+ If a criterion's files aren't clear from the spec, ask an Explorer:
+
+ > "What files would be involved in making this criterion pass: [criterion]?"
+
+ ## Output
+
+ You should now have:
+ 1. Complete list of files to create and modify
+ 2. Each acceptance criterion mapped to its files
 
  ## Next Step
 
- Once exploration is complete and file paths are known, read `references/step-3-generate.md`.
+ Once files are mapped to criteria, read `references/step-3-generate.md`.
@@ -1,31 +1,72 @@
- # Step 3: Generate Task Manifest
+ # Step 3: Generate Task Files
 
- Create the task manifest based on the spec and codebase exploration.
+ Create individual task files based on the spec and codebase exploration.
 
  ## Task Principles
 
- **Atomic**: Each task is a single, focused change: one model, one endpoint, one component, or one test file. A task should take an AI agent one focused session to complete.
+ **Criterion-based**: Each task corresponds to one acceptance criterion from the spec. A task includes all files needed to make that criterion pass. Do NOT split by file or architectural layer.
 
- **Ordered**: Sequence tasks so each can be completed without blocking. Tasks with no dependencies can share the same `depends_on` value to signal parallel execution.
+ **Verifiable**: Every task has a verification method from the spec. A coder implements, a QA agent verifies, and the loop continues until it passes.
 
- **Concrete file paths**: Use the file paths discovered in Step 2. Every task specifies which files it touches.
+ **Ordered**: Name files so they sort in dependency order (T001, T002, etc.). Tasks with no dependencies on each other can be worked in parallel.
 
- **Declarative**: Tasks describe WHAT to implement, not HOW. Implementation details live in the spec.
+ **Concrete file paths**: Use the file paths discovered in Step 2. Every task lists all files it touches.
 
- ## Generate the Manifest
+ ## Deriving Tasks from Acceptance Criteria
 
- Write to `docs/specs/<name>/tasks.yaml` using the template in `templates/tasks.yaml`.
+ Each acceptance criterion in the spec becomes one task.
 
- Derive tasks from the spec:
- - Break each requirement or feature section into atomic units
- - Sequence by dependency (data layer, then business logic, then API, then integration)
- - Reference specific spec sections in task descriptions where helpful
+ **For each criterion, determine:**
+ 1. What files must exist or change for this to pass?
+ 2. What's the verification method from the spec?
+ 3. What other criteria must pass first? (dependencies)
 
- ## Review with User
+ **Grouping rules:**
+ - If two criteria share foundational work (e.g., both need a model), the first task creates the foundation, later tasks build on it
+ - If a criterion is too large (touches 10+ files), flag it: the spec may need refinement
+ - Small tasks are fine; artificial splits are not
 
- Present:
- 1. Number of tasks and estimated scope
- 2. Task titles in dependency order
- 3. Offer to show full YAML or proceed to implementation
+ **Anti-patterns to avoid:**
+ - "Create the User model" - no verifiable outcome
+ - "Add service layer" - implementation detail, not behavior
+ - "Set up database schema" - a means to an end, not the end
 
- Save the manifest to `docs/specs/<name>/tasks.yaml`.
+ **Good task boundaries:**
+ - "User can register with email" - verifiable, coherent
+ - "Duplicate emails are rejected" - verifiable, coherent
+ - "Dashboard shows notification count" - verifiable, coherent
+
+ ## Validate Criteria Quality
+
+ Before generating tasks, verify each acceptance criterion has:
+ - A specific, testable condition
+ - A verification method (test command, agent-browser script, or query)
+
+ If criteria are vague or missing verification, stop and ask:
+ > "The criterion '[X]' doesn't have a clear verification method. Should I suggest one, or would you like to refine the spec first?"
+
+ ## Generate Task Files
+
+ Create a `tasks/` directory inside the spec folder:
+
+ ```
+ docs/specs/<name>/
+ ├── spec.md
+ └── tasks/
+     ├── T001-<slug>.md
+     ├── T002-<slug>.md
+     └── T003-<slug>.md
+ ```
+
+ Use the template in `templates/task.md` for each file. Name files in dependency order so alphabetical sorting reflects execution order.
+
+ ## REQUIRED: Run Review Before Presenting
+
+ **Do NOT present results to the user yet.** After generating task files, you MUST:
+
+ 1. Read `references/step-4-review.md`
+ 2. Invoke the `task-review` skill to validate the breakdown
+ 3. Fix any critical issues found
+ 4. Only then present results (including any warnings)
+
+ This review step catches dependency errors, coverage gaps, and verification issues. Skipping it leads to broken task breakdowns that fail during implementation.
@@ -0,0 +1,46 @@
+ # Step 4: Review Task Breakdown
+
+ **This step is REQUIRED.** Do not present results until review is complete.
+
+ ## Run Task Review
+
+ Invoke the task-review skill NOW:
+
+ ```
+ Skill(skill: "task-review", args: "<spec-name>")
+ ```
+
+ Wait for the review to complete before proceeding.
+
+ The review will check:
+ - Coverage (all criteria have tasks)
+ - Dependency order (tasks properly sequenced)
+ - File plausibility (paths make sense)
+ - Verification executability (concrete commands)
+ - Task scope (appropriately sized)
+ - Consistency (format, frontmatter)
+
+ ## Handle Findings
+
+ The review returns findings categorized as Critical, Warning, or Note.
+
+ **Critical issues**: Fix before proceeding. Update the affected task files.
+
+ **Warnings**: Present to user with your recommendation (fix or skip).
+
+ **Notes**: Mention briefly, no action required.
+
+ ## Present to User
+
+ After addressing critical issues, present:
+
+ 1. Number of tasks generated
+ 2. Task titles in dependency order
+ 3. Any warnings from the review (with recommendations)
+ 4. Offer to show task files or proceed to implementation
+
+ If the user wants changes, update the task files and re-run the review.
+
+ ## Complete
+
+ Once the user approves the task breakdown, the skill is complete. The tasks are ready for implementation.
@@ -0,0 +1,30 @@
+ ---
+ id: T00X
+ title: <Short descriptive title - the acceptance criterion>
+ status: pending
+ depends_on: []
+ ---
+
+ ## Criterion
+
+ <The acceptance criterion from the spec, verbatim or lightly edited for clarity>
+
+ ## Files
+
+ - <path/to/file.ts>
+ - <path/to/another-file.ts>
+ - <path/to/test-file.test.ts>
+
+ ## Verification
+
+ <The verification method from the spec - test command, agent-browser script, or manual steps>
+
+ ---
+
+ ## Implementation Notes
+
+ <!-- Coder agent writes here after each implementation attempt -->
+
+ ## Review Notes
+
+ <!-- QA agent writes here after each review pass -->
@@ -0,0 +1,18 @@
+ ---
+ name: task-review
+ description: Reviews task breakdown for completeness, correct ordering, and implementation readiness. Use after spec-to-tasks generates task files.
+ argument-hint: <spec-name>
+ context: fork
+ ---
+
+ # Task Review
+
+ Review the task breakdown to catch issues before implementation begins.
+
+ ## What To Do Now
+
+ If an argument was provided, use it as the spec name. Otherwise, find the most recent spec with a `tasks/` directory.
+
+ Read the spec file and all task files in the `tasks/` directory.
+
+ Then read `references/checklist.md` and evaluate each item.
@@ -0,0 +1,135 @@
+ # Task Review Checklist
+
+ Evaluate each area. For each issue found, note the severity:
+ - **Critical**: Must fix before implementation
+ - **Warning**: Should fix, but could proceed
+ - **Note**: Minor suggestion
+
+ ## 1. Coverage
+
+ Compare the acceptance criteria in the spec to the generated tasks.
+
+ **Check:**
+ - [ ] Every acceptance criterion has exactly one corresponding task
+ - [ ] No criteria were skipped or forgotten
+ - [ ] No phantom tasks that don't map to a criterion
+
+ **How to verify:**
+ List each criterion from the spec's Acceptance Criteria section. For each, find the matching task file. Flag any orphans in either direction.
+
+ ## 2. Dependency Order
+
+ Tasks should be sequenced so each can be completed without waiting on later tasks.
+
+ **Check:**
+ - [ ] Task file names sort in valid execution order (T001, T002, etc.)
+ - [ ] Each task's `depends_on` references only earlier tasks
+ - [ ] No circular dependencies
+ - [ ] Foundation work comes before features that use it
+
+ **Common issues:**
+ - API route task before the service it calls
+ - UI component before the API it fetches from
+ - Test file before the code it tests (tests belong in the same task as the code)
+
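+ For example, a correctly ordered task might carry frontmatter like the following sketch (field names assumed from the checks above; the exact schema is defined by the spec-to-tasks template):
+
+ ```yaml
+ # Hypothetical frontmatter for tasks/T003-delete-account.md
+ id: T003
+ title: User can delete their account
+ status: pending
+ depends_on: [T001, T002]  # only earlier task IDs; no forward or circular references
+ ```
+
+ Because IDs sort lexically, forward-only `depends_on` references also rule out cycles.
+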
+ ## 3. File Plausibility
+
+ Files listed in each task should make sense for the project.
+
+ **Check:**
+ - [ ] File paths follow project conventions (use Explorer if unsure)
+ - [ ] Files to modify actually exist in the codebase
+ - [ ] Files to create are in appropriate directories
+ - [ ] No duplicate files across tasks (each file appears in exactly one task)
+
+ **How to verify:**
+ For files to modify, confirm they exist. For files to create, confirm the parent directory exists and the naming follows conventions.
+
+ ## 4. Verification Executability
+
+ Each task's verification method must be concrete and runnable.
+
+ **Check:**
+ - [ ] Verification is a specific command or script, not vague prose
+ - [ ] Test file paths exist or will be created by the task
+ - [ ] agent-browser commands reference real routes/elements
+ - [ ] No "manually verify" without clear steps
+
+ **Red flags:**
+ - "Verify it works correctly"
+ - "Check that the feature functions"
+ - Test commands for files not listed in the task
+
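+ A concrete verification entry names an exact command a reviewer could paste into a terminal (the test runner and path here are illustrative, not prescribed by this framework):
+
+ ```
+ ## Verification
+ npx vitest run src/account/delete-account.test.ts
+ ```
+
+ Anything that cannot be run or followed step by step should be flagged.
+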
+ ## 5. Verification Completeness
+
+ Each task's verification must test ALL behaviors mentioned in its criterion.
+
+ **Check:**
+ - [ ] Read the criterion text carefully - identify every distinct behavior or edge case mentioned
+ - [ ] For each behavior, confirm there's a corresponding verification step
+ - [ ] Flag any behaviors in the criterion that have no verification
+
+ **How to verify:**
+ For each task, extract bullet points from the criterion. For each bullet, find the matching verification step. If a behavior is mentioned but not tested, that's a Critical issue.
+
+ **Common gaps:**
+ - Criterion mentions "X persists across refresh" but verification doesn't test refresh
+ - Criterion mentions "handles edge case Y" but verification only tests happy path
+ - Criterion mentions animation/timing but verification can't test it (should note "Manual test required")
+
+ ## 6. Dependency Completeness
+
+ Dependencies must be complete, not just valid.
+
+ **Check:**
+ - [ ] If task X modifies a file, check if another task creates it - that task must be in X's depends_on
+ - [ ] If task X uses a component/function/route, check if another task creates it - that task must be in X's depends_on
+ - [ ] If task X requires context from task Y (e.g., branding, layout, shared state), Y must be in X's depends_on
+
+ **How to verify:**
+ For each task, look at its Files section. For each "modify" entry, search other tasks for where that file is created. If found, verify the creating task is in depends_on. Also check the criterion for implicit dependencies (e.g., "shows branding" implies depending on the branding task).
+
+ **Common gaps:**
+ - Task uses a layout but doesn't depend on the task that configures the layout
+ - Task modifies shared state but doesn't depend on the task that creates the context
+ - Task assumes a feature exists but the feature is created by a later task
+
+ ## 7. Task Scope
+
+ Each task should be appropriately sized for the coder→QA loop.
+
+ **Check:**
+ - [ ] No task touches more than ~10 files (consider splitting)
+ - [ ] No trivially small tasks that could merge with related work
+ - [ ] Each task produces a verifiable outcome, not just "creates a file"
+
+ ## 8. Consistency
+
+ Cross-check task files against each other and the spec.
+
+ **Check:**
+ - [ ] Task titles match or closely reflect the acceptance criterion
+ - [ ] Status is `pending` for all new tasks
+ - [ ] Frontmatter format is consistent across all task files
+ - [ ] Implementation Notes and Review Notes sections exist (empty is fine)
+
+ ---
+
+ ## Output Format
+
+ Return findings as a structured list:
+
+ ```
+ ## Critical Issues
+ - [T002] Depends on T005 which comes later - wrong order
+ - [T003] Missing verification method
+
+ ## Warnings
+ - [T001] touches 12 files - consider splitting
+ - Criterion "User can delete account" has no corresponding task
+
+ ## Notes
+ - [T004] Could merge with T005 since they share the same files
+ ```
+
+ If no issues found, state: "Task breakdown looks good. All criteria covered, dependencies valid, verification methods concrete."
@@ -1,25 +0,0 @@
- spec: <name>
- spec_path: docs/specs/<name>/spec.md
- generated: <ISO timestamp>
-
- tasks:
-   - id: T001
-     title: <Short descriptive title>
-     description: |
-       <What to implement>
-       <Reference to spec section if applicable>
-     files:
-       - <path/to/file.ts>
-     depends_on: []
-     acceptance: |
-       <How to verify this task is complete>
-
-   - id: T002
-     title: <Next task>
-     description: |
-       <What to implement>
-     files:
-       - <path/to/file.ts>
-     depends_on: [T001]
-     acceptance: |
-       <Verification criteria>