cc-dev-template 0.1.79 → 0.1.81

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (72)
  1. package/package.json +1 -1
  2. package/src/commands/done.md +1 -1
  3. package/src/skills/agent-browser/SKILL.md +7 -133
  4. package/src/skills/agent-browser/references/common-patterns.md +64 -0
  5. package/src/skills/agent-browser/references/ios-simulator.md +25 -0
  6. package/src/skills/agent-browser/references/reflect.md +9 -0
  7. package/src/skills/agent-browser/references/semantic-locators.md +11 -0
  8. package/src/skills/claude-md/SKILL.md +1 -3
  9. package/src/skills/claude-md/references/audit-reflect.md +0 -4
  10. package/src/skills/claude-md/references/audit.md +1 -3
  11. package/src/skills/claude-md/references/create-reflect.md +0 -4
  12. package/src/skills/claude-md/references/create.md +1 -3
  13. package/src/skills/claude-md/references/modify-reflect.md +0 -4
  14. package/src/skills/claude-md/references/modify.md +1 -3
  15. package/src/skills/creating-agent-skills/SKILL.md +2 -2
  16. package/src/skills/creating-agent-skills/references/create-step-1-understand.md +1 -1
  17. package/src/skills/creating-agent-skills/references/create-step-2-design.md +3 -3
  18. package/src/skills/creating-agent-skills/references/create-step-3-write.md +42 -10
  19. package/src/skills/creating-agent-skills/references/create-step-4-review.md +2 -2
  20. package/src/skills/creating-agent-skills/references/create-step-5-install.md +1 -3
  21. package/src/skills/creating-agent-skills/references/create-step-6-reflect.md +1 -3
  22. package/src/skills/creating-agent-skills/references/fix-step-1-diagnose.md +5 -4
  23. package/src/skills/creating-agent-skills/references/fix-step-2-apply.md +2 -2
  24. package/src/skills/creating-agent-skills/references/fix-step-3-validate.md +1 -3
  25. package/src/skills/creating-agent-skills/references/fix-step-4-reflect.md +1 -3
  26. package/src/skills/creating-agent-skills/templates/router-skill.md +3 -3
  27. package/src/skills/creating-sub-agents/references/create-step-1-understand.md +1 -1
  28. package/src/skills/creating-sub-agents/references/create-step-2-design.md +1 -1
  29. package/src/skills/creating-sub-agents/references/create-step-3-write.md +1 -1
  30. package/src/skills/creating-sub-agents/references/create-step-4-review.md +1 -1
  31. package/src/skills/creating-sub-agents/references/create-step-5-install.md +1 -3
  32. package/src/skills/creating-sub-agents/references/create-step-6-reflect.md +0 -4
  33. package/src/skills/creating-sub-agents/references/fix-step-3-validate.md +1 -3
  34. package/src/skills/creating-sub-agents/references/fix-step-4-reflect.md +0 -4
  35. package/src/skills/execute-spec/SKILL.md +9 -17
  36. package/src/skills/execute-spec/references/phase-1-hydrate.md +4 -1
  37. package/src/skills/execute-spec/references/phase-2-build.md +5 -3
  38. package/src/skills/execute-spec/references/phase-3-validate.md +5 -4
  39. package/src/skills/execute-spec/references/phase-4-triage.md +4 -0
  40. package/src/skills/execute-spec/references/phase-5-reflect.md +1 -3
  41. package/src/skills/initialize-project/SKILL.md +2 -4
  42. package/src/skills/initialize-project/references/reflect.md +0 -4
  43. package/src/skills/project-setup/references/step-5-verify.md +1 -3
  44. package/src/skills/project-setup/references/step-6-reflect.md +0 -4
  45. package/src/skills/prompting/SKILL.md +1 -1
  46. package/src/skills/prompting/references/create-reflect.md +0 -4
  47. package/src/skills/prompting/references/create.md +1 -3
  48. package/src/skills/prompting/references/review-reflect.md +0 -4
  49. package/src/skills/prompting/references/review.md +1 -3
  50. package/src/skills/research/SKILL.md +1 -1
  51. package/src/skills/research/references/step-2-conduct-research.md +1 -3
  52. package/src/skills/research/references/step-3-reflect.md +0 -4
  53. package/src/skills/setup-lsp/SKILL.md +15 -0
  54. package/src/skills/setup-lsp/references/lsp-registry.md +148 -0
  55. package/src/skills/setup-lsp/references/step-1-scan.md +28 -0
  56. package/src/skills/setup-lsp/references/step-2-install-configure.md +83 -0
  57. package/src/skills/setup-lsp/references/step-3-verify.md +41 -0
  58. package/src/skills/setup-lsp/references/step-4-reflect.md +20 -0
  59. package/src/skills/spec-interview/SKILL.md +1 -32
  60. package/src/skills/spec-interview/references/step-1-opening.md +32 -1
  61. package/src/skills/spec-interview/references/step-2-ideation.md +2 -2
  62. package/src/skills/spec-interview/references/step-4-deep-dive.md +18 -0
  63. package/src/skills/spec-interview/references/step-7-finalize.md +1 -3
  64. package/src/skills/spec-interview/references/step-8-reflect.md +1 -3
  65. package/src/skills/spec-review/SKILL.md +8 -9
  66. package/src/skills/spec-sanity-check/SKILL.md +2 -2
  67. package/src/skills/spec-to-tasks/SKILL.md +3 -3
  68. package/src/skills/spec-to-tasks/references/step-3-generate.md +5 -7
  69. package/src/skills/spec-to-tasks/references/step-4-review.md +9 -14
  70. package/src/skills/task-review/SKILL.md +2 -2
  71. package/src/skills/task-review/references/checklist.md +14 -16
  72. package/src/skills/execute-spec/references/workflow.md +0 -82
@@ -6,38 +6,7 @@ argument-hint: <spec-name>
 
 # Spec Interview
 
-## Team-Based Approach
-
-**IMPORTANT:** This skill uses an agent team for collaborative spec development. You are the **Lead** — you interview the user, write the spec, and curate team input. Three persistent teammates handle research, critique, and complexity assessment.
-
-### Team Composition
-
-All teammates run on Opus:
-- **Researcher** (researcher): Continuously explores the codebase, maps file landscape, integration points, data model. Drafts technical sections.
-- **Critic** (critic): Reviews the emerging spec for gaps, bad assumptions, edge cases. Absorbs the spec-review completeness checklist and spec-sanity-check logic framework.
-- **Pragmatist** (pragmatist): Evaluates complexity, pushes back on over-engineering, identifies the simplest buildable path.
-
-### Working Directory
-
-The team shares `{spec_dir}/working/`:
-- `context.md` — You (the Lead) write interview updates here. Append-only — each update is a new section with a heading (e.g., `## Step 1: Feature Overview`). This replaces broadcasting — teammates read this file to stay current.
-- Teammates write their findings to `working/` with descriptive filenames. Read these at checkpoints.
-- `spec.md` (parent dir) — The living spec. You own this file. Teammates read it but never write to it.
-
-### Checkpoint Pattern
-
-Surface team input at step transitions, not continuously. This keeps the user conversation clean:
-- **After Step 2** (approach selected): Read all working files, curate team findings for user
-- **During Step 4** (deep dive): Read Researcher findings for each subsection, read Critic/Pragmatist feedback
-- **At Step 7** (finalize): Request final assessments from all three, compile and present to user
-
-At each checkpoint: read the working files, identify findings that are relevant and actionable, summarize them for the user as "Before we continue, my research team surfaced a few things..." Skip trivial items.
-
-### Team Lifecycle
-
-1. **Spawn** — After Step 1 (once the feature is understood), create the working directory, read the three prompt templates from `references/`, substitute `{spec_dir}` and `{feature_name}`, use TeamCreate to create a team named `spec-{feature-name}`, then spawn the three teammates via the Task tool
-2. **Communicate** — Update context.md after each step. Message teammates for specific questions. Read their working files at checkpoints.
-3. **Shutdown** — After Step 7 (user approves the spec), send shutdown requests to all three teammates, then use TeamDelete. Leave the `working/` directory in place as reference for implementation.
+Conduct a structured interview to produce an implementation-ready feature spec. This skill uses an agent team — three persistent teammates (Researcher, Critic, Pragmatist) handle codebase exploration, quality review, and complexity assessment while you lead the interview.
 
 ## What To Do Now
 
@@ -22,7 +22,38 @@ Move on when:
 
 ## Initialize the Team
 
-Before proceeding to Step 2, set up the agent team:
+You are the **Lead** — you interview the user, write the spec, and curate team input. Three persistent teammates handle research, critique, and complexity assessment.
+
+### Team Composition
+
+All teammates run on Opus:
+- **Researcher** (researcher): Continuously explores the codebase, maps file landscape, integration points, data model. Drafts technical sections.
+- **Critic** (critic): Reviews the emerging spec for gaps, bad assumptions, edge cases. Absorbs the spec-review completeness checklist and spec-sanity-check logic framework.
+- **Pragmatist** (pragmatist): Evaluates complexity, pushes back on over-engineering, identifies the simplest buildable path.
+
+### Working Directory
+
+The team shares `{spec_dir}/working/`:
+- `context.md` — You (the Lead) write interview updates here. Append-only — each update is a new section with a heading (e.g., `## Step 1: Feature Overview`). This replaces broadcasting — teammates read this file to stay current.
+- Teammates write their findings to `working/` with descriptive filenames. Read these at checkpoints.
+- `spec.md` (parent dir) — The living spec. You own this file. Teammates read it but never write to it.
+
+### Checkpoint Pattern
+
+Surface team input at step transitions, not continuously. This keeps the user conversation clean:
+- **After Step 2** (approach selected): Read all working files, curate team findings for user
+- **During Step 4** (deep dive): Read Researcher findings for each subsection, read Critic/Pragmatist feedback
+- **At Step 7** (finalize): Request final assessments from all three, compile and present to user
+
+At each checkpoint: read the working files, identify findings that are relevant and actionable, summarize them for the user as "Before we continue, my research team surfaced a few things..." Skip trivial items.
+
+### Team Lifecycle
+
+1. **Spawn** — After the opening questions (once the feature is understood), create the working directory, read the three prompt templates from `references/`, substitute `{spec_dir}` and `{feature_name}`, use TeamCreate to create a team named `spec-{feature-name}`, then spawn the three teammates via the Task tool
+2. **Communicate** — Update context.md after each step. Message teammates for specific questions. Read their working files at checkpoints.
+3. **Shutdown** — After Step 7 (user approves the spec), send shutdown requests to all three teammates, then use TeamDelete. Leave the `working/` directory in place as reference for implementation.
+
+### Spawn Steps
 
 1. Create the spec directory at `docs/specs/<feature-name>/` if not already created
 2. Create `docs/specs/<feature-name>/working/` subdirectory
@@ -14,7 +14,7 @@ Use AskUserQuestion to ask:
 
 ## Hybrid Brainstorming
 
-Research shows that combining human and AI ideas produces more original solutions than either alone. The key: get human ideas first, before AI suggestions anchor their thinking.
+Get human ideas first, before AI suggestions anchor their thinking.
 
 ### 1. Collect User Ideas First
 
@@ -22,7 +22,7 @@ Use AskUserQuestion:
 
 > "Before I suggest anything - what approaches have you been considering? Even rough or half-formed ideas are valuable."
 
-Let them share freely. Don't evaluate yet. The goal is to capture their independent thinking before AI ideas influence it.
+Capture ideas without evaluating. The goal is to capture their independent thinking before AI ideas influence it.
 
 ### 2. Generate AI Alternatives
 
@@ -42,6 +42,15 @@ This becomes the File Landscape section of the spec, which spec-to-tasks uses di
 - Triggers and resulting actions
 - Different modes or variations
 
+### Constraints
+- What is explicitly out of scope (features, users, flows to NOT build)
+- Technology boundaries (must use X, must not introduce Y)
+- Performance requirements (latency, throughput, resource limits)
+- Security requirements (auth, PII handling, logging restrictions)
+- Compatibility requirements (browsers, platforms, API versions)
+
+Constraints that aren't written down don't exist during implementation. If the spec doesn't say "don't introduce a new ORM" or "must stay under 200ms," those boundaries won't be respected downstream.
+
 ### Edge Cases & Error Handling
 - Failure modes and how to handle them
 - Invalid input handling
@@ -67,6 +76,15 @@ Write to `docs/specs/<name>/spec.md` with this structure:
 - [Primary goal]
 - [Secondary goals]
 
+## Approach
+[Chosen approach and rationale. What alternatives were considered and why this one was selected.]
+
+## Constraints
+- **Out of scope:** [What this feature explicitly does NOT do]
+- **Technology:** [Must use / must not introduce]
+- **Performance:** [Latency, throughput, resource limits]
+- **Security:** [Auth, PII, logging restrictions]
+
 ## Integration Points
 - Touches: [existing components]
 - External: [APIs, services, libraries]
@@ -57,6 +57,4 @@ Once user confirms no more review passes needed:
 
 If yes to task breakdown, invoke `spec-to-tasks` and specify which spec to break down.
 
-**IMPORTANT: You are not done. You MUST read and complete the next step. The workflow is incomplete without it.**
-
-Read `references/step-8-reflect.md` now.
+Use the Read tool on `references/step-8-reflect.md` to reflect on the interview process and note any skill issues.
@@ -1,8 +1,6 @@
 # Step 8: Reflect and Improve
 
-**IMPORTANT: This step is mandatory. The spec interview workflow is not complete until this step is finished. Do not skip this.**
-
-Reflect on your experience conducting this spec interview. The purpose is to improve the spec-interview skill itself based on what you just learned.
+Reflect on this spec interview to improve the skill itself.
 
 ## Assess
 
@@ -1,22 +1,21 @@
1
1
  ---
2
2
  name: spec-review
3
- description: This skill should be used when the user says "review the spec", "check spec completeness", or "is this spec ready". Also invoked by spec-interview when a spec is complete.
3
+ description: Review a feature spec for completeness and implementation readiness. Checks data models, integration points, acceptance criteria, CLAUDE.md alignment, and duplication.
4
4
  argument-hint: <spec-name>
5
5
  context: fork
6
6
  ---
7
7
 
8
8
  # Spec Review
9
9
 
10
- ## Steps
10
+ ## Find the Spec
11
11
 
12
- 1. **Find the spec** - Use the path from the prompt if provided. Otherwise, find the most recently modified file in `docs/specs/`. If no specs exist, inform the user and stop.
13
- 2. **Read the spec file**
14
- 3. **Find all CLAUDE.md files** - Search for every CLAUDE.md in the project (root and subdirectories)
15
- 4. **Read all CLAUDE.md files** - These contain project constraints and conventions
16
- 5. **Evaluate against the checklist below** - Including CLAUDE.md alignment
17
- 6. **Return structured feedback using the output format**
12
+ Use the path from the prompt if provided. Otherwise, find the most recently modified file in `docs/specs/`. If no specs exist, inform the user and stop.
18
13
 
19
- ## Completeness Checklist
14
+ ## Read Context
15
+
16
+ Read the spec file and all CLAUDE.md files in the project (root and subdirectories). CLAUDE.md files contain project constraints and conventions to check alignment against.
17
+
18
+ ## Evaluate Against Checklist
20
19
 
21
20
  A spec is implementation-ready when ALL of these are satisfied:
22
21
 
@@ -1,13 +1,13 @@
 ---
 name: spec-sanity-check
-description: This skill should be used alongside spec-review to catch logic gaps and incorrect assumptions. Invoked when the user says "sanity check this spec", "does this plan make sense", or "what am I missing". Also auto-invoked by spec-interview during finalization.
+description: Fresh-eyes review of a spec's logic and assumptions. Checks for logic gaps, incorrect assumptions about existing systems, unconsidered scenarios, and implementation pitfalls.
 argument-hint: <spec-path>
 context: fork
 ---
 
 # Spec Sanity Check
 
-Provide a "fresh eyes" review of the spec. This is different from spec-review — you're not checking format or completeness. You're checking whether the plan will actually work.
+Review the spec with fresh eyes. Focus on whether the plan will actually work, not format or completeness.
 
 ## Find the Spec
 
@@ -13,11 +13,11 @@ This skill has 5 steps. **You must complete ALL steps. Do not stop early or skip
 
 1. **Identify Spec** - Find and verify the spec file
 2. **Verify File Landscape** - Map files to acceptance criteria
-3. **Generate Tasks** - Create task files in `tasks/` directory
-4. **Review Tasks** - Run review checklist in a loop until no critical issues remain
+3. **Draft Tasks** - Create draft task files in `tasks/` directory
+4. **Review and Present** - Review drafts, auto-fix issues, then present final results to the user
 5. **Reflect** - Note any skill issues observed during this run
 
-Steps 4 and 5 are mandatory. The review loop in step 4 is automated — fix issues and re-check until clean, with no user input required.
+Step 4 is where you present results to the user. Step 3 produces drafts; step 4 reviews, fixes, and presents them.
 
 ## What To Do Now
 
@@ -1,6 +1,6 @@
1
- # Step 3: Generate Task Files
1
+ # Step 3: Draft Task Files
2
2
 
3
- Create individual task files based on the spec and codebase exploration.
3
+ Create draft task files based on the spec and codebase exploration. These are drafts — you will review, fix, and present them in the next step.
4
4
 
5
5
  ## Task Principles
6
6
 
@@ -60,10 +60,8 @@ docs/specs/<name>/
 
 Use the template in `templates/task.md` for each file. Name files in dependency order so alphabetical sorting reflects execution order.
 
-## Do NOT Present Results Yet
+After writing all draft task files, use the Read tool on `references/step-4-review.md` to review and present your results to the user.
 
-You have generated task files but you are NOT done. The review step is next and it is mandatory.
+## Continue to Review
 
-**IMPORTANT: You are not done. You MUST read and complete the next step. The workflow is incomplete without it.**
-
-Read `references/step-4-review.md` now.
+Draft task files are ready. Use the Read tool on `references/step-4-review.md` now — that is where you review the drafts and present the final results to the user.
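The naming rule in this hunk ("name files in dependency order so alphabetical sorting reflects execution order") works because zero-padded task IDs sort lexicographically in the same order they execute. A tiny sketch, with invented file names:

```javascript
// Zero-padded IDs (T001, T002, ...) make lexicographic order match execution order.
const files = ["T003-api-endpoint.md", "T001-db-schema.md", "T002-data-model.md"];
const ordered = [...files].sort();
console.log(ordered); // ["T001-db-schema.md", "T002-data-model.md", "T003-api-endpoint.md"]
```

Without padding ("T2" vs "T10"), plain string sort would break this invariant, which is why the convention matters.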
@@ -1,8 +1,8 @@
-# Step 4: Review Task Breakdown
+# Step 4: Review and Present Tasks
 
-**IMPORTANT: This step is mandatory. The spec-to-tasks workflow is not complete until this step is finished. Do not skip this.**
+Review the draft tasks, auto-fix all issues, then present the final results to the user. This is the step where the user sees your output.
 
-You must review the generated tasks, fix any issues, and re-review until the breakdown is clean. This is fully automated — do not ask the user for input during this step.
+This is fully automated — fix every issue you find without asking the user. Do not ask for input during this step.
 
 ## Review Checklist
 
@@ -71,24 +71,19 @@ Compare acceptance criteria in the spec to tasks generated.
 - No two tasks create components with similar names, purposes, or overlapping structure
 - Shared patterns use a single shared component with configuration, not separate implementations
 
-## Review Loop
+## Review and Fix Loop
 
-Run the checklist above against all task files. Then:
+Run the checklist above against all task files. Fix every issue you find — Critical, Warning, and fixable Notes — by editing the task files directly. Then re-run the full checklist from the top. Repeat until no issues remain.
 
-1. **If Critical issues found:** Fix them by editing the task files. Then re-run the full checklist again from the top. Repeat until no Critical issues remain.
-2. **If only Warnings/Notes remain:** Proceed — you will present these to the user.
-3. **If no issues found:** Proceed.
+Do not present results until the loop is clean.
 
-Do NOT present results after a single pass if Critical issues exist. The loop must continue until clean.
+## Present Results to User
 
-## Present to User
-
-After the review loop completes with no Critical issues, present:
+After the review loop completes clean, present:
 
 1. Number of tasks generated
 2. Task dependency tree (visual format)
-3. Any Warnings from the review (with your recommendation)
-4. Offer to show task files or proceed to implementation
+3. Summary of review findings and fixes applied (what you found, what you fixed)
 
 **IMPORTANT: You are not done. You MUST read and complete the next step. The workflow is incomplete without it.**
 
@@ -7,7 +7,7 @@ context: fork
 
 # Task Review
 
-Review the task breakdown to catch issues before implementation begins.
+Review the task breakdown, auto-fix all issues found, and report what was fixed.
 
 ## What To Do Now
 
@@ -15,4 +15,4 @@ If an argument was provided, use it as the spec name. Otherwise, find the most r
 
 Read the spec file and all task files in the `tasks/` directory.
 
-Then read `references/checklist.md` and evaluate each item.
+Then read `references/checklist.md` and run the review-and-fix loop.
@@ -1,9 +1,9 @@
 # Task Review Checklist
 
-Evaluate each area. For each issue found, note the severity:
-- **Critical**: Must fix before implementation
-- **Warning**: Should fix, but could proceed
-- **Note**: Minor suggestion
+Evaluate each area. Fix every issue you find by editing the task files directly. Note what you found and fixed:
+- **Critical**: Must fix — fix immediately
+- **Warning**: Should fix — fix now
+- **Note**: Minor suggestion — fix if straightforward
 
 ## 1. Coverage
 
@@ -132,24 +132,22 @@ Compare files-to-create across all tasks. Group by similarity (naming patterns,
132
132
 
133
133
  ---
134
134
 
135
- ## Output Format
135
+ ## Review and Fix Loop
136
136
 
137
- Return findings as a structured list:
137
+ Run the checklist above against all task files. Fix every issue you find by editing the task files directly. Then re-run the full checklist from the top. Repeat until no issues remain.
138
138
 
139
- ```
140
- ## Critical Issues
141
- - [T002] Depends on T005 which comes later - wrong order
142
- - [T003] Missing verification method
139
+ ## Output Format
143
140
 
144
- ## Warnings
145
- - [T001] touches 12 files - consider splitting
146
- - Criterion "User can delete account" has no corresponding task
141
+ After all issues are fixed, report what you found and fixed:
147
142
 
148
- ## Notes
149
- - [T004] Could merge with T005 since they share the same files
143
+ ```
144
+ ## Review Summary
145
+ - [T002] Fixed: reordered dependency — was depending on T005 which came later
146
+ - [T003] Fixed: added concrete verification command (was missing)
147
+ - [T001] Fixed: split into T001a/T001b — was touching 12 files
150
148
 
151
149
  ## Skill Observations (optional)
152
- If any checklist items, severity criteria, or review patterns in this skill were wrong, incomplete, or misleading during this review, note them here. Leave empty if no issues were found.
150
+ If any checklist items or review patterns in this skill were wrong or misleading, note them here.
153
151
  ```
154
152
 
155
153
  If no issues found, state: "Task breakdown looks good. All criteria covered, dependencies valid, verification methods concrete."
@@ -1,82 +0,0 @@
-# Execute Spec Workflow
-
-## Overview
-
-```
-PHASE 1: HYDRATE
-Run parse script → TaskCreate with dependencies
-(NO file reading by orchestrator)
-
-PHASE 2: BUILD
-Loop: find unblocked tasks → dispatch spec-implementer → receive minimal status
-Continue until all tasks built
-
-PHASE 3: VALIDATE
-Dispatch spec-validator for each task (all in parallel)
-Receive pass/fail status only
-
-PHASE 4: TRIAGE
-For failed tasks: re-dispatch spec-implementer
-Re-validate
-Loop until clean or user defers
-
-PHASE 5: REFLECT
-Assess orchestration experience → improve skill files
-(Mandatory — workflow is NOT complete without this)
-```
-
-## Critical: Minimal Context
-
-**Agent returns are pass/fail only.** All details go in task files.
-
-- Implementer returns: `Task complete: T005` or `Blocked: T005 - reason`
-- Validator returns: `Pass: T005` or `Issues: T005 - [brief list]`
-
-The orchestrator never reads task files. It dispatches paths and receives status.
-
-## Phase 1: Hydrate
-
-Read `phase-1-hydrate.md` for details.
-
-Use the parse script to get task metadata:
-```bash
-node ~/.claude/scripts/parse-task-files.js {spec-path}
-```
-
-This returns JSON with task IDs, titles, dependencies, and paths. Create tasks from this output without reading any files.
-
-## Phase 2: Build
-
-Read `phase-2-build.md` for details.
-
-1. Find unblocked tasks via TaskList
-2. Dispatch spec-implementer with just the file path
-3. Receive minimal status (pass/fail)
-4. Repeat until all built
-
-## Phase 3: Validate
-
-Read `phase-3-validate.md` for details.
-
-1. Dispatch spec-validator for each task (parallel)
-2. Receive pass/fail status
-3. Collect list of failed task IDs
-
-## Phase 4: Triage
-
-Read `phase-4-triage.md` for details.
-
-1. For failed tasks: re-dispatch spec-implementer (it reads Review Notes and fixes)
-2. Re-run spec-validator on fixed tasks
-3. Loop until all pass or user defers remaining issues
-
-## Key Principles
-
-- **No file reading by orchestrator** - Hook blocks task file reads
-- **Minimal returns** - Agents return status only, details in task files
-- **Task file is source of truth** - Implementation Notes and Review Notes track all history
-- **Parallelism** - Use `run_in_background: true` where possible
-
-**IMPORTANT: You are not done. You MUST read and complete the next step. The workflow is incomplete without it.**
-
-Read `references/phase-5-reflect.md` now.
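The deleted workflow's hydrate and build phases work from parse-script JSON without reading task files. The field names below are an assumption (the diff only says the script "returns JSON with task IDs, titles, dependencies, and paths"), used to illustrate the "find unblocked tasks" selection from Phase 2:

```javascript
// Assumed shape of parse-task-files.js output — field names are illustrative.
const tasks = [
  { id: "T001", title: "DB schema", deps: [], path: "tasks/T001-db-schema.md" },
  { id: "T002", title: "Data model", deps: ["T001"], path: "tasks/T002-data-model.md" },
  { id: "T003", title: "API endpoint", deps: ["T002"], path: "tasks/T003-api.md" },
];

// Phase 2 dispatch rule: a task is unblocked when it is not yet complete
// and every dependency is complete. The orchestrator only needs IDs and paths.
const completed = new Set(["T001"]);
const unblocked = tasks.filter(
  (t) => !completed.has(t.id) && t.deps.every((d) => completed.has(d))
);
console.log(unblocked.map((t) => t.id)); // ["T002"]
```

This is why the orchestrator never opens task files: dependency metadata alone determines what to dispatch next.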