cc-dev-template 0.1.81 → 0.1.82
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bin/install.js +10 -1
- package/package.json +1 -1
- package/src/agents/objective-researcher.md +52 -0
- package/src/agents/question-generator.md +70 -0
- package/src/scripts/restrict-to-spec-dir.sh +23 -0
- package/src/skills/ship/SKILL.md +46 -0
- package/src/skills/ship/references/step-1-intent.md +50 -0
- package/src/skills/ship/references/step-2-questions.md +42 -0
- package/src/skills/ship/references/step-3-research.md +44 -0
- package/src/skills/ship/references/step-4-design.md +70 -0
- package/src/skills/ship/references/step-5-spec.md +86 -0
- package/src/skills/ship/references/step-6-tasks.md +83 -0
- package/src/skills/ship/references/step-7-implement.md +61 -0
- package/src/skills/ship/references/step-8-reflect.md +21 -0
- package/src/skills/execute-spec/SKILL.md +0 -40
- package/src/skills/execute-spec/references/phase-1-hydrate.md +0 -74
- package/src/skills/execute-spec/references/phase-2-build.md +0 -65
- package/src/skills/execute-spec/references/phase-3-validate.md +0 -73
- package/src/skills/execute-spec/references/phase-4-triage.md +0 -79
- package/src/skills/execute-spec/references/phase-5-reflect.md +0 -32
- package/src/skills/research/SKILL.md +0 -14
- package/src/skills/research/references/step-1-check-existing.md +0 -25
- package/src/skills/research/references/step-2-conduct-research.md +0 -65
- package/src/skills/research/references/step-3-reflect.md +0 -29
- package/src/skills/spec-interview/SKILL.md +0 -17
- package/src/skills/spec-interview/references/critic-prompt.md +0 -140
- package/src/skills/spec-interview/references/pragmatist-prompt.md +0 -76
- package/src/skills/spec-interview/references/researcher-prompt.md +0 -46
- package/src/skills/spec-interview/references/step-1-opening.md +0 -78
- package/src/skills/spec-interview/references/step-2-ideation.md +0 -73
- package/src/skills/spec-interview/references/step-3-ui-ux.md +0 -83
- package/src/skills/spec-interview/references/step-4-deep-dive.md +0 -137
- package/src/skills/spec-interview/references/step-5-research-needs.md +0 -53
- package/src/skills/spec-interview/references/step-6-verification.md +0 -89
- package/src/skills/spec-interview/references/step-7-finalize.md +0 -60
- package/src/skills/spec-interview/references/step-8-reflect.md +0 -32
- package/src/skills/spec-review/SKILL.md +0 -91
- package/src/skills/spec-sanity-check/SKILL.md +0 -82
- package/src/skills/spec-to-tasks/SKILL.md +0 -24
- package/src/skills/spec-to-tasks/references/step-1-identify-spec.md +0 -39
- package/src/skills/spec-to-tasks/references/step-2-explore.md +0 -43
- package/src/skills/spec-to-tasks/references/step-3-generate.md +0 -67
- package/src/skills/spec-to-tasks/references/step-4-review.md +0 -90
- package/src/skills/spec-to-tasks/references/step-5-reflect.md +0 -22
- package/src/skills/spec-to-tasks/templates/task.md +0 -30
- package/src/skills/task-review/SKILL.md +0 -18
- package/src/skills/task-review/references/checklist.md +0 -153
package/src/skills/spec-interview/references/step-7-finalize.md
@@ -1,60 +0,0 @@
-# Step 7: Finalize
-
-Review the spec for completeness and soundness, then hand off.
-
-## Request Final Team Reviews
-
-Message both the Critic and Pragmatist requesting final assessments:
-
-1. Message the Critic: "The spec is substantially complete. Please do a final review against your completeness checklist and sanity check framework. Write your complete findings to `{spec_dir}/working/critic-final-review.md` using the format in your prompt."
-2. Message the Pragmatist: "The spec is substantially complete. Please do a final complexity assessment. Write your findings to `{spec_dir}/working/pragmatist-final-assessment.md` using the format in your prompt."
-3. Wait for both to respond (they will message you when their files are ready)
-4. Read their working files: `critic-final-review.md` and `pragmatist-final-assessment.md`
-
-## Curate the Findings
-
-Synthesize findings from the Critic's review and the Pragmatist's assessment. Some findings may be:
-- Critical issues that must be addressed
-- Valid suggestions worth considering
-- Pedantic or irrelevant items to skip
-
-For each finding, form a recommendation: address it or skip it, and why.
-
-The Critic and Pragmatist have had full context of the entire interview — their findings are more informed than cold reviews. Weight their input accordingly.
-
-## Walk Through With User
-
-Use AskUserQuestion to present findings in batches (2-3 at a time). For each finding:
-- State what the review found
-- Give your recommendation (always include a recommended option)
-- Let user decide: fix, skip, or something else
-
-Track two lists:
-- **Addressed**: findings the user chose to fix
-- **Intentionally skipped**: findings the user chose to ignore
-
-After walking through all findings, make the approved changes to the spec.
-
-## Offer Another Pass
-
-Use AskUserQuestion: "Do you want to run the reviews again?"
-
-If yes, message the Critic and Pragmatist again with additional context: "We already ran a review. These changes were made: [list]. These findings were intentionally skipped: [list]. Look for anything new we haven't considered."
-
-Read their updated working files and repeat the curate → walk through → offer another pass cycle until user is satisfied.
-
-## Complete the Interview
-
-Once user confirms no more review passes needed:
-
-1. Show the user the final spec
-2. Use AskUserQuestion to confirm they are satisfied
-3. Ask if they want to proceed to task breakdown
-4. Shutdown the team:
-   - Send shutdown requests to all three teammates (researcher, critic, pragmatist) via SendMessage with type "shutdown_request"
-   - After all teammates confirm shutdown, use TeamDelete to clean up team resources
-   - The `{spec_dir}/working/` directory remains on disk as reference for implementation
-
-If yes to task breakdown, invoke `spec-to-tasks` and specify which spec to break down.
-
-Use the Read tool on `references/step-8-reflect.md` to reflect on the interview process and note any skill issues.
package/src/skills/spec-interview/references/step-8-reflect.md
@@ -1,32 +0,0 @@
-# Step 8: Reflect and Improve
-
-Reflect on this spec interview to improve the skill itself.
-
-## Assess
-
-Answer these questions honestly:
-
-1. Were any interview steps wrong, incomplete, or misleading? Did any step send you down a wrong path or leave out critical guidance?
-2. Did the team coordination work smoothly? Were the checkpoint patterns (post-ideation, deep dive, finalize) at the right moments? Did any teammate produce findings too late or too early to be useful?
-3. Did the prompt templates (researcher, critic, pragmatist) give adequate direction? Did any teammate misunderstand its role or produce unhelpful output?
-4. Did you discover a question sequence, interview technique, or spec structure that worked better than what the skill prescribed?
-5. Did any commands, paths, tool interactions, or team communication patterns fail and require correction?
-6. Was the step ordering right? Should any steps be reordered, merged, or split?
-
-## Act
-
-If you identified issues above, fix them now:
-
-1. Identify the specific file in the spec-interview skill where the issue lives
-2. Read that file
-3. Apply the fix -- add what was missing, correct what was wrong
-4. Apply the tribal knowledge test: only add what a fresh Claude instance would not already know about conducting spec interviews or coordinating agent teams
-5. Keep the file within its size target
-
-If no issues were found, confirm that to the user.
-
-## Report
-
-Tell the user:
-- What you changed in the spec-interview skill and why, OR
-- That no updates were needed and the skill performed correctly
package/src/skills/spec-review/SKILL.md
@@ -1,91 +0,0 @@
----
-name: spec-review
-description: Review a feature spec for completeness and implementation readiness. Checks data models, integration points, acceptance criteria, CLAUDE.md alignment, and duplication.
-argument-hint: <spec-name>
-context: fork
----
-
-# Spec Review
-
-## Find the Spec
-
-Use the path from the prompt if provided. Otherwise, find the most recently modified file in `docs/specs/`. If no specs exist, inform the user and stop.
-
-## Read Context
-
-Read the spec file and all CLAUDE.md files in the project (root and subdirectories). CLAUDE.md files contain project constraints and conventions to check alignment against.
-
-## Evaluate Against Checklist
-
-A spec is implementation-ready when ALL of these are satisfied:
-
-### Must Have (Blocking if missing)
-
-- [ ] **Clear intent** - What is being built and why is unambiguous
-- [ ] **Data model defined** - Entities, relationships, and constraints are explicit
-- [ ] **Integration points mapped** - What existing code this touches is documented
-- [ ] **Core behavior specified** - Main flows are step-by-step clear
-- [ ] **Acceptance criteria exist** - Testable requirements are listed
-- [ ] **Verification methods defined** - Every acceptance criterion has a specific way to verify it (test command, agent-browser steps, or explicit check)
-- [ ] **No ambiguities** - Nothing requires interpretation; all requirements are explicit
-- [ ] **No unknowns** - All information needed for implementation is present; nothing left to discover
-- [ ] **CLAUDE.md alignment** - Spec does not conflict with constraints in any CLAUDE.md file
-- [ ] **No internal duplication** - File Landscape contains no sets of new files that serve similar purposes and could share a common implementation
-
-### Should Have (Gaps that cause implementation friction)
-
-- [ ] **Edge cases covered** - Error conditions and boundaries are addressed
-- [ ] **External dependencies documented** - APIs, libraries, services are listed
-- [ ] **Blockers section exists** - Missing credentials, pending decisions are called out
-- [ ] **UI/UX wireframes exist** - If feature has a user interface, ASCII wireframes are present
-- [ ] **Design direction documented** - If feature has UI, visual approach is explicit (not assumed)
-
-### Implementation Readiness
-
-The test: could an agent implement this feature with ZERO assumptions? If the agent would need to guess, interpret, or discover anything, the spec is not ready.
-
-Flag these problems:
-- Vague language ("should handle errors appropriately" — HOW?)
-- Missing details ("integrates with auth" — WHERE? HOW?)
-- Unstated assumptions ("uses the standard pattern" — WHICH pattern?)
-- Blocking dependencies ("needs API access" — DO WE HAVE IT?)
-- Unverifiable criteria ("dashboard works correctly" — HOW DO WE CHECK?)
-- Missing verification ("loads fast" — WHAT COMMAND PROVES IT?)
-- Implicit knowledge ("depends on how X works" — SPECIFY IT)
-- Unverified claims ("the API returns..." — HAS THIS BEEN CONFIRMED?)
-- CLAUDE.md conflicts (spec proposes X but CLAUDE.md requires Y — WHICH IS IT?)
-- Near-duplicate new components (three card components for different pages — CONSOLIDATE into one shared component with props/configuration)
-
-## Output Format
-
-Return the review as:
-
-```
-## Spec Review: [Feature Name]
-
-### Status: [READY | NEEDS WORK]
-
-### Missing (Blocking)
-- [Item]: [What's missing and why it blocks implementation]
-
-### CLAUDE.md Conflicts
-- [Constraint from CLAUDE.md]: [How the spec conflicts with it]
-
-### Gaps (Non-blocking but should address)
-- [Item]: [What's unclear or incomplete]
-
-### Duplication Concerns
-- [Group of similar new files/components]: [How they overlap and consolidation recommendation]
-
-### Blocking Dependencies
-- [Dependency]: [What's needed before implementation can start]
-
-### Skill Observations (optional)
-If any checklist items, evaluation criteria, or output format instructions in this skill were wrong, incomplete, or misleading during this review, note them here. Leave empty if no issues were found.
-
-### Recommendation
-[Specific questions to ask the user, or "Spec is implementation-ready"]
-```
-
-**READY**: Spec can proceed to task breakdown.
-**NEEDS WORK**: List specific questions that need answers.
package/src/skills/spec-sanity-check/SKILL.md
@@ -1,82 +0,0 @@
----
-name: spec-sanity-check
-description: Fresh-eyes review of a spec's logic and assumptions. Checks for logic gaps, incorrect assumptions about existing systems, unconsidered scenarios, and implementation pitfalls.
-argument-hint: <spec-path>
-context: fork
----
-
-# Spec Sanity Check
-
-Review the spec with fresh eyes. Focus on whether the plan will actually work, not format or completeness.
-
-## Find the Spec
-
-Use the path from the prompt if provided. Otherwise, find the most recently modified file in `docs/specs/`. If no specs exist, inform the user and stop.
-
-## Read and Understand
-
-Read the entire spec. Understand what is being built and how.
-
-## Ask These Questions
-
-For each section of the spec, challenge it:
-
-### Logic Gaps
-- Does the described flow actually work end-to-end?
-- Are there steps that assume a previous step succeeded without checking?
-- Are there circular dependencies?
-- Does the order of operations make sense?
-
-### Incorrect Assumptions
-- Are there assumptions about how existing systems work that might be wrong?
-- Are there assumptions about external APIs, libraries, or services?
-- Are there assumptions about data formats or availability?
-- Use Explorer subagents to verify assumptions against the actual codebase
-
-### Unconsidered Scenarios
-- What happens in edge cases not explicitly covered?
-- What happens under load or at scale?
-- What happens if external dependencies fail?
-- What happens if data is malformed or missing?
-
-### Implementation Pitfalls
-- Are there common bugs this approach would likely introduce?
-- Are there security implications not addressed?
-- Are there performance implications not addressed?
-- Are there race conditions or timing issues?
-
-### The "What If" Test
-- What if [key assumption] is wrong?
-- What if [external dependency] changes?
-- What if [data volume] is 10x what we expect?
-
-## Output Format
-
-Return findings as:
-
-```
-## Sanity Check: [Feature Name]
-
-### Status: [SOUND | CONCERNS]
-
-### Logic Issues
-- [Issue]: [Why this is a problem]
-
-### Questionable Assumptions
-- [Assumption]: [Why this might be wrong] [Suggestion to verify]
-
-### Unconsidered Scenarios
-- [Scenario]: [What could go wrong]
-
-### Potential Pitfalls
-- [Pitfall]: [How to avoid]
-
-### Skill Observations (optional)
-If any evaluation questions, check categories, or output format instructions in this skill were wrong, incomplete, or misleading during this review, note them here. Leave empty if no issues were found.
-
-### Recommendation
-[Either "Plan is sound" or specific concerns to address]
-```
-
-**SOUND**: No significant concerns found.
-**CONCERNS**: Issues that should be addressed before implementation.
package/src/skills/spec-to-tasks/SKILL.md
@@ -1,24 +0,0 @@
----
-name: spec-to-tasks
-description: Converts a feature specification into implementation tasks. Use after a spec is complete and approved, or when planning work for a defined feature.
-argument-hint: <spec-name>
-context: fork
----
-
-# Spec to Tasks
-
-## Workflow Overview
-
-This skill has 5 steps. **You must complete ALL steps. Do not stop early or skip any step.**
-
-1. **Identify Spec** - Find and verify the spec file
-2. **Verify File Landscape** - Map files to acceptance criteria
-3. **Draft Tasks** - Create draft task files in `tasks/` directory
-4. **Review and Present** - Review drafts, auto-fix issues, then present final results to the user
-5. **Reflect** - Note any skill issues observed during this run
-
-Step 4 is where you present results to the user. Step 3 produces drafts; step 4 reviews, fixes, and presents them.
-
-## What To Do Now
-
-Read `references/step-1-identify-spec.md` and begin.
package/src/skills/spec-to-tasks/references/step-1-identify-spec.md
@@ -1,39 +0,0 @@
-# Step 1: Identify the Spec
-
-Locate the specification to convert into tasks.
-
-## If Spec Path Provided
-
-Use the path from the user's prompt. Verify the file exists and read it.
-
-## If No Path Provided
-
-Find the most recently modified spec:
-
-```bash
-git log -1 --name-only --diff-filter=AM -- 'docs/specs/**/*.md' | tail -1
-```
-
-If no specs found in git history, check `docs/specs/` for any spec files and ask the user which one to use.
-
-## Verify the Spec
-
-Read the spec file. Confirm it contains enough detail to generate implementation tasks:
-
-**Required:**
-- Acceptance Criteria with verification methods (each criterion becomes a task)
-- Clear behavior descriptions
-
-**Expected (from spec-interview):**
-- File Landscape section listing files to create/modify
-- Integration points and data model
-
-**If missing acceptance criteria or verification methods:**
-Inform the user: "This spec doesn't have acceptance criteria with verification methods. Each task needs a clear pass/fail test. Would you like to add them now, or run spec-interview to complete the spec?"
-
-**If missing file landscape:**
-Proceed to step 2 where we'll discover file paths via exploration.
-
-## Next Step
-
-Once the spec is identified and verified, read `references/step-2-explore.md`.
package/src/skills/spec-to-tasks/references/step-2-explore.md
@@ -1,43 +0,0 @@
-# Step 2: Verify File Landscape
-
-The spec should already contain a File Landscape section from the interview process. This step verifies and supplements it.
-
-## Check the Spec
-
-Read the spec's File Landscape section. It should list:
-- **Files to create**: New files with paths and purposes
-- **Files to modify**: Existing files that need changes
-
-If the File Landscape section is missing or incomplete, use an Explorer to fill the gaps:
-
-> "To implement [feature from spec], what files would need to be created or modified? Give me concrete file paths."
-
-## Map Files to Acceptance Criteria
-
-For each acceptance criterion in the spec, identify which files are involved. This mapping drives task generation.
-
-Example:
-```
-"User can receive notifications"
-  → src/models/notification.ts
-  → src/services/notificationService.ts
-  → src/services/notificationService.test.ts
-
-"User can view notification list"
-  → src/routes/notifications.ts
-  → src/components/NotificationList.tsx
-```
-
-If a criterion's files aren't clear from the spec, ask an Explorer:
-
-> "What files would be involved in making this criterion pass: [criterion]?"
-
-## Output
-
-You should now have:
-1. Complete list of files to create and modify
-2. Each acceptance criterion mapped to its files
-
-## Next Step
-
-Once files are mapped to criteria, read `references/step-3-generate.md`.
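The criterion→files mapping deleted above is easy to hold as plain data once the criteria are enumerated. A minimal sketch — the criteria and paths are the illustrative ones from that example, not real project files — that also flags criteria whose file list includes no test file:

```python
# Hypothetical data: criteria and file paths taken from the example above.
criterion_files = {
    "User can receive notifications": [
        "src/models/notification.ts",
        "src/services/notificationService.ts",
        "src/services/notificationService.test.ts",
    ],
    "User can view notification list": [
        "src/routes/notifications.ts",
        "src/components/NotificationList.tsx",
    ],
}

def criteria_without_tests(mapping):
    """Return criteria whose file list contains no test file."""
    return [
        criterion
        for criterion, files in mapping.items()
        if not any(".test." in f or ".spec." in f for f in files)
    ]

print(criteria_without_tests(criterion_files))
```

A criterion surfacing here is not necessarily a problem (its verification may be an agent-browser script rather than a test file), but it is worth a second look.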
package/src/skills/spec-to-tasks/references/step-3-generate.md
@@ -1,67 +0,0 @@
-# Step 3: Draft Task Files
-
-Create draft task files based on the spec and codebase exploration. These are drafts — you will review, fix, and present them in the next step.
-
-## Task Principles
-
-**Criterion-based**: Each task corresponds to one acceptance criterion from the spec. A task includes all files needed to make that criterion pass. Do NOT split by file or architectural layer.
-
-**Verifiable**: Every task has a verification method from the spec. A coder implements, a QA agent verifies, and the loop continues until it passes.
-
-**Ordered**: Name files so they sort in dependency order (T001, T002, etc.). Tasks with no dependencies on each other can be worked in parallel.
-
-**Concrete file paths**: Use the file paths discovered in Step 2. Every task lists all files it touches.
-
-## Deriving Tasks from Acceptance Criteria
-
-Each acceptance criterion in the spec becomes one task.
-
-**For each criterion, determine:**
-1. What files must exist or change for this to pass?
-2. What's the verification method from the spec?
-3. What other criteria must pass first? (dependencies)
-
-**Grouping rules:**
-- If two criteria share foundational work (e.g., both need a model), the first task creates the foundation, later tasks build on it
-- If a criterion is too large (touches 10+ files), flag it — the spec may need refinement
-- Small tasks are fine; artificial splits are not
-
-**Anti-patterns to avoid:**
-- "Create the User model" — no verifiable outcome
-- "Add service layer" — implementation detail, not behavior
-- "Set up database schema" — means to an end, not the end
-
-**Good task boundaries:**
-- "User can register with email" — verifiable, coherent
-- "Duplicate emails are rejected" — verifiable, coherent
-- "Dashboard shows notification count" — verifiable, coherent
-
-## Validate Criteria Quality
-
-Before generating tasks, verify each acceptance criterion has:
-- A specific, testable condition
-- A verification method (test command, agent-browser script, or query)
-
-If criteria are vague or missing verification, stop and ask:
-> "The criterion '[X]' doesn't have a clear verification method. Should I suggest one, or would you like to refine the spec first?"
-
-## Generate Task Files
-
-Create a `tasks/` directory inside the spec folder:
-
-```
-docs/specs/<name>/
-├── spec.md
-└── tasks/
-    ├── T001-<slug>.md
-    ├── T002-<slug>.md
-    └── T003-<slug>.md
-```
-
-Use the template in `templates/task.md` for each file. Name files in dependency order so alphabetical sorting reflects execution order.
-
-After writing all draft task files, use the Read tool on `references/step-4-review.md` to review and present your results to the user.
-
-## Continue to Review
-
-Draft task files are ready. Use the Read tool on `references/step-4-review.md` now — that is where you review the drafts and present the final results to the user.
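The `T001-<slug>.md` naming convention deleted above can be sketched mechanically. This is a hypothetical helper, not part of the package: it numbers criteria in the dependency order already determined, and slugifies titles so the resulting filenames sort alphabetically into execution order:

```python
import re

def slugify(text: str) -> str:
    """Lowercase, replace punctuation/whitespace runs with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def task_filenames(criteria):
    """criteria must already be in dependency order; zero-padded
    numbering keeps alphabetical sort equal to execution order."""
    return [f"T{i:03d}-{slugify(c)}.md" for i, c in enumerate(criteria, start=1)]

criteria = [
    "User can register with email",
    "Duplicate emails are rejected",
]
print(task_filenames(criteria))
# → ['T001-user-can-register-with-email.md', 'T002-duplicate-emails-are-rejected.md']
```

Zero-padding matters: `T10` sorts before `T2` without it, which would silently break the "names sort in execution order" invariant.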
package/src/skills/spec-to-tasks/references/step-4-review.md
@@ -1,90 +0,0 @@
-# Step 4: Review and Present Tasks
-
-Review the draft tasks, auto-fix all issues, then present the final results to the user. This is the step where the user sees your output.
-
-This is fully automated — fix every issue you find without asking the user. Do not ask for input during this step.
-
-## Review Checklist
-
-Read every task file and the spec. Evaluate each area below. For each issue found, note the severity:
-- **Critical**: Must fix before proceeding
-- **Warning**: Should fix, but could proceed
-- **Note**: Minor suggestion
-
-### 1. Coverage
-
-Compare acceptance criteria in the spec to tasks generated.
-
-- Every acceptance criterion has exactly one corresponding task
-- No criteria were skipped or forgotten
-- No phantom tasks that don't map to a criterion
-
-**How to verify:** List each criterion from the spec's Acceptance Criteria section. For each, find the matching task file. Flag any orphans in either direction.
-
-### 2. Dependency Order
-
-- Task file names sort in valid execution order (T001, T002, etc.)
-- Each task's `depends_on` references only earlier tasks
-- No circular dependencies
-- Foundation work comes before features that use it
-
-### 3. File Plausibility
-
-- File paths follow project conventions
-- Files to modify actually exist in the codebase
-- Files to create are in appropriate directories
-- No duplicate files across tasks (each file appears in exactly one task)
-
-### 4. Verification Executability
-
-- Verification is a specific command or script, not vague prose
-- Test file paths exist or will be created by the task
-- No "manually verify" without clear steps
-
-**Red flags:** "Verify it works correctly", "Check that the feature functions", test commands for files not listed in the task.
-
-### 5. Verification Completeness
-
-- Read the criterion text carefully — identify every distinct behavior or edge case mentioned
-- For each behavior, confirm there's a corresponding verification step
-- Flag any behaviors in the criterion that have no verification
-
-### 6. Dependency Completeness
-
-- If task X modifies a file, check if another task creates it — that task must be in X's depends_on
-- If task X uses a component/function/route, check if another task creates it — that task must be in X's depends_on
-
-### 7. Task Scope
-
-- No task touches more than ~10 files (consider splitting)
-- No trivially small tasks that could merge with related work
-- Each task produces a verifiable outcome
-
-### 8. Consistency
-
-- Task titles match or closely reflect the acceptance criterion
-- Status is `pending` for all new tasks
-- Frontmatter format is consistent across all task files
-
-### 9. Component Consolidation
-
-- No two tasks create components with similar names, purposes, or overlapping structure
-- Shared patterns use a single shared component with configuration, not separate implementations
-
-## Review and Fix Loop
-
-Run the checklist above against all task files. Fix every issue you find — Critical, Warning, and fixable Notes — by editing the task files directly. Then re-run the full checklist from the top. Repeat until no issues remain.
-
-Do not present results until the loop is clean.
-
-## Present Results to User
-
-After the review loop completes clean, present:
-
-1. Number of tasks generated
-2. Task dependency tree (visual format)
-3. Summary of review findings and fixes applied (what you found, what you fixed)
-
-**IMPORTANT: You are not done. You MUST read and complete the next step. The workflow is incomplete without it.**
-
-Read `references/step-5-reflect.md` now.
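The dependency-order checks in the deleted review checklist above are mechanically verifiable. A sketch, assuming tasks are represented as a map from task id to its `depends_on` list (a hypothetical representation, not the package's actual tooling). Note that if every task references only earlier tasks, circular dependencies are impossible, so one check covers both bullets:

```python
def check_dependencies(tasks):
    """tasks: dict mapping task id (e.g. 'T001') -> list of depends_on ids.
    Returns a list of problems; an empty list means ordering is valid."""
    problems = []
    order = sorted(tasks)  # file names are chosen to sort in execution order
    for tid, deps in tasks.items():
        for dep in deps:
            if dep not in tasks:
                problems.append(f"{tid} depends on unknown task {dep}")
            elif order.index(dep) >= order.index(tid):
                # Forward references also rule out cycles: a cycle would
                # require at least one task to depend on a later one.
                problems.append(f"{tid} depends on later task {dep}")
    return problems

print(check_dependencies({"T001": [], "T002": ["T001"], "T003": ["T002", "T001"]}))
print(check_dependencies({"T001": ["T002"], "T002": ["T001"]}))
```

The first call returns no problems; the second reports the forward reference, which is how a two-task cycle surfaces under this check.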
package/src/skills/spec-to-tasks/references/step-5-reflect.md
@@ -1,22 +0,0 @@
-# Step 5: Skill Reflection
-
-**IMPORTANT: This step is mandatory. The spec-to-tasks workflow is not complete until this step is finished. Do not skip this.**
-
-## Reflect on This Run
-
-Think about how this skill performed during this session. Consider:
-
-1. **Step instructions**: Were any steps unclear, misleading, or missing information?
-2. **Task template**: Did the template work well, or did you need to deviate from it?
-3. **Review checklist**: Did the checklist catch real issues? Were any checks unnecessary or missing?
-4. **Workflow flow**: Did the step order make sense? Were there unnecessary steps or missing ones?
-
-## Report Issues
-
-If you identified any problems with the skill's instructions, templates, or workflow, include a brief note in your final output to the user under a "Skill Observations" heading. Keep it factual — what was wrong, what would be better.
-
-If everything worked well, state: "No skill issues observed."
-
-## Complete
-
-The spec-to-tasks workflow is now complete.
package/src/skills/spec-to-tasks/templates/task.md
@@ -1,30 +0,0 @@
----
-id: T00X
-title: <Short descriptive title — the acceptance criterion>
-status: pending
-depends_on: []
----
-
-## Criterion
-
-<The acceptance criterion from the spec, verbatim or lightly edited for clarity>
-
-## Files
-
-- <path/to/file.ts>
-- <path/to/another-file.ts>
-- <path/to/test-file.test.ts>
-
-## Verification
-
-<The verification method from the spec — test command, agent-browser script, or manual steps>
-
----
-
-## Implementation Notes
-
-<!-- Coder agent writes here after each implementation attempt -->
-
-## Review Notes
-
-<!-- QA agent writes here after each review pass -->
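The task template's frontmatter (deleted above) is flat enough to parse without a YAML library. A minimal sketch — a hypothetical helper, assuming only the flat `key: value` lines and the bracketed `depends_on` list that the template uses:

```python
def parse_frontmatter(text: str) -> dict:
    """Parse the leading '---'-delimited block of a task file.
    Handles only flat 'key: value' lines; not a general YAML parser."""
    lines = text.strip().splitlines()
    assert lines[0] == "---", "task file must start with frontmatter"
    end = lines.index("---", 1)
    meta = {}
    for line in lines[1:end]:
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    # depends_on is a bracketed list like "[T001, T002]"; split it into ids
    deps = meta.get("depends_on", "[]").strip("[]")
    meta["depends_on"] = [d.strip() for d in deps.split(",") if d.strip()]
    return meta

sample = """---
id: T002
title: Duplicate emails are rejected
status: pending
depends_on: [T001]
---

## Criterion
"""
print(parse_frontmatter(sample))
```

Keeping the frontmatter this regular is what lets review tooling (coverage, dependency-order, and consistency checks) treat task files as data.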
package/src/skills/task-review/SKILL.md
@@ -1,18 +0,0 @@
----
-name: task-review
-description: Reviews task breakdown for completeness, correct ordering, and implementation readiness. Use after spec-to-tasks generates task files.
-argument-hint: <spec-name>
-context: fork
----
-
-# Task Review
-
-Review the task breakdown, auto-fix all issues found, and report what was fixed.
-
-## What To Do Now
-
-If an argument was provided, use it as the spec name. Otherwise, find the most recent spec with a `tasks/` directory.
-
-Read the spec file and all task files in the `tasks/` directory.
-
-Then read `references/checklist.md` and run the review-and-fix loop.