cc-dev-template 0.1.80 → 0.1.82
This diff shows the content of publicly available package versions released to a supported registry. It is provided for informational purposes only and reflects the changes between the two versions as they appear in the public registry.
- package/bin/install.js +10 -1
- package/package.json +1 -1
- package/src/agents/objective-researcher.md +52 -0
- package/src/agents/question-generator.md +70 -0
- package/src/scripts/restrict-to-spec-dir.sh +23 -0
- package/src/skills/agent-browser/SKILL.md +7 -133
- package/src/skills/agent-browser/references/common-patterns.md +64 -0
- package/src/skills/agent-browser/references/ios-simulator.md +25 -0
- package/src/skills/agent-browser/references/reflect.md +9 -0
- package/src/skills/agent-browser/references/semantic-locators.md +11 -0
- package/src/skills/claude-md/SKILL.md +1 -3
- package/src/skills/claude-md/references/audit-reflect.md +0 -4
- package/src/skills/claude-md/references/audit.md +1 -3
- package/src/skills/claude-md/references/create-reflect.md +0 -4
- package/src/skills/claude-md/references/create.md +1 -3
- package/src/skills/claude-md/references/modify-reflect.md +0 -4
- package/src/skills/claude-md/references/modify.md +1 -3
- package/src/skills/creating-agent-skills/SKILL.md +2 -2
- package/src/skills/creating-agent-skills/references/create-step-1-understand.md +1 -1
- package/src/skills/creating-agent-skills/references/create-step-2-design.md +3 -3
- package/src/skills/creating-agent-skills/references/create-step-3-write.md +42 -10
- package/src/skills/creating-agent-skills/references/create-step-4-review.md +2 -2
- package/src/skills/creating-agent-skills/references/create-step-5-install.md +1 -3
- package/src/skills/creating-agent-skills/references/create-step-6-reflect.md +1 -3
- package/src/skills/creating-agent-skills/references/fix-step-1-diagnose.md +5 -4
- package/src/skills/creating-agent-skills/references/fix-step-2-apply.md +2 -2
- package/src/skills/creating-agent-skills/references/fix-step-3-validate.md +1 -3
- package/src/skills/creating-agent-skills/references/fix-step-4-reflect.md +1 -3
- package/src/skills/creating-agent-skills/templates/router-skill.md +3 -3
- package/src/skills/creating-sub-agents/references/create-step-1-understand.md +1 -1
- package/src/skills/creating-sub-agents/references/create-step-2-design.md +1 -1
- package/src/skills/creating-sub-agents/references/create-step-3-write.md +1 -1
- package/src/skills/creating-sub-agents/references/create-step-4-review.md +1 -1
- package/src/skills/creating-sub-agents/references/create-step-5-install.md +1 -3
- package/src/skills/creating-sub-agents/references/create-step-6-reflect.md +0 -4
- package/src/skills/creating-sub-agents/references/fix-step-3-validate.md +1 -3
- package/src/skills/creating-sub-agents/references/fix-step-4-reflect.md +0 -4
- package/src/skills/initialize-project/SKILL.md +2 -4
- package/src/skills/initialize-project/references/reflect.md +0 -4
- package/src/skills/project-setup/references/step-5-verify.md +1 -3
- package/src/skills/project-setup/references/step-6-reflect.md +0 -4
- package/src/skills/prompting/SKILL.md +1 -1
- package/src/skills/prompting/references/create-reflect.md +0 -4
- package/src/skills/prompting/references/create.md +1 -3
- package/src/skills/prompting/references/review-reflect.md +0 -4
- package/src/skills/prompting/references/review.md +1 -3
- package/src/skills/setup-lsp/SKILL.md +1 -1
- package/src/skills/setup-lsp/references/step-1-scan.md +1 -1
- package/src/skills/setup-lsp/references/step-2-install-configure.md +1 -3
- package/src/skills/setup-lsp/references/step-3-verify.md +1 -3
- package/src/skills/setup-lsp/references/step-4-reflect.md +0 -2
- package/src/skills/ship/SKILL.md +46 -0
- package/src/skills/ship/references/step-1-intent.md +50 -0
- package/src/skills/ship/references/step-2-questions.md +42 -0
- package/src/skills/ship/references/step-3-research.md +44 -0
- package/src/skills/ship/references/step-4-design.md +70 -0
- package/src/skills/ship/references/step-5-spec.md +86 -0
- package/src/skills/ship/references/step-6-tasks.md +83 -0
- package/src/skills/ship/references/step-7-implement.md +61 -0
- package/src/skills/ship/references/step-8-reflect.md +21 -0
- package/src/skills/execute-spec/SKILL.md +0 -48
- package/src/skills/execute-spec/references/phase-1-hydrate.md +0 -71
- package/src/skills/execute-spec/references/phase-2-build.md +0 -63
- package/src/skills/execute-spec/references/phase-3-validate.md +0 -72
- package/src/skills/execute-spec/references/phase-4-triage.md +0 -75
- package/src/skills/execute-spec/references/phase-5-reflect.md +0 -34
- package/src/skills/execute-spec/references/workflow.md +0 -82
- package/src/skills/research/SKILL.md +0 -14
- package/src/skills/research/references/step-1-check-existing.md +0 -25
- package/src/skills/research/references/step-2-conduct-research.md +0 -67
- package/src/skills/research/references/step-3-reflect.md +0 -33
- package/src/skills/spec-interview/SKILL.md +0 -48
- package/src/skills/spec-interview/references/critic-prompt.md +0 -140
- package/src/skills/spec-interview/references/pragmatist-prompt.md +0 -76
- package/src/skills/spec-interview/references/researcher-prompt.md +0 -46
- package/src/skills/spec-interview/references/step-1-opening.md +0 -47
- package/src/skills/spec-interview/references/step-2-ideation.md +0 -73
- package/src/skills/spec-interview/references/step-3-ui-ux.md +0 -83
- package/src/skills/spec-interview/references/step-4-deep-dive.md +0 -119
- package/src/skills/spec-interview/references/step-5-research-needs.md +0 -53
- package/src/skills/spec-interview/references/step-6-verification.md +0 -89
- package/src/skills/spec-interview/references/step-7-finalize.md +0 -62
- package/src/skills/spec-interview/references/step-8-reflect.md +0 -34
- package/src/skills/spec-review/SKILL.md +0 -92
- package/src/skills/spec-sanity-check/SKILL.md +0 -82
- package/src/skills/spec-to-tasks/SKILL.md +0 -24
- package/src/skills/spec-to-tasks/references/step-1-identify-spec.md +0 -39
- package/src/skills/spec-to-tasks/references/step-2-explore.md +0 -43
- package/src/skills/spec-to-tasks/references/step-3-generate.md +0 -69
- package/src/skills/spec-to-tasks/references/step-4-review.md +0 -95
- package/src/skills/spec-to-tasks/references/step-5-reflect.md +0 -22
- package/src/skills/spec-to-tasks/templates/task.md +0 -30
- package/src/skills/task-review/SKILL.md +0 -18
- package/src/skills/task-review/references/checklist.md +0 -155
@@ -1,83 +0,0 @@
-# Step 3: UI/UX Design
-
-If the feature has no user interface, skip to `references/step-4-deep-dive.md`.
-
-## Determine Design Direction
-
-Before any wireframes, establish the visual approach. Use AskUserQuestion to confirm:
-
-**Product context:**
-- What does this product need to feel like?
-- Who uses it? (Power users want density, occasional users want guidance)
-- What's the emotional job? (Trust, efficiency, delight, focus)
-
-**Design direction options:**
-- Precision & Density — tight spacing, monochrome, information-forward (Linear, Raycast)
-- Warmth & Approachability — generous spacing, soft shadows, friendly (Notion, Coda)
-- Sophistication & Trust — cool tones, layered depth, financial gravitas (Stripe, Mercury)
-- Boldness & Clarity — high contrast, dramatic negative space (Vercel)
-- Utility & Function — muted palette, functional density (GitHub)
-
-**Color foundation:**
-- Warm (creams, warm grays) — approachable, human
-- Cool (slate, blue-gray) — professional, serious
-- Pure neutrals (true grays) — minimal, technical
-
-**Layout approach:**
-- Dense grids for scanning/comparing
-- Generous spacing for focused tasks
-- Sidebar navigation for multi-section apps
-- Split panels for list-detail patterns
-
-Use AskUserQuestion to present 2-3 options and get the user's preference.
-
-## Create ASCII Wireframes
-
-Sketch the interface in ASCII. Keep it rough—this is for alignment, not pixel precision.
-
-```
-Example:
-┌─────────────────────────────────────────┐
-│ Page Title                   [Action ▾] │
-├──────────┬──────────────────────────────┤
-│ Nav Item │ Content Area                 │
-│ Nav Item │ ┌─────────────────────────┐  │
-│ Nav Item │ │ Component               │  │
-│          │ └─────────────────────────┘  │
-└──────────┴──────────────────────────────┘
-```
-
-Create wireframes for:
-- Primary screen(s) the user will interact with
-- Key states (empty, loading, error, populated)
-- Any modals or secondary views
-
-Present each wireframe to the user. Use AskUserQuestion to confirm or iterate.
-
-## Map User Flows
-
-For each primary action, document the interaction sequence:
-
-1. Where does the user start?
-2. What do they click/type?
-3. What feedback do they see?
-4. Where do they end up?
-
-Format as simple numbered steps under each flow name.
-
-## When to Move On
-
-Proceed to `references/step-4-deep-dive.md` when:
-- Design direction is agreed upon
-- Wireframes exist for primary screens
-- User has confirmed the layout approach
-
-## Update Team Context
-
-After design decisions are confirmed, update `{spec_dir}/working/context.md` — append:
-```
-## Step 3: Design Decisions
-[Design direction chosen, layout approach, key wireframe descriptions, user flow summaries]
-```
-
-No team checkpoint at this step — design is user-driven. Teammates will read the updated context.md on their own.
@@ -1,119 +0,0 @@
-# Step 4: Deep Dive
-
-Cover all specification areas through conversation. Update `docs/specs/<name>/spec.md` incrementally as information emerges.
-
-Use AskUserQuestion whenever requirements are ambiguous or multiple approaches exist. Present options with tradeoffs and get explicit decisions.
-
-## Areas to Cover
-
-### Intent & Goals
-- Primary goal and business/user value
-- Success metrics and how to verify it works
-
-### Integration Points
-- Existing system components this touches
-- External services, APIs, or libraries
-- Data flows in and out
-
-The Researcher has been exploring the codebase since Step 1. Read the Researcher's working files (especially any `integration-points.md` or related files). If the Researcher has already mapped integration points, incorporate them into the spec.
-
-If specific questions remain, message the Researcher via SendMessage with targeted questions like "How does authentication work in this codebase?" or "What middleware handles protected routes?" and wait for a response.
-
-No assumptions. If something is unclear, ask the Researcher to investigate.
-
-### File Landscape
-
-Read the Researcher's `file-landscape.md` working file. The Researcher should have identified concrete file paths by now. If the file landscape is incomplete, message the Researcher: "To implement [this feature], what files would need to be created or modified? Give me concrete file paths."
-
-Capture:
-- **Files to create**: New files with full paths (e.g., `src/models/notification.ts`)
-- **Files to modify**: Existing files that need changes (e.g., `src/routes/index.ts`)
-- **Directory conventions**: Where each type of code lives in this project
-
-This becomes the File Landscape section of the spec, which spec-to-tasks uses directly.
-
-### Data Model
-- Entities and relationships
-- Constraints (required fields, validations, limits)
-- Extending existing models vs creating new ones
-
-### Behavior & Flows
-- Main user flows, step by step
-- Triggers and resulting actions
-- Different modes or variations
-
-### Edge Cases & Error Handling
-- Failure modes and how to handle them
-- Invalid input handling
-- Boundary conditions
-- Partial failure recovery
-
-### Blockers & Dependencies
-- External dependencies (APIs, services, libraries)
-- Credentials or access needed
-- Decisions that must be made before implementation
-
-## Spec Structure
-
-Write to `docs/specs/<name>/spec.md` with this structure:
-
-```markdown
-# [Feature Name]
-
-## Overview
-[2-3 sentences: what and why]
-
-## Goals
-- [Primary goal]
-- [Secondary goals]
-
-## Integration Points
-- Touches: [existing components]
-- External: [APIs, services, libraries]
-- Data flows: [in/out]
-
-## File Landscape
-
-### Files to Create
-- [path/to/new-file.ts]: [purpose]
-
-### Files to Modify
-- [path/to/existing-file.ts]: [what changes]
-
-## Data Model
-[Entities, relationships, constraints]
-
-## Behavior
-### [Flow 1]
-[Step by step]
-
-## Edge Cases
-- [Case]: [handling]
-
-## Acceptance Criteria
-- [ ] [Testable requirement]
-**Verify:** [verification method]
-
-## Blockers
-- [ ] [Blocker]: [what's needed]
-```
-
-## Team Checkpoint: Deep Dive
-
-After completing all deep dive subsections:
-
-1. Update `{spec_dir}/working/context.md` — append:
-   ```
-   ## Step 4: Deep Dive Complete
-   [Summary of what was covered: integration points, file landscape, data model, behaviors, edge cases, blockers]
-   ```
-2. Read all working files from the Critic and Pragmatist
-3. Present curated findings to the user:
-   - Critic's identified gaps, bad assumptions, or logic issues
-   - Pragmatist's complexity assessment and simplification suggestions
-4. Use AskUserQuestion to discuss significant findings. If the Critic found gaps, address them. If the Pragmatist suggests simplifications, let the user decide.
-5. Update spec.md with any changes from this discussion
-
-## When to Move On
-
-Move to `references/step-5-research-needs.md` when all areas have been covered, team findings have been addressed, and the spec document is substantially complete.
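The spec structure in the removed step above can be scaffolded mechanically. A minimal sketch, using a hypothetical feature name and a temp directory in place of the real `docs/specs/<name>/spec.md` path:

```shell
# Write a spec skeleton following the removed step's structure.
# "notifications" and the temp path are illustrative placeholders.
spec_dir="$(mktemp -d)/docs/specs/notifications"
mkdir -p "$spec_dir"

cat > "$spec_dir/spec.md" <<'EOF'
# Notifications

## Overview
[2-3 sentences: what and why]

## Goals
- [Primary goal]

## File Landscape

### Files to Create
- [path/to/new-file.ts]: [purpose]

## Acceptance Criteria
- [ ] [Testable requirement]
**Verify:** [verification method]
EOF

# Sanity check: count top-level "## " sections in the skeleton
grep -c '^## ' "$spec_dir/spec.md"
```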
@@ -1,53 +0,0 @@
-# Step 5: Identify Research Needs
-
-Before finalizing, determine if implementation requires unfamiliar paradigms.
-
-## The Question
-
-For each major component in the spec, ask: **Does working code using this exact pattern already exist in this codebase?**
-
-This is not about whether Claude knows how to do something in general. It's about whether this specific project has proven, working examples of the same approach.
-
-## Evaluate Each Component
-
-Review the spec's integration points, data model, and behavior sections.
-
-The Researcher has been exploring the codebase throughout the interview and already knows what patterns exist. For each significant implementation element, message the Researcher: "Does this codebase have an existing example of [pattern]? If yes, where and how does it work?"
-
-You can ask about multiple patterns in a single message. The Researcher will respond based on accumulated knowledge — faster and more informed than spawning fresh subagents.
-
-Based on Researcher findings:
-- If pattern exists → paradigm is established, no research needed
-- If not found → this is a new paradigm requiring research
-
-Examples of "new paradigm" triggers:
-- Using a library not yet in the project
-- A UI pattern not implemented elsewhere in the app
-- An integration with an external service not previously connected
-- A data structure or flow unlike existing code
-
-## If Research Needed
-
-For each new paradigm identified:
-1. State what needs research and why (no existing example found)
-2. Use AskUserQuestion to ask if they want to proceed with research, or if they have existing knowledge to share
-3. If proceeding, invoke the `research` skill for that topic
-
-Wait for research to complete before continuing. The research output goes to `docs/research/` and informs implementation.
-
-## If No Research Needed
-
-State that all paradigms have existing examples in the codebase. Proceed to `references/step-6-verification.md`.
-
-## When to Move On
-
-Proceed to `references/step-6-verification.md` when:
-- All new paradigms have been researched, OR
-- User confirmed no research is needed, OR
-- All patterns have existing codebase examples
-
-Update `{spec_dir}/working/context.md` — append:
-```
-## Step 5: Research Needs
-[Which paradigms are established, which required research, research outcomes]
-```
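One of the "new paradigm" triggers above, a library not yet in the project, lends itself to a quick mechanical check. A hedged sketch (the library name is hypothetical, and a fabricated `package.json` keeps it self-contained; the real Researcher works from accumulated codebase knowledge, not just grep):

```shell
# Hypothetical check: is a library already a declared dependency?
# A throwaway project dir stands in for the real repo root.
tmp="$(mktemp -d)"
printf '{ "dependencies": { "zod": "^3.22.0" } }\n' > "$tmp/package.json"

lib="zod"
if grep -q "\"${lib}\"" "$tmp/package.json"; then
  echo "pattern exists: ${lib} already in use"
else
  echo "new paradigm: ${lib} not found, research needed"
fi
```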
@@ -1,89 +0,0 @@
-# Step 6: Verification Planning
-
-Every acceptance criterion needs a specific, executable verification method. The goal: autonomous implementation with zero ambiguity about whether something works.
-
-## Verification Methods
-
-### UI Verification: agent-browser
-
-For any criterion involving visual output or user interaction, use Vercel's agent-browser CLI:
-
-```
-agent-browser open <url>           # Navigate to page
-agent-browser snapshot             # Get accessibility tree with refs
-agent-browser click @ref           # Click element by ref
-agent-browser fill @ref "value"    # Fill input by ref
-agent-browser get text @ref        # Read text content
-agent-browser screenshot file.png  # Capture visual state
-agent-browser close                # Close browser
-```
-
-Example verification for "Dashboard shows signup count":
-1. `agent-browser open /admin`
-2. `agent-browser snapshot`
-3. `agent-browser get text @signup-count`
-4. Assert returned value is a number
-
-### Automated Tests
-
-For logic, data, and API behavior, specify the exact test:
-- Unit tests for pure functions
-- Integration tests for API endpoints
-- End-to-end tests for critical flows
-
-Include the test file path: `pnpm test src/convex/featureFlags.test.ts`
-
-### Database/State Verification
-
-For data persistence criteria:
-1. Perform the action
-2. Query the database directly
-3. Assert expected state
-
-### Manual Verification (Fallback)
-
-If no automated method exists, document exactly what to check. Flag these as candidates for future automation.
-
-## Update Each Acceptance Criterion
-
-Review every acceptance criterion in the spec. Add a verification method using this format:
-
-```markdown
-## Acceptance Criteria
-
-- [ ] Dashboard loads in under 2s
-**Verify:** `agent-browser open /admin`, measure time to snapshot ready
-
-- [ ] Flag toggles persist across refresh
-**Verify:** `pnpm test src/convex/featureFlags.test.ts` (toggle persistence test)
-
-- [ ] Signup chart shows accurate counts
-**Verify:** `agent-browser get text @chart-total`, compare to `npx convex run users:count`
-```
-
-## Confirm With User
-
-Use AskUserQuestion to review verification methods with the user:
-- "For [criterion], I'll verify by [method]. Does that prove it works?"
-- Flag any criteria where verification seems insufficient
-
-The standard: if the agent executes the verification and it passes, the feature is done. No human checking required.
-
-## Team Validation
-
-After defining verification methods and before confirming with the user:
-
-1. Message the Critic: "Review the verification methods in spec.md. Are they concrete and executable? Will each one actually prove its criterion works?"
-2. Message the Pragmatist: "Review the verification methods in spec.md. Are any over-complex? Could simpler verification achieve the same confidence?"
-3. Read their responses (via working files or SendMessage)
-4. Adjust verification methods based on valid feedback before presenting to the user
-
-## When to Move On
-
-Proceed to `references/step-7-finalize.md` when every acceptance criterion has a verification method and the user agrees each method proves the criterion works.
-
-Update `{spec_dir}/working/context.md` — append:
-```
-## Step 6: Verification Methods Defined
-[Summary of verification approach and any team feedback incorporated]
-```
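The "Dashboard shows signup count" example in the removed step can be sketched as a script. This is a hedged illustration, not verified tooling: the agent-browser calls are left commented so the numeric assertion runs standalone, the `@signup-count` ref comes from the example itself, and the stand-in value is fabricated:

```shell
# Sketch of the signup-count verification. Live browser steps are
# commented out; a stand-in value exercises the assertion logic.

assert_numeric() {
  case "$1" in
    ''|*[!0-9]*) echo "FAIL: not a number: '$1'"; return 1 ;;
    *)           echo "PASS: numeric value $1"; return 0 ;;
  esac
}

# agent-browser open http://localhost:3000/admin   # hypothetical URL
# agent-browser snapshot
# count="$(agent-browser get text @signup-count)"
count="42"   # stand-in for the value the browser would return
assert_numeric "$count"
# agent-browser close
```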
@@ -1,62 +0,0 @@
-# Step 7: Finalize
-
-Review the spec for completeness and soundness, then hand off.
-
-## Request Final Team Reviews
-
-Message both the Critic and Pragmatist requesting final assessments:
-
-1. Message the Critic: "The spec is substantially complete. Please do a final review against your completeness checklist and sanity check framework. Write your complete findings to `{spec_dir}/working/critic-final-review.md` using the format in your prompt."
-2. Message the Pragmatist: "The spec is substantially complete. Please do a final complexity assessment. Write your findings to `{spec_dir}/working/pragmatist-final-assessment.md` using the format in your prompt."
-3. Wait for both to respond (they will message you when their files are ready)
-4. Read their working files: `critic-final-review.md` and `pragmatist-final-assessment.md`
-
-## Curate the Findings
-
-Synthesize findings from the Critic's review and the Pragmatist's assessment. Some findings may be:
-- Critical issues that must be addressed
-- Valid suggestions worth considering
-- Pedantic or irrelevant items to skip
-
-For each finding, form a recommendation: address it or skip it, and why.
-
-The Critic and Pragmatist have had full context of the entire interview — their findings are more informed than cold reviews. Weight their input accordingly.
-
-## Walk Through With User
-
-Use AskUserQuestion to present findings in batches (2-3 at a time). For each finding:
-- State what the review found
-- Give your recommendation (always include a recommended option)
-- Let user decide: fix, skip, or something else
-
-Track two lists:
-- **Addressed**: findings the user chose to fix
-- **Intentionally skipped**: findings the user chose to ignore
-
-After walking through all findings, make the approved changes to the spec.
-
-## Offer Another Pass
-
-Use AskUserQuestion: "Do you want to run the reviews again?"
-
-If yes, message the Critic and Pragmatist again with additional context: "We already ran a review. These changes were made: [list]. These findings were intentionally skipped: [list]. Look for anything new we haven't considered."
-
-Read their updated working files and repeat the curate → walk through → offer another pass cycle until user is satisfied.
-
-## Complete the Interview
-
-Once user confirms no more review passes needed:
-
-1. Show the user the final spec
-2. Use AskUserQuestion to confirm they are satisfied
-3. Ask if they want to proceed to task breakdown
-4. Shutdown the team:
-   - Send shutdown requests to all three teammates (researcher, critic, pragmatist) via SendMessage with type "shutdown_request"
-   - After all teammates confirm shutdown, use TeamDelete to clean up team resources
-   - The `{spec_dir}/working/` directory remains on disk as reference for implementation
-
-If yes to task breakdown, invoke `spec-to-tasks` and specify which spec to break down.
-
-**IMPORTANT: You are not done. You MUST read and complete the next step. The workflow is incomplete without it.**
-
-Read `references/step-8-reflect.md` now.
@@ -1,34 +0,0 @@
-# Step 8: Reflect and Improve
-
-**IMPORTANT: This step is mandatory. The spec interview workflow is not complete until this step is finished. Do not skip this.**
-
-Reflect on your experience conducting this spec interview. The purpose is to improve the spec-interview skill itself based on what you just learned.
-
-## Assess
-
-Answer these questions honestly:
-
-1. Were any interview steps wrong, incomplete, or misleading? Did any step send you down a wrong path or leave out critical guidance?
-2. Did the team coordination work smoothly? Were the checkpoint patterns (post-ideation, deep dive, finalize) at the right moments? Did any teammate produce findings too late or too early to be useful?
-3. Did the prompt templates (researcher, critic, pragmatist) give adequate direction? Did any teammate misunderstand its role or produce unhelpful output?
-4. Did you discover a question sequence, interview technique, or spec structure that worked better than what the skill prescribed?
-5. Did any commands, paths, tool interactions, or team communication patterns fail and require correction?
-6. Was the step ordering right? Should any steps be reordered, merged, or split?
-
-## Act
-
-If you identified issues above, fix them now:
-
-1. Identify the specific file in the spec-interview skill where the issue lives
-2. Read that file
-3. Apply the fix -- add what was missing, correct what was wrong
-4. Apply the tribal knowledge test: only add what a fresh Claude instance would not already know about conducting spec interviews or coordinating agent teams
-5. Keep the file within its size target
-
-If no issues were found, confirm that to the user.
-
-## Report
-
-Tell the user:
-- What you changed in the spec-interview skill and why, OR
-- That no updates were needed and the skill performed correctly
@@ -1,92 +0,0 @@
|
|
|
1
|
-
---
|
|
2
|
-
name: spec-review
|
|
3
|
-
description: This skill should be used when the user says "review the spec", "check spec completeness", or "is this spec ready". Also invoked by spec-interview when a spec is complete.
|
|
4
|
-
argument-hint: <spec-name>
|
|
5
|
-
context: fork
|
|
6
|
-
---
|
|
7
|
-
|
|
8
|
-
# Spec Review
|
|
9
|
-
|
|
10
|
-
## Steps
|
|
11
|
-
|
|
12
|
-
1. **Find the spec** - Use the path from the prompt if provided. Otherwise, find the most recently modified file in `docs/specs/`. If no specs exist, inform the user and stop.
|
|
13
|
-
2. **Read the spec file**
|
|
14
|
-
3. **Find all CLAUDE.md files** - Search for every CLAUDE.md in the project (root and subdirectories)
|
|
15
|
-
4. **Read all CLAUDE.md files** - These contain project constraints and conventions
|
|
16
|
-
5. **Evaluate against the checklist below** - Including CLAUDE.md alignment
|
|
17
|
-
6. **Return structured feedback using the output format**
|
|
18
|
-
|
|
19
|
-
## Completeness Checklist
|
|
20
|
-
|
|
21
|
-
A spec is implementation-ready when ALL of these are satisfied:
|
|
22
|
-
|
|
23
|
-
### Must Have (Blocking if missing)
|
|
24
|
-
|
|
25
|
-
- [ ] **Clear intent** - What is being built and why is unambiguous
|
|
26
|
-
- [ ] **Data model defined** - Entities, relationships, and constraints are explicit
|
|
27
|
-
- [ ] **Integration points mapped** - What existing code this touches is documented
|
|
28
|
-
- [ ] **Core behavior specified** - Main flows are step-by-step clear
|
|
29
|
-
- [ ] **Acceptance criteria exist** - Testable requirements are listed
|
|
30
|
-
- [ ] **Verification methods defined** - Every acceptance criterion has a specific way to verify it (test command, agent-browser steps, or explicit check)
|
|
31
|
-
- [ ] **No ambiguities** - Nothing requires interpretation; all requirements are explicit
|
|
32
|
-
- [ ] **No unknowns** - All information needed for implementation is present; nothing left to discover
|
|
33
|
-
- [ ] **CLAUDE.md alignment** - Spec does not conflict with constraints in any CLAUDE.md file
|
|
34
|
-
- [ ] **No internal duplication** - File Landscape contains no sets of new files that serve similar purposes and could share a common implementation
|
|
35
|
-
|
|
36
|
-
### Should Have (Gaps that cause implementation friction)

- [ ] **Edge cases covered** - Error conditions and boundaries are addressed
- [ ] **External dependencies documented** - APIs, libraries, services are listed
- [ ] **Blockers section exists** - Missing credentials, pending decisions are called out
- [ ] **UI/UX wireframes exist** - If feature has a user interface, ASCII wireframes are present
- [ ] **Design direction documented** - If feature has UI, visual approach is explicit (not assumed)

### Implementation Readiness

The test: could an agent implement this feature with ZERO assumptions? If the agent would need to guess, interpret, or discover anything, the spec is not ready.

Flag these problems:

- Vague language ("should handle errors appropriately" — HOW?)
- Missing details ("integrates with auth" — WHERE? HOW?)
- Unstated assumptions ("uses the standard pattern" — WHICH pattern?)
- Blocking dependencies ("needs API access" — DO WE HAVE IT?)
- Unverifiable criteria ("dashboard works correctly" — HOW DO WE CHECK?)
- Missing verification ("loads fast" — WHAT COMMAND PROVES IT?)
- Implicit knowledge ("depends on how X works" — SPECIFY IT)
- Unverified claims ("the API returns..." — HAS THIS BEEN CONFIRMED?)
- CLAUDE.md conflicts (spec proposes X but CLAUDE.md requires Y — WHICH IS IT?)
- Near-duplicate new components (three card components for different pages — CONSOLIDATE into one shared component with props/configuration)

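A minimal before/after sketch of the rewrite this flagging should drive; the feature, threshold, and environment name are all hypothetical:

```
Vague:    "The dashboard loads fast."
Explicit: "GET /dashboard returns HTTP 200 in under 500 ms on staging,
           verified by a curl timing check that fails the criterion
           when time_total exceeds 0.5 seconds."
```

The explicit form names the endpoint, the threshold, the environment, and the command family that proves it.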
## Output Format

Return the review as:

```
## Spec Review: [Feature Name]

### Status: [READY | NEEDS WORK]

### Missing (Blocking)
- [Item]: [What's missing and why it blocks implementation]

### CLAUDE.md Conflicts
- [Constraint from CLAUDE.md]: [How the spec conflicts with it]

### Gaps (Non-blocking but should address)
- [Item]: [What's unclear or incomplete]

### Duplication Concerns
- [Group of similar new files/components]: [How they overlap and consolidation recommendation]

### Blocking Dependencies
- [Dependency]: [What's needed before implementation can start]

### Skill Observations (optional)
If any checklist items, evaluation criteria, or output format instructions in this skill were wrong, incomplete, or misleading during this review, note them here. Leave empty if no issues were found.

### Recommendation
[Specific questions to ask the user, or "Spec is implementation-ready"]
```

**READY**: Spec can proceed to task breakdown.
**NEEDS WORK**: List specific questions that need answers.
@@ -1,82 +0,0 @@
---
name: spec-sanity-check
description: This skill should be used alongside spec-review to catch logic gaps and incorrect assumptions. Invoked when the user says "sanity check this spec", "does this plan make sense", or "what am I missing". Also auto-invoked by spec-interview during finalization.
argument-hint: <spec-path>
context: fork
---

# Spec Sanity Check

Provide a "fresh eyes" review of the spec. This is different from spec-review — you're not checking format or completeness. You're checking whether the plan will actually work.

## Find the Spec

Use the path from the prompt if provided. Otherwise, find the most recently modified file in `docs/specs/`. If no specs exist, inform the user and stop.

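The fallback lookup can be sketched as a one-liner; the directory comes from the text above, and `ls -t` sorts by modification time, newest first:

```shell
# Newest spec by modification time; empty output means no specs exist.
ls -t docs/specs/*.md 2>/dev/null | head -n 1
```

Empty output is the "no specs exist" branch: report it to the user and stop.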
## Read and Understand

Read the entire spec. Understand what is being built and how.

## Ask These Questions

For each section of the spec, challenge it:

### Logic Gaps
- Does the described flow actually work end-to-end?
- Are there steps that assume a previous step succeeded without checking?
- Are there circular dependencies?
- Does the order of operations make sense?

### Incorrect Assumptions
- Are there assumptions about how existing systems work that might be wrong?
- Are there assumptions about external APIs, libraries, or services?
- Are there assumptions about data formats or availability?
- Use Explorer subagents to verify assumptions against the actual codebase
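Many assumptions can be checked with a one-line search before reaching for a subagent. A sketch, assuming a hypothetical claim that "every handler goes through `withAuth`"; the identifier, directory, and extension are illustrative:

```shell
# Files under src/handlers/ that never mention withAuth.
# Any output is a counterexample to the assumption.
grep -rL 'withAuth' src/handlers/ --include='*.ts' 2>/dev/null
```

Searching for the counterexample (files that lack the pattern) gives a cleaner signal than searching for the pattern itself.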
### Unconsidered Scenarios
- What happens in edge cases not explicitly covered?
- What happens under load or at scale?
- What happens if external dependencies fail?
- What happens if data is malformed or missing?

### Implementation Pitfalls
- Are there common bugs this approach would likely introduce?
- Are there security implications not addressed?
- Are there performance implications not addressed?
- Are there race conditions or timing issues?

### The "What If" Test
- What if [key assumption] is wrong?
- What if [external dependency] changes?
- What if [data volume] is 10x what we expect?

## Output Format

Return findings as:

```
## Sanity Check: [Feature Name]

### Status: [SOUND | CONCERNS]

### Logic Issues
- [Issue]: [Why this is a problem]

### Questionable Assumptions
- [Assumption]: [Why this might be wrong] [Suggestion to verify]

### Unconsidered Scenarios
- [Scenario]: [What could go wrong]

### Potential Pitfalls
- [Pitfall]: [How to avoid]

### Skill Observations (optional)
If any evaluation questions, check categories, or output format instructions in this skill were wrong, incomplete, or misleading during this review, note them here. Leave empty if no issues were found.

### Recommendation
[Either "Plan is sound" or specific concerns to address]
```

**SOUND**: No significant concerns found.
**CONCERNS**: Issues that should be addressed before implementation.
@@ -1,24 +0,0 @@
---
name: spec-to-tasks
description: Converts a feature specification into implementation tasks. Use after a spec is complete and approved, or when planning work for a defined feature.
argument-hint: <spec-name>
context: fork
---

# Spec to Tasks

## Workflow Overview

This skill has 5 steps. **You must complete ALL steps. Do not stop early or skip any step.**

1. **Identify Spec** - Find and verify the spec file
2. **Verify File Landscape** - Map files to acceptance criteria
3. **Generate Tasks** - Create task files in `tasks/` directory
4. **Review Tasks** - Run review checklist in a loop until no critical issues remain
5. **Reflect** - Note any skill issues observed during this run

Steps 4 and 5 are mandatory. The review loop in step 4 is automated — fix issues and re-check until clean, with no user input required.

## What To Do Now

Read `references/step-1-identify-spec.md` and begin.
@@ -1,39 +0,0 @@
# Step 1: Identify the Spec

Locate the specification to convert into tasks.

## If Spec Path Provided

Use the path from the user's prompt. Verify the file exists and read it.

## If No Path Provided

Find the most recently modified spec:

```bash
git log -1 --name-only --diff-filter=AM -- 'docs/specs/**/*.md' | tail -1
```

If no specs found in git history, check `docs/specs/` for any spec files and ask the user which one to use.
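The no-git-history fallback can be sketched with `find`; the directory comes from the text above, and the spec filenames are whatever the project contains:

```shell
# List candidate specs to present to the user; empty output means none exist.
find docs/specs -name '*.md' -type f 2>/dev/null
```

Unlike the `git log` lookup, this catches specs that were never committed.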

## Verify the Spec

Read the spec file. Confirm it contains enough detail to generate implementation tasks:

**Required:**
- Acceptance Criteria with verification methods (each criterion becomes a task)
- Clear behavior descriptions

**Expected (from spec-interview):**
- File Landscape section listing files to create/modify
- Integration points and data model

**If missing acceptance criteria or verification methods:**
Inform the user: "This spec doesn't have acceptance criteria with verification methods. Each task needs a clear pass/fail test. Would you like to add them now, or run spec-interview to complete the spec?"

**If missing file landscape:**
Proceed to step 2, where we'll discover file paths via exploration.

## Next Step

Once the spec is identified and verified, read `references/step-2-explore.md`.