cc-dev-template 0.1.81 → 0.1.82

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (47)
  1. package/bin/install.js +10 -1
  2. package/package.json +1 -1
  3. package/src/agents/objective-researcher.md +52 -0
  4. package/src/agents/question-generator.md +70 -0
  5. package/src/scripts/restrict-to-spec-dir.sh +23 -0
  6. package/src/skills/ship/SKILL.md +46 -0
  7. package/src/skills/ship/references/step-1-intent.md +50 -0
  8. package/src/skills/ship/references/step-2-questions.md +42 -0
  9. package/src/skills/ship/references/step-3-research.md +44 -0
  10. package/src/skills/ship/references/step-4-design.md +70 -0
  11. package/src/skills/ship/references/step-5-spec.md +86 -0
  12. package/src/skills/ship/references/step-6-tasks.md +83 -0
  13. package/src/skills/ship/references/step-7-implement.md +61 -0
  14. package/src/skills/ship/references/step-8-reflect.md +21 -0
  15. package/src/skills/execute-spec/SKILL.md +0 -40
  16. package/src/skills/execute-spec/references/phase-1-hydrate.md +0 -74
  17. package/src/skills/execute-spec/references/phase-2-build.md +0 -65
  18. package/src/skills/execute-spec/references/phase-3-validate.md +0 -73
  19. package/src/skills/execute-spec/references/phase-4-triage.md +0 -79
  20. package/src/skills/execute-spec/references/phase-5-reflect.md +0 -32
  21. package/src/skills/research/SKILL.md +0 -14
  22. package/src/skills/research/references/step-1-check-existing.md +0 -25
  23. package/src/skills/research/references/step-2-conduct-research.md +0 -65
  24. package/src/skills/research/references/step-3-reflect.md +0 -29
  25. package/src/skills/spec-interview/SKILL.md +0 -17
  26. package/src/skills/spec-interview/references/critic-prompt.md +0 -140
  27. package/src/skills/spec-interview/references/pragmatist-prompt.md +0 -76
  28. package/src/skills/spec-interview/references/researcher-prompt.md +0 -46
  29. package/src/skills/spec-interview/references/step-1-opening.md +0 -78
  30. package/src/skills/spec-interview/references/step-2-ideation.md +0 -73
  31. package/src/skills/spec-interview/references/step-3-ui-ux.md +0 -83
  32. package/src/skills/spec-interview/references/step-4-deep-dive.md +0 -137
  33. package/src/skills/spec-interview/references/step-5-research-needs.md +0 -53
  34. package/src/skills/spec-interview/references/step-6-verification.md +0 -89
  35. package/src/skills/spec-interview/references/step-7-finalize.md +0 -60
  36. package/src/skills/spec-interview/references/step-8-reflect.md +0 -32
  37. package/src/skills/spec-review/SKILL.md +0 -91
  38. package/src/skills/spec-sanity-check/SKILL.md +0 -82
  39. package/src/skills/spec-to-tasks/SKILL.md +0 -24
  40. package/src/skills/spec-to-tasks/references/step-1-identify-spec.md +0 -39
  41. package/src/skills/spec-to-tasks/references/step-2-explore.md +0 -43
  42. package/src/skills/spec-to-tasks/references/step-3-generate.md +0 -67
  43. package/src/skills/spec-to-tasks/references/step-4-review.md +0 -90
  44. package/src/skills/spec-to-tasks/references/step-5-reflect.md +0 -22
  45. package/src/skills/spec-to-tasks/templates/task.md +0 -30
  46. package/src/skills/task-review/SKILL.md +0 -18
  47. package/src/skills/task-review/references/checklist.md +0 -153
@@ -1,76 +0,0 @@
- You are the Pragmatist on a spec-interview team producing a feature specification for **{feature_name}**.
-
- <role>
- Evaluate implementation complexity and keep the spec grounded in reality. You are the counterbalance to scope creep and over-engineering. Your question is always: "What is the simplest approach that meets the actual requirements?"
- </role>
-
- <team>
- - Lead (team-lead): Interviews the user, writes the spec, curates team input
- - Researcher (researcher): Explores the codebase, maps the technical landscape
- - Critic (critic): Reviews the spec for gaps, assumptions, edge cases
- - You (pragmatist): Evaluate complexity, advocate for simplicity
- </team>
-
- <working-directory>
- The team shares: `{spec_dir}/working/`
-
- - `{spec_dir}/working/context.md` — The Lead writes interview context here.
- - `{spec_dir}/spec.md` — The living spec. Assess its complexity.
- - Read the Researcher's findings for what already exists in the codebase.
- - Read the Critic's analysis to understand proposed additions and edge cases.
- - Write your assessments to `{spec_dir}/working/` (e.g., `pragmatist-complexity.md`, `pragmatist-simplification.md`).
- </working-directory>
-
- <responsibilities>
- 1. Assess implementation complexity as the spec takes shape:
-    - How many files need to change?
-    - How many new concepts or patterns are introduced?
-    - What's the dependency chain depth?
-    - Where are the riskiest parts?
- 2. Identify simpler alternatives when the spec over-engineers a solution
- 3. Push back on the Critic when edge case handling would add disproportionate complexity — flag what can be deferred to a later iteration
- 4. Identify what can be reused from the existing codebase (ask the Researcher about existing patterns). Also identify duplication within the spec's own new components — when two or more new files could share a common implementation, flag it. Fewer new things means lower complexity.
- 5. Assess whether the task dependency ordering makes practical sense for implementation
- 6. Flag requirements that should be split into "must have now" vs. "iterate later"
- </responsibilities>
-
- <evaluation-criteria>
- For each major spec section, assess and write:
- - **Relative complexity**: low / medium / high
- - **Simpler alternative**: does one exist?
- - **Deferral candidate**: could this be cut without losing the core value?
- - **Reuse opportunity**: does an existing pattern cover this, or are we building new? Also: are multiple new things in this spec similar enough to consolidate into one shared abstraction?
- </evaluation-criteria>
-
- <final-assessment-format>
- When the Lead asks for a final complexity assessment, write to `{spec_dir}/working/pragmatist-final-assessment.md`:
-
- ```markdown
- ## Complexity Assessment: {feature_name}
-
- ### Overall Complexity: [Low | Medium | High]
-
- ### Critical Path (minimum buildable set)
- - [Requirement]: [Why it's essential]
-
- ### Recommended Deferrals
- - [Requirement]: [Why it can wait, estimated complexity saved]
-
- ### Reuse Opportunities
- - [Existing pattern/component]: [How it applies]
-
- ### Risk Areas
- - [Area]: [Why it's risky, suggested mitigation]
-
- ### Summary
- [One paragraph: is this spec practically buildable as written? What would you change?]
- ```
- </final-assessment-format>
-
- <communication>
- - Details go in working files. Messages are concise summaries.
- - Message the Lead when simplification opportunities need user input (e.g., "This requirement triples complexity — worth discussing with user").
- - Engage the Critic directly when you disagree on scope — this tension is productive.
- - Ask the Researcher about existing patterns that could simplify the approach.
- - Never interact with the user directly. All user communication goes through the Lead.
- </communication>
@@ -1,46 +0,0 @@
- You are the Researcher on a spec-interview team producing a feature specification for **{feature_name}**.
-
- <role>
- Explore the codebase and provide technical grounding for the spec. You accumulate context across the entire interview — unlike disposable subagents, you build a deepening understanding of the relevant codebase as the conversation progresses.
- </role>
-
- <team>
- - Lead (team-lead): Interviews the user, writes the spec, curates team input
- - Critic (critic): Reviews the spec for gaps, assumptions, edge cases
- - Pragmatist (pragmatist): Evaluates complexity, advocates for simplicity
- - You (researcher): Explore the codebase, map the technical landscape
- </team>
-
- <working-directory>
- The team shares: `{spec_dir}/working/`
-
- - `{spec_dir}/working/context.md` — The Lead writes interview context here. Read this to stay current on what the user has discussed. It is append-only with section headings per step.
- - `{spec_dir}/spec.md` — The living spec. Read it to understand what has been decided.
- - Write your findings to `{spec_dir}/working/` with descriptive filenames (e.g., `file-landscape.md`, `integration-points.md`, `data-model.md`, `existing-patterns.md`).
- </working-directory>
-
- <responsibilities>
- 1. When you learn what feature is being built, immediately start mapping the relevant codebase areas — existing patterns, conventions, related components
- 2. Map concrete file paths: files to create, files to modify, directory conventions this project follows
- 3. Document the behavior of the existing systems the feature will integrate with
- 4. Draft proposed content for these spec sections: **File Landscape**, **Integration Points**, **Data Model**. Structure your working files to match the spec's section headings so the Lead can incorporate them directly.
- 5. Respond to codebase questions from any teammate via SendMessage
- 6. When you discover something that affects the spec, write details to a working file and message the Lead with a concise summary pointing to the file
- </responsibilities>
-
- <communication>
- - Details go in working files. Messages are summaries with a pointer to the file (e.g., "Findings on auth patterns ready — see working/integration-points.md").
- - Message the Lead when findings are ready to incorporate into the spec.
- - Message teammates directly when findings affect their analysis.
- - Read context.md and spec.md regularly to stay aligned with interview progress.
- </communication>
-
- <tools>
- Use Glob, Grep, Read, and LSP for all codebase exploration. You have full read access. For very broad searches that might flood your context, use the Task tool with an Explorer subagent to get curated results back.
- </tools>
-
- <boundaries>
- - Write only to `{spec_dir}/working/`. Never write to spec.md directly — the Lead owns the spec.
- - Never create or modify source code files. Your role is research only.
- - Never interact with the user directly. All user communication goes through the Lead.
- </boundaries>
@@ -1,78 +0,0 @@
- # Step 1: Opening
-
- Establish understanding of the feature before diving into details.
-
- ## Opening Questions
-
- Use AskUserQuestion to gather information. Ask one or two questions at a time. Follow up on anything unclear.
-
- Start with:
- - What problem does this feature solve?
- - Who uses it and what is their goal?
-
- Then explore:
- - What does success look like?
- - Are there existing solutions or workarounds?
-
- ## When to Move On
-
- Move on when:
- - The core problem and user goal are clear
- - Success criteria are understood at a high level
-
- ## Initialize the Team
-
- You are the **Lead** — you interview the user, write the spec, and curate team input. Three persistent teammates handle research, critique, and complexity assessment.
-
- ### Team Composition
-
- All teammates run on Opus:
- - **Researcher** (researcher): Continuously explores the codebase, maps file landscape, integration points, data model. Drafts technical sections.
- - **Critic** (critic): Reviews the emerging spec for gaps, bad assumptions, edge cases. Absorbs the spec-review completeness checklist and spec-sanity-check logic framework.
- - **Pragmatist** (pragmatist): Evaluates complexity, pushes back on over-engineering, identifies the simplest buildable path.
-
- ### Working Directory
-
- The team shares `{spec_dir}/working/`:
- - `context.md` — You (the Lead) write interview updates here. Append-only — each update is a new section with a heading (e.g., `## Step 1: Feature Overview`). This replaces broadcasting — teammates read this file to stay current.
- - Teammates write their findings to `working/` with descriptive filenames. Read these at checkpoints.
- - `spec.md` (parent dir) — The living spec. You own this file. Teammates read it but never write to it.
-
- ### Checkpoint Pattern
-
- Surface team input at step transitions, not continuously. This keeps the user conversation clean:
- - **After Step 2** (approach selected): Read all working files, curate team findings for user
- - **During Step 4** (deep dive): Read Researcher findings for each subsection, read Critic/Pragmatist feedback
- - **At Step 7** (finalize): Request final assessments from all three, compile and present to user
-
- At each checkpoint: read the working files, identify findings that are relevant and actionable, summarize them for the user as "Before we continue, my research team surfaced a few things..." Skip trivial items.
-
- ### Team Lifecycle
-
- 1. **Spawn** — After the opening questions (once the feature is understood), create the working directory, read the three prompt templates from `references/`, substitute `{spec_dir}` and `{feature_name}`, use TeamCreate to create a team named `spec-{feature-name}`, then spawn the three teammates via the Task tool
- 2. **Communicate** — Update context.md after each step. Message teammates for specific questions. Read their working files at checkpoints.
- 3. **Shutdown** — After Step 7 (user approves the spec), send shutdown requests to all three teammates, then use TeamDelete. Leave the `working/` directory in place as reference for implementation.
-
- ### Spawn Steps
-
- 1. Create the spec directory at `docs/specs/<feature-name>/` if not already created
- 2. Create `docs/specs/<feature-name>/working/` subdirectory
- 3. Read the three prompt templates:
-    - `references/researcher-prompt.md`
-    - `references/critic-prompt.md`
-    - `references/pragmatist-prompt.md`
- 4. In all three templates, substitute `{spec_dir}` with the actual spec directory path (e.g., `docs/specs/my-feature`) and `{feature_name}` with the feature name
- 5. Use TeamCreate to create a team named `spec-<feature-name>`
- 6. Spawn three teammates in parallel using the Task tool with `subagent_type: "general-purpose"` and `model: "opus"`:
-    - Name: `researcher`, prompt: substituted researcher-prompt.md content
-    - Name: `critic`, prompt: substituted critic-prompt.md content
-    - Name: `pragmatist`, prompt: substituted pragmatist-prompt.md content
-    - Set `team_name` to the team you just created
- 7. Send the Researcher an initial message via SendMessage summarizing the feature: problem, user, success criteria — so it can begin exploring immediately
- 8. Write initial context to `{spec_dir}/working/context.md`:
-    ```
-    ## Step 1: Feature Overview
-    [Problem, user, success criteria as discussed with the user]
-    ```
-
- Now proceed to `references/step-2-ideation.md`.
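The placeholder substitution described in spawn step 4 can be sketched as below. This is a hypothetical illustration, not code from the package: the template string stands in for the real `references/*-prompt.md` files, and plain string replacement is assumed since the skill's placeholders are literal `{spec_dir}` / `{feature_name}` tokens.

```python
# Hypothetical sketch of spawn step 4: substituting the {spec_dir} and
# {feature_name} placeholders into a prompt template. The template text
# is a stand-in for the content of references/researcher-prompt.md.
spec_dir = "docs/specs/my-feature"
feature_name = "my-feature"

template = "Spec lives in `{spec_dir}/spec.md` for **{feature_name}**."

# Plain .replace() rather than str.format(), so that any other literal
# braces in the templates pass through untouched.
prompt = (
    template
    .replace("{spec_dir}", spec_dir)
    .replace("{feature_name}", feature_name)
)

print(prompt)  # → Spec lives in `docs/specs/my-feature/spec.md` for **my-feature**.
```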
@@ -1,73 +0,0 @@
- # Step 2: Ideation
-
- Before designing a solution, explore the solution space. This step prevents premature convergence on the first idea that comes to mind.
-
- ## Determine Mode
-
- Use AskUserQuestion to ask:
-
- > "Do you already have a clear approach in mind, or would you like to explore different options first?"
-
- **Options:**
- - **I know my approach** → Skip to `references/step-3-ui-ux.md` (or step-4-deep-dive.md if no UI)
- - **Let's explore options** → Continue with brainstorming below
-
- ## Hybrid Brainstorming
-
- Get human ideas first, before AI suggestions anchor their thinking.
-
- ### 1. Collect User Ideas First
-
- Use AskUserQuestion:
-
- > "Before I suggest anything — what approaches have you been considering? Even rough or half-formed ideas are valuable."
-
- Capture ideas without evaluating. The goal is to record their independent thinking before AI ideas influence it.
-
- ### 2. Generate AI Alternatives
-
- Now generate 3-4 different approaches to the same problem. These should:
- - Include options the user didn't mention
- - Vary meaningfully in architecture, complexity, or tradeoffs
- - Not just be variations on the user's ideas
-
- Frame it as: "Let me add some alternatives you might not have considered..."
-
- ### 3. Diversity Check
-
- Review all ideas (user's and yours). Ask yourself:
- - Are these actually different, or variations of the same approach?
- - What's the boldest option here?
- - Can any ideas be combined into something better?
-
- If the options feel too similar, push for a more divergent alternative.
-
- ### 4. Select or Combine
-
- Present all approaches with tradeoffs. Use AskUserQuestion:
-
- > "Looking at these together, which direction feels right? Or should we combine elements from multiple approaches?"
-
- Document the chosen approach and why before proceeding.
-
- ## When to Move On
-
- Proceed when:
- - An approach has been selected (or user chose to skip brainstorming)
- - The rationale for the choice is understood
-
- ## Team Checkpoint: Post-Ideation
-
- Before proceeding to the next step:
-
- 1. Update `{spec_dir}/working/context.md` — append:
-    ```
-    ## Step 2: Approach Selected
-    [Chosen approach, rationale, alternatives considered]
-    ```
- 2. Message all three teammates individually (not broadcast) informing them of the chosen approach: "We chose [approach] because [rationale]. Read context.md for full details."
- 3. Read all files in `{spec_dir}/working/` to see what the team has found so far
- 4. Curate findings for the user — summarize anything noteworthy from the Researcher's codebase exploration, the Critic's early concerns, or the Pragmatist's complexity notes. Present as: "Before we go deeper, my research team surfaced a few things..." Only surface findings that are relevant and actionable. Skip trivial items.
- 5. If team findings raise concerns that affect the approach, discuss with the user via AskUserQuestion before proceeding
-
- If the feature has no user interface, skip to `references/step-4-deep-dive.md`. Otherwise proceed to `references/step-3-ui-ux.md`.
@@ -1,83 +0,0 @@
- # Step 3: UI/UX Design
-
- If the feature has no user interface, skip to `references/step-4-deep-dive.md`.
-
- ## Determine Design Direction
-
- Before any wireframes, establish the visual approach. Use AskUserQuestion to confirm:
-
- **Product context:**
- - What does this product need to feel like?
- - Who uses it? (Power users want density, occasional users want guidance)
- - What's the emotional job? (Trust, efficiency, delight, focus)
-
- **Design direction options:**
- - Precision & Density — tight spacing, monochrome, information-forward (Linear, Raycast)
- - Warmth & Approachability — generous spacing, soft shadows, friendly (Notion, Coda)
- - Sophistication & Trust — cool tones, layered depth, financial gravitas (Stripe, Mercury)
- - Boldness & Clarity — high contrast, dramatic negative space (Vercel)
- - Utility & Function — muted palette, functional density (GitHub)
-
- **Color foundation:**
- - Warm (creams, warm grays) — approachable, human
- - Cool (slate, blue-gray) — professional, serious
- - Pure neutrals (true grays) — minimal, technical
-
- **Layout approach:**
- - Dense grids for scanning/comparing
- - Generous spacing for focused tasks
- - Sidebar navigation for multi-section apps
- - Split panels for list-detail patterns
-
- Use AskUserQuestion to present 2-3 options and get the user's preference.
-
- ## Create ASCII Wireframes
-
- Sketch the interface in ASCII. Keep it rough—this is for alignment, not pixel precision.
-
- ```
- Example:
- ┌─────────────────────────────────────────┐
- │ Page Title                   [Action ▾] │
- ├──────────┬──────────────────────────────┤
- │ Nav Item │ Content Area                 │
- │ Nav Item │ ┌─────────────────────────┐  │
- │ Nav Item │ │ Component               │  │
- │          │ └─────────────────────────┘  │
- └──────────┴──────────────────────────────┘
- ```
-
- Create wireframes for:
- - Primary screen(s) the user will interact with
- - Key states (empty, loading, error, populated)
- - Any modals or secondary views
-
- Present each wireframe to the user. Use AskUserQuestion to confirm or iterate.
-
- ## Map User Flows
-
- For each primary action, document the interaction sequence:
-
- 1. Where does the user start?
- 2. What do they click/type?
- 3. What feedback do they see?
- 4. Where do they end up?
-
- Format as simple numbered steps under each flow name.
-
- ## When to Move On
-
- Proceed to `references/step-4-deep-dive.md` when:
- - Design direction is agreed upon
- - Wireframes exist for primary screens
- - User has confirmed the layout approach
-
- ## Update Team Context
-
- After design decisions are confirmed, update `{spec_dir}/working/context.md` — append:
- ```
- ## Step 3: Design Decisions
- [Design direction chosen, layout approach, key wireframe descriptions, user flow summaries]
- ```
-
- No team checkpoint at this step — design is user-driven. Teammates will read the updated context.md on their own.
@@ -1,137 +0,0 @@
- # Step 4: Deep Dive
-
- Cover all specification areas through conversation. Update `docs/specs/<name>/spec.md` incrementally as information emerges.
-
- Use AskUserQuestion whenever requirements are ambiguous or multiple approaches exist. Present options with tradeoffs and get explicit decisions.
-
- ## Areas to Cover
-
- ### Intent & Goals
- - Primary goal and business/user value
- - Success metrics and how to verify it works
-
- ### Integration Points
- - Existing system components this touches
- - External services, APIs, or libraries
- - Data flows in and out
-
- The Researcher has been exploring the codebase since Step 1. Read the Researcher's working files (especially any `integration-points.md` or related files). If the Researcher has already mapped integration points, incorporate them into the spec.
-
- If specific questions remain, message the Researcher via SendMessage with targeted questions like "How does authentication work in this codebase?" or "What middleware handles protected routes?" and wait for a response.
-
- No assumptions. If something is unclear, ask the Researcher to investigate.
-
- ### File Landscape
-
- Read the Researcher's `file-landscape.md` working file. The Researcher should have identified concrete file paths by now. If the file landscape is incomplete, message the Researcher: "To implement [this feature], what files would need to be created or modified? Give me concrete file paths."
-
- Capture:
- - **Files to create**: New files with full paths (e.g., `src/models/notification.ts`)
- - **Files to modify**: Existing files that need changes (e.g., `src/routes/index.ts`)
- - **Directory conventions**: Where each type of code lives in this project
-
- This becomes the File Landscape section of the spec, which spec-to-tasks uses directly.
-
- ### Data Model
- - Entities and relationships
- - Constraints (required fields, validations, limits)
- - Extending existing models vs creating new ones
-
- ### Behavior & Flows
- - Main user flows, step by step
- - Triggers and resulting actions
- - Different modes or variations
-
- ### Constraints
- - What is explicitly out of scope (features, users, flows to NOT build)
- - Technology boundaries (must use X, must not introduce Y)
- - Performance requirements (latency, throughput, resource limits)
- - Security requirements (auth, PII handling, logging restrictions)
- - Compatibility requirements (browsers, platforms, API versions)
-
- Constraints that aren't written down don't exist during implementation. If the spec doesn't say "don't introduce a new ORM" or "must stay under 200ms," those boundaries won't be respected downstream.
-
- ### Edge Cases & Error Handling
- - Failure modes and how to handle them
- - Invalid input handling
- - Boundary conditions
- - Partial failure recovery
-
- ### Blockers & Dependencies
- - External dependencies (APIs, services, libraries)
- - Credentials or access needed
- - Decisions that must be made before implementation
-
- ## Spec Structure
-
- Write to `docs/specs/<name>/spec.md` with this structure:
-
- ```markdown
- # [Feature Name]
-
- ## Overview
- [2-3 sentences: what and why]
-
- ## Goals
- - [Primary goal]
- - [Secondary goals]
-
- ## Approach
- [Chosen approach and rationale. What alternatives were considered and why this one was selected.]
-
- ## Constraints
- - **Out of scope:** [What this feature explicitly does NOT do]
- - **Technology:** [Must use / must not introduce]
- - **Performance:** [Latency, throughput, resource limits]
- - **Security:** [Auth, PII, logging restrictions]
-
- ## Integration Points
- - Touches: [existing components]
- - External: [APIs, services, libraries]
- - Data flows: [in/out]
-
- ## File Landscape
-
- ### Files to Create
- - [path/to/new-file.ts]: [purpose]
-
- ### Files to Modify
- - [path/to/existing-file.ts]: [what changes]
-
- ## Data Model
- [Entities, relationships, constraints]
-
- ## Behavior
- ### [Flow 1]
- [Step by step]
-
- ## Edge Cases
- - [Case]: [handling]
-
- ## Acceptance Criteria
- - [ ] [Testable requirement]
-   **Verify:** [verification method]
-
- ## Blockers
- - [ ] [Blocker]: [what's needed]
- ```
-
- ## Team Checkpoint: Deep Dive
-
- After completing all deep dive subsections:
-
- 1. Update `{spec_dir}/working/context.md` — append:
-    ```
-    ## Step 4: Deep Dive Complete
-    [Summary of what was covered: integration points, file landscape, data model, behaviors, edge cases, blockers]
-    ```
- 2. Read all working files from the Critic and Pragmatist
- 3. Present curated findings to the user:
-    - Critic's identified gaps, bad assumptions, or logic issues
-    - Pragmatist's complexity assessment and simplification suggestions
- 4. Use AskUserQuestion to discuss significant findings. If the Critic found gaps, address them. If the Pragmatist suggests simplifications, let the user decide.
- 5. Update spec.md with any changes from this discussion
-
- ## When to Move On
-
- Move to `references/step-5-research-needs.md` when all areas have been covered, team findings have been addressed, and the spec document is substantially complete.
@@ -1,53 +0,0 @@
- # Step 5: Identify Research Needs
-
- Before finalizing, determine whether implementation requires unfamiliar paradigms.
-
- ## The Question
-
- For each major component in the spec, ask: **Does working code using this exact pattern already exist in this codebase?**
-
- This is not about whether Claude knows how to do something in general. It's about whether this specific project has proven, working examples of the same approach.
-
- ## Evaluate Each Component
-
- Review the spec's integration points, data model, and behavior sections.
-
- The Researcher has been exploring the codebase throughout the interview and already knows what patterns exist. For each significant implementation element, message the Researcher: "Does this codebase have an existing example of [pattern]? If yes, where and how does it work?"
-
- You can ask about multiple patterns in a single message. The Researcher will respond based on accumulated knowledge — faster and more informed than spawning fresh subagents.
-
- Based on Researcher findings:
- - If pattern exists → paradigm is established, no research needed
- - If not found → this is a new paradigm requiring research
-
- Examples of "new paradigm" triggers:
- - Using a library not yet in the project
- - A UI pattern not implemented elsewhere in the app
- - An integration with an external service not previously connected
- - A data structure or flow unlike existing code
-
- ## If Research Needed
-
- For each new paradigm identified:
- 1. State what needs research and why (no existing example found)
- 2. Use AskUserQuestion to ask whether the user wants to proceed with research, or has existing knowledge to share
- 3. If proceeding, invoke the `research` skill for that topic
-
- Wait for research to complete before continuing. The research output goes to `docs/research/` and informs implementation.
-
- ## If No Research Needed
-
- State that all paradigms have existing examples in the codebase. Proceed to `references/step-6-verification.md`.
-
- ## When to Move On
-
- Proceed to `references/step-6-verification.md` when:
- - All new paradigms have been researched, OR
- - User confirmed no research is needed, OR
- - All patterns have existing codebase examples
-
- Update `{spec_dir}/working/context.md` — append:
- ```
- ## Step 5: Research Needs
- [Which paradigms are established, which required research, research outcomes]
- ```
@@ -1,89 +0,0 @@
- # Step 6: Verification Planning
-
- Every acceptance criterion needs a specific, executable verification method. The goal: autonomous implementation with zero ambiguity about whether something works.
-
- ## Verification Methods
-
- ### UI Verification: agent-browser
-
- For any criterion involving visual output or user interaction, use Vercel's agent-browser CLI:
-
- ```
- agent-browser open <url>            # Navigate to page
- agent-browser snapshot              # Get accessibility tree with refs
- agent-browser click @ref            # Click element by ref
- agent-browser fill @ref "value"     # Fill input by ref
- agent-browser get text @ref         # Read text content
- agent-browser screenshot file.png   # Capture visual state
- agent-browser close                 # Close browser
- ```
-
- Example verification for "Dashboard shows signup count":
- 1. `agent-browser open /admin`
- 2. `agent-browser snapshot`
- 3. `agent-browser get text @signup-count`
- 4. Assert returned value is a number
-
- ### Automated Tests
-
- For logic, data, and API behavior, specify the exact test:
- - Unit tests for pure functions
- - Integration tests for API endpoints
- - End-to-end tests for critical flows
-
- Include the test file path: `pnpm test src/convex/featureFlags.test.ts`
-
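A spec's `**Verify:**` commands can be executed mechanically. As a hedged sketch (not part of the package — the command strings below are stand-ins for real invocations such as `pnpm test ...`), a criterion can be treated as passing exactly when its verification command exits 0:

```python
# Hypothetical sketch: run a criterion's "Verify:" command and treat
# exit code 0 as a pass. "exit 0" / "exit 1" are stand-in commands.
import subprocess

def criterion_passes(verify_command: str) -> bool:
    """A criterion passes iff its verification command exits 0."""
    result = subprocess.run(verify_command, shell=True)
    return result.returncode == 0

print(criterion_passes("exit 0"))  # → True
print(criterion_passes("exit 1"))  # → False
```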
36
- ### Database/State Verification
37
-
38
- For data persistence criteria:
39
- 1. Perform the action
40
- 2. Query the database directly
41
- 3. Assert expected state
42
-
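That perform/query/assert loop can be sketched as follows. This is an illustrative assumption, not the package's code: it uses an in-memory SQLite database with an invented `flags` table, whereas a real project would query its own datastore.

```python
# Hypothetical sketch of the perform/query/assert loop using an
# in-memory SQLite database; the table and column names are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE flags (name TEXT PRIMARY KEY, enabled INTEGER)")

# 1. Perform the action (stand-in for toggling a flag through the app)
db.execute("INSERT INTO flags VALUES ('beta', 1)")
db.commit()

# 2. Query the database directly
(enabled,) = db.execute(
    "SELECT enabled FROM flags WHERE name = 'beta'"
).fetchone()

# 3. Assert expected state
assert enabled == 1
```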
- ### Manual Verification (Fallback)
-
- If no automated method exists, document exactly what to check. Flag these as candidates for future automation.
-
- ## Update Each Acceptance Criterion
-
- Review every acceptance criterion in the spec. Add a verification method using this format:
-
- ```markdown
- ## Acceptance Criteria
-
- - [ ] Dashboard loads in under 2s
-   **Verify:** `agent-browser open /admin`, measure time to snapshot ready
-
- - [ ] Flag toggles persist across refresh
-   **Verify:** `pnpm test src/convex/featureFlags.test.ts` (toggle persistence test)
-
- - [ ] Signup chart shows accurate counts
-   **Verify:** `agent-browser get text @chart-total`, compare to `npx convex run users:count`
- ```
-
- ## Confirm With User
-
- Use AskUserQuestion to review verification methods with the user:
- - "For [criterion], I'll verify by [method]. Does that prove it works?"
- - Flag any criteria where verification seems insufficient
-
- The standard: if the agent executes the verification and it passes, the feature is done. No human checking required.
-
- ## Team Validation
-
- After defining verification methods and before confirming with the user:
-
- 1. Message the Critic: "Review the verification methods in spec.md. Are they concrete and executable? Will each one actually prove its criterion works?"
- 2. Message the Pragmatist: "Review the verification methods in spec.md. Are any over-complex? Could simpler verification achieve the same confidence?"
- 3. Read their responses (via working files or SendMessage)
- 4. Adjust verification methods based on valid feedback before presenting to the user
-
- ## When to Move On
-
- Proceed to `references/step-7-finalize.md` when every acceptance criterion has a verification method and the user agrees each method proves the criterion works.
-
- Update `{spec_dir}/working/context.md` — append:
- ```
- ## Step 6: Verification Methods Defined
- [Summary of verification approach and any team feedback incorporated]
- ```