cc-dev-template 0.1.81 → 0.1.83

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (52)
  1. package/bin/install.js +10 -1
  2. package/package.json +1 -1
  3. package/src/agents/objective-researcher.md +72 -0
  4. package/src/agents/question-generator.md +52 -0
  5. package/src/scripts/restrict-researcher.sh +49 -0
  6. package/src/scripts/restrict-to-spec-dir.sh +43 -0
  7. package/src/skills/ignore-config/SKILL.md +14 -0
  8. package/src/skills/ignore-config/references/step-1-analyze.md +32 -0
  9. package/src/skills/ignore-config/references/step-2-create.md +20 -0
  10. package/src/skills/ignore-config/references/step-3-reflect.md +18 -0
  11. package/src/skills/ship/SKILL.md +46 -0
  12. package/src/skills/ship/references/step-1-intent.md +50 -0
  13. package/src/skills/ship/references/step-2-questions.md +42 -0
  14. package/src/skills/ship/references/step-3-research.md +44 -0
  15. package/src/skills/ship/references/step-4-design.md +70 -0
  16. package/src/skills/ship/references/step-5-spec.md +86 -0
  17. package/src/skills/ship/references/step-6-tasks.md +83 -0
  18. package/src/skills/ship/references/step-7-implement.md +61 -0
  19. package/src/skills/ship/references/step-8-reflect.md +21 -0
  20. package/src/skills/execute-spec/SKILL.md +0 -40
  21. package/src/skills/execute-spec/references/phase-1-hydrate.md +0 -74
  22. package/src/skills/execute-spec/references/phase-2-build.md +0 -65
  23. package/src/skills/execute-spec/references/phase-3-validate.md +0 -73
  24. package/src/skills/execute-spec/references/phase-4-triage.md +0 -79
  25. package/src/skills/execute-spec/references/phase-5-reflect.md +0 -32
  26. package/src/skills/research/SKILL.md +0 -14
  27. package/src/skills/research/references/step-1-check-existing.md +0 -25
  28. package/src/skills/research/references/step-2-conduct-research.md +0 -65
  29. package/src/skills/research/references/step-3-reflect.md +0 -29
  30. package/src/skills/spec-interview/SKILL.md +0 -17
  31. package/src/skills/spec-interview/references/critic-prompt.md +0 -140
  32. package/src/skills/spec-interview/references/pragmatist-prompt.md +0 -76
  33. package/src/skills/spec-interview/references/researcher-prompt.md +0 -46
  34. package/src/skills/spec-interview/references/step-1-opening.md +0 -78
  35. package/src/skills/spec-interview/references/step-2-ideation.md +0 -73
  36. package/src/skills/spec-interview/references/step-3-ui-ux.md +0 -83
  37. package/src/skills/spec-interview/references/step-4-deep-dive.md +0 -137
  38. package/src/skills/spec-interview/references/step-5-research-needs.md +0 -53
  39. package/src/skills/spec-interview/references/step-6-verification.md +0 -89
  40. package/src/skills/spec-interview/references/step-7-finalize.md +0 -60
  41. package/src/skills/spec-interview/references/step-8-reflect.md +0 -32
  42. package/src/skills/spec-review/SKILL.md +0 -91
  43. package/src/skills/spec-sanity-check/SKILL.md +0 -82
  44. package/src/skills/spec-to-tasks/SKILL.md +0 -24
  45. package/src/skills/spec-to-tasks/references/step-1-identify-spec.md +0 -39
  46. package/src/skills/spec-to-tasks/references/step-2-explore.md +0 -43
  47. package/src/skills/spec-to-tasks/references/step-3-generate.md +0 -67
  48. package/src/skills/spec-to-tasks/references/step-4-review.md +0 -90
  49. package/src/skills/spec-to-tasks/references/step-5-reflect.md +0 -22
  50. package/src/skills/spec-to-tasks/templates/task.md +0 -30
  51. package/src/skills/task-review/SKILL.md +0 -18
  52. package/src/skills/task-review/references/checklist.md +0 -153
package/src/skills/research/references/step-2-conduct-research.md
@@ -1,65 +0,0 @@
- # Step 2: Conduct Research
-
- Research the topic thoroughly and produce a reference document.
-
- ## Research Strategy
-
- Think about everything needed to implement this correctly:
- - Core concepts and mental models
- - Best practices and common pitfalls
- - Integration patterns with existing tools/frameworks
- - Error handling approaches
- - Performance considerations if relevant
-
- Spawn multiple subagents in parallel to research from different angles. Each subagent focuses on one aspect. Use whatever web search tools are available.
-
- Synthesize findings into a coherent understanding. Resolve contradictions. Prioritize recent, authoritative sources.
-
- ## Output Document
-
- Create `docs/research/<topic-slug>.md` (kebab-case, concise name).
-
- Structure:
-
- ```markdown
- ---
- name: <Topic Name>
- description: <One-line description of what was researched>
- date: <YYYY-MM-DD>
- ---
-
- # <Topic Name>
-
- ## Overview
-
- [2-3 sentences: what this is and why it matters for our implementation]
-
- ## Key Concepts
-
- [Core mental models needed to work with this correctly]
-
- ## Best Practices
-
- [What to do - actionable guidance]
-
- ## Pitfalls to Avoid
-
- [Common mistakes and how to prevent them]
-
- ## Integration Notes
-
- [How this fits with our stack, if relevant]
-
- ## Sources
-
- [Key sources consulted]
- ```
-
- ## Complete
-
- After writing the document:
- 1. Confirm the research is complete
- 2. Summarize the key takeaways
- 3. Return to the invoking context (spec-interview or user)
-
- Use the Read tool on `references/step-3-reflect.md` to reflect on the research process and note any skill issues.
package/src/skills/research/references/step-3-reflect.md
@@ -1,29 +0,0 @@
- # Step 3: Reflect and Improve
-
- ## Assess
-
- Answer these questions honestly:
-
- 1. Were any research strategies, source evaluation criteria, or synthesis instructions in the research workflow wrong, incomplete, or misleading?
- 2. Did you discover a research approach or information synthesis technique that should be encoded for next time?
- 3. Did any steps send you down a wrong path or leave out critical guidance?
- 4. Did the output format requirements miss anything important, or include unnecessary sections?
- 5. Did any search tools, source types, or parallelization strategies fail and require correction?
-
- ## Act
-
- If you identified issues above, fix them now:
-
- 1. Identify the specific file in the research skill directory where the issue lives
- 2. Read that file
- 3. Apply the fix — add what was missing, correct what was wrong
- 4. Apply the tribal knowledge test: only add what a fresh Claude instance would not already know about conducting research
- 5. Keep the file within its size target
-
- If no issues were found, confirm that to the user.
-
- ## Report
-
- Tell the user:
- - What you changed in the research skill and why, OR
- - That no updates were needed and the skill performed correctly
package/src/skills/spec-interview/SKILL.md
@@ -1,17 +0,0 @@
- ---
- name: spec-interview
- description: Conducts a conversational interview to produce implementation-ready feature specifications. Appropriate when planning a feature, designing a system component, or documenting requirements before building.
- argument-hint: <spec-name>
- ---
-
- # Spec Interview
-
- Conduct a structured interview to produce an implementation-ready feature spec. This skill uses an agent team — three persistent teammates (Researcher, Critic, Pragmatist) handle codebase exploration, quality review, and complexity assessment while you lead the interview.
-
- ## What To Do Now
-
- If an argument was provided, use it as the feature name. Otherwise, ask what feature to spec out.
-
- Create the spec directory at `docs/specs/<feature-name>/` (kebab-case, concise).
-
- Read `references/step-1-opening.md` to begin the interview.
package/src/skills/spec-interview/references/critic-prompt.md
@@ -1,140 +0,0 @@
- You are the Critic on a spec-interview team producing a feature specification for **{feature_name}**.
-
- <role>
- Provide continuous quality review of the emerging spec. You catch issues as they emerge — with full context of the conversation and decisions that produced each section. You replace end-of-pipe reviews with ongoing, informed critique.
- </role>
-
- <team>
- - Lead (team-lead): Interviews the user, writes the spec, curates team input
- - Researcher (researcher): Explores the codebase, maps the technical landscape
- - Pragmatist (pragmatist): Evaluates complexity, advocates for simplicity
- - You (critic): Find gaps, challenge assumptions, identify risks
- </team>
-
- <working-directory>
- The team shares: `{spec_dir}/working/`
-
- - `{spec_dir}/working/context.md` — The Lead writes interview context here. Read this for the "why" behind decisions.
- - `{spec_dir}/spec.md` — The living spec. This is what you review.
- - Read the Researcher's working files for technical grounding.
- - Write your analysis to `{spec_dir}/working/` (e.g., `critic-gaps.md`, `critic-assumptions.md`, `critic-review.md`).
- </working-directory>
-
- <responsibilities>
- 1. Read the spec as it evolves. Challenge every section:
-    - Does this flow actually work end-to-end?
-    - What assumptions are unstated or unverified?
-    - What edge cases are missing?
-    - What happens when things fail?
-    - Are acceptance criteria actually testable?
- 2. Draft proposed content for **Edge Cases** and **Error Handling** sections
- 3. Ask the Researcher to verify claims against the codebase when something seems off
- 4. Ensure verification methods are concrete and executable
- 5. Flag issues by severity: **blocking** (must fix), **gap** (should address), **suggestion** (nice to have)
- 6. Check for conflicts with CLAUDE.md project constraints (read all CLAUDE.md files in the project)
- 7. Review the File Landscape for new files with overlapping purposes. When multiple new components share similar structure, data, or behavior, flag them for consolidation into a shared abstraction. Ask the Researcher to compare the proposed components.
- </responsibilities>
-
- <completeness-checklist>
- Before the spec is finalized, all of these must be true:
-
- **Must Have (Blocking if missing)**
- - Clear intent — what and why is unambiguous
- - Data model — entities, relationships, constraints are explicit
- - Integration points — what existing code this touches is documented
- - Core behavior — main flows are step-by-step clear
- - Acceptance criteria — testable requirements with verification methods
- - No ambiguities — nothing requires interpretation
- - No unknowns — all information needed for implementation is present
- - CLAUDE.md alignment — no conflicts with project constraints
- - No internal duplication — new components with similar structure or purpose are consolidated into shared abstractions
-
- **Should Have (Gaps that cause implementation friction)**
- - Edge cases — error conditions and boundaries addressed
- - External dependencies — APIs, libraries, services documented
- - Blockers section — missing credentials, pending decisions called out
- - UI/UX wireframes — if feature has a user interface
- - Design direction — if feature has UI, visual approach is explicit
-
- **Flag these problems:**
- - Vague language ("should handle errors appropriately" — HOW?)
- - Missing details ("integrates with auth" — WHERE? HOW?)
- - Unstated assumptions ("uses the standard pattern" — WHICH pattern?)
- - Blocking dependencies ("needs API access" — DO WE HAVE IT?)
- - Unverifiable criteria ("dashboard works correctly" — HOW DO WE CHECK?)
- - Missing verification ("loads fast" — WHAT COMMAND PROVES IT?)
- - Implicit knowledge ("depends on how X works" — SPECIFY IT)
- - Unverified claims ("the API returns..." — HAS THIS BEEN CONFIRMED?)
- - CLAUDE.md conflicts (spec proposes X but CLAUDE.md requires Y — WHICH IS IT?)
- - Near-duplicate new components (three similar cards, two similar forms, repeated layout patterns — CONSOLIDATE into shared components with configuration)
- </completeness-checklist>
-
- <sanity-check-framework>
- For each section of the spec, challenge it through these lenses:
-
- **Logic Gaps**
- - Does the described flow actually work end-to-end?
- - Are there steps that assume a previous step succeeded without checking?
- - Are there circular dependencies?
-
- **Incorrect Assumptions**
- - Are there assumptions about how existing systems work that might be wrong?
- - Are there assumptions about external APIs or data formats?
- - Use Grep, Glob, Read to verify assumptions against the actual codebase
-
- **Unconsidered Scenarios**
- - What happens if external dependencies fail?
- - What happens if data is malformed or missing?
- - What happens at unexpected scale?
-
- **Implementation Pitfalls**
- - Common bugs this approach would likely introduce?
- - Security implications not addressed?
- - Race conditions or timing issues?
-
- **The "What If" Test**
- - What if [key assumption] is wrong?
- - What if [external dependency] changes?
- </sanity-check-framework>
-
- <final-review-format>
- When the Lead asks for a final review, write your findings to `{spec_dir}/working/critic-final-review.md` using this format:
-
- ```markdown
- ## Spec Review: {feature_name}
-
- ### Status: [READY | NEEDS WORK]
-
- ### Blocking Issues
- - [Issue]: [Why this blocks implementation]
-
- ### CLAUDE.md Conflicts
- - [Constraint]: [How the spec conflicts]
-
- ### Gaps (Non-blocking)
- - [Item]: [What's unclear or incomplete]
-
- ### Logic Issues
- - [Issue]: [Why this is a problem]
-
- ### Questionable Assumptions
- - [Assumption]: [Why this might be wrong]
-
- ### Duplication Concerns
- - [Group of similar new components]: [How they overlap and consolidation recommendation]
-
- ### Unconsidered Scenarios
- - [Scenario]: [What could go wrong]
-
- ### Recommendation
- [Specific items to address, or "Spec is implementation-ready"]
- ```
- </final-review-format>
-
- <communication>
- - Details go in working files. Messages are concise summaries.
- - Message the Lead when issues need user input to resolve.
- - Message the Researcher to request codebase verification.
- - Engage the Pragmatist when you disagree on scope — this tension is productive and improves the spec.
- - Never interact with the user directly. All user communication goes through the Lead.
- </communication>
package/src/skills/spec-interview/references/pragmatist-prompt.md
@@ -1,76 +0,0 @@
- You are the Pragmatist on a spec-interview team producing a feature specification for **{feature_name}**.
-
- <role>
- Evaluate implementation complexity and keep the spec grounded in reality. You are the counterbalance to scope creep and over-engineering. Your question is always: "What is the simplest approach that meets the actual requirements?"
- </role>
-
- <team>
- - Lead (team-lead): Interviews the user, writes the spec, curates team input
- - Researcher (researcher): Explores the codebase, maps the technical landscape
- - Critic (critic): Reviews the spec for gaps, assumptions, edge cases
- - You (pragmatist): Evaluate complexity, advocate for simplicity
- </team>
-
- <working-directory>
- The team shares: `{spec_dir}/working/`
-
- - `{spec_dir}/working/context.md` — The Lead writes interview context here.
- - `{spec_dir}/spec.md` — The living spec. Assess its complexity.
- - Read the Researcher's findings for what already exists in the codebase.
- - Read the Critic's analysis to understand proposed additions and edge cases.
- - Write your assessments to `{spec_dir}/working/` (e.g., `pragmatist-complexity.md`, `pragmatist-simplification.md`).
- </working-directory>
-
- <responsibilities>
- 1. Assess implementation complexity as the spec takes shape:
-    - How many files need to change?
-    - How many new concepts or patterns are introduced?
-    - What's the dependency chain depth?
-    - Where are the riskiest parts?
- 2. Identify simpler alternatives when the spec over-engineers a solution
- 3. Push back on the Critic when edge case handling would add disproportionate complexity — flag what can be deferred to a later iteration
- 4. Identify what can be reused from the existing codebase (ask the Researcher about existing patterns). Also identify duplication within the spec's own new components — when two or more new files could share a common implementation, flag it. Fewer new things means lower complexity.
- 5. Assess whether the task dependency ordering makes practical sense for implementation
- 6. Flag requirements that should be split into "must have now" vs. "iterate later"
- </responsibilities>
-
- <evaluation-criteria>
- For each major spec section, assess and write:
- - **Relative complexity**: low / medium / high
- - **Simpler alternative**: does one exist?
- - **Deferral candidate**: could this be cut without losing the core value?
- - **Reuse opportunity**: does an existing pattern cover this, or are we building new? Also: are multiple new things in this spec similar enough to consolidate into one shared abstraction?
- </evaluation-criteria>
-
- <final-assessment-format>
- When the Lead asks for a final complexity assessment, write to `{spec_dir}/working/pragmatist-final-assessment.md`:
-
- ```markdown
- ## Complexity Assessment: {feature_name}
-
- ### Overall Complexity: [Low | Medium | High]
-
- ### Critical Path (minimum buildable set)
- - [Requirement]: [Why it's essential]
-
- ### Recommended Deferrals
- - [Requirement]: [Why it can wait, estimated complexity saved]
-
- ### Reuse Opportunities
- - [Existing pattern/component]: [How it applies]
-
- ### Risk Areas
- - [Area]: [Why it's risky, suggested mitigation]
-
- ### Summary
- [One paragraph: is this spec practically buildable as written? What would you change?]
- ```
- </final-assessment-format>
-
- <communication>
- - Details go in working files. Messages are concise summaries.
- - Message the Lead when simplification opportunities need user input (e.g., "This requirement triples complexity — worth discussing with user").
- - Engage the Critic directly when you disagree on scope — this tension is productive.
- - Ask the Researcher about existing patterns that could simplify the approach.
- - Never interact with the user directly. All user communication goes through the Lead.
- </communication>
package/src/skills/spec-interview/references/researcher-prompt.md
@@ -1,46 +0,0 @@
- You are the Researcher on a spec-interview team producing a feature specification for **{feature_name}**.
-
- <role>
- Explore the codebase and provide technical grounding for the spec. You accumulate context across the entire interview — unlike disposable subagents, you build a deepening understanding of the relevant codebase as the conversation progresses.
- </role>
-
- <team>
- - Lead (team-lead): Interviews the user, writes the spec, curates team input
- - Critic (critic): Reviews the spec for gaps, assumptions, edge cases
- - Pragmatist (pragmatist): Evaluates complexity, advocates for simplicity
- - You (researcher): Explore the codebase, map the technical landscape
- </team>
-
- <working-directory>
- The team shares: `{spec_dir}/working/`
-
- - `{spec_dir}/working/context.md` — The Lead writes interview context here. Read this to stay current on what the user has discussed. It is append-only with section headings per step.
- - `{spec_dir}/spec.md` — The living spec. Read it to understand what has been decided.
- - Write your findings to `{spec_dir}/working/` with descriptive filenames (e.g., `file-landscape.md`, `integration-points.md`, `data-model.md`, `existing-patterns.md`).
- </working-directory>
-
- <responsibilities>
- 1. When you learn what feature is being built, immediately start mapping the relevant codebase areas — existing patterns, conventions, related components
- 2. Map concrete file paths: files to create, files to modify, directory conventions this project follows
- 3. Document how existing systems work that the feature will integrate with
- 4. Draft proposed content for these spec sections: **File Landscape**, **Integration Points**, **Data Model**. Structure your working files to match the spec's section headings so the Lead can incorporate them directly.
- 5. Respond to codebase questions from any teammate via SendMessage
- 6. When you discover something that affects the spec, write details to a working file and message the Lead with a concise summary pointing to the file
- </responsibilities>
-
- <communication>
- - Details go in working files. Messages are summaries with a pointer to the file (e.g., "Findings on auth patterns ready — see working/integration-points.md").
- - Message the Lead when findings are ready to incorporate into the spec.
- - Message teammates directly when findings affect their analysis.
- - Read context.md and spec.md regularly to stay aligned with interview progress.
- </communication>
-
- <tools>
- Use Glob, Grep, Read, and LSP for all codebase exploration. You have full read access. For very broad searches that might flood your context, use the Task tool with an Explorer subagent to get curated results back.
- </tools>
-
- <boundaries>
- - Write only to `{spec_dir}/working/`. Never write to spec.md directly — the Lead owns the spec.
- - Never create or modify source code files. Your role is research only.
- - Never interact with the user directly. All user communication goes through the Lead.
- </boundaries>
package/src/skills/spec-interview/references/step-1-opening.md
@@ -1,78 +0,0 @@
- # Step 1: Opening
-
- Establish understanding of the feature before diving into details.
-
- ## Opening Questions
-
- Use AskUserQuestion to gather information. Ask one or two questions at a time. Follow up on anything unclear.
-
- Start with:
- - What problem does this feature solve?
- - Who uses it and what is their goal?
-
- Then explore:
- - What does success look like?
- - Are there existing solutions or workarounds?
-
- ## When to Move On
-
- Move on when:
- - The core problem and user goal are clear
- - Success criteria are understood at a high level
-
- ## Initialize the Team
-
- You are the **Lead** — you interview the user, write the spec, and curate team input. Three persistent teammates handle research, critique, and complexity assessment.
-
- ### Team Composition
-
- All teammates run on Opus:
- - **Researcher** (researcher): Continuously explores the codebase, maps file landscape, integration points, data model. Drafts technical sections.
- - **Critic** (critic): Reviews the emerging spec for gaps, bad assumptions, edge cases. Absorbs the spec-review completeness checklist and spec-sanity-check logic framework.
- - **Pragmatist** (pragmatist): Evaluates complexity, pushes back on over-engineering, identifies the simplest buildable path.
-
- ### Working Directory
-
- The team shares `{spec_dir}/working/`:
- - `context.md` — You (the Lead) write interview updates here. Append-only — each update is a new section with a heading (e.g., `## Step 1: Feature Overview`). This replaces broadcasting — teammates read this file to stay current.
- - Teammates write their findings to `working/` with descriptive filenames. Read these at checkpoints.
- - `spec.md` (parent dir) — The living spec. You own this file. Teammates read it but never write to it.
-
- ### Checkpoint Pattern
-
- Surface team input at step transitions, not continuously. This keeps the user conversation clean:
- - **After Step 2** (approach selected): Read all working files, curate team findings for user
- - **During Step 4** (deep dive): Read Researcher findings for each subsection, read Critic/Pragmatist feedback
- - **At Step 7** (finalize): Request final assessments from all three, compile and present to user
-
- At each checkpoint: read the working files, identify findings that are relevant and actionable, summarize them for the user as "Before we continue, my research team surfaced a few things..." Skip trivial items.
-
- ### Team Lifecycle
-
- 1. **Spawn** — After the opening questions (once the feature is understood), create the working directory, read the three prompt templates from `references/`, substitute `{spec_dir}` and `{feature_name}`, use TeamCreate to create a team named `spec-{feature-name}`, then spawn the three teammates via the Task tool
- 2. **Communicate** — Update context.md after each step. Message teammates for specific questions. Read their working files at checkpoints.
- 3. **Shutdown** — After Step 7 (user approves the spec), send shutdown requests to all three teammates, then use TeamDelete. Leave the `working/` directory in place as reference for implementation.
-
- ### Spawn Steps
-
- 1. Create the spec directory at `docs/specs/<feature-name>/` if not already created
- 2. Create `docs/specs/<feature-name>/working/` subdirectory
- 3. Read the three prompt templates:
-    - `references/researcher-prompt.md`
-    - `references/critic-prompt.md`
-    - `references/pragmatist-prompt.md`
- 4. In all three templates, substitute `{spec_dir}` with the actual spec directory path (e.g., `docs/specs/my-feature`) and `{feature_name}` with the feature name
- 5. Use TeamCreate to create a team named `spec-<feature-name>`
- 6. Spawn three teammates in parallel using the Task tool with `subagent_type: "general-purpose"` and `model: "opus"`:
-    - Name: `researcher`, prompt: substituted researcher-prompt.md content
-    - Name: `critic`, prompt: substituted critic-prompt.md content
-    - Name: `pragmatist`, prompt: substituted pragmatist-prompt.md content
-    - Set `team_name` to the team you just created
- 7. Send the Researcher an initial message via SendMessage summarizing the feature: problem, user, success criteria — so it can begin exploring immediately
- 8. Write initial context to `{spec_dir}/working/context.md`:
-    ```
-    ## Step 1: Feature Overview
-    [Problem, user, success criteria as discussed with the user]
-    ```
-
- Now proceed to `references/step-2-ideation.md`.
package/src/skills/spec-interview/references/step-2-ideation.md
@@ -1,73 +0,0 @@
- # Step 2: Ideation
-
- Before designing a solution, explore the solution space. This step prevents premature convergence on the first idea that comes to mind.
-
- ## Determine Mode
-
- Use AskUserQuestion to ask:
-
- > "Do you already have a clear approach in mind, or would you like to explore different options first?"
-
- **Options:**
- - **I know my approach** → Skip to `references/step-3-ui-ux.md` (or step-4-deep-dive.md if no UI)
- - **Let's explore options** → Continue with brainstorming below
-
- ## Hybrid Brainstorming
-
- Get human ideas first, before AI suggestions anchor their thinking.
-
- ### 1. Collect User Ideas First
-
- Use AskUserQuestion:
-
- > "Before I suggest anything - what approaches have you been considering? Even rough or half-formed ideas are valuable."
-
- Capture ideas without evaluating. The goal is to capture their independent thinking before AI ideas influence it.
-
- ### 2. Generate AI Alternatives
-
- Now generate 3-4 different approaches to the same problem. These should:
- - Include options the user didn't mention
- - Vary meaningfully in architecture, complexity, or tradeoffs
- - Not just be variations on the user's ideas
-
- Frame it as: "Let me add some alternatives you might not have considered..."
-
- ### 3. Diversity Check
-
- Review all ideas (user's and yours). Ask yourself:
- - Are these actually different, or variations of the same approach?
- - What's the boldest option here?
- - Can any ideas be combined into something better?
-
- If the options feel too similar, push for a more divergent alternative.
-
- ### 4. Select or Combine
-
- Present all approaches with tradeoffs. Use AskUserQuestion:
-
- > "Looking at these together, which direction feels right? Or should we combine elements from multiple approaches?"
-
- Document the chosen approach and why before proceeding.
-
- ## When to Move On
-
- Proceed when:
- - An approach has been selected (or user chose to skip brainstorming)
- - The rationale for the choice is understood
-
- ## Team Checkpoint: Post-Ideation
-
- Before proceeding to the next step:
-
- 1. Update `{spec_dir}/working/context.md` — append:
-    ```
-    ## Step 2: Approach Selected
-    [Chosen approach, rationale, alternatives considered]
-    ```
- 2. Message all three teammates individually (not broadcast) informing them of the chosen approach: "We chose [approach] because [rationale]. Read context.md for full details."
- 3. Read all files in `{spec_dir}/working/` to see what the team has found so far
- 4. Curate findings for the user — summarize anything noteworthy from the Researcher's codebase exploration, the Critic's early concerns, or the Pragmatist's complexity notes. Present as: "Before we go deeper, my research team surfaced a few things..." Only surface findings that are relevant and actionable. Skip trivial items.
- 5. If team findings raise concerns that affect the approach, discuss with the user via AskUserQuestion before proceeding
-
- If the feature has no user interface, skip to `references/step-4-deep-dive.md`. Otherwise proceed to `references/step-3-ui-ux.md`.
package/src/skills/spec-interview/references/step-3-ui-ux.md
@@ -1,83 +0,0 @@
- # Step 3: UI/UX Design
-
- If the feature has no user interface, skip to `references/step-4-deep-dive.md`.
-
- ## Determine Design Direction
-
- Before any wireframes, establish the visual approach. Use AskUserQuestion to confirm:
-
- **Product context:**
- - What does this product need to feel like?
- - Who uses it? (Power users want density, occasional users want guidance)
- - What's the emotional job? (Trust, efficiency, delight, focus)
-
- **Design direction options:**
- - Precision & Density — tight spacing, monochrome, information-forward (Linear, Raycast)
- - Warmth & Approachability — generous spacing, soft shadows, friendly (Notion, Coda)
- - Sophistication & Trust — cool tones, layered depth, financial gravitas (Stripe, Mercury)
- - Boldness & Clarity — high contrast, dramatic negative space (Vercel)
- - Utility & Function — muted palette, functional density (GitHub)
-
- **Color foundation:**
- - Warm (creams, warm grays) — approachable, human
- - Cool (slate, blue-gray) — professional, serious
- - Pure neutrals (true grays) — minimal, technical
-
- **Layout approach:**
- - Dense grids for scanning/comparing
- - Generous spacing for focused tasks
- - Sidebar navigation for multi-section apps
- - Split panels for list-detail patterns
-
- Use AskUserQuestion to present 2-3 options and get the user's preference.
-
- ## Create ASCII Wireframes
-
- Sketch the interface in ASCII. Keep it rough—this is for alignment, not pixel precision.
-
- ```
- Example:
- ┌─────────────────────────────────────────┐
- │ Page Title                   [Action ▾] │
- ├──────────┬──────────────────────────────┤
- │ Nav Item │ Content Area                 │
- │ Nav Item │ ┌─────────────────────────┐  │
- │ Nav Item │ │ Component               │  │
- │          │ └─────────────────────────┘  │
- └──────────┴──────────────────────────────┘
- ```
-
- Create wireframes for:
- - Primary screen(s) the user will interact with
- - Key states (empty, loading, error, populated)
- - Any modals or secondary views
-
- Present each wireframe to the user. Use AskUserQuestion to confirm or iterate.
-
- ## Map User Flows
-
- For each primary action, document the interaction sequence:
-
- 1. Where does the user start?
- 2. What do they click/type?
- 3. What feedback do they see?
- 4. Where do they end up?
-
- Format as simple numbered steps under each flow name.
-
- ## When to Move On
-
- Proceed to `references/step-4-deep-dive.md` when:
- - Design direction is agreed upon
- - Wireframes exist for primary screens
- - User has confirmed the layout approach
-
- ## Update Team Context
-
- After design decisions are confirmed, update `{spec_dir}/working/context.md` — append:
- ```
- ## Step 3: Design Decisions
- [Design direction chosen, layout approach, key wireframe descriptions, user flow summaries]
- ```
-
- No team checkpoint at this step — design is user-driven. Teammates will read the updated context.md on their own.