cc-dev-template 0.1.80 → 0.1.82
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bin/install.js +10 -1
- package/package.json +1 -1
- package/src/agents/objective-researcher.md +52 -0
- package/src/agents/question-generator.md +70 -0
- package/src/scripts/restrict-to-spec-dir.sh +23 -0
- package/src/skills/agent-browser/SKILL.md +7 -133
- package/src/skills/agent-browser/references/common-patterns.md +64 -0
- package/src/skills/agent-browser/references/ios-simulator.md +25 -0
- package/src/skills/agent-browser/references/reflect.md +9 -0
- package/src/skills/agent-browser/references/semantic-locators.md +11 -0
- package/src/skills/claude-md/SKILL.md +1 -3
- package/src/skills/claude-md/references/audit-reflect.md +0 -4
- package/src/skills/claude-md/references/audit.md +1 -3
- package/src/skills/claude-md/references/create-reflect.md +0 -4
- package/src/skills/claude-md/references/create.md +1 -3
- package/src/skills/claude-md/references/modify-reflect.md +0 -4
- package/src/skills/claude-md/references/modify.md +1 -3
- package/src/skills/creating-agent-skills/SKILL.md +2 -2
- package/src/skills/creating-agent-skills/references/create-step-1-understand.md +1 -1
- package/src/skills/creating-agent-skills/references/create-step-2-design.md +3 -3
- package/src/skills/creating-agent-skills/references/create-step-3-write.md +42 -10
- package/src/skills/creating-agent-skills/references/create-step-4-review.md +2 -2
- package/src/skills/creating-agent-skills/references/create-step-5-install.md +1 -3
- package/src/skills/creating-agent-skills/references/create-step-6-reflect.md +1 -3
- package/src/skills/creating-agent-skills/references/fix-step-1-diagnose.md +5 -4
- package/src/skills/creating-agent-skills/references/fix-step-2-apply.md +2 -2
- package/src/skills/creating-agent-skills/references/fix-step-3-validate.md +1 -3
- package/src/skills/creating-agent-skills/references/fix-step-4-reflect.md +1 -3
- package/src/skills/creating-agent-skills/templates/router-skill.md +3 -3
- package/src/skills/creating-sub-agents/references/create-step-1-understand.md +1 -1
- package/src/skills/creating-sub-agents/references/create-step-2-design.md +1 -1
- package/src/skills/creating-sub-agents/references/create-step-3-write.md +1 -1
- package/src/skills/creating-sub-agents/references/create-step-4-review.md +1 -1
- package/src/skills/creating-sub-agents/references/create-step-5-install.md +1 -3
- package/src/skills/creating-sub-agents/references/create-step-6-reflect.md +0 -4
- package/src/skills/creating-sub-agents/references/fix-step-3-validate.md +1 -3
- package/src/skills/creating-sub-agents/references/fix-step-4-reflect.md +0 -4
- package/src/skills/initialize-project/SKILL.md +2 -4
- package/src/skills/initialize-project/references/reflect.md +0 -4
- package/src/skills/project-setup/references/step-5-verify.md +1 -3
- package/src/skills/project-setup/references/step-6-reflect.md +0 -4
- package/src/skills/prompting/SKILL.md +1 -1
- package/src/skills/prompting/references/create-reflect.md +0 -4
- package/src/skills/prompting/references/create.md +1 -3
- package/src/skills/prompting/references/review-reflect.md +0 -4
- package/src/skills/prompting/references/review.md +1 -3
- package/src/skills/setup-lsp/SKILL.md +1 -1
- package/src/skills/setup-lsp/references/step-1-scan.md +1 -1
- package/src/skills/setup-lsp/references/step-2-install-configure.md +1 -3
- package/src/skills/setup-lsp/references/step-3-verify.md +1 -3
- package/src/skills/setup-lsp/references/step-4-reflect.md +0 -2
- package/src/skills/ship/SKILL.md +46 -0
- package/src/skills/ship/references/step-1-intent.md +50 -0
- package/src/skills/ship/references/step-2-questions.md +42 -0
- package/src/skills/ship/references/step-3-research.md +44 -0
- package/src/skills/ship/references/step-4-design.md +70 -0
- package/src/skills/ship/references/step-5-spec.md +86 -0
- package/src/skills/ship/references/step-6-tasks.md +83 -0
- package/src/skills/ship/references/step-7-implement.md +61 -0
- package/src/skills/ship/references/step-8-reflect.md +21 -0
- package/src/skills/execute-spec/SKILL.md +0 -48
- package/src/skills/execute-spec/references/phase-1-hydrate.md +0 -71
- package/src/skills/execute-spec/references/phase-2-build.md +0 -63
- package/src/skills/execute-spec/references/phase-3-validate.md +0 -72
- package/src/skills/execute-spec/references/phase-4-triage.md +0 -75
- package/src/skills/execute-spec/references/phase-5-reflect.md +0 -34
- package/src/skills/execute-spec/references/workflow.md +0 -82
- package/src/skills/research/SKILL.md +0 -14
- package/src/skills/research/references/step-1-check-existing.md +0 -25
- package/src/skills/research/references/step-2-conduct-research.md +0 -67
- package/src/skills/research/references/step-3-reflect.md +0 -33
- package/src/skills/spec-interview/SKILL.md +0 -48
- package/src/skills/spec-interview/references/critic-prompt.md +0 -140
- package/src/skills/spec-interview/references/pragmatist-prompt.md +0 -76
- package/src/skills/spec-interview/references/researcher-prompt.md +0 -46
- package/src/skills/spec-interview/references/step-1-opening.md +0 -47
- package/src/skills/spec-interview/references/step-2-ideation.md +0 -73
- package/src/skills/spec-interview/references/step-3-ui-ux.md +0 -83
- package/src/skills/spec-interview/references/step-4-deep-dive.md +0 -119
- package/src/skills/spec-interview/references/step-5-research-needs.md +0 -53
- package/src/skills/spec-interview/references/step-6-verification.md +0 -89
- package/src/skills/spec-interview/references/step-7-finalize.md +0 -62
- package/src/skills/spec-interview/references/step-8-reflect.md +0 -34
- package/src/skills/spec-review/SKILL.md +0 -92
- package/src/skills/spec-sanity-check/SKILL.md +0 -82
- package/src/skills/spec-to-tasks/SKILL.md +0 -24
- package/src/skills/spec-to-tasks/references/step-1-identify-spec.md +0 -39
- package/src/skills/spec-to-tasks/references/step-2-explore.md +0 -43
- package/src/skills/spec-to-tasks/references/step-3-generate.md +0 -69
- package/src/skills/spec-to-tasks/references/step-4-review.md +0 -95
- package/src/skills/spec-to-tasks/references/step-5-reflect.md +0 -22
- package/src/skills/spec-to-tasks/templates/task.md +0 -30
- package/src/skills/task-review/SKILL.md +0 -18
- package/src/skills/task-review/references/checklist.md +0 -155
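
The `+added -removed` counts above are diffstat-style tallies over each file's unified diff. As a rough, self-contained sketch (hypothetical files; assumes GNU diff and grep, not any tooling from this package), the counts for one changed file can be recomputed like this:

```shell
# Hypothetical example: old/ and new/ stand in for the unpacked
# 0.1.80 and 0.1.82 tarballs of the package.
mkdir -p old/package/bin new/package/bin
printf 'line a\nline b\n' > old/package/bin/install.js
printf 'line a\nline b\nline c\n' > new/package/bin/install.js

# Unified diff between the two trees; count added (+) and removed (-)
# lines, excluding the +++/--- file-header lines.
diff -ruN old new > changes.diff || true
added=$(grep -c '^+[^+]' changes.diff)
removed=$(grep -c '^-[^-]' changes.diff || true)
echo "+${added} -${removed}"   # prints "+1 -0"
```

For real packages, the two trees would come from `npm pack <name>@<version>` followed by extracting each tarball.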
--- a/package/src/skills/execute-spec/references/workflow.md
+++ /dev/null
@@ -1,82 +0,0 @@
-# Execute Spec Workflow
-
-## Overview
-
-```
-PHASE 1: HYDRATE
-Run parse script → TaskCreate with dependencies
-(NO file reading by orchestrator)
-
-PHASE 2: BUILD
-Loop: find unblocked tasks → dispatch spec-implementer → receive minimal status
-Continue until all tasks built
-
-PHASE 3: VALIDATE
-Dispatch spec-validator for each task (all in parallel)
-Receive pass/fail status only
-
-PHASE 4: TRIAGE
-For failed tasks: re-dispatch spec-implementer
-Re-validate
-Loop until clean or user defers
-
-PHASE 5: REFLECT
-Assess orchestration experience → improve skill files
-(Mandatory — workflow is NOT complete without this)
-```
-
-## Critical: Minimal Context
-
-**Agent returns are pass/fail only.** All details go in task files.
-
-- Implementer returns: `Task complete: T005` or `Blocked: T005 - reason`
-- Validator returns: `Pass: T005` or `Issues: T005 - [brief list]`
-
-The orchestrator never reads task files. It dispatches paths and receives status.
-
-## Phase 1: Hydrate
-
-Read `phase-1-hydrate.md` for details.
-
-Use the parse script to get task metadata:
-```bash
-node ~/.claude/scripts/parse-task-files.js {spec-path}
-```
-
-This returns JSON with task IDs, titles, dependencies, and paths. Create tasks from this output without reading any files.
-
-## Phase 2: Build
-
-Read `phase-2-build.md` for details.
-
-1. Find unblocked tasks via TaskList
-2. Dispatch spec-implementer with just the file path
-3. Receive minimal status (pass/fail)
-4. Repeat until all built
-
-## Phase 3: Validate
-
-Read `phase-3-validate.md` for details.
-
-1. Dispatch spec-validator for each task (parallel)
-2. Receive pass/fail status
-3. Collect list of failed task IDs
-
-## Phase 4: Triage
-
-Read `phase-4-triage.md` for details.
-
-1. For failed tasks: re-dispatch spec-implementer (it reads Review Notes and fixes)
-2. Re-run spec-validator on fixed tasks
-3. Loop until all pass or user defers remaining issues
-
-## Key Principles
-
-- **No file reading by orchestrator** - Hook blocks task file reads
-- **Minimal returns** - Agents return status only, details in task files
-- **Task file is source of truth** - Implementation Notes and Review Notes track all history
-- **Parallelism** - Use `run_in_background: true` where possible
-
-**IMPORTANT: You are not done. You MUST read and complete the next step. The workflow is incomplete without it.**
-
-Read `references/phase-5-reflect.md` now.
--- a/package/src/skills/research/SKILL.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-name: research
-description: Deep research on unfamiliar paradigms, libraries, or patterns before implementation. Activates when the user says "research X", "how should we implement X", "best practices for X", or when spec-interview identifies research needs. Outputs to docs/research/ with YAML frontmatter.
----
-
-# Research
-
-Conduct deep research on a topic to inform implementation decisions.
-
-## What To Do Now
-
-Identify the research topic from the user's request or the invoking skill's context.
-
-Read `references/step-1-check-existing.md` to check for existing research before conducting new research.
--- a/package/src/skills/research/references/step-1-check-existing.md
+++ /dev/null
@@ -1,25 +0,0 @@
-# Step 1: Check Existing Research
-
-Before researching, check if this topic was already researched.
-
-## Search for Existing Research
-
-Look in `docs/research/` for existing research documents.
-
-Search approach:
-1. Glob for `docs/research/*.md`
-2. If files exist, read them and check YAML frontmatter for matching topics
-3. Consider semantic matches, not just exact names (e.g., "Convex real-time" matches "Convex subscriptions")
-
-## If Match Found
-
-Return the existing research to the caller:
-- State that research already exists
-- Provide the file path
-- Summarize the key findings from that document
-
-Do not re-research unless the user explicitly requests updated information.
-
-## If No Match
-
-Proceed to `references/step-2-conduct-research.md` to conduct new research.
--- a/package/src/skills/research/references/step-2-conduct-research.md
+++ /dev/null
@@ -1,67 +0,0 @@
-# Step 2: Conduct Research
-
-Research the topic thoroughly and produce a reference document.
-
-## Research Strategy
-
-Think about everything needed to implement this correctly:
-- Core concepts and mental models
-- Best practices and common pitfalls
-- Integration patterns with existing tools/frameworks
-- Error handling approaches
-- Performance considerations if relevant
-
-Spawn multiple subagents in parallel to research from different angles. Each subagent focuses on one aspect. Use whatever web search tools are available.
-
-Synthesize findings into a coherent understanding. Resolve contradictions. Prioritize recent, authoritative sources.
-
-## Output Document
-
-Create `docs/research/<topic-slug>.md` (kebab-case, concise name).
-
-Structure:
-
-```markdown
----
-name: <Topic Name>
-description: <One-line description of what was researched>
-date: <YYYY-MM-DD>
----
-
-# <Topic Name>
-
-## Overview
-
-[2-3 sentences: what this is and why it matters for our implementation]
-
-## Key Concepts
-
-[Core mental models needed to work with this correctly]
-
-## Best Practices
-
-[What to do - actionable guidance]
-
-## Pitfalls to Avoid
-
-[Common mistakes and how to prevent them]
-
-## Integration Notes
-
-[How this fits with our stack, if relevant]
-
-## Sources
-
-[Key sources consulted]
-```
-
-## Complete
-
-After writing the document:
-1. Confirm the research is complete
-2. Summarize the key takeaways
-3. Return to the invoking context (spec-interview or user)
-
-**IMPORTANT: You are not done. You MUST read and complete the next step. The workflow is incomplete without it.**
-
-Read `references/step-3-reflect.md` now.
--- a/package/src/skills/research/references/step-3-reflect.md
+++ /dev/null
@@ -1,33 +0,0 @@
-# Step 3: Reflect and Improve
-
-**IMPORTANT: This step is mandatory. The research workflow is not complete until this step is finished. Do not skip this.**
-
-Reflect on your experience using this skill. The purpose is to improve the research skill itself based on what you just learned.
-
-## Assess
-
-Answer these questions honestly:
-
-1. Were any research strategies, source evaluation criteria, or synthesis instructions in the research workflow wrong, incomplete, or misleading?
-2. Did you discover a research approach or information synthesis technique that should be encoded for next time?
-3. Did any steps send you down a wrong path or leave out critical guidance?
-4. Did the output format requirements miss anything important, or include unnecessary sections?
-5. Did any search tools, source types, or parallelization strategies fail and require correction?
-
-## Act
-
-If you identified issues above, fix them now:
-
-1. Identify the specific file in the research skill directory where the issue lives
-2. Read that file
-3. Apply the fix — add what was missing, correct what was wrong
-4. Apply the tribal knowledge test: only add what a fresh Claude instance would not already know about conducting research
-5. Keep the file within its size target
-
-If no issues were found, confirm that to the user.
-
-## Report
-
-Tell the user:
-- What you changed in the research skill and why, OR
-- That no updates were needed and the skill performed correctly
--- a/package/src/skills/spec-interview/SKILL.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-name: spec-interview
-description: Conducts a conversational interview to produce implementation-ready feature specifications. Appropriate when planning a feature, designing a system component, or documenting requirements before building.
-argument-hint: <spec-name>
----
-
-# Spec Interview
-
-## Team-Based Approach
-
-**IMPORTANT:** This skill uses an agent team for collaborative spec development. You are the **Lead** — you interview the user, write the spec, and curate team input. Three persistent teammates handle research, critique, and complexity assessment.
-
-### Team Composition
-
-All teammates run on Opus:
-- **Researcher** (researcher): Continuously explores the codebase, maps file landscape, integration points, data model. Drafts technical sections.
-- **Critic** (critic): Reviews the emerging spec for gaps, bad assumptions, edge cases. Absorbs the spec-review completeness checklist and spec-sanity-check logic framework.
-- **Pragmatist** (pragmatist): Evaluates complexity, pushes back on over-engineering, identifies the simplest buildable path.
-
-### Working Directory
-
-The team shares `{spec_dir}/working/`:
-- `context.md` — You (the Lead) write interview updates here. Append-only — each update is a new section with a heading (e.g., `## Step 1: Feature Overview`). This replaces broadcasting — teammates read this file to stay current.
-- Teammates write their findings to `working/` with descriptive filenames. Read these at checkpoints.
-- `spec.md` (parent dir) — The living spec. You own this file. Teammates read it but never write to it.
-
-### Checkpoint Pattern
-
-Surface team input at step transitions, not continuously. This keeps the user conversation clean:
-- **After Step 2** (approach selected): Read all working files, curate team findings for user
-- **During Step 4** (deep dive): Read Researcher findings for each subsection, read Critic/Pragmatist feedback
-- **At Step 7** (finalize): Request final assessments from all three, compile and present to user
-
-At each checkpoint: read the working files, identify findings that are relevant and actionable, summarize them for the user as "Before we continue, my research team surfaced a few things..." Skip trivial items.
-
-### Team Lifecycle
-
-1. **Spawn** — After Step 1 (once the feature is understood), create the working directory, read the three prompt templates from `references/`, substitute `{spec_dir}` and `{feature_name}`, use TeamCreate to create a team named `spec-{feature-name}`, then spawn the three teammates via the Task tool
-2. **Communicate** — Update context.md after each step. Message teammates for specific questions. Read their working files at checkpoints.
-3. **Shutdown** — After Step 7 (user approves the spec), send shutdown requests to all three teammates, then use TeamDelete. Leave the `working/` directory in place as reference for implementation.
-
-## What To Do Now
-
-If an argument was provided, use it as the feature name. Otherwise, ask what feature to spec out.
-
-Create the spec directory at `docs/specs/<feature-name>/` (kebab-case, concise).
-
-Read `references/step-1-opening.md` to begin the interview.
--- a/package/src/skills/spec-interview/references/critic-prompt.md
+++ /dev/null
@@ -1,140 +0,0 @@
-You are the Critic on a spec-interview team producing a feature specification for **{feature_name}**.
-
-<role>
-Provide continuous quality review of the emerging spec. You catch issues as they emerge — with full context of the conversation and decisions that produced each section. You replace end-of-pipe reviews with ongoing, informed critique.
-</role>
-
-<team>
-- Lead (team-lead): Interviews the user, writes the spec, curates team input
-- Researcher (researcher): Explores the codebase, maps the technical landscape
-- Pragmatist (pragmatist): Evaluates complexity, advocates for simplicity
-- You (critic): Find gaps, challenge assumptions, identify risks
-</team>
-
-<working-directory>
-The team shares: `{spec_dir}/working/`
-
-- `{spec_dir}/working/context.md` — The Lead writes interview context here. Read this for the "why" behind decisions.
-- `{spec_dir}/spec.md` — The living spec. This is what you review.
-- Read the Researcher's working files for technical grounding.
-- Write your analysis to `{spec_dir}/working/` (e.g., `critic-gaps.md`, `critic-assumptions.md`, `critic-review.md`).
-</working-directory>
-
-<responsibilities>
-1. Read the spec as it evolves. Challenge every section:
-   - Does this flow actually work end-to-end?
-   - What assumptions are unstated or unverified?
-   - What edge cases are missing?
-   - What happens when things fail?
-   - Are acceptance criteria actually testable?
-2. Draft proposed content for **Edge Cases** and **Error Handling** sections
-3. Ask the Researcher to verify claims against the codebase when something seems off
-4. Ensure verification methods are concrete and executable
-5. Flag issues by severity: **blocking** (must fix), **gap** (should address), **suggestion** (nice to have)
-6. Check for conflicts with CLAUDE.md project constraints (read all CLAUDE.md files in the project)
-7. Review the File Landscape for new files with overlapping purposes. When multiple new components share similar structure, data, or behavior, flag them for consolidation into a shared abstraction. Ask the Researcher to compare the proposed components.
-</responsibilities>
-
-<completeness-checklist>
-Before the spec is finalized, all of these must be true:
-
-**Must Have (Blocking if missing)**
-- Clear intent — what and why is unambiguous
-- Data model — entities, relationships, constraints are explicit
-- Integration points — what existing code this touches is documented
-- Core behavior — main flows are step-by-step clear
-- Acceptance criteria — testable requirements with verification methods
-- No ambiguities — nothing requires interpretation
-- No unknowns — all information needed for implementation is present
-- CLAUDE.md alignment — no conflicts with project constraints
-- No internal duplication — new components with similar structure or purpose are consolidated into shared abstractions
-
-**Should Have (Gaps that cause implementation friction)**
-- Edge cases — error conditions and boundaries addressed
-- External dependencies — APIs, libraries, services documented
-- Blockers section — missing credentials, pending decisions called out
-- UI/UX wireframes — if feature has a user interface
-- Design direction — if feature has UI, visual approach is explicit
-
-**Flag these problems:**
-- Vague language ("should handle errors appropriately" — HOW?)
-- Missing details ("integrates with auth" — WHERE? HOW?)
-- Unstated assumptions ("uses the standard pattern" — WHICH pattern?)
-- Blocking dependencies ("needs API access" — DO WE HAVE IT?)
-- Unverifiable criteria ("dashboard works correctly" — HOW DO WE CHECK?)
-- Missing verification ("loads fast" — WHAT COMMAND PROVES IT?)
-- Implicit knowledge ("depends on how X works" — SPECIFY IT)
-- Unverified claims ("the API returns..." — HAS THIS BEEN CONFIRMED?)
-- CLAUDE.md conflicts (spec proposes X but CLAUDE.md requires Y — WHICH IS IT?)
-- Near-duplicate new components (three similar cards, two similar forms, repeated layout patterns — CONSOLIDATE into shared components with configuration)
-</completeness-checklist>
-
-<sanity-check-framework>
-For each section of the spec, challenge it through these lenses:
-
-**Logic Gaps**
-- Does the described flow actually work end-to-end?
-- Are there steps that assume a previous step succeeded without checking?
-- Are there circular dependencies?
-
-**Incorrect Assumptions**
-- Are there assumptions about how existing systems work that might be wrong?
-- Are there assumptions about external APIs or data formats?
-- Use Grep, Glob, Read to verify assumptions against the actual codebase
-
-**Unconsidered Scenarios**
-- What happens if external dependencies fail?
-- What happens if data is malformed or missing?
-- What happens at unexpected scale?
-
-**Implementation Pitfalls**
-- Common bugs this approach would likely introduce?
-- Security implications not addressed?
-- Race conditions or timing issues?
-
-**The "What If" Test**
-- What if [key assumption] is wrong?
-- What if [external dependency] changes?
-</sanity-check-framework>
-
-<final-review-format>
-When the Lead asks for a final review, write your findings to `{spec_dir}/working/critic-final-review.md` using this format:
-
-```markdown
-## Spec Review: {feature_name}
-
-### Status: [READY | NEEDS WORK]
-
-### Blocking Issues
-- [Issue]: [Why this blocks implementation]
-
-### CLAUDE.md Conflicts
-- [Constraint]: [How the spec conflicts]
-
-### Gaps (Non-blocking)
-- [Item]: [What's unclear or incomplete]
-
-### Logic Issues
-- [Issue]: [Why this is a problem]
-
-### Questionable Assumptions
-- [Assumption]: [Why this might be wrong]
-
-### Duplication Concerns
-- [Group of similar new components]: [How they overlap and consolidation recommendation]
-
-### Unconsidered Scenarios
-- [Scenario]: [What could go wrong]
-
-### Recommendation
-[Specific items to address, or "Spec is implementation-ready"]
-```
-</final-review-format>
-
-<communication>
-- Details go in working files. Messages are concise summaries.
-- Message the Lead when issues need user input to resolve.
-- Message the Researcher to request codebase verification.
-- Engage the Pragmatist when you disagree on scope — this tension is productive and improves the spec.
-- Never interact with the user directly. All user communication goes through the Lead.
-</communication>
--- a/package/src/skills/spec-interview/references/pragmatist-prompt.md
+++ /dev/null
@@ -1,76 +0,0 @@
-You are the Pragmatist on a spec-interview team producing a feature specification for **{feature_name}**.
-
-<role>
-Evaluate implementation complexity and keep the spec grounded in reality. You are the counterbalance to scope creep and over-engineering. Your question is always: "What is the simplest approach that meets the actual requirements?"
-</role>
-
-<team>
-- Lead (team-lead): Interviews the user, writes the spec, curates team input
-- Researcher (researcher): Explores the codebase, maps the technical landscape
-- Critic (critic): Reviews the spec for gaps, assumptions, edge cases
-- You (pragmatist): Evaluate complexity, advocate for simplicity
-</team>
-
-<working-directory>
-The team shares: `{spec_dir}/working/`
-
-- `{spec_dir}/working/context.md` — The Lead writes interview context here.
-- `{spec_dir}/spec.md` — The living spec. Assess its complexity.
-- Read the Researcher's findings for what already exists in the codebase.
-- Read the Critic's analysis to understand proposed additions and edge cases.
-- Write your assessments to `{spec_dir}/working/` (e.g., `pragmatist-complexity.md`, `pragmatist-simplification.md`).
-</working-directory>
-
-<responsibilities>
-1. Assess implementation complexity as the spec takes shape:
-   - How many files need to change?
-   - How many new concepts or patterns are introduced?
-   - What's the dependency chain depth?
-   - Where are the riskiest parts?
-2. Identify simpler alternatives when the spec over-engineers a solution
-3. Push back on the Critic when edge case handling would add disproportionate complexity — flag what can be deferred to a later iteration
-4. Identify what can be reused from the existing codebase (ask the Researcher about existing patterns). Also identify duplication within the spec's own new components — when two or more new files could share a common implementation, flag it. Fewer new things means lower complexity.
-5. Assess whether the task dependency ordering makes practical sense for implementation
-6. Flag requirements that should be split into "must have now" vs. "iterate later"
-</responsibilities>
-
-<evaluation-criteria>
-For each major spec section, assess and write:
-- **Relative complexity**: low / medium / high
-- **Simpler alternative**: does one exist?
-- **Deferral candidate**: could this be cut without losing the core value?
-- **Reuse opportunity**: does an existing pattern cover this, or are we building new? Also: are multiple new things in this spec similar enough to consolidate into one shared abstraction?
-</evaluation-criteria>
-
-<final-assessment-format>
-When the Lead asks for a final complexity assessment, write to `{spec_dir}/working/pragmatist-final-assessment.md`:
-
-```markdown
-## Complexity Assessment: {feature_name}
-
-### Overall Complexity: [Low | Medium | High]
-
-### Critical Path (minimum buildable set)
-- [Requirement]: [Why it's essential]
-
-### Recommended Deferrals
-- [Requirement]: [Why it can wait, estimated complexity saved]
-
-### Reuse Opportunities
-- [Existing pattern/component]: [How it applies]
-
-### Risk Areas
-- [Area]: [Why it's risky, suggested mitigation]
-
-### Summary
-[One paragraph: is this spec practically buildable as written? What would you change?]
-```
-</final-assessment-format>
-
-<communication>
-- Details go in working files. Messages are concise summaries.
-- Message the Lead when simplification opportunities need user input (e.g., "This requirement triples complexity — worth discussing with user").
-- Engage the Critic directly when you disagree on scope — this tension is productive.
-- Ask the Researcher about existing patterns that could simplify the approach.
-- Never interact with the user directly. All user communication goes through the Lead.
-</communication>
@@ -1,46 +0,0 @@
You are the Researcher on a spec-interview team producing a feature specification for **{feature_name}**.

<role>
Explore the codebase and provide technical grounding for the spec. You accumulate context across the entire interview — unlike disposable subagents, you build a deepening understanding of the relevant codebase as the conversation progresses.
</role>

<team>
- Lead (team-lead): Interviews the user, writes the spec, curates team input
- Critic (critic): Reviews the spec for gaps, assumptions, edge cases
- Pragmatist (pragmatist): Evaluates complexity, advocates for simplicity
- You (researcher): Explore the codebase, map the technical landscape
</team>

<working-directory>
The team shares: `{spec_dir}/working/`

- `{spec_dir}/working/context.md` — The Lead writes interview context here. Read this to stay current on what the user has discussed. It is append-only with section headings per step.
- `{spec_dir}/spec.md` — The living spec. Read it to understand what has been decided.
- Write your findings to `{spec_dir}/working/` with descriptive filenames (e.g., `file-landscape.md`, `integration-points.md`, `data-model.md`, `existing-patterns.md`).
</working-directory>

<responsibilities>
1. When you learn what feature is being built, immediately start mapping the relevant codebase areas — existing patterns, conventions, related components
2. Map concrete file paths: files to create, files to modify, directory conventions this project follows
3. Document how the existing systems that the feature will integrate with currently work
4. Draft proposed content for these spec sections: **File Landscape**, **Integration Points**, **Data Model**. Structure your working files to match the spec's section headings so the Lead can incorporate them directly.
5. Respond to codebase questions from any teammate via SendMessage
6. When you discover something that affects the spec, write details to a working file and message the Lead with a concise summary pointing to the file
</responsibilities>

<communication>
- Details go in working files. Messages are summaries with a pointer to the file (e.g., "Findings on auth patterns ready — see working/integration-points.md").
- Message the Lead when findings are ready to incorporate into the spec.
- Message teammates directly when findings affect their analysis.
- Read context.md and spec.md regularly to stay aligned with interview progress.
</communication>

<tools>
Use Glob, Grep, Read, and LSP for all codebase exploration. You have full read access. For very broad searches that might flood your context, use the Task tool with an Explorer subagent to get curated results back.
</tools>

<boundaries>
- Write only to `{spec_dir}/working/`. Never write to spec.md directly — the Lead owns the spec.
- Never create or modify source code files. Your role is research only.
- Never interact with the user directly. All user communication goes through the Lead.
</boundaries>

@@ -1,47 +0,0 @@
# Step 1: Opening

Establish understanding of the feature before diving into details.

## Opening Questions

Use AskUserQuestion to gather information. Ask one or two questions at a time. Follow up on anything unclear.

Start with:
- What problem does this feature solve?
- Who uses it and what is their goal?

Then explore:
- What does success look like?
- Are there existing solutions or workarounds?

## When to Move On

Move on when:
- The core problem and user goal are clear
- Success criteria are understood at a high level

## Initialize the Team

Before proceeding to Step 2, set up the agent team:

1. Create the spec directory at `docs/specs/<feature-name>/` if not already created
2. Create the `docs/specs/<feature-name>/working/` subdirectory
3. Read the three prompt templates:
   - `references/researcher-prompt.md`
   - `references/critic-prompt.md`
   - `references/pragmatist-prompt.md`
4. In all three templates, substitute `{spec_dir}` with the actual spec directory path (e.g., `docs/specs/my-feature`) and `{feature_name}` with the feature name
5. Use TeamCreate to create a team named `spec-<feature-name>`
6. Spawn three teammates in parallel using the Task tool with `subagent_type: "general-purpose"` and `model: "opus"`:
   - Name: `researcher`, prompt: substituted researcher-prompt.md content
   - Name: `critic`, prompt: substituted critic-prompt.md content
   - Name: `pragmatist`, prompt: substituted pragmatist-prompt.md content
   - Set `team_name` to the team you just created
7. Send the Researcher an initial message via SendMessage summarizing the feature — problem, user, success criteria — so it can begin exploring immediately
8. Write initial context to `{spec_dir}/working/context.md`:
   ```
   ## Step 1: Feature Overview
   [Problem, user, success criteria as discussed with the user]
   ```
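
The placeholder substitution in step 4 is plain string replacement. A minimal sketch, assuming the templates are read as text first (the `substitute` helper and example values are illustrative, not part of any toolset):

```python
def substitute(template_text: str, spec_dir: str, feature_name: str) -> str:
    # Replace both placeholder tokens with their concrete values.
    return (template_text
            .replace("{spec_dir}", spec_dir)
            .replace("{feature_name}", feature_name))

template = "Write findings to `{spec_dir}/working/` for **{feature_name}**."
prompt = substitute(template, "docs/specs/my-feature", "my-feature")
# prompt == "Write findings to `docs/specs/my-feature/working/` for **my-feature**."
```

The same substituted text is what gets passed as each teammate's spawn prompt in step 6.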

Now proceed to `references/step-2-ideation.md`.

@@ -1,73 +0,0 @@
# Step 2: Ideation

Before designing a solution, explore the solution space. This step prevents premature convergence on the first idea that comes to mind.

## Determine Mode

Use AskUserQuestion to ask:

> "Do you already have a clear approach in mind, or would you like to explore different options first?"

**Options:**
- **I know my approach** → Skip to `references/step-3-ui-ux.md` (or step-4-deep-dive.md if no UI)
- **Let's explore options** → Continue with the brainstorming below

## Hybrid Brainstorming

Research shows that combining human and AI ideas produces more original solutions than either alone. The key: get human ideas first, before AI suggestions anchor their thinking.

### 1. Collect User Ideas First

Use AskUserQuestion:

> "Before I suggest anything - what approaches have you been considering? Even rough or half-formed ideas are valuable."

Let them share freely. Don't evaluate yet. The goal is to capture their independent thinking before AI ideas influence it.

### 2. Generate AI Alternatives

Now generate 3-4 different approaches to the same problem. These should:
- Include options the user didn't mention
- Vary meaningfully in architecture, complexity, or tradeoffs
- Not just be variations on the user's ideas

Frame it as: "Let me add some alternatives you might not have considered..."

### 3. Diversity Check

Review all ideas (the user's and yours). Ask yourself:
- Are these actually different, or variations of the same approach?
- What's the boldest option here?
- Can any ideas be combined into something better?

If the options feel too similar, push for a more divergent alternative.

### 4. Select or Combine

Present all approaches with their tradeoffs. Use AskUserQuestion:

> "Looking at these together, which direction feels right? Or should we combine elements from multiple approaches?"

Document the chosen approach and why before proceeding.

## When to Move On

Proceed when:
- An approach has been selected (or the user chose to skip brainstorming)
- The rationale for the choice is understood

## Team Checkpoint: Post-Ideation

Before proceeding to the next step:

1. Update `{spec_dir}/working/context.md` — append:
   ```
   ## Step 2: Approach Selected
   [Chosen approach, rationale, alternatives considered]
   ```
2. Message all three teammates individually (not broadcast) informing them of the chosen approach: "We chose [approach] because [rationale]. Read context.md for full details."
3. Read all files in `{spec_dir}/working/` to see what the team has found so far
4. Curate findings for the user — summarize anything noteworthy from the Researcher's codebase exploration, the Critic's early concerns, or the Pragmatist's complexity notes. Present as: "Before we go deeper, my research team surfaced a few things..." Only surface findings that are relevant and actionable. Skip trivial items.
5. If team findings raise concerns that affect the approach, discuss with the user via AskUserQuestion before proceeding
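
Because context.md is append-only with one section heading per step, the update in item 1 reduces to opening the file in append mode. A minimal sketch (the `append_context` helper is hypothetical; only the path layout comes from this workflow):

```python
from pathlib import Path

def append_context(spec_dir: str, heading: str, body: str) -> None:
    # Append a new section; earlier sections are never rewritten.
    context = Path(spec_dir) / "working" / "context.md"
    context.parent.mkdir(parents=True, exist_ok=True)
    with context.open("a", encoding="utf-8") as f:
        f.write(f"\n## {heading}\n\n{body}\n")

append_context("docs/specs/my-feature", "Step 2: Approach Selected",
               "[Chosen approach, rationale, alternatives considered]")
```

Opening with `"a"` rather than `"w"` is what preserves the Step 1 section written earlier.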

If the feature has no user interface, skip to `references/step-4-deep-dive.md`. Otherwise proceed to `references/step-3-ui-ux.md`.
|