cc-dev-template 0.1.81 → 0.1.82
- package/bin/install.js +10 -1
- package/package.json +1 -1
- package/src/agents/objective-researcher.md +52 -0
- package/src/agents/question-generator.md +70 -0
- package/src/scripts/restrict-to-spec-dir.sh +23 -0
- package/src/skills/ship/SKILL.md +46 -0
- package/src/skills/ship/references/step-1-intent.md +50 -0
- package/src/skills/ship/references/step-2-questions.md +42 -0
- package/src/skills/ship/references/step-3-research.md +44 -0
- package/src/skills/ship/references/step-4-design.md +70 -0
- package/src/skills/ship/references/step-5-spec.md +86 -0
- package/src/skills/ship/references/step-6-tasks.md +83 -0
- package/src/skills/ship/references/step-7-implement.md +61 -0
- package/src/skills/ship/references/step-8-reflect.md +21 -0
- package/src/skills/execute-spec/SKILL.md +0 -40
- package/src/skills/execute-spec/references/phase-1-hydrate.md +0 -74
- package/src/skills/execute-spec/references/phase-2-build.md +0 -65
- package/src/skills/execute-spec/references/phase-3-validate.md +0 -73
- package/src/skills/execute-spec/references/phase-4-triage.md +0 -79
- package/src/skills/execute-spec/references/phase-5-reflect.md +0 -32
- package/src/skills/research/SKILL.md +0 -14
- package/src/skills/research/references/step-1-check-existing.md +0 -25
- package/src/skills/research/references/step-2-conduct-research.md +0 -65
- package/src/skills/research/references/step-3-reflect.md +0 -29
- package/src/skills/spec-interview/SKILL.md +0 -17
- package/src/skills/spec-interview/references/critic-prompt.md +0 -140
- package/src/skills/spec-interview/references/pragmatist-prompt.md +0 -76
- package/src/skills/spec-interview/references/researcher-prompt.md +0 -46
- package/src/skills/spec-interview/references/step-1-opening.md +0 -78
- package/src/skills/spec-interview/references/step-2-ideation.md +0 -73
- package/src/skills/spec-interview/references/step-3-ui-ux.md +0 -83
- package/src/skills/spec-interview/references/step-4-deep-dive.md +0 -137
- package/src/skills/spec-interview/references/step-5-research-needs.md +0 -53
- package/src/skills/spec-interview/references/step-6-verification.md +0 -89
- package/src/skills/spec-interview/references/step-7-finalize.md +0 -60
- package/src/skills/spec-interview/references/step-8-reflect.md +0 -32
- package/src/skills/spec-review/SKILL.md +0 -91
- package/src/skills/spec-sanity-check/SKILL.md +0 -82
- package/src/skills/spec-to-tasks/SKILL.md +0 -24
- package/src/skills/spec-to-tasks/references/step-1-identify-spec.md +0 -39
- package/src/skills/spec-to-tasks/references/step-2-explore.md +0 -43
- package/src/skills/spec-to-tasks/references/step-3-generate.md +0 -67
- package/src/skills/spec-to-tasks/references/step-4-review.md +0 -90
- package/src/skills/spec-to-tasks/references/step-5-reflect.md +0 -22
- package/src/skills/spec-to-tasks/templates/task.md +0 -30
- package/src/skills/task-review/SKILL.md +0 -18
- package/src/skills/task-review/references/checklist.md +0 -153
--- a/package/src/skills/execute-spec/SKILL.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-name: execute-spec
-description: Orchestrates implementation and validation of a spec's task breakdown by dispatching agents — never reads task files or edits code directly.
-allowed-tools: Grep, Glob, Task, TaskCreate, TaskList, TaskUpdate, TaskGet, AskUserQuestion, Bash
-hooks:
-  PreToolUse:
-    - matcher: "Read"
-      hooks:
-        - type: command
-          command: "$HOME/.claude/scripts/block-task-files.sh"
----
-
-# Execute Spec
-
-Orchestrates the implementation and validation of a spec's task breakdown.
-
-## When to Use
-
-Invoke when you have a complete spec with a `tasks/` folder containing task files (T001-*.md, T002-*.md, etc.) ready for implementation.
-
-## Arguments
-
-This skill takes a spec path as an argument:
-- `docs/specs/my-feature` - path to the spec folder containing `spec.md` and `tasks/`
-
-## Key Principles
-
-- **Never read task files** — Use the parse script for hydration, pass paths to agents. A PreToolUse hook blocks task file reads.
-- **Minimal context** — Agent returns are pass/fail only, details live in task files.
-- **Dispatch only** — All implementation and fixes go to spec-implementer agents, all validation to spec-validator agents. The orchestrator dispatches and collects status.
-
-## Requirements
-
-- Spec folder must contain `spec.md` and `tasks/` directory
-- Task files must have YAML frontmatter with `id`, `title`, `status`, `depends_on`
-- The `spec-implementer` and `spec-validator` agents must be installed
-
-## Start
-
-Read `references/phase-1-hydrate.md` to begin the workflow.
--- a/package/src/skills/execute-spec/references/phase-1-hydrate.md
+++ /dev/null
@@ -1,74 +0,0 @@
-# Phase 1: Hydrate Tasks
-
-Load task metadata into the Claude Code task system using the parse script.
-
-## Important: No File Reading
-
-The orchestrator does NOT read task files directly. Use the parse script.
-
-## Process
-
-```bash
-# Run the parse script
-node ~/.claude/scripts/parse-task-files.js {spec-path}
-```
-
-This outputs JSON:
-```json
-{
-  "specPath": "docs/specs/kiosk-storefront",
-  "specFile": "docs/specs/kiosk-storefront/spec.md",
-  "taskCount": 15,
-  "tasks": [
-    {
-      "id": "T001",
-      "title": "Public API endpoints",
-      "status": "pending",
-      "depends_on": [],
-      "path": "docs/specs/kiosk-storefront/tasks/T001-public-api-endpoints.md"
-    },
-    {
-      "id": "T002",
-      "title": "Kiosk routing",
-      "depends_on": ["T001"],
-      "path": "..."
-    }
-  ]
-}
-```
-
-## Create Tasks
-
-For each task in the JSON:
-
-```
-TaskCreate(
-  subject: "{id}: {title}",
-  description: "{path}",
-  activeForm: "Implementing {title}"
-)
-```
-
-The description is JUST the path. Agents read the file themselves.
-
-## Set Dependencies
-
-After creating all tasks, set up blockedBy relationships:
-
-```
-TaskUpdate(
-  taskId: {claude-task-id},
-  addBlockedBy: [mapped IDs from depends_on]
-)
-```
-
-Maintain a mapping of task IDs (T001, T002) to Claude task system IDs.
-
-## Output
-
-- All tasks in Claude Code task system
-- Dependencies configured
-
-## Next
-
-Use the Read tool on `references/phase-2-build.md` to start dispatching implementers.
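The deleted phase-1 reference runs `parse-task-files.js` but the diff never shows that script. A minimal sketch of what such a parser plausibly does, assuming task files carry the YAML frontmatter fields named in the SKILL (`id`, `title`, `status`, `depends_on`); the regex-based parsing and function name here are illustrative, not the package's actual implementation:

```javascript
// Hypothetical frontmatter parser for task files (not the shipped script).
// Assumes a leading "---" block with simple "key: value" lines, where
// depends_on is a JSON-style array such as ["T001"].
function parseTaskFile(path, text) {
  const match = text.match(/^---\n([\s\S]*?)\n---/);
  if (!match) throw new Error(`No frontmatter in ${path}`);
  const meta = {};
  for (const line of match[1].split("\n")) {
    const [key, ...rest] = line.split(":");
    meta[key.trim()] = rest.join(":").trim();
  }
  return {
    id: meta.id,
    title: meta.title,
    status: meta.status || "pending",
    depends_on: meta.depends_on ? JSON.parse(meta.depends_on) : [],
    path,
  };
}

const sample = [
  "---",
  "id: T002",
  "title: Kiosk routing",
  "status: pending",
  'depends_on: ["T001"]',
  "---",
  "",
  "# Kiosk routing",
].join("\n");

console.log(JSON.stringify(parseTaskFile("tasks/T002-kiosk-routing.md", sample)));
```

Globbing `{spec-path}/tasks/*.md` and mapping each file through a parser like this would yield the `tasks` array shown in the example JSON output above.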
--- a/package/src/skills/execute-spec/references/phase-2-build.md
+++ /dev/null
@@ -1,65 +0,0 @@
-# Phase 2: Build
-
-Dispatch spec-implementer agents for each task, respecting dependencies.
-
-## Process
-
-```
-Loop until all tasks complete:
-
-1. TaskList() to get current state
-
-2. Find ready tasks:
-   - status: pending
-   - blockedBy: empty (no unfinished dependencies)
-
-3. For each ready task:
-   - Extract task file path from description
-   - Mark as in_progress: TaskUpdate(taskId, status: "in_progress")
-   - Dispatch implementer:
-     Task(
-       subagent_type: "spec-implementer",
-       prompt: "{task-file-path}",
-       run_in_background: true,
-       description: "Implement {task-id}"
-     )
-
-4. Wait for completions:
-   - Background agents will notify you when they finish — wait for their completion messages
-   - As tasks complete, newly unblocked tasks become ready
-
-5. Repeat until no pending tasks remain
-```
-
-## Parallelism Strategy
-
-- Dispatch ALL ready tasks simultaneously
-- The dependency graph controls what can run in parallel
-- Example: If T002, T003, T004 all depend only on T001, they all start when T001 completes
-
-## Monitoring Progress
-
-Report progress as tasks complete:
-```
-Build Progress:
-[x] T001: Public API endpoints (complete)
-[~] T002: Kiosk routing (in progress)
-[~] T003: Entity chain validation (in progress)
-[ ] T007: Cart persistence (blocked by T005, T006)
-...
-```
-
-## Error Handling
-
-- If an implementer fails: Note the error, continue with other tasks
-- If a task stays in_progress too long: May need manual intervention
-- Failed tasks block their dependents
-
-## Output
-
-- All tasks implemented (or failed with notes)
-- Implementation Notes written to each task file
-
-## Next
-
-Use the Read tool on `references/phase-3-validate.md` to validate the implementations.
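The build loop in the deleted phase-2 reference amounts to repeatedly selecting tasks whose dependencies have all completed. That ready-set selection can be sketched as follows, assuming task state mirrors the `status`/`depends_on` fields from the hydration JSON (names here are illustrative, not package code):

```javascript
// Illustrative ready-task selection for the dispatch loop.
// A task is ready when it is pending and every dependency is completed.
function readyTasks(tasks) {
  const done = new Set(
    tasks.filter((t) => t.status === "completed").map((t) => t.id)
  );
  return tasks.filter(
    (t) => t.status === "pending" && t.depends_on.every((d) => done.has(d))
  );
}

const tasks = [
  { id: "T001", status: "completed", depends_on: [] },
  { id: "T002", status: "pending", depends_on: ["T001"] },
  { id: "T003", status: "pending", depends_on: ["T001"] },
  { id: "T007", status: "pending", depends_on: ["T005", "T006"] },
];

// T002 and T003 become ready once T001 completes; T007 stays blocked.
console.log(readyTasks(tasks).map((t) => t.id)); // → [ 'T002', 'T003' ]
```

Re-running this selection after each completion message reproduces the "newly unblocked tasks become ready" behavior without the orchestrator ever reading a task file.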
--- a/package/src/skills/execute-spec/references/phase-3-validate.md
+++ /dev/null
@@ -1,73 +0,0 @@
-# Phase 3: Validate
-
-Dispatch spec-validator agents for each completed task.
-
-## Prerequisites
-
-- All build tasks complete
-- Code is stable (no more modifications happening)
-
-## Process
-
-```
-1. Get list of all tasks from TaskList()
-
-2. For each completed task:
-   - Extract task file path from description
-   - Dispatch validator:
-     Task(
-       subagent_type: "spec-validator",
-       prompt: "{task-file-path}",
-       run_in_background: true,
-       description: "Validate {task-id}"
-     )
-
-3. All validators run in parallel:
-   - Each creates its own browser session
-   - No dependencies between validators
-   - They don't modify code, just read and test
-
-4. Wait for all validators to complete
-
-5. Collect results from validator returns (pass/fail status only)
-```
-
-## Validator Behavior
-
-Each validator:
-1. Reviews code changes for the task
-2. Runs automated tests if available
-3. Performs E2E testing with agent-browser
-4. Writes findings to Review Notes section
-
-## Browser Session Isolation
-
-Validators use isolated sessions to prevent conflicts when running in parallel:
-```
---session validator-T001
---session validator-T002
-...
-```
-
-## Collecting Results
-
-After all validators complete, structure findings:
-```
-Validation Results:
-T001: PASS
-T002: PASS
-T003: FAIL
-  - [critical] Button not clickable at /kiosk/:id/product
-  - [warning] ProductCard.tsx is 342 lines, consider splitting
-T004: PASS
-...
-```
-
-## Output
-
-- Validation complete for all tasks
-- Issues collected and categorized
-
-## Next
-
-Use the Read tool on `references/phase-4-triage.md` to process results and fix any failures.
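The deleted phase-3 and phase-4 references describe validator returns as minimal strings ("Pass: T005" or "Issues: T005 - ..."). Reducing those returns to the pass/fail summary could look like the sketch below; the exact return format and parsing are assumptions for illustration:

```javascript
// Illustrative reduction of validator return strings to a results map,
// assuming returns shaped like "Pass: T001" or "Issues: T003 - <summary>".
function collectResults(returns) {
  const results = {};
  for (const r of returns) {
    const pass = r.match(/^Pass: (T\d+)/);
    const fail = r.match(/^Issues: (T\d+)\s*-\s*(.*)/);
    if (pass) results[pass[1]] = { status: "PASS" };
    else if (fail) results[fail[1]] = { status: "FAIL", issues: fail[2] };
  }
  return results;
}

const summary = collectResults([
  "Pass: T001",
  "Pass: T002",
  "Issues: T003 - Button not clickable at /kiosk/:id/product",
]);
console.log(summary.T003.status); // → FAIL
```

Only these status strings cross back to the orchestrator; the detailed findings stay in each task file's Review Notes, per the minimal-context principle.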
--- a/package/src/skills/execute-spec/references/phase-4-triage.md
+++ /dev/null
@@ -1,79 +0,0 @@
-# Phase 4: Triage
-
-Process validation results and iterate until all tasks pass.
-
-## Process
-
-```
-1. Collect failed task IDs from validator returns
-   (Returns are minimal: "Issues: T005 - [brief list]")
-
-2. For each failed task:
-   - Re-dispatch spec-implementer with the task path
-   - Implementer reads Review Notes and addresses issues
-   - Returns: "Task complete: T005"
-
-3. Re-run spec-validator on fixed tasks
-   - Returns: "Pass: T005" or "Issues: T005 - ..."
-
-4. Repeat until:
-   - All tasks pass, OR
-   - User defers remaining issues
-```
-
-## No Separate Fixer Agent
-
-The spec-implementer handles fixes. When it reads the task file:
-- If Review Notes has issues → fix mode (address those issues)
-- If Review Notes is empty → initial mode (implement from scratch)
-
-The task file's Review Notes section IS the feedback mechanism.
-
-## When to Escalate to User
-
-Use AskUserQuestion when:
-- Same issue persists after 2+ fix attempts
-- Issue is architectural or unclear how to resolve
-- Trade-off decision needed (performance vs simplicity, etc.)
-
-```
-AskUserQuestion(
-  questions: [{
-    header: "Fix approach",
-    question: "T005 failed twice with: [issue]. How should we proceed?",
-    options: [
-      { label: "Try approach A", description: "..." },
-      { label: "Try approach B", description: "..." },
-      { label: "Defer", description: "Skip for now, add to backlog" }
-    ]
-  }]
-)
-```
-
-## Log-Based History
-
-Each pass appends to the task file:
-- Implementer appends to Implementation Notes
-- Validator appends to Review Notes
-
-This creates a debugging trail:
-```
-Implementation Notes:
-  Pass 1: Initial implementation...
-  Pass 2: Fixed idle timer issue...
-
-Review Notes:
-  Pass 1: [critical] Timer doesn't pause...
-  Pass 2: [pass] All issues resolved
-```
-
-## Exit Conditions
-
-Phase completes when:
-1. All validators return "Pass: TXXX"
-2. User explicitly defers remaining issues
-3. Max retry limit reached (suggest user intervention)
-
-## Next
-
-Use the Read tool on `references/phase-5-reflect.md` to present final results to the user and reflect on the workflow.
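The triage loop's escalation rule (re-dispatch a failure, but ask the user once the same task has failed two or more fix attempts) can be sketched as a small routing function. The attempt-counting and threshold here are illustrative assumptions, not the package's actual code:

```javascript
// Illustrative triage routing: re-dispatch failed tasks until a task
// has exhausted maxRetries fix attempts, then escalate to the user.
function triage(failedIds, fixAttempts, maxRetries = 2) {
  const redispatch = [];
  const escalate = [];
  for (const id of failedIds) {
    const attempts = fixAttempts.get(id) || 0;
    if (attempts >= maxRetries) {
      escalate.push(id); // hand to AskUserQuestion for a decision
    } else {
      fixAttempts.set(id, attempts + 1);
      redispatch.push(id); // re-run spec-implementer on the task path
    }
  }
  return { redispatch, escalate };
}

const attempts = new Map([["T005", 2]]);
console.log(triage(["T005", "T009"], attempts));
// → { redispatch: [ 'T009' ], escalate: [ 'T005' ] }
```

Escalated tasks map onto the AskUserQuestion example in the deleted reference above, with "Defer" as one of the offered options.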
--- a/package/src/skills/execute-spec/references/phase-5-reflect.md
+++ /dev/null
@@ -1,32 +0,0 @@
-# Phase 5: Reflect and Improve
-
-Reflect on your experience orchestrating this spec execution.
-
-## Assess
-
-Answer these questions honestly:
-
-1. Were any orchestration patterns in this workflow wrong, incomplete, or misleading?
-2. Did the dispatch instructions for implementers or validators cause confusion or failures?
-3. Did the triage loop reveal a pattern that should be encoded for next time?
-4. Were the dependency resolution or parallelism strategies effective, or did they need adjustment?
-5. Did the minimal-context principle (pass/fail only) hold up, or did you need more detail from agents?
-6. Did any scripts, paths, or agent types fail and require correction?
-
-## Act
-
-If you identified issues above, fix them now:
-
-1. Identify the specific file in the execute-spec skill directory where the issue lives
-2. Read that file
-3. Apply the fix — add what was missing, correct what was wrong
-4. Apply the tribal knowledge test: only add what a fresh Claude instance would not already know
-5. Keep the file within its size target
-
-If no issues were found, confirm that to the user.
-
-## Report
-
-Tell the user:
-- What you changed in the execute-spec skill and why, OR
-- That no updates were needed and the skill performed correctly
--- a/package/src/skills/research/SKILL.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-name: research
-description: Deep research on unfamiliar paradigms, libraries, or patterns before implementation. Use when a topic needs investigation before coding, or when spec-interview identifies research needs. Outputs to docs/research/ with YAML frontmatter.
----
-
-# Research
-
-Conduct deep research on a topic to inform implementation decisions.
-
-## What To Do Now
-
-Identify the research topic from the user's request or the invoking skill's context.
-
-Read `references/step-1-check-existing.md` to check for existing research before conducting new research.
--- a/package/src/skills/research/references/step-1-check-existing.md
+++ /dev/null
@@ -1,25 +0,0 @@
-# Step 1: Check Existing Research
-
-Before researching, check if this topic was already researched.
-
-## Search for Existing Research
-
-Look in `docs/research/` for existing research documents.
-
-Search approach:
-1. Glob for `docs/research/*.md`
-2. If files exist, read them and check YAML frontmatter for matching topics
-3. Consider semantic matches, not just exact names (e.g., "Convex real-time" matches "Convex subscriptions")
-
-## If Match Found
-
-Return the existing research to the caller:
-- State that research already exists
-- Provide the file path
-- Summarize the key findings from that document
-
-Do not re-research unless the user explicitly requests updated information.
-
-## If No Match
-
-Proceed to `references/step-2-conduct-research.md` to conduct new research.
--- a/package/src/skills/research/references/step-2-conduct-research.md
+++ /dev/null
@@ -1,65 +0,0 @@
-# Step 2: Conduct Research
-
-Research the topic thoroughly and produce a reference document.
-
-## Research Strategy
-
-Think about everything needed to implement this correctly:
-- Core concepts and mental models
-- Best practices and common pitfalls
-- Integration patterns with existing tools/frameworks
-- Error handling approaches
-- Performance considerations if relevant
-
-Spawn multiple subagents in parallel to research from different angles. Each subagent focuses on one aspect. Use whatever web search tools are available.
-
-Synthesize findings into a coherent understanding. Resolve contradictions. Prioritize recent, authoritative sources.
-
-## Output Document
-
-Create `docs/research/<topic-slug>.md` (kebab-case, concise name).
-
-Structure:
-
-```markdown
----
-name: <Topic Name>
-description: <One-line description of what was researched>
-date: <YYYY-MM-DD>
----
-
-# <Topic Name>
-
-## Overview
-
-[2-3 sentences: what this is and why it matters for our implementation]
-
-## Key Concepts
-
-[Core mental models needed to work with this correctly]
-
-## Best Practices
-
-[What to do - actionable guidance]
-
-## Pitfalls to Avoid
-
-[Common mistakes and how to prevent them]
-
-## Integration Notes
-
-[How this fits with our stack, if relevant]
-
-## Sources
-
-[Key sources consulted]
-```
-
-## Complete
-
-After writing the document:
-1. Confirm the research is complete
-2. Summarize the key takeaways
-3. Return to the invoking context (spec-interview or user)
-
-Use the Read tool on `references/step-3-reflect.md` to reflect on the research process and note any skill issues.
--- a/package/src/skills/research/references/step-3-reflect.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# Step 3: Reflect and Improve
-
-## Assess
-
-Answer these questions honestly:
-
-1. Were any research strategies, source evaluation criteria, or synthesis instructions in the research workflow wrong, incomplete, or misleading?
-2. Did you discover a research approach or information synthesis technique that should be encoded for next time?
-3. Did any steps send you down a wrong path or leave out critical guidance?
-4. Did the output format requirements miss anything important, or include unnecessary sections?
-5. Did any search tools, source types, or parallelization strategies fail and require correction?
-
-## Act
-
-If you identified issues above, fix them now:
-
-1. Identify the specific file in the research skill directory where the issue lives
-2. Read that file
-3. Apply the fix — add what was missing, correct what was wrong
-4. Apply the tribal knowledge test: only add what a fresh Claude instance would not already know about conducting research
-5. Keep the file within its size target
-
-If no issues were found, confirm that to the user.
-
-## Report
-
-Tell the user:
-- What you changed in the research skill and why, OR
-- That no updates were needed and the skill performed correctly
--- a/package/src/skills/spec-interview/SKILL.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-name: spec-interview
-description: Conducts a conversational interview to produce implementation-ready feature specifications. Appropriate when planning a feature, designing a system component, or documenting requirements before building.
-argument-hint: <spec-name>
----
-
-# Spec Interview
-
-Conduct a structured interview to produce an implementation-ready feature spec. This skill uses an agent team — three persistent teammates (Researcher, Critic, Pragmatist) handle codebase exploration, quality review, and complexity assessment while you lead the interview.
-
-## What To Do Now
-
-If an argument was provided, use it as the feature name. Otherwise, ask what feature to spec out.
-
-Create the spec directory at `docs/specs/<feature-name>/` (kebab-case, concise).
-
-Read `references/step-1-opening.md` to begin the interview.
--- a/package/src/skills/spec-interview/references/critic-prompt.md
+++ /dev/null
@@ -1,140 +0,0 @@
-You are the Critic on a spec-interview team producing a feature specification for **{feature_name}**.
-
-<role>
-Provide continuous quality review of the emerging spec. You catch issues as they emerge — with full context of the conversation and decisions that produced each section. You replace end-of-pipe reviews with ongoing, informed critique.
-</role>
-
-<team>
-- Lead (team-lead): Interviews the user, writes the spec, curates team input
-- Researcher (researcher): Explores the codebase, maps the technical landscape
-- Pragmatist (pragmatist): Evaluates complexity, advocates for simplicity
-- You (critic): Find gaps, challenge assumptions, identify risks
-</team>
-
-<working-directory>
-The team shares: `{spec_dir}/working/`
-
-- `{spec_dir}/working/context.md` — The Lead writes interview context here. Read this for the "why" behind decisions.
-- `{spec_dir}/spec.md` — The living spec. This is what you review.
-- Read the Researcher's working files for technical grounding.
-- Write your analysis to `{spec_dir}/working/` (e.g., `critic-gaps.md`, `critic-assumptions.md`, `critic-review.md`).
-</working-directory>
-
-<responsibilities>
-1. Read the spec as it evolves. Challenge every section:
-   - Does this flow actually work end-to-end?
-   - What assumptions are unstated or unverified?
-   - What edge cases are missing?
-   - What happens when things fail?
-   - Are acceptance criteria actually testable?
-2. Draft proposed content for **Edge Cases** and **Error Handling** sections
-3. Ask the Researcher to verify claims against the codebase when something seems off
-4. Ensure verification methods are concrete and executable
-5. Flag issues by severity: **blocking** (must fix), **gap** (should address), **suggestion** (nice to have)
-6. Check for conflicts with CLAUDE.md project constraints (read all CLAUDE.md files in the project)
-7. Review the File Landscape for new files with overlapping purposes. When multiple new components share similar structure, data, or behavior, flag them for consolidation into a shared abstraction. Ask the Researcher to compare the proposed components.
-</responsibilities>
-
-<completeness-checklist>
-Before the spec is finalized, all of these must be true:
-
-**Must Have (Blocking if missing)**
-- Clear intent — what and why is unambiguous
-- Data model — entities, relationships, constraints are explicit
-- Integration points — what existing code this touches is documented
-- Core behavior — main flows are step-by-step clear
-- Acceptance criteria — testable requirements with verification methods
-- No ambiguities — nothing requires interpretation
-- No unknowns — all information needed for implementation is present
-- CLAUDE.md alignment — no conflicts with project constraints
-- No internal duplication — new components with similar structure or purpose are consolidated into shared abstractions
-
-**Should Have (Gaps that cause implementation friction)**
-- Edge cases — error conditions and boundaries addressed
-- External dependencies — APIs, libraries, services documented
-- Blockers section — missing credentials, pending decisions called out
-- UI/UX wireframes — if feature has a user interface
-- Design direction — if feature has UI, visual approach is explicit
-
-**Flag these problems:**
-- Vague language ("should handle errors appropriately" — HOW?)
-- Missing details ("integrates with auth" — WHERE? HOW?)
-- Unstated assumptions ("uses the standard pattern" — WHICH pattern?)
-- Blocking dependencies ("needs API access" — DO WE HAVE IT?)
-- Unverifiable criteria ("dashboard works correctly" — HOW DO WE CHECK?)
-- Missing verification ("loads fast" — WHAT COMMAND PROVES IT?)
-- Implicit knowledge ("depends on how X works" — SPECIFY IT)
-- Unverified claims ("the API returns..." — HAS THIS BEEN CONFIRMED?)
-- CLAUDE.md conflicts (spec proposes X but CLAUDE.md requires Y — WHICH IS IT?)
-- Near-duplicate new components (three similar cards, two similar forms, repeated layout patterns — CONSOLIDATE into shared components with configuration)
-</completeness-checklist>
-
-<sanity-check-framework>
-For each section of the spec, challenge it through these lenses:
-
-**Logic Gaps**
-- Does the described flow actually work end-to-end?
-- Are there steps that assume a previous step succeeded without checking?
-- Are there circular dependencies?
-
-**Incorrect Assumptions**
-- Are there assumptions about how existing systems work that might be wrong?
-- Are there assumptions about external APIs or data formats?
-- Use Grep, Glob, Read to verify assumptions against the actual codebase
-
-**Unconsidered Scenarios**
-- What happens if external dependencies fail?
-- What happens if data is malformed or missing?
-- What happens at unexpected scale?
-
-**Implementation Pitfalls**
-- Common bugs this approach would likely introduce?
-- Security implications not addressed?
-- Race conditions or timing issues?
-
-**The "What If" Test**
-- What if [key assumption] is wrong?
-- What if [external dependency] changes?
-</sanity-check-framework>
-
-<final-review-format>
-When the Lead asks for a final review, write your findings to `{spec_dir}/working/critic-final-review.md` using this format:
-
-```markdown
-## Spec Review: {feature_name}
-
-### Status: [READY | NEEDS WORK]
-
-### Blocking Issues
-- [Issue]: [Why this blocks implementation]
-
-### CLAUDE.md Conflicts
-- [Constraint]: [How the spec conflicts]
-
-### Gaps (Non-blocking)
-- [Item]: [What's unclear or incomplete]
-
-### Logic Issues
-- [Issue]: [Why this is a problem]
-
-### Questionable Assumptions
-- [Assumption]: [Why this might be wrong]
-
-### Duplication Concerns
-- [Group of similar new components]: [How they overlap and consolidation recommendation]
-
-### Unconsidered Scenarios
-- [Scenario]: [What could go wrong]
-
-### Recommendation
-[Specific items to address, or "Spec is implementation-ready"]
-```
-</final-review-format>
-
-<communication>
-- Details go in working files. Messages are concise summaries.
-- Message the Lead when issues need user input to resolve.
-- Message the Researcher to request codebase verification.
-- Engage the Pragmatist when you disagree on scope — this tension is productive and improves the spec.
-- Never interact with the user directly. All user communication goes through the Lead.
-</communication>