cc-dev-template 0.1.42 → 0.1.44

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/bin/install.js CHANGED
@@ -233,8 +233,7 @@ if (fs.existsSync(mergeSettingsPath)) {
   { file: 'read-guard-hook.json', name: 'Context guard for large reads' },
   { file: 'statusline-config.json', name: 'Custom status line' },
   { file: 'bash-overflow-hook.json', name: 'Bash overflow guard hook' },
-  { file: 'bash-precheck-hook.json', name: 'Bash precheck hook' },
-  { file: 'plan-agent-hook.json', name: 'Plan agent context injection hook' }
+  { file: 'bash-precheck-hook.json', name: 'Bash precheck hook' }
 ];
 
 configs.forEach(({ file, name }) => {
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "cc-dev-template",
-  "version": "0.1.42",
+  "version": "0.1.44",
   "description": "Structured AI-assisted development framework for Claude Code",
   "bin": {
     "cc-dev-template": "./bin/install.js"
@@ -21,14 +21,9 @@ This requires full conversation context. Handle it yourself rather than delegati
 
 **1. Run the code simplifier**
 
-Run the code-simplifier agent on your staged changes before committing. This refines code for clarity and consistency.
+Stage your changes, then use the code simplifier agent to refine them for clarity and consistency. Run tests afterward to verify nothing broke.
 
-1. Stage your changes with `git add`
-2. Launch the code-simplifier agent (look for it in available subagent types) targeting the staged files
-3. Run the build and tests to verify nothing broke
-4. Fix any issues before proceeding to commit
-
-Note: The code-simplifier is a plugin. If no code-simplifier agent is available, proceed to step 2.
+Skip this step if no code simplifier agent is available.
 
 **2. Commit your work**
 
@@ -9,8 +9,7 @@
 - [Step 4: Write the Skill](#step-4-write-the-skill)
 - [Step 5: Choose Install Location](#step-5-choose-install-location)
 - [Step 6: Validate](#step-6-validate-feedback-loop)
-- [Step 7: Create Slash Command](#step-7-create-slash-command-optional)
-- [Step 8: Test](#step-8-test)
+- [Step 7: Test](#step-7-test)
 - [Key Principles](#key-principles)
 - [Quality Check](#quality-check)
 
@@ -27,7 +26,6 @@ Copy and track progress:
 - [ ] Wrote skill as instructions (not docs)
 - [ ] Chose install location
 - [ ] Validated (loop until clean)
-- [ ] Offered slash command (optional)
 - [ ] Tested activation
 ```
 
@@ -130,42 +128,11 @@ Then manually verify:
 - Would Claude know what to DO after reading this?
 - Is heavy content in references/, not inline?
 
-## Step 7: Create Slash Command (Optional)
-
-Ask the user: "Would you like a slash command to explicitly invoke this skill?"
-
-**When to recommend a command:**
-- Skills that users will invoke frequently
-- Skills where explicit invocation is clearer than description matching
-- Skills that are the entry point to complex workflows
-
-**If yes, create a command file:**
-
-```markdown
----
-description: [Same as or similar to skill description]
----
-
-Invoke the [skill-name] skill immediately:
-
-```
-Skill(skill: "[skill-name]")
-```
-
-The skill contains all workflow instructions. Do not proceed until you have invoked the skill.
-```
-
-**Location:**
-- User level: `~/.claude/commands/[command-name].md`
-- Project level: `.claude/commands/[command-name].md`
-
-**Naming:** Command name is typically the skill name without gerund (e.g., skill `creating-reports` → command `/create-report`).
-
-## Step 8: Test
+## Step 7: Test
 
 Guide the user:
 1. Restart Claude Code
-2. Say one of the trigger phrases (or use the slash command if created)
+2. Say one of the trigger phrases or use `/skill-name` to invoke directly
 3. Verify the skill activates and behaves as expected
 
 If it doesn't work, iterate. The conversation isn't over until the skill works.
@@ -250,15 +250,10 @@ Claude reads skill descriptions and decides when to activate based on user reque
 - Fails if description is too detailed
 
 **2. Explicit (Slash Command)**
-A slash command forces skill activation:
-```markdown
-# /create-skill command
-Activate the creating-agent-skills skill to guide the user through skill creation.
-```
-Most reliable method.
+Every skill automatically works as a slash command using `/skill-name`. This is the most reliable activation method.
 
 **3. Hybrid (Recommended)**
-Slash command for explicit invocation + good description for autonomous discovery.
+Use `/skill-name` for explicit invocation when needed, while a good description enables autonomous discovery for natural conversations.
 
 ## Iterative Development
 
@@ -0,0 +1,37 @@
+---
+name: spec-interview
+description: This skill helps create thorough feature specifications through conversation. Use when the user says "spec out a feature", "create a specification", "design a feature", "I need to plan a feature", or wants to document requirements before building.
+argument-hint: <spec-name>
+---
+
+# Spec Interview
+
+Guide the user through creating a complete feature specification via structured conversation.
+
+## What To Do Now
+
+1. **Ask what feature they want to spec out** using AskUserQuestion
+2. **Create the spec directory** at `docs/specs/<feature-name>/`
+3. **Begin the interview** - read `references/interview-guide.md`
+
+## Key Principles
+
+**Interviewer, not form.** Have a natural conversation. Ask follow-up questions. Dig into details that seem unclear.
+
+**Subagents for research.** Offload exploration and research to subagents to keep the main interview context clean. They return only relevant findings.
+
+**One or two questions at a time.** Use AskUserQuestion liberally. This is a conversation, not an interrogation.
+
+**Implementation-ready output.** The finished spec should enable hands-off implementation with zero clarification needed.
+
+## Spec Location
+
+Specs live at: `docs/specs/<feature-name>/spec.md`
+
+The feature name is derived from the user's description (kebab-case, concise).
+
+## When You Think the Spec is Complete
+
+Before finalizing, invoke the `spec-review` skill to check for gaps. It will return specific feedback. If gaps exist, ask follow-up questions to address them.
+
+Only finalize when review passes.
@@ -0,0 +1,123 @@
+# Interview Guide
+
+## The Interview Flow
+
+This is a conversation, not a checklist march. But ensure all areas get covered naturally.
+
+### Opening
+
+Start with open-ended understanding:
+- "Tell me about this feature. What problem does it solve?"
+- "Who will use this? What's their goal?"
+- "Walk me through what success looks like."
+
+Let the user talk. Ask follow-ups on anything unclear.
+
+### Areas to Cover
+
+Cover these naturally through conversation. Don't force the order.
+
+**Intent & Goals**
+- What is the user trying to accomplish?
+- Why does this matter? What's the business/user value?
+- What does success look like? How will we know it's working?
+
+**Integration Points**
+- What existing parts of the system does this touch?
+- Are there external services, APIs, or libraries involved?
+- What data flows in and out?
+
+→ *Use subagents to investigate the current codebase when integration questions arise.*
+
+**Data Model**
+- What entities/objects are involved?
+- What are the relationships between them?
+- What constraints exist (required fields, validations, limits)?
+- Are we extending existing models or creating new ones?
+
+**Behavior & Flows**
+- What are the main user flows?
+- What triggers this feature? What happens step by step?
+- Are there different modes or variations?
+
+**Edge Cases & Error Handling**
+- What can go wrong?
+- What happens with invalid input?
+- What are the boundary conditions?
+- How do we handle partial failures?
+
+**Acceptance Criteria**
+- How do we verify this works?
+- What are the specific, testable requirements?
+- What would make this "done"?
+
+**Blockers & Dependencies**
+- What external dependencies exist? (APIs, services, libraries)
+- Are there credentials, API keys, or access needed?
+- Are there decisions that need to be made before implementation?
+- Is anything waiting on external input?
+
+### Research During Interview
+
+When you need to understand something about the current system, external libraries, or technical feasibility:
+
+- Use **Explore agents** to investigate the codebase or research external APIs/libraries
+- Use **Plan agents** to evaluate technical approaches or architectural decisions
+
+Keep the main interview context clean. Agents return only the relevant findings you need to continue the conversation.
+
+### Writing the Spec
+
+As you gather information, write to `docs/specs/<name>/spec.md`. Update it incrementally during the conversation.
+
+**Spec structure:**
+
+```markdown
+# [Feature Name]
+
+## Overview
+[2-3 sentences: what this is and why it matters]
+
+## Goals
+- [Primary goal]
+- [Secondary goals if any]
+
+## Integration Points
+[How this connects to existing system]
+- Touches: [existing components]
+- External: [APIs, services, libraries]
+- Data flows: [in/out]
+
+## Data Model
+[Entities, relationships, constraints]
+
+## Behavior
+### [Flow 1]
+[Step by step]
+
+### [Flow 2]
+[Step by step]
+
+## Edge Cases
+- [Edge case]: [how handled]
+
+## Acceptance Criteria
+- [ ] [Testable requirement]
+- [ ] [Testable requirement]
+
+## Blockers
+- [ ] [Blocker]: [what's needed]
+- [ ] [Decision needed]: [options]
+```
+
+### Completion Check
+
+When the spec seems complete, invoke the `spec-review` skill and specify which spec to review in the prompt. It analyzes the spec and returns feedback. If gaps are found, ask follow-up questions. Repeat until review passes.
+
+### Finalizing
+
+Once review passes:
+1. Confirm with user and show the final spec
+2. Ask if they want to proceed to task breakdown
+
+If yes, invoke the `spec-to-tasks` skill and specify which spec to break down.
@@ -0,0 +1,70 @@
+---
+name: spec-review
+description: Reviews a feature specification for completeness. Called by spec-interview when a spec seems complete, or invoke directly with "review the spec", "check spec completeness", "is this spec ready".
+argument-hint: <spec-name>
+context: fork
+---
+
+# Spec Review
+
+Analyze a specification for completeness and identify gaps.
+
+## What To Do
+
+1. **Identify the spec to review** - use the spec path if specified in the prompt, otherwise check git for the most recently modified spec in `docs/specs/`
+2. **Read the spec file**
+3. **Evaluate against the checklist below**
+4. **Return structured feedback**
+
+## Completeness Checklist
+
+A spec is implementation-ready when ALL of these are satisfied:
+
+### Must Have (Blocking if missing)
+
+- [ ] **Clear intent** - What is being built and why is unambiguous
+- [ ] **Data model defined** - Entities, relationships, and constraints are explicit
+- [ ] **Integration points mapped** - What existing code this touches is documented
+- [ ] **Core behavior specified** - Main flows are step-by-step clear
+- [ ] **Acceptance criteria exist** - Testable requirements are listed
+
+### Should Have (Gaps that cause implementation friction)
+
+- [ ] **Edge cases covered** - Error conditions and boundaries are addressed
+- [ ] **External dependencies documented** - APIs, libraries, services are listed
+- [ ] **Blockers section exists** - Missing credentials, pending decisions are called out
+
+### Implementation Readiness
+
+Ask yourself: "Could someone implement this feature completely hands-off, with zero questions?"
+
+Check for:
+- Vague language ("should handle errors appropriately" → HOW?)
+- Missing details ("integrates with auth" → WHERE? HOW?)
+- Unstated assumptions ("uses the standard pattern" → WHICH pattern?)
+- Blocking dependencies ("needs API access" → DO WE HAVE IT?)
+
+## Output Format
+
+Return the review as:
+
+```
+## Spec Review: [Feature Name]
+
+### Status: [READY | NEEDS WORK]
+
+### Missing (Blocking)
+- [Item]: [What's missing and why it blocks implementation]
+
+### Gaps (Non-blocking but should address)
+- [Item]: [What's unclear or incomplete]
+
+### Blocking Dependencies
+- [Dependency]: [What's needed before implementation can start]
+
+### Recommendation
+[Specific questions to ask the user, or "Spec is implementation-ready"]
+```
+
+If status is READY, the spec can proceed to task breakdown.
+If status is NEEDS WORK, list the specific questions that need answers.
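The "Must Have" portion of this checklist could in principle be automated by scanning the spec for required section headings. A minimal sketch (the `missingSections` helper and the section list are illustrative, drawn from the spec structure shown earlier in this diff, not from the package):

```javascript
// Hypothetical helper: report which required spec sections are absent.
// Section names mirror the spec template in interview-guide.md.
const REQUIRED_SECTIONS = ['Overview', 'Goals', 'Integration Points',
  'Data Model', 'Behavior', 'Acceptance Criteria'];

function missingSections(specMarkdown) {
  // Collect all level-2 headings, one per line.
  const headings = new Set(
    [...specMarkdown.matchAll(/^##\s+(.+)$/gm)].map(m => m[1].trim())
  );
  return REQUIRED_SECTIONS.filter(s => !headings.has(s));
}

const spec = `# Demo Feature
## Overview
## Goals
## Data Model
## Acceptance Criteria
`;
console.log(missingSections(spec)); // the sections still to write
```

A check like this catches structural gaps only; judging vague language and unstated assumptions still needs the review skill itself.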
@@ -0,0 +1,94 @@
+---
+name: spec-to-tasks
+description: Breaks a feature specification into implementation tasks. Use when the user says "break down the spec", "create tasks from spec", "generate task list", or after a spec review passes.
+argument-hint: <spec-name>
+context: fork
+---
+
+# Spec to Tasks
+
+Convert a completed specification into an ordered list of atomic implementation tasks.
+
+## What To Do
+
+1. **Identify the spec** - use the spec path if specified in the prompt, otherwise check git for the most recently modified spec in `docs/specs/`
+2. **Explore the codebase** - use a subagent to identify existing patterns, conventions, and where code should live
+3. **Generate the task manifest** - create `tasks.yaml` in the same directory as the spec
+4. **Review with user** - show the task list for approval
+
+## Task Principles
+
+**Atomic tasks.** Each task is a single, focused change:
+- One model/entity
+- One endpoint/route
+- One component
+- One test file
+
+A task should take an AI agent one focused session to complete.
+
+**Ordered by dependency.** Tasks are sequenced so each can be completed in order without blocking.
+
+**Include file paths.** Each task specifies the target file(s) based on codebase exploration.
+
+**No code in tasks.** Tasks describe WHAT to do, not HOW. Implementation details are in the spec.
+
+## Task Format
+
+Write to `docs/specs/<name>/tasks.yaml`:
+
+```yaml
+spec: <name>
+spec_path: docs/specs/<name>/spec.md
+generated: <ISO timestamp>
+
+tasks:
+  - id: T001
+    title: <Short descriptive title>
+    description: |
+      <What to implement>
+      <Reference to spec section if applicable>
+    files:
+      - <path/to/file.ts>
+    depends_on: []
+    acceptance: |
+      <How to verify this task is complete>
+
+  - id: T002
+    title: <Next task>
+    description: |
+      <What to implement>
+    files:
+      - <path/to/file.ts>
+    depends_on: [T001]
+    acceptance: |
+      <Verification criteria>
+```
+
+## Generating File Paths
+
+Before writing tasks, explore the codebase to understand:
+- Existing file patterns (where do models live? services? routes?)
+- Specific files that need modification
+- New files that need creation and where they should go
+
+Use these findings to populate the `files` field for each task.
+
+## Ordering Tasks
+
+Sequence by dependency:
+1. **Data layer first** - Models, schemas, database changes
+2. **Business logic** - Services, utilities, core functions
+3. **API layer** - Routes, controllers, endpoints
+4. **Integration** - Connecting components, wiring up
+5. **Tests** - Can be parallel with implementation or after
+
+If tasks are independent, give them the same `depends_on`. This signals they could run in parallel.
+
+## Output
+
+After generating:
+1. Show the user a summary: number of tasks, estimated scope
+2. List the task titles in order
+3. Ask if they want to see the full YAML or proceed to implementation
+
+Save the manifest to `docs/specs/<name>/tasks.yaml`.
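The "ordered by dependency" rule above is mechanically checkable: every `depends_on` entry must name a task that appears earlier in the manifest. A small sketch of such a validator (the function and the sample tasks are illustrative, not part of the package):

```javascript
// Hypothetical validator: confirms tasks are listed in dependency order,
// i.e. each depends_on id refers to an earlier task.
function validateTaskOrder(tasks) {
  const seen = new Set();
  for (const task of tasks) {
    for (const dep of task.depends_on || []) {
      if (!seen.has(dep)) {
        return { ok: false, task: task.id, missing: dep };
      }
    }
    seen.add(task.id);
  }
  return { ok: true };
}

// Sample manifest entries, shaped like the tasks.yaml format above.
const tasks = [
  { id: 'T001', title: 'Create model', depends_on: [] },
  { id: 'T002', title: 'Add service', depends_on: ['T001'] },
  { id: 'T003', title: 'Add route', depends_on: ['T002'] }
];

console.log(validateTaskOrder(tasks)); // well-ordered manifest
```

Running the same check on a manifest where a task precedes its dependency would return `ok: false` with the offending ids, which is enough to reject a generated `tasks.yaml` before implementation starts.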
@@ -1,154 +0,0 @@
-#!/usr/bin/env node
-/**
- * plan-agent-context.js - PreToolUse hook for Task sub-agents
- *
- * Intercepts ALL Task calls and:
- * 1. Forces model: "opus" on every Task call
- * 2. For Plan agents (subagent_type: "Plan"), also injects ADRs and planning guidance
- */
-
-const fs = require('fs');
-const path = require('path');
-
-// js-yaml is loaded lazily in collectADRs() to handle missing dependency gracefully
-
-// Read hook input from stdin
-let input = '';
-process.stdin.setEncoding('utf8');
-process.stdin.on('data', chunk => input += chunk);
-process.stdin.on('end', () => {
-  try {
-    const hookInput = JSON.parse(input);
-    const result = processHook(hookInput);
-    console.log(JSON.stringify(result));
-  } catch (err) {
-    // On error, allow the tool call to proceed unchanged
-    console.log(JSON.stringify({
-      hookSpecificOutput: {
-        hookEventName: "PreToolUse",
-        permissionDecision: "allow"
-      }
-    }));
-  }
-});
-
-function processHook(hookInput) {
-  const toolInput = hookInput.tool_input || {};
-
-  // Base override: force opus model on ALL Task calls
-  const updatedInput = {
-    ...toolInput,
-    model: "opus"
-  };
-
-  // For Plan agent calls, also inject ADRs and planning guidance
-  if (toolInput.subagent_type === 'Plan') {
-    const adrs = collectADRs();
-    const planningGuidance = buildPlanningGuidance(adrs);
-    const originalPrompt = toolInput.prompt || '';
-    updatedInput.prompt = `${planningGuidance}\n\n---\n\nORIGINAL TASK:\n${originalPrompt}`;
-  }
-
-  return {
-    hookSpecificOutput: {
-      hookEventName: "PreToolUse",
-      permissionDecision: "allow",
-      updatedInput
-    }
-  };
-}
-
-function collectADRs() {
-  const adrDir = path.join(process.cwd(), '.claude', 'adrs');
-  const adrs = [];
-
-  // Try to load js-yaml - if unavailable, skip ADR collection entirely
-  let yaml;
-  try {
-    yaml = require('js-yaml');
-  } catch (err) {
-    // js-yaml not available - return empty array (ADR injection is best-effort)
-    return adrs;
-  }
-
-  if (!fs.existsSync(adrDir)) {
-    return adrs;
-  }
-
-  const files = fs.readdirSync(adrDir).filter(f => f.endsWith('.yaml'));
-
-  for (const file of files) {
-    try {
-      const content = fs.readFileSync(path.join(adrDir, file), 'utf8');
-      const adr = yaml.load(content);
-      if (adr && adr.status !== 'Superseded') {
-        adrs.push({
-          id: adr.id,
-          title: adr.title,
-          description: adr.description,
-          constraints: adr.constraints,
-          decision: adr.decision
-        });
-      }
-    } catch (err) {
-      // Skip malformed ADRs
-    }
-  }
-
-  return adrs;
-}
-
-function buildPlanningGuidance(adrs) {
-  let guidance = `<plan-agent-context>
-## Planning Guidelines
-
-You are creating an implementation plan. Follow these principles:
-
-### Research Before Planning
-- If the plan involves third-party libraries, APIs, or features not yet in the codebase, use WebSearch or WebFetch to get current documentation
-- Do not assume library APIs - verify them with up-to-date sources
-- Check compatibility with the project's infrastructure and existing patterns
-
-### Remove Ambiguity
-- Each step in your plan should be concrete and actionable
-- If requirements are unclear, surface the ambiguity rather than making assumptions
-- Identify decision points that need user input before execution
-
-### ADR Compliance
-The plan must respect these architectural decisions:
-
-`;
-
-  if (adrs.length === 0) {
-    guidance += `(No ADRs found in this project)\n`;
-  } else {
-    for (const adr of adrs) {
-      guidance += `### ${adr.id}: ${adr.title}\n`;
-      if (adr.description) {
-        guidance += `${adr.description.trim()}\n`;
-      }
-      if (adr.constraints) {
-        if (adr.constraints.must && adr.constraints.must.length > 0) {
-          guidance += `**MUST:**\n`;
-          for (const c of adr.constraints.must) {
-            guidance += `- ${c}\n`;
-          }
-        }
-        if (adr.constraints.must_not && adr.constraints.must_not.length > 0) {
-          guidance += `**MUST NOT:**\n`;
-          for (const c of adr.constraints.must_not) {
-            guidance += `- ${c}\n`;
-          }
-        }
-      }
-      if (adr.decision) {
-        guidance += `**Decision:** ${adr.decision.trim()}\n`;
-      }
-      guidance += `\n`;
-    }
-  }
-
-  guidance += `</plan-agent-context>`;
-
-  return guidance;
-}
@@ -1,15 +0,0 @@
-{
-  "hooks": {
-    "PreToolUse": [
-      {
-        "matcher": "Task",
-        "hooks": [
-          {
-            "type": "command",
-            "command": "node $HOME/.claude/hooks/plan-agent-context.js"
-          }
-        ]
-      }
-    ]
-  }
-}