cc-discipline 2.7.0 → 2.8.1

This diff reflects the publicly released contents of the two package versions as published to a supported registry, and is provided for informational purposes only.
package/global/CLAUDE.md CHANGED
@@ -33,6 +33,7 @@ Don't skip the first three steps and jump straight to the fourth.
  - When unsure of human's intent, confirm before acting.
  - Provide options for human to decide, rather than making decisions for them.
  - When human changes direction, follow immediately. Do not nag about the previous goal, do not ask "should we finish X first?", do not repeatedly remind about unfinished work. The human is the leader — they decide priorities. Your job is to execute the current direction with full commitment, not to manage the human's task list.
+ - Do not comment on the time, suggest rest, say "good night", or advise the human to "continue tomorrow". The human's schedule is not your concern. Work when asked, stop when told.
 
  ---
 
package/init.sh CHANGED
@@ -363,12 +363,14 @@ cp -r "$SCRIPT_DIR/templates/.claude/skills/evaluate" .claude/skills/
  cp -r "$SCRIPT_DIR/templates/.claude/skills/think" .claude/skills/
  cp -r "$SCRIPT_DIR/templates/.claude/skills/retro" .claude/skills/
  cp -r "$SCRIPT_DIR/templates/.claude/skills/summary" .claude/skills/
+ cp -r "$SCRIPT_DIR/templates/.claude/skills/investigate" .claude/skills/
  echo " ✓ /commit — smart commit (test → update memory → commit)"
  echo " ✓ /self-check — periodic discipline check (use with /loop 10m /self-check)"
  echo " ✓ /evaluate — evaluate external review/advice against codebase context"
  echo " ✓ /think — stop and think before coding (ask → propose → wait)"
  echo " ✓ /retro — post-task retrospective (project + framework feedback)"
  echo " ✓ /summary — write high-quality compact option before /compact"
+ echo " ✓ /investigate — multi-agent cross-investigation and proposal review"
 
  # ─── Handle CLAUDE.md ───
  if [ ! -f "CLAUDE.md" ]; then
@@ -480,6 +482,7 @@ if [ "$INSTALL_MODE" = "fresh" ]; then
  echo -e " ${GREEN}.claude/skills/think/${NC} ← /think stop and think before coding"
  echo -e " ${GREEN}.claude/skills/retro/${NC} ← /retro post-task retrospective"
  echo -e " ${GREEN}.claude/skills/summary/${NC} ← /summary before compacting"
+ echo -e " ${GREEN}.claude/skills/investigate/${NC} ← /investigate multi-agent cross-investigation"
  echo -e " ${GREEN}.claude/settings.json${NC} ← Hooks configuration"
  echo -e " ${GREEN}docs/progress.md${NC} ← Progress log (maintained by Claude)"
  echo -e " ${GREEN}docs/debug-log.md${NC} ← Debug log (maintained by Claude)"
@@ -501,6 +504,7 @@ else
  echo -e " ${GREEN}.claude/skills/think/${NC} ← /think stop and think before coding"
  echo -e " ${GREEN}.claude/skills/retro/${NC} ← /retro post-task retrospective"
  echo -e " ${GREEN}.claude/skills/summary/${NC} ← /summary before compacting"
+ echo -e " ${GREEN}.claude/skills/investigate/${NC} ← /investigate multi-agent cross-investigation"
  if [ ! -f "$BACKUP_DIR/settings.json" ] || [ -f ".claude/.cc-discipline-settings-template.json" ]; then
  echo -e " ${YELLOW}.claude/settings.json${NC} ← See notes above"
  else
package/lib/doctor.sh CHANGED
@@ -95,7 +95,7 @@ done
  # 6. Skills
  echo ""
  echo "Skills:"
- for skill in commit self-check evaluate think retro summary; do
+ for skill in commit self-check evaluate think retro summary investigate; do
  if [ -d ".claude/skills/${skill}" ]; then
  ok "/${skill}"
  else
package/lib/status.sh CHANGED
@@ -72,8 +72,9 @@ SKILLS=""
  [ -d ".claude/skills/think" ] && SKILLS="${SKILLS}/think "
  [ -d ".claude/skills/retro" ] && SKILLS="${SKILLS}/retro "
  [ -d ".claude/skills/summary" ] && SKILLS="${SKILLS}/summary "
+ [ -d ".claude/skills/investigate" ] && SKILLS="${SKILLS}/investigate "
  SKILL_COUNT=$(echo "$SKILLS" | wc -w | tr -d ' ')
- echo -e "${GREEN}${SKILL_COUNT}/6${NC} (${SKILLS% })"
+ echo -e "${GREEN}${SKILL_COUNT}/7${NC} (${SKILLS% })"
 
  # Settings
  echo -n "Settings: "
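The `6` → `7` change in status.sh updates the displayed skill total to match the new list. The counting pattern it relies on can be sketched standalone as follows; this is a minimal illustration, and unlike the real script it skips the `[ -d ... ]` directory checks that gate each append:

```shell
# Standalone sketch of the status.sh counting pattern: build a
# space-separated list of skill names, then count the words with wc -w.
# (The real script appends each name only if its skill directory exists.)
SKILLS=""
for skill in commit self-check evaluate think retro summary investigate; do
  SKILLS="${SKILLS}/${skill} "
done
SKILL_COUNT=$(echo "$SKILLS" | wc -w | tr -d ' ')
echo "${SKILL_COUNT}/7 (${SKILLS% })"   # prints "7/7 (...)" with all seven names
```

The `tr -d ' '` is what keeps the output clean on BSD/macOS, where `wc` left-pads its count with spaces; `${SKILLS% }` strips the one trailing space left by the loop.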
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "cc-discipline",
- "version": "2.7.0",
+ "version": "2.8.1",
  "description": "Discipline framework for Claude Code — rules, hooks, and agents that keep AI on track",
  "bin": {
  "cc-discipline": "bin/cli.sh"
@@ -0,0 +1,192 @@
+ ---
+ name: investigate
+ description: Multi-agent cross-investigation. Two modes — research (explore from scratch) and review (challenge existing proposal). Spawns parallel agents per dimension, synthesizes with dialectical cross-check.
+ ---
+
+ ## Mode detection
+
+ Determine which mode based on user input:
+
+ - **Research mode** — User gives a topic/question with no existing proposal. Goal: build comprehensive understanding before forming an opinion.
+ - **Review mode** — User gives an existing document, proposal, or design. Goal: stress-test it from multiple angles, find blind spots and weaknesses.
+ - **Simulate mode** — User gives a plan and wants to "dry run" it. Goal: walk through execution step by step, let hidden problems surface naturally.
+
+ State which mode you're using and why.
+
+ ---
+
+ # Research Mode
+
+ You are about to research a topic or design a solution. **Do NOT go deep on one angle.** Your job is to see the full picture before converging.
+
+ ## Step 1: Decompose into dimensions
+
+ Before researching anything, identify 3-5 independent dimensions of the problem. Ask yourself:
+ - What are the different angles this could be viewed from?
+ - What are the stakeholders / affected systems / competing concerns?
+ - What would a devil's advocate focus on?
+
+ Output the dimensions as a numbered list. Each dimension should be genuinely different, not sub-points of the same thing.
+
+ Example for "should we migrate from REST to GraphQL?":
+ 1. **Performance & scalability** — latency, payload size, caching implications
+ 2. **Developer experience** — learning curve, tooling, debugging
+ 3. **Existing ecosystem** — what breaks, migration cost, backward compatibility
+ 4. **Security** — query complexity attacks, authorization model changes
+ 5. **Business** — timeline pressure, team skills, client requirements
+
+ ## Step 2: Parallel investigation
+
+ Spawn one subagent per dimension. Each agent:
+ - Investigates ONLY its assigned dimension
+ - Reads relevant code/docs for that angle
+ - Lists findings with evidence (file paths, code references, data)
+ - Flags risks and unknowns specific to that dimension
+ - Does NOT try to propose a final solution — just reports findings
+
+ Launch agents in parallel, not sequentially.
+
+ ## Step 3: Synthesize
+
+ After all agents return, synthesize in the main conversation:
+
+ ### Cross-check matrix
+ For each dimension pair, ask: do the findings conflict?
+
+ ```
+ | Dim 1 | Dim 2 | Dim 3 | Dim 4 |
+ Dim 1 | — | conflict? | aligned? | ? |
+ Dim 2 | | — | ? | ? |
+ ...
+ ```
+
+ ### Blind spots
+ - What did NO agent cover? What's missing from all reports?
+ - What assumptions are shared across all dimensions (and might be wrong)?
+ - What would someone who disagrees with ALL agents say?
+
+ ### Integrated findings
+ Combine into a unified picture. Flag where dimensions support each other and where they pull in different directions.
+
+ ## Step 4: Present
+
+ Output the integrated findings to the user. For each key finding:
+ - Which dimensions support it
+ - Which dimensions challenge it
+ - Confidence level (strong / moderate / weak)
+ - What would change your mind
+
+ **Do NOT present a single recommendation without showing the tensions.** The user needs to see the trade-offs, not just your favorite answer.
+
+ ---
+
+ # Review Mode
+
+ You have an existing document, proposal, or design to evaluate. **Do NOT just validate it.** Your job is to find what's wrong, what's missing, and what would break.
+
+ ## Step 1: Read and summarize
+
+ Read the document completely. Summarize its core claims and assumptions in 3-5 bullet points. Confirm with the user: "Is this what this document is proposing?"
+
+ ## Step 2: Decompose into challenge dimensions
+
+ Identify 3-5 angles to challenge the proposal from:
+ - **Feasibility** — Can this actually be built/done as described? What's underestimated?
+ - **Alternatives** — What approaches did the proposal NOT consider? Why might they be better?
+ - **Failure modes** — How could this fail? What happens when assumptions are wrong?
+ - **Scalability / long-term** — Does this hold up at 10x scale or in 2 years?
+ - **Domain-specific** — Does this violate any known constraints of the specific domain?
+
+ Adapt dimensions to the document's domain. Not all apply to every proposal.
+
+ ## Step 3: Parallel challenge agents
+
+ Spawn one agent per challenge dimension. Each agent:
+ - Takes the proposal's claims at face value, then tries to **break** them
+ - Reads relevant code/docs to verify the proposal's assumptions against reality
+ - Produces: what's solid, what's questionable, what's wrong, what's missing
+ - Includes evidence (code references, counterexamples, data)
+
+ ## Step 4: Synthesize review
+
+ ### Verdict per claim
+ For each core claim from Step 1:
+ - **Holds** — evidence supports it
+ - **Questionable** — partially true but has gaps
+ - **Wrong** — contradicted by evidence
+ - **Unverifiable** — no way to confirm from available information
+
+ ### Blind spots
+ What did the document completely fail to consider?
+
+ ### Strongest objection
+ If you had to argue AGAINST this proposal in one paragraph, what would you say?
+
+ ### Constructive output
+ Don't just tear it apart. For each issue found, suggest what would fix it.
+
+ ---
+
+ # Simulate Mode
+
+ You have a plan or proposal. Instead of analyzing it on paper, **walk through it as if you're actually executing it**, step by step. Let problems surface naturally.
+
+ ## Step 1: Extract execution steps
+
+ Read the plan and break it into concrete sequential steps. For each step, identify:
+ - What it requires (inputs, resources, preconditions)
+ - What it produces (outputs, state changes)
+ - What it assumes
+
+ Present the steps and confirm with the user: "Is this the execution sequence?"
+
+ ## Step 2: Assign simulation agents
+
+ Spawn one agent per phase or critical step. Each agent:
+ - **Actually attempts to execute** (or traces through execution) of their assigned step
+ - Works with real files, real code, real environment where possible
+ - If can't actually execute (e.g., deployment plan), does a detailed walkthrough: "At this point I would need X, but looking at the current state, X is not available because..."
+ - Reports for each step:
+ - **Went as planned** — step worked / would work as described
+ - **Missing precondition** — "Step 3 assumes X exists, but step 2 doesn't create it"
+ - **Harder than expected** — "This was described as 'configure Y' but actually requires Z, which takes much longer"
+ - **Hidden dependency** — "This step silently depends on A, which the plan doesn't mention"
+ - **Order problem** — "This needs to happen before step N, not after"
+ - **Ambiguity** — "The plan says 'set up the database' but doesn't specify which schema, migration, or seed data"
+
+ ## Step 3: Compile discoveries
+
+ After all agents return, compile a simulation report:
+
+ ### Execution timeline
+ Show the steps as actually executed (vs. as planned). Highlight where reality diverged from plan.
+
+ ### Issues discovered
+ For each issue:
+ - **Severity**: blocker / significant / minor
+ - **When discovered**: which step
+ - **Root cause**: why the plan missed this
+ - **Fix**: specific change to the plan
+
+ ### Missing steps
+ Steps that the plan didn't include but simulation revealed are necessary.
+
+ ### Revised plan
+ Present the original plan with all fixes, missing steps, and reordering applied. Mark what changed and why.
+
+ ## Step 4: Present to user
+
+ Show the simulation report. Let the user decide which fixes to adopt. The revised plan is a suggestion, not a mandate.
+
+ ---
+
+ ## When to use this skill
+
+ - Researching a technology choice or architectural decision
+ - Investigating a complex bug with multiple possible root causes
+ - Evaluating a migration or major refactor
+ - **Reviewing an existing proposal, RFC, design doc, or plan**
+ - **Stress-testing your own plan before presenting it to stakeholders**
+ - **Simulating execution of a plan before committing to it — technical, engineering, or operational**
+ - Any situation where you catch yourself going deep on one angle and ignoring others
+ - When the user says "you're being narrow" or "what about X?" — that's a sign you needed this from the start