cc-discipline 2.8.0 → 2.8.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "cc-discipline",
-  "version": "2.8.0",
+  "version": "2.8.1",
   "description": "Discipline framework for Claude Code — rules, hooks, and agents that keep AI on track",
   "bin": {
     "cc-discipline": "bin/cli.sh"
@@ -9,6 +9,7 @@ Determine which mode based on user input:
 
 - **Research mode** — User gives a topic/question with no existing proposal. Goal: build comprehensive understanding before forming an opinion.
 - **Review mode** — User gives an existing document, proposal, or design. Goal: stress-test it from multiple angles, find blind spots and weaknesses.
+- **Simulate mode** — User gives a plan and wants to "dry run" it. Goal: walk through execution step by step, let hidden problems surface naturally.
 
 State which mode you're using and why.
 
@@ -126,6 +127,59 @@ Don't just tear it apart. For each issue found, suggest what would fix it.
 
 ---
 
+# Simulate Mode
+
+You have a plan or proposal. Instead of analyzing it on paper, **walk through it as if you're actually executing it**, step by step. Let problems surface naturally.
+
+## Step 1: Extract execution steps
+
+Read the plan and break it into concrete sequential steps. For each step, identify:
+- What it requires (inputs, resources, preconditions)
+- What it produces (outputs, state changes)
+- What it assumes
+
+Present the steps and confirm with the user: "Is this the execution sequence?"
+
+## Step 2: Assign simulation agents
+
+Spawn one agent per phase or critical step. Each agent:
+- **Actually attempts to execute** (or traces through the execution of) their assigned step
+- Works with real files, real code, and the real environment where possible
+- If it can't actually execute (e.g., a deployment plan), does a detailed walkthrough: "At this point I would need X, but looking at the current state, X is not available because..."
+- Reports for each step:
+  - **Went as planned** — step worked / would work as described
+  - **Missing precondition** — "Step 3 assumes X exists, but step 2 doesn't create it"
+  - **Harder than expected** — "This was described as 'configure Y' but actually requires Z, which takes much longer"
+  - **Hidden dependency** — "This step silently depends on A, which the plan doesn't mention"
+  - **Order problem** — "This needs to happen before step N, not after"
+  - **Ambiguity** — "The plan says 'set up the database' but doesn't specify which schema, migration, or seed data"
+
+## Step 3: Compile discoveries
+
+After all agents return, compile a simulation report:
+
+### Execution timeline
+Show the steps as actually executed (vs. as planned). Highlight where reality diverged from the plan.
+
+### Issues discovered
+For each issue:
+- **Severity**: blocker / significant / minor
+- **When discovered**: which step
+- **Root cause**: why the plan missed this
+- **Fix**: specific change to the plan
+
+### Missing steps
+Steps that the plan didn't include but simulation revealed are necessary.
+
+### Revised plan
+Present the original plan with all fixes, missing steps, and reordering applied. Mark what changed and why.
+
+## Step 4: Present to user
+
+Show the simulation report. Let the user decide which fixes to adopt. The revised plan is a suggestion, not a mandate.
+
+---
+
 ## When to use this skill
 
 - Researching a technology choice or architectural decision
@@ -133,5 +187,6 @@ Don't just tear it apart. For each issue found, suggest what would fix it.
 - Evaluating a migration or major refactor
 - **Reviewing an existing proposal, RFC, design doc, or plan**
 - **Stress-testing your own plan before presenting it to stakeholders**
+- **Simulating execution of a plan before committing to it — technical, engineering, or operational**
 - Any situation where you catch yourself going deep on one angle and ignoring others
 - When the user says "you're being narrow" or "what about X?" — that's a sign you needed this from the start