mindsystem-cc 3.13.1 → 3.14.0

This diff shows the changes between publicly released versions of the package as they appear in their public registries. It is provided for informational purposes only.
package/LICENSE CHANGED
@@ -1,6 +1,6 @@
  MIT License

- Copyright (c) 2025 Lex Christopherson
+ Copyright (c) 2026 Roland Tolnay

  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files (the "Software"), to deal
@@ -109,7 +109,11 @@ If exists, extract:

  **3c. Optional context — project UI skill:**

- Before proceeding, check your available skills for one that provides domain expertise relevant to this project's UI implementation patterns. If found, invoke it via the Skill tool and extract aesthetic patterns (colors, components, spacing, typography) for the `<existing_aesthetic>` block passed to ms-designer.
+ Scan the skills list in the most recent system-reminder for a skill whose description mentions UI patterns, components, design system, or implementation styling (e.g., "Flutter/Dart patterns", "React component library", "UI implementation patterns").
+
+ If a matching skill is found, invoke it: `Skill(skill: "skill-name")`. Extract aesthetic patterns (colors, components, spacing, typography) from the loaded content for the `<existing_aesthetic>` block passed to ms-designer.
+
+ If no matching skill is found, skip this step and note "No project UI skill found" in the `<existing_aesthetic>` block.

  **3d. Optional context - codebase analysis:**

@@ -178,7 +182,7 @@ Follow mockup-generation workflow:
  4. Present directions to user for approval/tweaking
  5. Read platform template (mobile or web)
  6. Spawn 3 x ms-mockup-designer agents in parallel
- 7. Generate comparison page, open in browser, and present to user
+ 7. Run comparison script (`compare_mockups.py`), open in browser, and present to user
  8. Handle selection (single pick, combine, tweak, more variants, or skip)
  9. Extract CSS specs from chosen variant into `<mockup_direction>` block

@@ -82,18 +82,11 @@ Phase: $ARGUMENTS (optional)
  </anti_patterns>

  <success_criteria>
- - [ ] Dirty tree handled at start (stash/commit/abort)
- - [ ] Tests extracted from SUMMARY.md and classified
- - [ ] Tests batched by mock requirements
- - [ ] Mocks applied inline when needed (1-4 direct, 5+ via subagent)
- - [ ] Tests presented in batches of 4 using AskUserQuestion
- - [ ] Issues investigated with lightweight check first
- - [ ] Simple issues fixed inline with proper commit message
- - [ ] Complex issues escalated to fixer subagent
- - [ ] Failed re-tests get 2 retries then options
- - [ ] Stash conflicts auto-resolved to fix version
- - [ ] Mocks reverted on completion (git checkout)
+ - [ ] Mocks stashed before fixing, restored after (git stash push/pop cycle)
+ - [ ] Stash conflicts auto-resolved to fix version (git checkout --ours)
+ - [ ] Blocked tests re-presented after blocking issues resolved
+ - [ ] Failed re-tests get 2 retries then options (tracked via retry_count)
+ - [ ] All mocks reverted on completion (git checkout -- <mocked_files>)
  - [ ] UAT fixes patch generated
- - [ ] User's pre-existing work restored
- - [ ] UAT.md committed with final summary
+ - [ ] User's pre-existing work restored from stash
  </success_criteria>
@@ -127,277 +127,3 @@ mock_type: empty_response
  reason: "[user-provided reason when skipping batch]"
  ```

- ---
-
- <section_rules>
-
- **Frontmatter:**
- - `status`: OVERWRITE - "testing", "fixing", or "complete"
- - `phase`: IMMUTABLE - set on creation
- - `source`: IMMUTABLE - SUMMARY files being tested
- - `started`: IMMUTABLE - set on creation
- - `updated`: OVERWRITE - update on every change
- - `current_batch`: OVERWRITE - current batch number
- - `mocked_files`: OVERWRITE - list of files with inline mocks, or empty array
- - `pre_work_stash`: OVERWRITE - user's pre-existing work stash or null
-
- **Progress:**
- - OVERWRITE after each test result or fix
- - Tracks: total, tested, passed, issues, fixing, pending, skipped
-
- **Current Batch:**
- - OVERWRITE entirely on batch transitions
- - Shows which batch is active
-
- **Tests:**
- - Each test: OVERWRITE result/fix fields when status changes
- - `result` values: [pending], pass, issue, blocked, skipped
- - If issue: add `reported` (verbatim), `severity` (inferred), `fix_status`, `fix_commit`, `retry_count`
- - If blocked: no additional fields (will be re-tested)
- - If skipped: add `reason`
-
- **Fixes Applied:**
- - APPEND only when fix committed
- - Records commit hash, test number, description, files
-
- **Batches:**
- - Each batch: OVERWRITE status and counts as batch progresses
- - Tracks: tests, status, mock_type, passed, issues
-
- **Assumptions:**
- - APPEND when test is skipped
- - Records test number, name, expected behavior, reason
-
- </section_rules>
-
- <fix_lifecycle>
-
- **When issue reported:**
- 1. result → "issue"
- 2. Add `reported`, `severity`
- 3. Add `fix_status: investigating`, `retry_count: 0`
-
- **When fix committed:**
- 4. `fix_status: applied`
- 5. `fix_commit: {hash}`
- 6. Append to "Fixes Applied" section
-
- **When re-test passes:**
- 7. result → "pass"
- 8. `fix_status: verified`
-
- **When re-test fails:**
- 9. Increment `retry_count`
- 10. If `retry_count >= 2`: offer skip/escalate options
- 11. If user chooses skip: result → "skipped", add reason
-
- </fix_lifecycle>
-
- <mock_lifecycle>
-
- **When batch needs mocks:**
- 1. Edit service methods inline (hardcoded return values)
- 2. Record files in `mocked_files` frontmatter
- 3. User hot reloads, testing proceeds
-
- **When fix needed:**
- 1. `git stash push -m "mocks-batch-N" -- <mocked_files>`
- 2. Fix applied, committed
- 3. `git stash pop` to restore mocks
- 4. If conflict: take fix version, remove file from `mocked_files`
-
- **On batch transition (different mock_type):**
- 1. Revert old mocks: `git checkout -- <mocked_files>`
- 2. Clear `mocked_files`, apply new inline mocks
-
- **On session complete:**
- 1. Revert all mocks: `git checkout -- <mocked_files>`
- 2. Restore pre_work_stash if exists
-
- </mock_lifecycle>
-
- <resume_behavior>
-
- On `/ms:verify-work` with existing UAT.md:
-
- 1. Check `status`:
-    - "complete" → offer to re-run or view results
-    - "testing" or "fixing" → resume
-
- 2. Check `mocked_files`:
-    - If non-empty, verify mocks still present via `git diff --name-only`
-    - If mocks lost, regenerate for current batch
-
- 3. Check `current_batch`:
-    - Resume from that batch
-
- 4. Check for tests with `fix_status: investigating` or `fix_status: applied`:
-    - Resume fix/re-test flow for those
-
- 5. Present remaining tests in current batch
-
- </resume_behavior>
-
- <severity_guide>
-
- Severity is INFERRED from user's natural language, never asked.
-
- | User describes | Infer |
- |----------------|-------|
- | Crash, error, exception, fails completely, unusable | blocker |
- | Doesn't work, nothing happens, wrong behavior, missing | major |
- | Works but..., slow, weird, off, minor, small | minor |
- | Color, font, spacing, alignment, looks off | cosmetic |
-
- Default: **major** (safe default, user can clarify if wrong)
-
- </severity_guide>
-
- <good_example>
- ```markdown
- ---
- status: fixing
- phase: 04-comments
- source: 04-01-SUMMARY.md, 04-02-SUMMARY.md
- started: 2025-01-15T10:30:00Z
- updated: 2025-01-15T11:15:00Z
- current_batch: 2
- mocked_files: [src/services/auth_service.dart, src/services/api_service.dart]
- pre_work_stash: null
- ---
-
- ## Progress
-
- total: 12
- tested: 7
- passed: 5
- issues: 1
- fixing: 1
- pending: 5
- skipped: 0
-
- ## Current Batch
-
- batch: 2 of 4
- name: "Error States"
- mock_type: error_state
- tests: [4, 5, 6, 7]
- status: testing
-
- ## Tests
-
- ### 1. View Comments on Post
- expected: Comments section expands, shows count and comment list
- mock_required: false
- mock_type: null
- result: pass
-
- ### 2. Create Top-Level Comment
- expected: Submit comment via rich text editor, appears in list with author info
- mock_required: false
- mock_type: null
- result: pass
-
- ### 3. Reply to a Comment
- expected: Click Reply, inline composer appears, submit shows nested reply
- mock_required: false
- mock_type: null
- result: pass
-
- ### 4. Login Error Message
- expected: Invalid credentials show "Invalid email or password" message
- mock_required: true
- mock_type: error_state
- result: issue
- reported: "Shows 'Something went wrong' instead of specific error"
- severity: major
- fix_status: applied
- fix_commit: abc123f
- retry_count: 0
-
- ### 5. Network Error Handling
- expected: No connection shows "Check your internet connection" with retry button
- mock_required: true
- mock_type: error_state
- result: [pending]
-
- ### 6. Server Error Display
- expected: 500 error shows "Try again later" message
- mock_required: true
- mock_type: error_state
- result: [pending]
-
- ### 7. Rate Limit Message
- expected: Too many requests shows "Too many attempts" with countdown
- mock_required: true
- mock_type: error_state
- result: [pending]
-
- ### 8. Premium Badge Display
- expected: Premium users show gold badge on profile
- mock_required: true
- mock_type: premium_user
- result: [pending]
-
- ### 9. Premium Feature Access
- expected: Premium features accessible, non-premium shows upgrade prompt
- mock_required: true
- mock_type: premium_user
- result: [pending]
-
- ### 10. Subscription Status
- expected: Account page shows current subscription tier and expiry
- mock_required: true
- mock_type: premium_user
- result: [pending]
-
- ### 11. Empty Comments List
- expected: Post with no comments shows "No comments yet" placeholder
- mock_required: true
- mock_type: empty_response
- result: [pending]
-
- ### 12. Empty Search Results
- expected: Search with no matches shows "No results found" with suggestions
- mock_required: true
- mock_type: empty_response
- result: [pending]
-
- ## Fixes Applied
-
- - commit: abc123f
-   test: 4
-   description: "Display actual error message from API response"
-   files: [src/components/ErrorBanner.tsx]
-
- ## Batches
-
- ### Batch 1: No Mocks Required
- tests: [1, 2, 3]
- status: complete
- mock_type: null
- passed: 3
- issues: 0
-
- ### Batch 2: Error States
- tests: [4, 5, 6, 7]
- status: in_progress
- mock_type: error_state
- passed: 0
- issues: 1
-
- ### Batch 3: Premium Features
- tests: [8, 9, 10]
- status: pending
- mock_type: premium_user
-
- ### Batch 4: Empty States
- tests: [11, 12]
- status: pending
- mock_type: empty_response
-
- ## Assumptions
-
- [none yet]
- ```
- </good_example>
@@ -87,7 +87,7 @@ Task(prompt=assembled_context, subagent_type="ms-mockup-designer", description="
  </step>

  <step name="present_mockups">
- After all 3 agents return, generate comparison page and open it:
+ After all 3 agents return, run the comparison script to create the comparison page. Do NOT generate comparison HTML manually — use the script:

  ```bash
  uv run ~/.claude/mindsystem/scripts/compare_mockups.py "${PHASE_DIR}/mockups"
@@ -8,29 +8,6 @@ Complete verify-and-fix session: by session end, everything verified, issues fix
  <!-- mock-patterns.md loaded on demand for transient_state mocks (see generate_mocks step) -->
  </execution_context>

- <template>
- @~/.claude/mindsystem/templates/UAT.md
- </template>
-
- <philosophy>
- **Verify and fix in one session.**
-
- Old flow: verify → log gaps → /clear → plan-phase --gaps → execute → verify again
- New flow: verify → investigate → fix → re-test → continue
-
- **Mocks enable testing unreachable states.**
-
- Error displays, premium features, empty lists — all require specific backend conditions. Mocks let you toggle states and test immediately.
-
- **Keep mocks and fixes separate.**
-
- Mocks are uncommitted scaffolding. Fixes are clean commits. Git stash keeps them separated.
-
- **Fix while context is hot.**
-
- When you find an issue, you have the mock state active, the test fresh in mind, and the user ready to re-test. Fix it now, not later.
- </philosophy>
-
  <process>

  <step name="check_dirty_tree" priority="first">
@@ -42,20 +19,7 @@ git status --porcelain

  **If output is non-empty (dirty tree):**

- Present options via AskUserQuestion:
- ```
- questions:
-   - question: "You have uncommitted changes. How should I handle them before starting UAT?"
-     header: "Git state"
-     options:
-       - label: "Stash changes"
-         description: "git stash push -m 'pre-verify-work' — I'll restore them after UAT"
-       - label: "Commit first"
-         description: "Let me commit these changes before we start"
-       - label: "Abort"
-         description: "Cancel UAT, I'll handle my changes manually"
-     multiSelect: false
- ```
+ AskUserQuestion with options: Stash changes / Commit first / Abort

  **Handle response:**
  - "Stash changes" → `git stash push -m "pre-verify-work"`, record `pre_work_stash: "pre-verify-work"` for later
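The dirty-tree handling above is a plain git stash round trip. A minimal runnable sketch in a throwaway repo (file names and stash message are illustrative, matching the `pre-verify-work` convention in the hunk above):

```shell
# Sketch of the pre-work stash cycle in a temp repo
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email uat@example.com
git config user.name uat
git commit -q --allow-empty -m "init"

# Simulate the user's uncommitted pre-existing work
echo "wip" > notes.txt
git add notes.txt
test -n "$(git status --porcelain)"      # non-empty output: dirty tree detected

git stash push -q -m "pre-verify-work"   # record as pre_work_stash
test -z "$(git status --porcelain)"      # tree is clean for the UAT session

# ... UAT session runs here ...

git stash pop -q                         # restore the user's work at session end
test -n "$(git status --porcelain)"      # pre-existing work is back
echo "stash cycle ok"
```

With `set -e`, any failed assertion aborts the script, so reaching the final echo means every step of the cycle behaved as described.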
@@ -183,50 +147,14 @@ Reason over SUMMARY.md content (accomplishments, files created/modified, decisio
  | "offline", "no connection" | offline_state |
  | Normal happy path | no mock needed |

- For tests that remain genuinely uncertain after both the two-question framework and keyword heuristics, present them via AskUserQuestion grouped by uncertainty:
- ```
- questions:
-   - question: "Does [test name] require mock data or a special app state to test?"
-     header: "Mock needed?"
-     options:
-       - label: "No mock needed"
-         description: "Can test with real/local data"
-       - label: "Needs mock"
-         description: "Requires simulated state or data"
- ```
+ For tests that remain genuinely uncertain after both the two-question framework and keyword heuristics, AskUserQuestion per uncertain test: No mock needed / Needs mock.

  **Dependency inference (both tiers):**
  - "Reply to comment" depends on "View comments"
  - "Delete account" depends on "Login"
  - Tests mentioning prior state depend on tests that create that state

- Build classification list:
- ```yaml
- tests:
-   - name: "Login success"
-     mock_required: false
-     mock_type: null
-     dependencies: []
-
-   - name: "Login error message"
-     mock_required: true
-     mock_type: "error_state"
-     mock_reason: "error response from auth endpoint"
-     dependencies: ["login_flow"]
-
-   - name: "Recipe list loading skeleton"
-     mock_required: true
-     mock_type: "transient_state"
-     mock_reason: "loading skeleton during recipe fetch — async, resolves in <1s"
-     dependencies: []
-
-   - name: "View recipe list"
-     mock_required: true
-     mock_type: "external_data"
-     mock_reason: "recipe items from /api/recipes"
-     needs_user_confirmation: true
-     dependencies: []
- ```
+ Build classification list with fields: name, mock_required, mock_type, mock_reason, dependencies, needs_user_confirmation.
  </step>

  <step name="create_batches">
@@ -236,136 +164,24 @@ tests:

  **Rules:**
  1. Group by mock_type (tests needing same mock state go together)
- 2. **User confirmation for external_data tests:** Before batching, collect all tests with `needs_user_confirmation: true`, grouped by data source. Present via AskUserQuestion:
-
-    ```
-    questions:
-      - question: "Do you have [data_type] data from [source] locally?"
-        header: "[data_type]"
-        options:
-          - label: "Yes, data exists"
-            description: "I have [data_type] in my local environment"
-          - label: "No, needs mock"
-            description: "I need this data mocked for testing"
-          - label: "Skip these tests"
-            description: "Log as assumptions and move on"
-        multiSelect: false
-    ```
-
-    Handle responses:
-    - "Yes, data exists" → reclassify affected tests as `mock_required: false`
-    - "No, needs mock" → keep as `mock_required: true`, `mock_type: "external_data"`
-    - "Skip these tests" → mark all affected tests as `skipped`
-
-    Group by data source (not per-test) to stay within AskUserQuestion's 4-question limit.
+ 2. **User confirmation for external_data tests:** Before batching, collect all tests with `needs_user_confirmation: true`, grouped by data source. AskUserQuestion per data source: Yes, data exists / No, needs mock / Skip these tests. Handle responses: reclassify as `mock_required: false`, keep as mock, or mark `skipped`. Group by data source (not per-test) to stay within AskUserQuestion's 4-question limit.

  3. **Separate transient_state batch:** Transient states use a different mock strategy (delay/force) than data mocks. Give them their own batch.
  4. Respect dependencies (if B depends on A, A must be in same or earlier batch)
  5. Max 4 tests per batch (AskUserQuestion limit)
  6. Batch ordering: no-mock → external_data → error_state → empty_response → transient_state → premium_user → offline_state

- **Batch structure:**
- ```yaml
- batches:
-   - batch: 1
-     name: "No Mocks Required"
-     mock_type: null
-     tests: [1, 2, 3]
-
-   - batch: 2
-     name: "External Data"
-     mock_type: "external_data"
-     tests: [4, 5]
-
-   - batch: 3
-     name: "Error States"
-     mock_type: "error_state"
-     tests: [6, 7, 8]
-
-   - batch: 4
-     name: "Transient States"
-     mock_type: "transient_state"
-     tests: [9, 10]
-
-   - batch: 5
-     name: "Premium Features"
-     mock_type: "premium_user"
-     tests: [11, 12]
- ```
+ Each batch has: batch number, name, mock_type, and test list.
  </step>

  <step name="create_uat_file">
- **Create UAT file with full structure:**
+ **Create UAT file:**

  ```bash
  mkdir -p "$PHASE_DIR"
  ```

- Create file at `.planning/phases/XX-name/{phase}-UAT.md`:
-
- ```markdown
- ---
- status: testing
- phase: XX-name
- source: [list of SUMMARY.md files]
- started: [ISO timestamp]
- updated: [ISO timestamp]
- current_batch: 1
- mocked_files: []
- pre_work_stash: [from dirty tree handling, or null]
- ---
-
- ## Progress
-
- total: [N]
- tested: 0
- passed: 0
- issues: 0
- fixing: 0
- pending: [N]
- skipped: 0
-
- ## Current Batch
-
- batch: 1 of [total_batches]
- name: "[batch name]"
- mock_type: [mock_type or null]
- tests: [test numbers]
- status: pending
-
- ## Tests
-
- ### 1. [Test Name]
- expected: [observable behavior]
- mock_required: [true/false]
- mock_type: [type or null]
- result: [pending]
-
- ### 2. [Test Name]
- ...
-
- ## Fixes Applied
-
- [none yet]
-
- ## Batches
-
- ### Batch 1: [Name]
- tests: [1, 2, 3]
- status: pending
- mock_type: null
-
- ### Batch 2: [Name]
- tests: [4, 5, 6, 7]
- status: pending
- mock_type: error_state
-
- ...
-
- ## Assumptions
-
- [none yet]
- ```
+ Create file at `.planning/phases/XX-name/{phase}-UAT.md` following the template structure in context. Populate with classified tests and batch data from previous steps.

  Proceed to `execute_batch`.
  </step>
@@ -456,22 +272,7 @@ If user has previously indicated they want to skip mock batches, or if mock gene

  Collect tests for current batch (only `[pending]` and `blocked` results).

- Build AskUserQuestion with up to 4 questions:
- ```
- questions:
-   - question: "Test {N}: {name} — {expected}"
-     header: "Test {N}"
-     options:
-       - label: "Pass"
-         description: "Works as expected"
-       - label: "Can't test"
-         description: "Blocked by a previous failure"
-       - label: "Skip"
-         description: "Assume it works (can't test this state)"
-     multiSelect: false
- ```
-
- The "Other" option is auto-added for issue descriptions.
+ AskUserQuestion per test (up to 4): Pass / Can't test / Skip. "Other" auto-added for issue descriptions.

  **Tip for users:** To skip with custom reason, select "Other" and start with `Skip:` — e.g., `Skip: Requires paid API key`.

@@ -680,20 +481,7 @@ Mocks are stashed — working tree is clean.
  <step name="handle_retest">
  **Handle re-test result:**

- Present re-test question:
- ```
- questions:
-   - question: "Re-test: {test_name} — Does it work now?"
-     header: "Re-test"
-     options:
-       - label: "Pass"
-         description: "Fixed! Works correctly now"
-       - label: "Still broken"
-         description: "Same issue persists"
-       - label: "New issue"
-         description: "Original fixed but found different problem"
-     multiSelect: false
- ```
+ AskUserQuestion: Pass / Still broken / New issue.

  **If Pass:**
  - Update test: `result: pass`, `fix_status: verified`
@@ -840,6 +628,11 @@ Check if more phases remain in ROADMAP.md:
  </process>

  <update_rules>
+ **Immutable (set on creation, never overwrite):** phase, source, started
+ **Result values:** [pending], pass, issue, blocked, skipped
+ **Issue adds:** reported, severity, fix_status (investigating | applied | verified), fix_commit, retry_count
+ **Skipped adds:** reason
+
  **Write UAT.md after:**
  - Each batch of responses processed
  - Each fix applied
@@ -851,6 +644,7 @@ Check if more phases remain in ROADMAP.md:
  | Frontmatter.status | OVERWRITE | Phase transitions |
  | Frontmatter.current_batch | OVERWRITE | Batch transitions |
  | Frontmatter.mocked_files | OVERWRITE | Mock generation/cleanup |
+ | Frontmatter.pre_work_stash | OVERWRITE | Dirty tree handling |
  | Frontmatter.updated | OVERWRITE | Every write |
  | Progress | OVERWRITE | After each test result |
  | Current Batch | OVERWRITE | Batch transitions |
@@ -879,19 +673,11 @@ Default: **major** (safe default)
  </severity_inference>

  <success_criteria>
- - [ ] Dirty tree handled at start
- - [ ] Tests classified by mock requirements
- - [ ] Batches created respecting dependencies and mock types
- - [ ] Mocks applied inline when needed (1-4 direct, 5+ via subagent)
- - [ ] Tests presented in batches of 4
- - [ ] Issues investigated with lightweight check (2-3 calls)
- - [ ] Simple issues fixed inline with proper commit
- - [ ] Complex issues escalated to fixer subagent
- - [ ] Re-test retries (2 max, tracked via retry_count) before offering options
+ - [ ] Mocks stashed before fixing, restored after (git stash push/pop cycle)
+ - [ ] Stash conflicts auto-resolved to fix version (git checkout --ours)
  - [ ] Blocked tests re-presented after blocking issues resolved
- - [ ] Stash conflicts auto-resolved to fix version
- - [ ] Mocks reverted on completion (git checkout)
+ - [ ] Failed re-tests get 2 retries then options (tracked via retry_count)
+ - [ ] All mocks reverted on completion (git checkout -- <mocked_files>)
  - [ ] UAT fixes patch generated
- - [ ] User's pre-existing work restored
- - [ ] UAT.md committed with final summary
+ - [ ] User's pre-existing work restored from stash
  </success_criteria>
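The stash-conflict criterion above can be exercised end to end. A minimal sketch in a throwaway repo (file contents, names, and commit messages are invented for illustration), showing a mock stash colliding with a committed fix and being resolved to the fix version:

```shell
# Sketch: stash pop conflict between a mock stash and a committed fix
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email uat@example.com
git config user.name uat

printf 'real_fetch()\n' > service.txt    # hypothetical service file
git add service.txt
git commit -q -m "baseline"

printf 'mock_fetch()\n' > service.txt    # inline mock applied for the batch
git stash push -q -m "mocks-batch-1" -- service.txt

printf 'fixed_fetch()\n' > service.txt   # fix touches the same lines
git commit -q -am "fix: use correct fetch"

# Restoring the mock now conflicts with the committed fix
if ! git stash pop -q 2>/dev/null; then
  # In a stash-pop conflict, stage 2 ("ours") is the committed state and
  # stage 3 ("theirs") is the stashed mock, so keeping the fix means --ours
  git checkout --ours -- service.txt
  git add service.txt                     # mark resolved
  git stash drop -q                       # pop keeps the stash on conflict
fi

grep -q fixed_fetch service.txt && echo "kept fix"
```

Note the stage mapping: during `git stash pop`, the current branch is "ours" and the stash is "theirs", so resolving to the fix version requires `git checkout --ours`, not `--theirs`.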
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "mindsystem-cc",
-   "version": "3.13.1",
+   "version": "3.14.0",
    "description": "A meta-prompting, context engineering and spec-driven development system for Claude Code by TÂCHES.",
    "bin": {
      "mindsystem-cc": "bin/install.js"
@@ -1,6 +1,6 @@
  ---
  name: flutter-code-quality
- description: Flutter/Dart code quality, widget organization, and folder structure guidelines. Use when reviewing, refactoring, or cleaning up Flutter code after implementation.
+ description: Organize Flutter/Dart code to follow project conventions. Use after implementation to restructure folders, fix widget file organization, align naming patterns, or clean up code to match project standards.
  license: MIT
  metadata:
    author: Roland Tolnay
@@ -1,6 +1,6 @@
  ---
  name: flutter-code-simplification
- description: Flutter/Dart code simplification principles. Use when simplifying, refactoring, or cleaning up Flutter code for clarity and maintainability.
+ description: Reduce complexity in Flutter/Dart code. Use when code is too nested, hard to read, or has duplication. Extracts widgets, flattens logic, removes unnecessary abstraction.
  license: MIT
  metadata:
    author: Roland Tolnay
@@ -1,6 +1,6 @@
  ---
  name: flutter-senior-review
- description: Senior engineering principles for Flutter/Dart code reviews. Apply when reviewing, refactoring, or writing Flutter code to identify structural improvements that make code evolvable, not just working.
+ description: Review Flutter/Dart code for architectural and structural design issues. Use when reviewing PRs, auditing widget design, evaluating state management, or identifying problems that make code hard to evolve.
  license: MIT
  metadata:
    author: Roland Tolnay