mindsystem-cc 3.18.0 → 3.18.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -283,49 +283,15 @@ uv run ~/.claude/mindsystem/scripts/scan-planning-context.py \
  ${SUBSYSTEM:+--subsystem "${SUBSYSTEM}"}
  ```

- **3. Parse JSON output.** Check `success` field. If the script fails or returns non-JSON, fall back to manual scanning (read SUMMARY frontmatter with sed as before). The scanner handles missing directories gracefully check `sources.*.skipped` for skip reasons.
+ The scanner outputs formatted markdown with sections for patterns, learnings,
+ summaries, knowledge files, and pending todos. Each section is omitted if empty.
+ If the script fails, fall back to manual scanning.

- **4. Present established patterns to user:**
+ **3. Conditionally read full summaries** — from "Summaries Needing Full Read" section, read each file. For "Other Relevant Summaries", read full body only if frontmatter context isn't sufficient (judgment).

- From `aggregated.patterns_established`, display:
+ **4. Read knowledge files** — from "Knowledge Files to Read" section, read each file.

- ```
- ### Established Patterns to Maintain
- - [Pattern: description] (from phase XX)
- ```
-
- If empty, skip.
-
- **5. Conditionally read full summaries.**
-
- Read full SUMMARY.md body ONLY for summaries where:
- - `relevance` is HIGH AND `has_readiness_warnings` is true
- - OR frontmatter alone doesn't provide enough context for task breakdown (use judgment)
-
- Extract from full reads: "Next Phase Readiness" warnings, "Issues Encountered", detailed accomplishments.
-
- Summaries scored MEDIUM or HIGH without readiness warnings → frontmatter data in JSON is sufficient (tech stack, patterns, key files, decisions already aggregated).
-
- **6. Present matched learnings:**
-
- **From `debug_learnings`** — present as warnings:
-
- ```
- ### Previous Debug Sessions in This Area
- - **{slug}** ({subsystem}): {root_cause} — Fix: {resolution}
- ```
-
- **From `adhoc_learnings`** — extract entries with non-empty `learnings` arrays for handoff context.
-
- If neither source has matches, skip.
-
- **7. Read knowledge files:**
-
- For each entry in `knowledge_files` where `matched` is true, read the full file. Knowledge files are prose (no frontmatter) so the LLM must read them to extract decisions, architecture, pitfalls.
-
- **8. Assess pending todos:**
-
- From `pending_todos` frontmatter in JSON, assess each todo's relevance. Read full body only for todos matching current phase subsystem/tags.
+ **5. Assess pending todos** — from "Pending Todos" section, assess relevance. Read full body for relevant ones.

  **From STATE.md:** Decisions → constrain approach. Pending todos → candidates. Blockers → may need to address.

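For orientation, here is a minimal sketch of the formatted output the scanner emits. The section headings and per-line formats come from the `format_markdown` function added later in this diff; every entry below is an invented placeholder:

```markdown
### Established Patterns
- Repository pattern for all data access

### Summaries Needing Full Read
- `.planning/phases/03-auth/SUMMARY.md`

### Pending Todos
- **Rotate API keys** [high] (auth) — `.planning/todos/pending/rotate-keys.md`

### Scanner Info
- summaries: 4 scanned
- debug_docs: skipped (directory not found)
```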
@@ -337,13 +303,13 @@ From `pending_todos` frontmatter in JSON, assess each todo's relevance. Read ful

  **Track for handoff to ms-plan-writer:**
  - Which summaries were selected (for @context references)
- - Tech stack available (from `aggregated.tech_stack_added`)
- - Established patterns (from `aggregated.patterns_established`)
- - Key files to reference (from `aggregated.key_files_created` + `key_files_modified`)
- - Applicable decisions (from `aggregated.key_decisions` + full summary reads)
+ - Tech stack available (from formatted output)
+ - Established patterns (from formatted output)
+ - Key files to reference (from formatted output)
+ - Applicable decisions (from formatted output + full summary reads)
  - Todos being addressed (from pending todos)
  - Concerns being verified (from "Next Phase Readiness" in full reads)
- - Matched learnings (from `debug_learnings`, `adhoc_learnings`, patterns, knowledge files)
+ - Matched learnings (from formatted output + knowledge files)
  </step>

  <step name="gather_phase_context">
@@ -376,10 +342,10 @@ cat .planning/phases/XX-name/${PHASE}-DESIGN.md 2>/dev/null
  **If CONTEXT.md exists:** Honor vision, prioritize essential, respect boundaries, incorporate specifics. Track that CONTEXT.md exists for risk scoring.

  **If DESIGN.md exists:**
- - Tasks reference specific screens/components from design
- - Verification criteria include design verification items
+ - Tasks reference specific screens from design (wireframe + states + behavior + hints)
+ - Verification criteria inferred from States tables, Behavior notes, and token values
  - Must-Haves include design-specified observable behaviors
- - Task actions specify exact values (colors, spacing) from design
+ - Task actions specify exact values (colors, spacing) from Design Tokens table

  **If none exist:** Suggest /ms:research-phase for niche domains, /ms:discuss-phase for simpler domains, or proceed with roadmap only.
  </step>
@@ -517,7 +483,7 @@ Task(
  The subagent handles:
  - Building dependency graph from needs/creates
  - Assigning wave numbers
- - Grouping tasks into plans (2-3 changes per plan)
+ - Grouping tasks into plans (budget-based, ~45% cost target)
  - Deriving Must-Haves (goal-backward)
  - Estimating scope, splitting if needed
  - Writing PLAN.md files + EXECUTION-ORDER.md
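The "2-3 changes per plan" heuristic is replaced by budget-based grouping. The subagent's grouping code is not part of this diff, so the following is only an illustrative sketch under stated assumptions: each task carries a hypothetical fractional context-cost weight, tasks are packed greedily up to the ~45% ceiling, and an underweight trailing plan is folded into its neighbor (the 45% and 10% thresholds appear in the checklists later in this diff; the 25% lower target and dependency/wave constraints are ignored here for brevity).

```python
# Illustrative sketch only: the real grouping lives in the ms-plan-writer
# subagent. Task weights are hypothetical fractions of the context window.

BUDGET = 0.45        # ~45% context-cost ceiling per plan
CONSOLIDATE = 0.10   # plans lighter than this get merged into a neighbor


def group_tasks(tasks: list[tuple[str, float]]) -> list[list[str]]:
    """Greedily pack (name, weight) tasks into plans under the budget."""
    plans: list[tuple[list[str], float]] = []
    names: list[str] = []
    used = 0.0
    for name, weight in tasks:
        if names and used + weight > BUDGET:
            plans.append((names, used))
            names, used = [], 0.0
        names.append(name)
        used += weight
    if names:
        plans.append((names, used))
    # Consolidation: fold an under-weight trailing plan into its predecessor.
    if len(plans) > 1 and plans[-1][1] < CONSOLIDATE:
        last_names, last_used = plans.pop()
        prev_names, prev_used = plans.pop()
        plans.append((prev_names + last_names, prev_used + last_used))
    return [names for names, _ in plans]


# Example: four hypothetical tasks packed into two plans.
print(group_tasks([("schema", 0.20), ("api", 0.20), ("ui", 0.30), ("docs", 0.05)]))
# [['schema', 'api'], ['ui', 'docs']]
```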
@@ -559,6 +525,7 @@ Extract:
  - `plan_count`: Number of plans created
  - `wave_count`: Number of waves
  - `wave_structure`: Wave-to-plan mapping
+ - `grouping_rationale`: Optional table showing task weights and consolidation notes
  - `risk_score`: 0-100
  - `risk_tier`: "skip" | "optional" | "verify"
  - `risk_factors`: Top contributing factors
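The exact shapes of these fields are not pinned down by the field list, so here is a hedged sketch of one possible extraction result (field names from the list above; every value is invented):

```python
# Hypothetical extraction from the subagent's report; all values invented.
extracted = {
    "plan_count": 3,
    "wave_count": 2,
    "wave_structure": {1: ["plan-01", "plan-02"], 2: ["plan-03"]},
    "grouping_rationale": "| Task | Weight | Note |",  # optional markdown table
    "risk_score": 35,
    "risk_tier": "optional",
    "risk_factors": ["no CONTEXT.md", "new subsystem"],
}
```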
@@ -636,6 +603,11 @@ Wave 1 (parallel): {plan-01}, {plan-02}
  Wave 2: {plan-03}
  ...

+ {If grouping_rationale present, insert here:}
+
+ ## Grouping Notes
+ {grouping_rationale table from plan-writer}
+
  ---

  ## Next Up
@@ -690,7 +662,7 @@ Tasks are instructions for Claude, not Jira tickets.
  - [ ] PLAN file(s) created with pure markdown format
  - [ ] EXECUTION-ORDER.md created with wave groups
  - [ ] Each plan: Must-Haves section with observable truths
- - [ ] Each plan: 2-3 changes (~50% context)
+ - [ ] Each plan: budget-based grouping (target 25-45%, consolidate under 10%)
  - [ ] Wave structure maximizes parallelism
  - [ ] PLAN file(s) committed to git
  - [ ] Risk assessment presented (score + top factors)
@@ -532,7 +532,7 @@ Group related gaps into fix plans:
  ```

  3. **Keep plans focused:**
- - 2-3 tasks per plan
+ - Budget-based grouping (weights within 45%)
  - Single concern per plan
  - Include verification task

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "mindsystem-cc",
-   "version": "3.18.0",
+   "version": "3.18.1",
    "description": "A meta-prompting, context engineering and spec-driven development system for Claude Code by TÂCHES.",
    "bin": {
      "mindsystem-cc": "bin/install.js"
@@ -343,6 +343,7 @@ def scan_debug_docs(
          results.append(
              {
                  "path": str(path),
+                 "slug": path.stem,
                  "subsystem": fm.get("subsystem", ""),
                  "root_cause": fm.get("root_cause", ""),
                  "resolution": fm.get("resolution", ""),
@@ -552,6 +553,145 @@ def aggregate_from_summaries(
      }


+ # ---------------------------------------------------------------------------
+ # Markdown formatting
+ # ---------------------------------------------------------------------------
+
+
+ def format_markdown(output: dict[str, Any]) -> str:
+     """Format scanner output as readable markdown for LLM consumption."""
+     sections: list[str] = []
+     agg = output.get("aggregated", {})
+
+     # --- Established Patterns ---
+     patterns = agg.get("patterns_established", [])
+     if patterns:
+         lines = ["### Established Patterns"]
+         lines.extend(f"- {p}" for p in patterns)
+         sections.append("\n".join(lines))
+
+     # --- Tech Stack ---
+     tech = agg.get("tech_stack_added", [])
+     if tech:
+         sections.append(f"### Tech Stack\n{', '.join(tech)}")
+
+     # --- Key Decisions ---
+     decisions = agg.get("key_decisions", [])
+     if decisions:
+         lines = ["### Key Decisions"]
+         lines.extend(f"- {d}" for d in decisions)
+         sections.append("\n".join(lines))
+
+     # --- Key Files ---
+     created = agg.get("key_files_created", [])
+     modified = agg.get("key_files_modified", [])
+     if created or modified:
+         lines = ["### Key Files"]
+         if created:
+             lines.append("**Created:**")
+             lines.extend(f"- `{f}`" for f in created)
+         if modified:
+             lines.append("**Modified:**")
+             lines.extend(f"- `{f}`" for f in modified)
+         sections.append("\n".join(lines))
+
+     # --- Debug Learnings ---
+     debug = output.get("debug_learnings", [])
+     if debug:
+         lines = ["### Debug Learnings"]
+         for d in debug:
+             slug = d.get("slug", "unknown")
+             sub = d.get("subsystem", "")
+             rc = d.get("root_cause", "")
+             res = d.get("resolution", "")
+             lines.append(f"- **{slug}** ({sub}): {rc} — Fix: {res}")
+         sections.append("\n".join(lines))
+
+     # --- Adhoc Learnings ---
+     adhoc_entries = [
+         a for a in output.get("adhoc_learnings", []) if a.get("learnings")
+     ]
+     if adhoc_entries:
+         lines = ["### Adhoc Learnings"]
+         for a in adhoc_entries:
+             sub = a.get("subsystem", "")
+             path = a.get("path", "")
+             label = sub or (Path(path).stem if path else "unknown")
+             lines.append(f"- **{label}**")
+             for learning in a["learnings"]:
+                 lines.append(f" - {learning}")
+         sections.append("\n".join(lines))
+
+     # --- Summaries ---
+     summaries = output.get("summaries", [])
+     needs_read = [
+         s
+         for s in summaries
+         if s.get("relevance") == "HIGH" and s.get("has_readiness_warnings")
+     ]
+     other_relevant = [
+         s
+         for s in summaries
+         if s.get("relevance") in ("HIGH", "MEDIUM")
+         and not s.get("has_readiness_warnings")
+     ]
+
+     if needs_read:
+         lines = ["### Summaries Needing Full Read"]
+         for s in needs_read:
+             lines.append(f"- `{s['path']}`")
+         sections.append("\n".join(lines))
+
+     if other_relevant:
+         lines = ["### Other Relevant Summaries"]
+         for s in other_relevant:
+             lines.append(f"- `{s['path']}` [{s.get('relevance', '')}]")
+         sections.append("\n".join(lines))
+
+     # --- Knowledge Files ---
+     matched_knowledge = [
+         k for k in output.get("knowledge_files", []) if k.get("matched")
+     ]
+     if matched_knowledge:
+         lines = ["### Knowledge Files to Read"]
+         for k in matched_knowledge:
+             lines.append(f"- `{k['path']}`")
+         sections.append("\n".join(lines))
+
+     # --- Pending Todos ---
+     todos = output.get("pending_todos", [])
+     if todos:
+         lines = ["### Pending Todos"]
+         for t in todos:
+             title = t.get("title", "untitled")
+             priority = t.get("priority", "")
+             sub = t.get("subsystem", "")
+             path = t.get("path", "")
+             lines.append(f"- **{title}** [{priority}] ({sub}) — `{path}`")
+         sections.append("\n".join(lines))
+
+     # --- Scanner Info ---
+     sources = output.get("sources", {})
+     parse_errors = sources.get("parse_errors", [])
+     info_lines = ["### Scanner Info"]
+     for name, src in sources.items():
+         if name == "parse_errors" or not isinstance(src, dict):
+             continue
+         scanned = src.get("scanned", 0)
+         skipped = src.get("skipped")
+         if skipped:
+             info_lines.append(f"- {name}: skipped ({skipped})")
+         else:
+             info_lines.append(f"- {name}: {scanned} scanned")
+     if parse_errors:
+         info_lines.append("**Parse errors:**")
+         for err in parse_errors:
+             info_lines.append(f"- `{err.get('path', '')}`: {err.get('error', '')}")
+     sections.append("\n".join(info_lines))
+
+     return "\n\n".join(sections)
+
+
  # ---------------------------------------------------------------------------
  # Main
  # ---------------------------------------------------------------------------
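A minimal usage sketch of the new function, with an invented input whose keys match what `format_markdown` reads (note the `slug` field, which `scan_debug_docs` now derives from the filename via `path.stem`):

```python
sample = {
    "aggregated": {"tech_stack_added": ["fastapi", "redis"]},
    "debug_learnings": [
        {
            "slug": "auth-token-expiry",  # e.g. Path("auth-token-expiry.md").stem
            "subsystem": "auth",
            "root_cause": "clock skew",
            "resolution": "allow 30s leeway",
        }
    ],
    "sources": {"summaries": {"dir": ".planning/phases", "scanned": 2}},
}
print(format_markdown(sample))
# ### Tech Stack
# fastapi, redis
#
# ### Debug Learnings
# - **auth-token-expiry** (auth): clock skew — Fix: allow 30s leeway
#
# ### Scanner Info
# - summaries: 2 scanned
```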
@@ -583,6 +723,11 @@ def build_parser() -> argparse.ArgumentParser:
          default="",
          help="Comma-separated keywords for tag matching",
      )
+     parser.add_argument(
+         "--json",
+         action="store_true",
+         help="Output raw JSON (default: formatted markdown)",
+     )
      return parser


@@ -604,40 +749,42 @@ def main() -> None:

      planning = find_planning_dir()
      if planning is None:
-         # No .planning/ directory — output valid JSON with all sources skipped
-         output: dict[str, Any] = {
-             "success": True,
-             "target": {
-                 "phase": phase,
-                 "phase_name": phase_name,
-                 "subsystems": subsystems,
-                 "keywords": keywords,
-             },
-             "sources": {
-                 "summaries": {"dir": "", "scanned": 0, "skipped": ".planning/ not found"},
-                 "debug_docs": {"dir": "", "scanned": 0, "skipped": ".planning/ not found"},
-                 "adhoc_summaries": {"dir": "", "scanned": 0, "skipped": ".planning/ not found"},
-                 "completed_todos": {"dir": "", "scanned": 0, "skipped": ".planning/ not found"},
-                 "pending_todos": {"dir": "", "scanned": 0, "skipped": ".planning/ not found"},
-                 "knowledge_files": {"dir": "", "scanned": 0, "skipped": ".planning/ not found"},
-                 "parse_errors": [],
-             },
-             "summaries": [],
-             "debug_learnings": [],
-             "adhoc_learnings": [],
-             "completed_todos": [],
-             "pending_todos": [],
-             "knowledge_files": [],
-             "aggregated": {
-                 "tech_stack_added": [],
-                 "patterns_established": [],
-                 "key_files_created": [],
-                 "key_files_modified": [],
-                 "key_decisions": [],
-             },
-         }
-         json.dump(output, sys.stdout, indent=2, cls=_SafeEncoder)
-         sys.stdout.write("\n")
+         if args.json:
+             output: dict[str, Any] = {
+                 "success": True,
+                 "target": {
+                     "phase": phase,
+                     "phase_name": phase_name,
+                     "subsystems": subsystems,
+                     "keywords": keywords,
+                 },
+                 "sources": {
+                     "summaries": {"dir": "", "scanned": 0, "skipped": ".planning/ not found"},
+                     "debug_docs": {"dir": "", "scanned": 0, "skipped": ".planning/ not found"},
+                     "adhoc_summaries": {"dir": "", "scanned": 0, "skipped": ".planning/ not found"},
+                     "completed_todos": {"dir": "", "scanned": 0, "skipped": ".planning/ not found"},
+                     "pending_todos": {"dir": "", "scanned": 0, "skipped": ".planning/ not found"},
+                     "knowledge_files": {"dir": "", "scanned": 0, "skipped": ".planning/ not found"},
+                     "parse_errors": [],
+                 },
+                 "summaries": [],
+                 "debug_learnings": [],
+                 "adhoc_learnings": [],
+                 "completed_todos": [],
+                 "pending_todos": [],
+                 "knowledge_files": [],
+                 "aggregated": {
+                     "tech_stack_added": [],
+                     "patterns_established": [],
+                     "key_files_created": [],
+                     "key_files_modified": [],
+                     "key_decisions": [],
+                 },
+             }
+             json.dump(output, sys.stdout, indent=2, cls=_SafeEncoder)
+             sys.stdout.write("\n")
+         else:
+             print("No .planning/ directory found. No prior context available.")
          return

      parse_errors: list[dict[str, str]] = []
@@ -681,8 +828,11 @@ def main() -> None:
          "aggregated": aggregated,
      }

-     json.dump(output, sys.stdout, indent=2, cls=_SafeEncoder)
-     sys.stdout.write("\n")
+     if args.json:
+         json.dump(output, sys.stdout, indent=2, cls=_SafeEncoder)
+         sys.stdout.write("\n")
+     else:
+         print(format_markdown(output))


  if __name__ == "__main__":