cadence-skill-installer 0.2.29 → 0.2.31

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "cadence-skill-installer",
- "version": "0.2.29",
+ "version": "0.2.31",
  "description": "Install the Cadence skill into supported AI tool skill directories.",
  "repository": "https://github.com/snowdamiz/cadence",
  "private": false,
package/skill/SKILL.md CHANGED
@@ -17,6 +17,12 @@ description: Structured project operating system for end-to-end greenfield or br
  - raw commands, terminal traces, or timing metadata
  3. When internal gates/checks succeed, continue directly with the user task and do not announce that checks were run.

+ ## State Mutation Safety
+ 1. Never manually edit `.cadence/cadence.json`.
+ 2. Mutate Cadence state only through the provided Cadence scripts (for example `run-*-gate.py`, `inject-ideation.py`, `run-brownfield-documentation.py`, `run-research-pass.py`, `set-workflow-item-status.py`, `read-workflow-state.py`).
+ 3. If a required state transition is not supported by existing scripts, stop and update scripts first instead of writing JSON by hand.
+ 4. For subskill preflight setup (project root + scripts-dir + repo-status, with optional route/workflow checks), use `scripts/run-skill-entry-gate.py` instead of repeating command chains.
+
  ## Repo Status Gate
  1. At Cadence entry (first assistant response in the conversation), resolve `PROJECT_ROOT` with `python3 scripts/resolve-project-root.py --project-root "$PWD"` (resolve script paths from this skill directory but keep command cwd at the active project).
  2. If `"$PROJECT_ROOT/.cadence"` exists, run `python3 scripts/check-project-repo-status.py --project-root "$PROJECT_ROOT"`.
@@ -1,4 +1,17 @@
  interface:
    display_name: "Cadence"
    short_description: "Lifecycle + delivery system for structured project execution"
-   default_prompt: "Use Cadence to guide this project from lifecycle setup through phased execution, traceability, audit, and milestone completion. Always read and apply the active SOUL persona from .cadence/SOUL.json (fallback: SOUL.json). Keep user-facing responses concise and outcome-focused, and never expose internal skill-routing or command-execution traces unless the user explicitly asks. Do not announce successful internal checks; only surface them when a check fails and blocks progress. At Cadence entry (first assistant response in a conversation), resolve PROJECT_ROOT with scripts/resolve-project-root.py --project-root \"$PWD\". If \"$PROJECT_ROOT/.cadence\" exists, run scripts/check-project-repo-status.py --project-root \"$PROJECT_ROOT\" and treat repo_enabled as the authoritative push mode (if false, keep commits local-only). Never run scripts/check-project-repo-status.py without --project-root. If \"$PROJECT_ROOT/.cadence\" is missing, run scaffold first and let scaffold establish repo mode. If `.cadence` exists but `.cadence/cadence.json` is missing, run scaffold in recovery mode before routing. After scaffold handling, run scripts/read-workflow-state.py --project-root \"$PROJECT_ROOT\" and treat route.skill_name as authoritative for the next state-changing skill. During normal multi-turn subskill conversation flow, do not rerun repo/route gates between each user reply; rerun them only when checkpointing into a new subskill, handling explicit resume/status/reroute requests, or recovering from assertion/gate failures. Invoke skills/prerequisite-gate/SKILL.md only when route.skill_name is prerequisite-gate. Invoke skills/brownfield-intake/SKILL.md only when route.skill_name is brownfield-intake so project mode and baseline capture happen before downstream ideation routing. Invoke skills/brownfield-documenter/SKILL.md only when route.skill_name is brownfield-documenter so existing-project context is documented into canonical ideation structures. If scaffold, prerequisite, and brownfield-intake complete in-thread and route advances to ideator for a greenfield project, force subskill handoff with: Start a new chat and either say \"help me define my project\" or share your project brief. If scaffold, prerequisite, and brownfield-intake complete in-thread and route advances to brownfield-documenter for a brownfield project, force subskill handoff with: Start a new chat and say \"document my existing project\". In later chats, if route.skill_name is ideator, do not rerun prerequisite or brownfield-intake; invoke skills/ideator/SKILL.md in the same chat, and if the user has not provided ideation input yet, ask one kickoff ideation question in-thread instead of handing off again. In later chats, if route.skill_name is brownfield-documenter, invoke skills/brownfield-documenter/SKILL.md and do not route to ideator unless the user explicitly requests net-new ideation discovery. When route advances from ideator or brownfield-documenter to researcher, force a handoff with: Start a new chat with a new agent and say \"plan my project\". If route.skill_name is researcher, invoke skills/researcher/SKILL.md and enforce one pass per conversation; when more passes remain, end with: Start a new chat and say \"continue research\". If user intent indicates resuming/continuing work or asking progress, invoke skills/project-progress/SKILL.md first, report current phase, then route to the next step. If the user manually requests a Cadence subskill, resolve PROJECT_ROOT with scripts/resolve-project-root.py --project-root \"$PWD\" and then run scripts/assert-workflow-route.py --skill-name <subskill> --project-root \"$PROJECT_ROOT\" before any state-changing actions. Ensure direct subskill execution follows the same Git Checkpoints policy from this main skill: run scripts/finalize-skill-checkpoint.py with each subskill's configured --scope/--checkpoint and --paths ., allow status=no_changes without failure, and treat checkpoint or push failures as blocking errors surfaced verbatim."
+   default_prompt: >-
+     Use Cadence as the orchestrator and follow skill/SKILL.md as the authoritative workflow.
+     On entry, resolve PROJECT_ROOT with scripts/resolve-project-root.py --project-root "$PWD".
+     If "$PROJECT_ROOT/.cadence" exists, run scripts/check-project-repo-status.py --project-root "$PROJECT_ROOT";
+     otherwise scaffold first. If .cadence exists but cadence.json is missing, run scaffold recovery.
+     After scaffold handling, run scripts/read-workflow-state.py --project-root "$PROJECT_ROOT" and only invoke
+     the skill in route.skill_name. For manual subskill requests, assert route first with
+     scripts/assert-workflow-route.py --skill-name <subskill> --project-root "$PROJECT_ROOT".
+     Never edit .cadence/cadence.json manually. Keep replies concise, do not expose internal traces unless asked,
+     and do not announce successful internal checks.
+     For each successful subskill conversation, run scripts/finalize-skill-checkpoint.py from PROJECT_ROOT with
+     that subskill's --scope/--checkpoint and --paths ., allow status=no_changes, and treat checkpoint/push
+     failures as blocking.
+     Use the exact handoff lines in SKILL.md for ideator, brownfield-documenter, and researcher transitions.
@@ -40,6 +40,7 @@
  ### 7. User-Facing Hygiene
  - Keep user-facing messages outcome-focused
  - Do not expose internal routing, command traces, terminal transcripts, or timing metadata unless the user explicitly asks
+ - Never manually edit `.cadence/cadence.json`; use Cadence scripts for all state updates

  ## Task Management

@@ -55,3 +56,4 @@
  - **Simplicity First**: Make every change as simple as possible. Impact minimal code.
  - **No Laziness**: Find root causes. No temporary fixes. Senior developer standards.
  - **Minimal Impact**: Changes should only touch what's necessary. Avoid introducing bugs.
+ - **Script-Only State Writes**: `.cadence/cadence.json` must only be changed via existing Cadence scripts.
@@ -405,6 +405,233 @@ def parse_payload(args: argparse.Namespace, project_root: Path) -> dict[str, Any
      return payload


+ def _slug_token(value: Any, fallback: str) -> str:
+     token = re.sub(r"[^a-z0-9]+", "-", str(value).strip().lower()).strip("-")
+     if token:
+         return token
+     fallback_token = re.sub(r"[^a-z0-9]+", "-", str(fallback).strip().lower()).strip("-")
+     return fallback_token or "item"
+
+
+ def _coerce_text_list(value: Any) -> list[str]:
+     if value is None:
+         return []
+     if isinstance(value, (list, tuple, set)):
+         raw = list(value)
+     else:
+         raw = [value]
+
+     values: list[str] = []
+     for item in raw:
+         text = str(item).strip()
+         if text and text not in values:
+             values.append(text)
+     return values
+
+
+ def _unique_token(seed: str, used: set[str]) -> str:
+     candidate = seed
+     index = 2
+     while candidate in used:
+         candidate = f"{seed}-{index}"
+         index += 1
+     used.add(candidate)
+     return candidate
+
+
+ def _normalized_aliases(label: str, aliases: Any) -> list[str]:
+     values = _coerce_text_list(aliases)
+     if label and label not in values:
+         values.insert(0, label)
+     return values
+
+
+ def repair_research_entity_links(payload: dict[str, Any]) -> dict[str, Any]:
+     """Auto-repair cross-block entity references to reduce avoidable validation failures."""
+
+     repairs: dict[str, Any] = {
+         "applied": False,
+         "generated_block_ids": 0,
+         "created_entities": 0,
+         "owner_assignments": 0,
+         "cross_block_relinks": 0,
+         "unknown_owner_resets": 0,
+     }
+
+     agenda = payload.get("research_agenda")
+     if not isinstance(agenda, dict):
+         return repairs
+
+     blocks = agenda.get("blocks")
+     if not isinstance(blocks, list):
+         return repairs
+
+     block_ids: list[str] = []
+     for index, block in enumerate(blocks, start=1):
+         if not isinstance(block, dict):
+             continue
+         block_id = str(block.get("block_id", "")).strip()
+         if not block_id:
+             block_id = f"block-{index}"
+             block["block_id"] = block_id
+             repairs["generated_block_ids"] += 1
+         block_ids.append(block_id)
+
+     if not block_ids:
+         return repairs
+
+     block_id_set = set(block_ids)
+     entity_registry_raw = agenda.get("entity_registry")
+     entity_registry_raw = entity_registry_raw if isinstance(entity_registry_raw, list) else []
+
+     used_entity_ids: set[str] = set()
+     entity_index: dict[str, dict[str, Any]] = {}
+     entity_order: list[str] = []
+
+     for index, raw_entry in enumerate(entity_registry_raw, start=1):
+         entry = dict(raw_entry) if isinstance(raw_entry, dict) else {"label": raw_entry}
+         label = str(entry.get("label") or entry.get("name") or entry.get("entity_id") or entry.get("id") or "").strip()
+         seed = _slug_token(entry.get("entity_id") or entry.get("id") or label, f"entity-{index}")
+         owner_block_id = str(entry.get("owner_block_id") or entry.get("owner") or "").strip()
+         kind = str(entry.get("kind") or entry.get("type") or "entity").strip() or "entity"
+         aliases = _normalized_aliases(label, entry.get("aliases"))
+
+         if seed in entity_index:
+             existing = entity_index[seed]
+             existing_owner = str(existing.get("owner_block_id", "")).strip()
+             if owner_block_id and existing_owner and owner_block_id != existing_owner:
+                 seed = _unique_token(
+                     f"{seed}--{_slug_token(owner_block_id, 'block')}",
+                     used_entity_ids,
+                 )
+             else:
+                 if not existing_owner and owner_block_id:
+                     existing["owner_block_id"] = owner_block_id
+                 for alias in aliases:
+                     if alias not in existing["aliases"]:
+                         existing["aliases"].append(alias)
+                 continue
+         else:
+             seed = _unique_token(seed, used_entity_ids)
+
+         entity = {
+             "entity_id": seed,
+             "label": label or seed.replace("-", " ").title(),
+             "kind": kind,
+             "aliases": aliases,
+             "owner_block_id": owner_block_id,
+         }
+         entity_index[seed] = entity
+         entity_order.append(seed)
+
+     clone_cache: dict[tuple[str, str], str] = {}
+     synthetic_index = 0
+
+     for block_index, block in enumerate(blocks, start=1):
+         if not isinstance(block, dict):
+             continue
+         block_id = str(block.get("block_id") or f"block-{block_index}").strip()
+         topics = block.get("topics")
+         topics = topics if isinstance(topics, list) else []
+
+         for topic in topics:
+             if not isinstance(topic, dict):
+                 continue
+
+             related_raw = topic.get("related_entities")
+             related_entities = _coerce_text_list(related_raw)
+             if not related_entities:
+                 topic["related_entities"] = []
+                 continue
+
+             repaired_related: list[str] = []
+             for raw_entity in related_entities:
+                 entity_seed = _slug_token(raw_entity, "entity")
+                 entity_id = entity_seed
+
+                 if entity_id not in entity_index:
+                     synthetic_index += 1
+                     if entity_id in used_entity_ids:
+                         entity_id = _unique_token(f"{entity_seed}-{synthetic_index}", used_entity_ids)
+                     else:
+                         used_entity_ids.add(entity_id)
+                     label = str(raw_entity).strip() or entity_id.replace("-", " ").title()
+                     entity_index[entity_id] = {
+                         "entity_id": entity_id,
+                         "label": label,
+                         "kind": "entity",
+                         "aliases": [label] if label else [],
+                         "owner_block_id": block_id,
+                     }
+                     entity_order.append(entity_id)
+                     repairs["created_entities"] += 1
+                     repairs["owner_assignments"] += 1
+
+                 entity = entity_index[entity_id]
+                 owner_block_id = str(entity.get("owner_block_id", "")).strip()
+
+                 if owner_block_id and owner_block_id != block_id:
+                     cache_key = (entity_id, block_id)
+                     clone_id = clone_cache.get(cache_key, "")
+                     if not clone_id:
+                         clone_id = _unique_token(
+                             f"{entity_id}--{_slug_token(block_id, 'block')}",
+                             used_entity_ids,
+                         )
+                         clone = dict(entity)
+                         clone["entity_id"] = clone_id
+                         clone["owner_block_id"] = block_id
+                         clone["aliases"] = _normalized_aliases(
+                             str(clone.get("label", "")).strip(),
+                             clone.get("aliases"),
+                         )
+                         entity_index[clone_id] = clone
+                         entity_order.append(clone_id)
+                         clone_cache[cache_key] = clone_id
+                         repairs["created_entities"] += 1
+                     repaired_related.append(clone_id)
+                     repairs["cross_block_relinks"] += 1
+                     continue
+
+                 if not owner_block_id:
+                     entity["owner_block_id"] = block_id
+                     repairs["owner_assignments"] += 1
+
+                 repaired_related.append(entity_id)
+
+             deduped_related: list[str] = []
+             for entity_id in repaired_related:
+                 if entity_id not in deduped_related:
+                     deduped_related.append(entity_id)
+             topic["related_entities"] = deduped_related
+
+     for entity_id in entity_order:
+         entity = entity_index.get(entity_id)
+         if not isinstance(entity, dict):
+             continue
+         owner_block_id = str(entity.get("owner_block_id", "")).strip()
+         if owner_block_id and owner_block_id not in block_id_set:
+             entity["owner_block_id"] = ""
+             repairs["unknown_owner_resets"] += 1
+         entity["aliases"] = _normalized_aliases(str(entity.get("label", "")).strip(), entity.get("aliases"))
+         entity["kind"] = str(entity.get("kind", "")).strip() or "entity"
+
+     agenda["entity_registry"] = [entity_index[entity_id] for entity_id in entity_order]
+     payload["research_agenda"] = agenda
+
+     repairs["applied"] = any(
+         int(repairs[key]) > 0
+         for key in (
+             "generated_block_ids",
+             "created_entities",
+             "owner_assignments",
+             "cross_block_relinks",
+             "unknown_owner_resets",
+         )
+     )
+     return repairs
+
+
  def ensure_brownfield_mode(data: dict[str, Any]) -> None:
      state = data.get("state")
      state = state if isinstance(state, dict) else {}
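To see what the new repair pass does end to end, here is a minimal driver sketch. It assumes `repair_research_entity_links` and its helpers from the hunk above are in scope; the payload below is invented for illustration.

```python
# Minimal driver sketch for the repair pass above. Assumes
# repair_research_entity_links (and its helpers) from this diff are in scope;
# the payload is illustrative, not taken from a real project.
payload = {
    "research_agenda": {
        "blocks": [
            {
                "block_id": "block-auth",
                "topics": [
                    {"topic_id": "t1", "title": "Session handling",
                     "related_entities": ["Auth Service"]},
                ],
            },
            {
                # Missing block_id: the repair pass generates "block-2".
                "topics": [
                    # Cross-block reference: "Auth Service" is owned by
                    # block-auth, so a per-block clone id is created here.
                    {"topic_id": "t2", "title": "Token storage",
                     "related_entities": ["Auth Service"]},
                ],
            },
        ],
        "entity_registry": [
            {"entity_id": "auth-service", "label": "Auth Service",
             "kind": "service", "owner_block_id": "block-auth"},
        ],
    },
}

summary = repair_research_entity_links(payload)
# Expected: applied=True, generated_block_ids=1, cross_block_relinks=1, and
# created_entities=1 for the clone (an id such as "auth-service--block-2").
print(summary)
print(payload["research_agenda"]["blocks"][1]["topics"][0]["related_entities"])
```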
@@ -416,6 +643,7 @@ def ensure_brownfield_mode(data: dict[str, Any]) -> None:
  def complete_flow(args: argparse.Namespace, project_root: Path, data: dict[str, Any]) -> dict[str, Any]:
      ensure_brownfield_mode(data)
      payload = parse_payload(args, project_root)
+     repair_summary = repair_research_entity_links(payload)
      normalized = normalize_ideation_research(payload, require_topics=True)
      normalized = reset_research_execution(normalized)
      data["ideation"] = normalized
@@ -474,6 +702,7 @@ def complete_flow(args: argparse.Namespace, project_root: Path, data: dict[str,
              "topic_count": int(summary.get("topic_count", 0)),
              "entity_count": int(summary.get("entity_count", 0)),
          },
+         "payload_repairs": repair_summary,
          "next_route": data.get("workflow", {}).get("next_route", {}),
      }

@@ -0,0 +1,215 @@
+ #!/usr/bin/env python3
+ """Run shared Cadence subskill entry gates and emit a single JSON payload.
+
+ This helper centralizes repeated subskill preflight steps:
+ - resolve project root
+ - resolve cadence scripts dir
+ - run repo status gate
+ - optionally assert workflow route
+ - optionally read workflow state
+ """
+
+ from __future__ import annotations
+
+ import argparse
+ import json
+ import subprocess
+ import sys
+ from pathlib import Path
+ from typing import Any
+
+ from project_root import resolve_project_root, write_project_root_hint
+
+
+ SCRIPT_DIR = Path(__file__).resolve().parent
+ RESOLVE_SCRIPTS_DIR_SCRIPT = SCRIPT_DIR / "resolve-project-scripts-dir.py"
+
+
+ def run_command(command: list[str], *, cwd: Path | None = None) -> subprocess.CompletedProcess[str]:
+     return subprocess.run(
+         command,
+         cwd=str(cwd) if cwd else None,
+         capture_output=True,
+         text=True,
+         check=False,
+     )
+
+
+ def parse_args() -> argparse.Namespace:
+     parser = argparse.ArgumentParser(
+         description="Run shared Cadence subskill entry gates.",
+     )
+     parser.add_argument(
+         "--project-root",
+         default="",
+         help="Explicit project root path override.",
+     )
+     parser.add_argument(
+         "--require-cadence",
+         action="store_true",
+         help="Require .cadence to exist while resolving project root.",
+     )
+     parser.add_argument(
+         "--assert-skill-name",
+         default="",
+         help="Optional skill name to assert against workflow route.",
+     )
+     parser.add_argument(
+         "--allow-complete",
+         action="store_true",
+         help="Allow route assertion success when workflow is already complete.",
+     )
+     parser.add_argument(
+         "--include-workflow-state",
+         action="store_true",
+         help="Include read-workflow-state output in the response payload.",
+     )
+     parser.add_argument(
+         "--remote-policy",
+         choices=("any", "github"),
+         default="any",
+         help="Remote policy for repo-enabled detection.",
+     )
+     parser.add_argument(
+         "--set-local-only",
+         action="store_true",
+         help="Pass --set-local-only to check-project-repo-status.",
+     )
+     return parser.parse_args()
+
+
+ def fail(message: str, *, code: int = 1) -> None:
+     print(message, file=sys.stderr)
+     raise SystemExit(code)
+
+
+ def load_json_output(
+     command: list[str],
+     *,
+     error_label: str,
+     cwd: Path | None = None,
+ ) -> dict[str, Any]:
+     result = run_command(command, cwd=cwd)
+     if result.returncode != 0:
+         detail = result.stderr.strip() or result.stdout.strip() or error_label
+         fail(detail, code=result.returncode)
+
+     raw = result.stdout.strip()
+     if not raw:
+         fail(f"{error_label}: EMPTY_STDOUT")
+
+     try:
+         payload = json.loads(raw)
+     except json.JSONDecodeError as exc:
+         fail(f"{error_label}: INVALID_JSON: {exc}")
+
+     if not isinstance(payload, dict):
+         fail(f"{error_label}: PAYLOAD_MUST_BE_OBJECT")
+     return payload
+
+
+ def resolve_scripts_dir(project_root: Path) -> str:
+     result = run_command(
+         [
+             sys.executable,
+             str(RESOLVE_SCRIPTS_DIR_SCRIPT),
+             "--project-root",
+             str(project_root),
+         ]
+     )
+     if result.returncode != 0:
+         detail = result.stderr.strip() or result.stdout.strip() or "MISSING_CADENCE_SCRIPTS_DIR"
+         fail(detail, code=result.returncode)
+
+     scripts_dir = result.stdout.strip()
+     if not scripts_dir:
+         fail("MISSING_CADENCE_SCRIPTS_DIR")
+
+     scripts_path = Path(scripts_dir)
+     if not scripts_path.is_dir():
+         fail("INVALID_CADENCE_SCRIPTS_DIR")
+     return str(scripts_path)
+
+
+ def main() -> int:
+     args = parse_args()
+     explicit_project_root = args.project_root.strip() or None
+
+     try:
+         project_root, project_root_source = resolve_project_root(
+             script_dir=SCRIPT_DIR,
+             explicit_project_root=explicit_project_root,
+             require_cadence=bool(args.require_cadence),
+             allow_hint=True,
+         )
+     except ValueError as exc:
+         fail(str(exc))
+
+     write_project_root_hint(SCRIPT_DIR, project_root)
+     scripts_dir = resolve_scripts_dir(project_root)
+
+     repo_status_command = [
+         sys.executable,
+         str(Path(scripts_dir) / "check-project-repo-status.py"),
+         "--project-root",
+         str(project_root),
+         "--remote-policy",
+         str(args.remote_policy),
+     ]
+     if args.set_local_only:
+         repo_status_command.append("--set-local-only")
+
+     repo_status = load_json_output(
+         repo_status_command,
+         error_label="CHECK_PROJECT_REPO_STATUS_FAILED",
+     )
+
+     route_assertion: dict[str, Any] | None = None
+     assert_skill_name = str(args.assert_skill_name).strip()
+     if assert_skill_name:
+         route_command = [
+             sys.executable,
+             str(Path(scripts_dir) / "assert-workflow-route.py"),
+             "--skill-name",
+             assert_skill_name,
+             "--project-root",
+             str(project_root),
+         ]
+         if args.allow_complete:
+             route_command.append("--allow-complete")
+         route_assertion = load_json_output(
+             route_command,
+             error_label="WORKFLOW_ROUTE_CHECK_FAILED",
+         )
+
+     workflow_state: dict[str, Any] | None = None
+     if args.include_workflow_state:
+         workflow_state = load_json_output(
+             [
+                 sys.executable,
+                 str(Path(scripts_dir) / "read-workflow-state.py"),
+                 "--project-root",
+                 str(project_root),
+             ],
+             error_label="WORKFLOW_STATE_READ_FAILED",
+         )
+
+     payload: dict[str, Any] = {
+         "status": "ok",
+         "project_root": str(project_root),
+         "project_root_source": project_root_source,
+         "cadence_scripts_dir": scripts_dir,
+         "repo_enabled": bool(repo_status.get("repo_enabled", False)),
+         "repo_status": repo_status,
+     }
+     if route_assertion is not None:
+         payload["route_assertion"] = route_assertion
+     if workflow_state is not None:
+         payload["workflow_state"] = workflow_state
+
+     print(json.dumps(payload))
+     return 0
+
+
+ if __name__ == "__main__":
+     raise SystemExit(main())
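A sketch of how a subskill might consume this new entry gate. The relative script path follows the SKILL.md steps below, and the flags and output keys come from the argparse setup and payload construction above; treat the exact shape of `workflow_state` as an assumption.

```python
import json
import subprocess
import sys

# Run the shared entry gate once at conversation start (path per SKILL.md).
result = subprocess.run(
    [
        sys.executable,
        "../../scripts/run-skill-entry-gate.py",
        "--require-cadence",
        "--assert-skill-name", "brownfield-documenter",
        "--include-workflow-state",
    ],
    capture_output=True,
    text=True,
    check=False,
)
if result.returncode != 0:
    # Gate failures are blocking; surface the script's own error verbatim.
    sys.exit(result.stderr.strip() or result.stdout.strip() or result.returncode)

gate = json.loads(result.stdout)
project_root = gate["project_root"]          # PROJECT_ROOT
scripts_dir = gate["cadence_scripts_dir"]    # CADENCE_SCRIPTS_DIR
push_enabled = gate["repo_enabled"]          # False means local-only commits
workflow_state = gate.get("workflow_state")  # present with --include-workflow-state
```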
@@ -5,41 +5,58 @@ description: Perform deep evidence-based analysis of an existing codebase and pe

  # Brownfield Documenter

- 1. Resolve project root by running `python3 ../../scripts/resolve-project-root.py --require-cadence` and store stdout in `PROJECT_ROOT`.
- 2. Resolve helper scripts dir by running `python3 ../../scripts/resolve-project-scripts-dir.py --project-root "$PROJECT_ROOT"` and store stdout in `CADENCE_SCRIPTS_DIR`.
- 3. Run `python3 "$CADENCE_SCRIPTS_DIR/check-project-repo-status.py" --project-root "$PROJECT_ROOT"` and parse the JSON output. Treat `repo_enabled` as the authoritative push mode (`false` means local-only commits).
- 4. Run brownfield discovery context extraction:
+ 1. Run shared skill entry gates once at conversation start:
+    - `python3 ../../scripts/run-skill-entry-gate.py --require-cadence`
+    - Parse JSON and store `PROJECT_ROOT` from `project_root`, `CADENCE_SCRIPTS_DIR` from `cadence_scripts_dir`, and push mode from `repo_enabled` (`false` means local-only commits).
+    - Never manually edit `.cadence/cadence.json`; all Cadence state writes must go through Cadence scripts.
+ 2. Run brownfield discovery context extraction:
     - `python3 "$CADENCE_SCRIPTS_DIR/run-brownfield-documentation.py" --project-root "$PROJECT_ROOT" discover`
- 5. `run-brownfield-documentation.py` performs workflow route assertion internally; if assertion fails, stop and surface the exact error.
- 6. Treat discover output as helper context only:
+ 3. `run-brownfield-documentation.py` performs workflow route assertion internally; if assertion fails, stop and surface the exact error.
+ 4. Treat discover output as helper context only:
     - It must not be treated as final documentation.
     - Use it to choose where to inspect deeply in the repository.
- 7. Perform AI-led deep investigation of the existing project using repository evidence:
+ 5. During normal brownfield documentation, do not read Cadence script source files (for example `run-brownfield-documentation.py`, `ideation_research.py`, `get-ideation.py`) to infer schema or workflow details. Only inspect Cadence internals if the user explicitly asks to debug Cadence itself.
+ 6. Perform AI-led deep investigation of the existing project using repository evidence:
     - inspect key docs and manifests
     - inspect runtime entrypoints and major code paths
     - inspect test surfaces, tooling, CI, and deployment configuration when present
     - infer objective, core outcome, scope boundaries, constraints, and risks from evidence
- 8. Build a finalized ideation payload using the same structure as greenfield ideation
-    - payload root is full `ideation` object
-    - `research_agenda` is required; non-research planning fields are optional in brownfield documentation
-    - include only fields that are explicitly evidenced in the repository or confirmed by the user
-    - do not invent `in_scope`, `out_of_scope`, `implementation_approach`, `milestones`, or `constraints` when evidence is missing
-    - when important details are unknown, ask a focused clarification question or omit those fields and capture uncertainty in optional `assumptions` / `open_questions`
-    - preferred evidence-backed core fields when available: `objective`, `core_outcome`, `target_audience`, `core_experience`, `risks`, `success_signals`
-    - include required `research_agenda` with `blocks`, `entity_registry`, and `topic_index` (`topic_index` can be `{}` in payload; normalization rebuilds it)
-    - each topic must include `topic_id`, `title`, `category`, `priority`, `why_it_matters`, `research_questions`, `keywords`, `tags`, `related_entities`
-    - each entity must include `entity_id`, `label`, `kind`, `aliases`, `owner_block_id`
-    - entity/topic relationships must remain block-consistent
- 9. Persist finalized ideation without creating extra project files:
+ 7. Use this canonical ideation payload contract and do not inspect Cadence Python scripts to infer schema during normal operation:
+    - Payload root must be a JSON object representing the full `ideation` object.
+    - `research_agenda` is required for brownfield completion.
+    - Non-research planning fields are optional in brownfield documentation and must be evidence-backed or user-confirmed.
+    - Do not invent unknown planning details. If information is missing, ask one focused clarification question or omit the field and record uncertainty in optional `assumptions` / `open_questions`.
+ 8. Build the brownfield payload with these rules:
+    - Preferred optional top-level fields when available: `objective`, `core_outcome`, `target_audience`, `core_experience`, `risks`, `success_signals`, `assumptions`, `open_questions`.
+    - Optional planning fields: `in_scope`, `out_of_scope`, `implementation_approach`, `milestones`, `constraints`.
+    - Required `research_agenda` keys:
+      - `blocks` (array; must contain at least one topic total for completion)
+      - `entity_registry` (array; can be empty)
+      - `topic_index` (object; set `{}` in payload, rebuilt during normalization)
+    - Each `research_agenda.blocks[]` item should include:
+      - `block_id`, `title`, `rationale`, `tags`, `topics`
+    - Each `topics[]` item should include:
+      - `topic_id`, `title`, `category`, `priority` (`low|medium|high`), `why_it_matters`, `research_questions`, `keywords`, `tags`, `related_entities`
+    - Each `entity_registry[]` item should include:
+      - `entity_id`, `label`, `kind`, `aliases`, `owner_block_id`
+    - Relationship rule:
+      - every id listed in topic `related_entities` must exist in `entity_registry`, and that entity's `owner_block_id` must match the topic block.
+ 9. Sparse payloads are allowed as long as `research_agenda` has at least one topic:
+    - missing topic `category` defaults to `general`
+    - missing topic `priority` defaults to `medium`
+    - missing list fields default to `[]`
+    - empty `entity_registry` is valid
+ 10. Persist finalized ideation without creating extra project files:
     - pipe payload JSON directly to stdin and run:
     - `python3 "$CADENCE_SCRIPTS_DIR/run-brownfield-documentation.py" --project-root "$PROJECT_ROOT" complete --stdin`
- 10. Verify persistence by running:
+    - `complete` automatically repairs common entity-linkage mistakes (for example cross-block entity references) and returns a `payload_repairs` summary.
+ 11. Verify persistence by running:
     - `python3 "$CADENCE_SCRIPTS_DIR/get-ideation.py" --project-root "$PROJECT_ROOT"`
- 11. Mention that granular research queries are available via:
+ 12. Mention that granular research queries are available via:
     - `python3 "$CADENCE_SCRIPTS_DIR/query-ideation-research.py" --project-root "$PROJECT_ROOT"`
- 12. End successful completion replies with this exact line:
+ 13. End successful completion replies with this exact line:
     - `Start a new chat with a new agent and say "plan my project".`
- 13. At end of this successful skill conversation, run `cd "$PROJECT_ROOT" && python3 "$CADENCE_SCRIPTS_DIR/finalize-skill-checkpoint.py" --scope brownfield-documenter --checkpoint documentation-captured --paths .`.
- 14. If `finalize-skill-checkpoint.py` returns `status=no_changes`, continue without failure.
- 15. If `finalize-skill-checkpoint.py` reports an error, stop and surface it verbatim.
- 16. In normal user-facing updates, report brownfield findings and persisted ideation outcomes without raw command traces or internal routing details unless explicitly requested.
+ 14. At end of this successful skill conversation, run `cd "$PROJECT_ROOT" && python3 "$CADENCE_SCRIPTS_DIR/finalize-skill-checkpoint.py" --scope brownfield-documenter --checkpoint documentation-captured --paths .`.
+ 15. If `finalize-skill-checkpoint.py` returns `status=no_changes`, continue without failure.
+ 16. If `finalize-skill-checkpoint.py` reports an error, stop and surface it verbatim.
+ 17. In normal user-facing updates, report brownfield findings and persisted ideation outcomes without raw command traces or internal routing details unless explicitly requested.
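For reference, here is a minimal sketch of a payload that satisfies the contract above: one block, one topic, and one matching entity. Every field value is invented for illustration; only the structure follows the documented contract.

```python
# Minimal conforming brownfield payload sketch; values are illustrative.
import json

payload = {
    "objective": "Keep the existing CLI installer reliable across tool updates.",
    "research_agenda": {
        "blocks": [
            {
                "block_id": "block-install-flow",
                "title": "Install flow",
                "rationale": "Core behavior observed in the entrypoints.",
                "tags": ["install"],
                "topics": [
                    {
                        "topic_id": "topic-skill-dirs",
                        "title": "Supported skill directory layouts",
                        "category": "architecture",
                        "priority": "high",
                        "why_it_matters": "Install targets differ per AI tool.",
                        "research_questions": ["Which tools are auto-detected?"],
                        "keywords": ["skill directory"],
                        "tags": ["install"],
                        # Must reference an entity owned by this same block.
                        "related_entities": ["entity-installer"],
                    }
                ],
            }
        ],
        "entity_registry": [
            {
                "entity_id": "entity-installer",
                "label": "Installer",
                "kind": "component",
                "aliases": ["Installer"],
                "owner_block_id": "block-install-flow",
            }
        ],
        "topic_index": {},  # rebuilt during normalization
    },
}

# Pipe to: python3 "$CADENCE_SCRIPTS_DIR/run-brownfield-documentation.py" \
#   --project-root "$PROJECT_ROOT" complete --stdin
print(json.dumps(payload))
```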
@@ -1,4 +1,16 @@
  interface:
    display_name: "Brownfield Documenter"
    short_description: "Document existing project context into ideation structures"
-   default_prompt: "Resolve PROJECT_ROOT with scripts/resolve-project-root.py --require-cadence, then resolve CADENCE_SCRIPTS_DIR with scripts/resolve-project-scripts-dir.py --project-root \"$PROJECT_ROOT\". Run scripts/check-project-repo-status.py --project-root \"$PROJECT_ROOT\" and treat repo_enabled as authoritative for push/local-only behavior. Run scripts/run-brownfield-documentation.py --project-root \"$PROJECT_ROOT\" discover first (this command performs route assertion internally) and use the output only as helper context to guide deeper AI-led repository investigation. Perform in-depth evidence-based analysis across docs, manifests, entrypoints, major code paths, and test/tooling/deploy surfaces. Build a full ideation payload using the same contract as greenfield ideation, but require only research_agenda for brownfield documentation; non-research planning fields (for example in_scope/out_of_scope/implementation_approach/milestones/constraints) are optional and must be included only when evidenced or user-confirmed. Do not fabricate unknown details; ask focused clarifying questions for high-impact gaps or omit unknown fields and capture uncertainty in assumptions/open_questions. Persist by piping JSON to scripts/run-brownfield-documentation.py --project-root \"$PROJECT_ROOT\" complete --stdin. Do not create extra project tracking files for this flow; persist state in .cadence/cadence.json. Verify with scripts/get-ideation.py and mention scripts/query-ideation-research.py for detailed queries. End successful completion replies with: Start a new chat with a new agent and say \"plan my project\". At end of successful execution, run scripts/finalize-skill-checkpoint.py from PROJECT_ROOT with --scope brownfield-documenter --checkpoint documentation-captured --paths .; allow status=no_changes and surface failures verbatim. Keep user-facing replies concise and do not expose internal command traces unless explicitly requested."
+   default_prompt: >-
+     Follow skills/brownfield-documenter/SKILL.md for exact behavior and payload contract.
+     Run scripts/run-skill-entry-gate.py --require-cadence, then use its JSON output for PROJECT_ROOT
+     (project_root), CADENCE_SCRIPTS_DIR (cadence_scripts_dir), and push/local-only mode (repo_enabled).
+     Never edit .cadence/cadence.json manually.
+     Run scripts/run-brownfield-documentation.py --project-root "$PROJECT_ROOT" discover, perform evidence-based
+     analysis, then persist with scripts/run-brownfield-documentation.py --project-root "$PROJECT_ROOT" complete --stdin.
+     Require a research_agenda with at least one topic, verify with scripts/get-ideation.py, and mention
+     scripts/query-ideation-research.py for granular queries.
+     End success with: Start a new chat with a new agent and say "plan my project".
+     Finalize from PROJECT_ROOT with scripts/finalize-skill-checkpoint.py --scope brownfield-documenter
+     --checkpoint documentation-captured --paths . (allow status=no_changes; surface failures verbatim).
+     Keep replies concise and hide internal traces unless asked.
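Finally, a sketch of the documented finalize step as a script. `PROJECT_ROOT` and `CADENCE_SCRIPTS_DIR` would come from the entry gate and are stubbed here; the output format of `finalize-skill-checkpoint.py` is not shown in this diff, so the `status=no_changes` check below is an assumption.

```python
import subprocess
import sys
from pathlib import Path

# Illustrative values; in practice both come from run-skill-entry-gate.py.
PROJECT_ROOT = "/path/to/project"
CADENCE_SCRIPTS_DIR = "/path/to/cadence/scripts"

result = subprocess.run(
    [
        sys.executable,
        str(Path(CADENCE_SCRIPTS_DIR) / "finalize-skill-checkpoint.py"),
        "--scope", "brownfield-documenter",
        "--checkpoint", "documentation-captured",
        "--paths", ".",
    ],
    cwd=PROJECT_ROOT,  # equivalent to the documented `cd "$PROJECT_ROOT" && ...`
    capture_output=True,
    text=True,
    check=False,
)
if result.returncode != 0:
    # Checkpoint or push failures are blocking; surface them verbatim.
    sys.exit(result.stderr.strip() or result.stdout.strip())

# Assumption: the script reports its status on stdout; status=no_changes is
# allowed and must not be treated as a failure.
if "no_changes" in result.stdout:
    print("Checkpoint skipped: nothing new to commit.")
```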