aw-ecc 1.4.31 → 1.4.47

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (259)
  1. package/.claude-plugin/plugin.json +1 -1
  2. package/.codex/hooks/aw-post-tool-use.sh +8 -2
  3. package/.codex/hooks/aw-session-start.sh +11 -4
  4. package/.codex/hooks/aw-stop.sh +8 -2
  5. package/.codex/hooks/aw-user-prompt-submit.sh +10 -2
  6. package/.codex/hooks.json +8 -8
  7. package/.cursor/INSTALL.md +7 -5
  8. package/.cursor/hooks/adapter.js +41 -4
  9. package/.cursor/hooks/after-agent-response.js +62 -0
  10. package/.cursor/hooks/before-submit-prompt.js +7 -1
  11. package/.cursor/hooks/post-tool-use-failure.js +21 -0
  12. package/.cursor/hooks/post-tool-use.js +39 -0
  13. package/.cursor/hooks/shared/aw-phase-definitions.js +53 -0
  14. package/.cursor/hooks/shared/aw-phase-runner.js +3 -1
  15. package/.cursor/hooks/subagent-start.js +22 -4
  16. package/.cursor/hooks/subagent-stop.js +18 -1
  17. package/.cursor/hooks.json +23 -2
  18. package/.opencode/package.json +1 -1
  19. package/AGENTS.md +3 -3
  20. package/README.md +5 -5
  21. package/commands/adk.md +52 -0
  22. package/commands/build.md +22 -9
  23. package/commands/deploy.md +12 -0
  24. package/commands/execute.md +9 -0
  25. package/commands/feature.md +333 -0
  26. package/commands/investigate.md +18 -5
  27. package/commands/plan.md +23 -9
  28. package/commands/publish.md +65 -0
  29. package/commands/review.md +12 -0
  30. package/commands/ship.md +12 -0
  31. package/commands/test.md +12 -0
  32. package/commands/verify.md +9 -0
  33. package/hooks/hooks.json +36 -0
  34. package/manifests/install-components.json +8 -0
  35. package/manifests/install-modules.json +83 -0
  36. package/manifests/install-profiles.json +7 -0
  37. package/package.json +1 -1
  38. package/scripts/ci/validate-rules.js +51 -0
  39. package/scripts/cursor-aw-home/hooks.json +23 -2
  40. package/scripts/cursor-aw-hooks/adapter.js +41 -4
  41. package/scripts/cursor-aw-hooks/before-submit-prompt.js +7 -1
  42. package/scripts/hooks/aw-usage-commit-created.js +32 -0
  43. package/scripts/hooks/aw-usage-post-tool-use-failure.js +56 -0
  44. package/scripts/hooks/aw-usage-post-tool-use.js +242 -0
  45. package/scripts/hooks/aw-usage-prompt-submit.js +112 -0
  46. package/scripts/hooks/aw-usage-session-start.js +48 -0
  47. package/scripts/hooks/aw-usage-stop.js +182 -0
  48. package/scripts/hooks/aw-usage-telemetry-send.js +84 -0
  49. package/scripts/hooks/cost-tracker.js +3 -23
  50. package/scripts/hooks/shared/aw-phase-definitions.js +53 -0
  51. package/scripts/hooks/shared/aw-phase-runner.js +3 -1
  52. package/scripts/lib/aw-hook-contract.js +2 -2
  53. package/scripts/lib/aw-pricing.js +306 -0
  54. package/scripts/lib/aw-usage-telemetry.js +472 -0
  55. package/scripts/lib/codex-hook-config.js +8 -8
  56. package/scripts/lib/cursor-hook-config.js +25 -10
  57. package/scripts/lib/install-targets/codex-home.js +7 -0
  58. package/scripts/lib/install-targets/cursor-project.js +3 -0
  59. package/scripts/lib/install-targets/helpers.js +20 -3
  60. package/skills/aw-adk/SKILL.md +317 -0
  61. package/skills/aw-adk/agents/analyzer.md +113 -0
  62. package/skills/aw-adk/agents/comparator.md +113 -0
  63. package/skills/aw-adk/agents/grader.md +115 -0
  64. package/skills/aw-adk/assets/eval_review.html +76 -0
  65. package/skills/aw-adk/eval-viewer/generate_review.py +164 -0
  66. package/skills/aw-adk/eval-viewer/viewer.html +181 -0
  67. package/skills/aw-adk/evals/eval-colocated-placement.md +84 -0
  68. package/skills/aw-adk/evals/eval-create-agent.md +90 -0
  69. package/skills/aw-adk/evals/eval-create-command.md +98 -0
  70. package/skills/aw-adk/evals/eval-create-eval.md +89 -0
  71. package/skills/aw-adk/evals/eval-create-rule.md +99 -0
  72. package/skills/aw-adk/evals/eval-create-skill.md +97 -0
  73. package/skills/aw-adk/evals/eval-delete-agent.md +79 -0
  74. package/skills/aw-adk/evals/eval-delete-command.md +89 -0
  75. package/skills/aw-adk/evals/eval-delete-rule.md +86 -0
  76. package/skills/aw-adk/evals/eval-delete-skill.md +90 -0
  77. package/skills/aw-adk/evals/eval-meta-eval-coverage.md +78 -0
  78. package/skills/aw-adk/evals/eval-meta-eval-determinism.md +81 -0
  79. package/skills/aw-adk/evals/eval-meta-eval-false-pass.md +81 -0
  80. package/skills/aw-adk/evals/eval-score-accuracy.md +95 -0
  81. package/skills/aw-adk/evals/eval-type-redirect.md +68 -0
  82. package/skills/aw-adk/evals/evals.json +96 -0
  83. package/skills/aw-adk/references/artifact-wiring.md +162 -0
  84. package/skills/aw-adk/references/cross-ide-mapping.md +71 -0
  85. package/skills/aw-adk/references/eval-placement-guide.md +183 -0
  86. package/skills/aw-adk/references/external-resources.md +75 -0
  87. package/skills/aw-adk/references/getting-started.md +66 -0
  88. package/skills/aw-adk/references/registry-structure.md +152 -0
  89. package/skills/aw-adk/references/rubric-agent.md +36 -0
  90. package/skills/aw-adk/references/rubric-command.md +36 -0
  91. package/skills/aw-adk/references/rubric-eval.md +36 -0
  92. package/skills/aw-adk/references/rubric-meta-eval.md +132 -0
  93. package/skills/aw-adk/references/rubric-rule.md +36 -0
  94. package/skills/aw-adk/references/rubric-skill.md +36 -0
  95. package/skills/aw-adk/references/schemas.md +222 -0
  96. package/skills/aw-adk/references/template-agent.md +251 -0
  97. package/skills/aw-adk/references/template-command.md +279 -0
  98. package/skills/aw-adk/references/template-eval.md +176 -0
  99. package/skills/aw-adk/references/template-rule.md +119 -0
  100. package/skills/aw-adk/references/template-skill.md +123 -0
  101. package/skills/aw-adk/references/type-classifier.md +98 -0
  102. package/skills/aw-adk/references/writing-good-agents.md +227 -0
  103. package/skills/aw-adk/references/writing-good-commands.md +258 -0
  104. package/skills/aw-adk/references/writing-good-evals.md +271 -0
  105. package/skills/aw-adk/references/writing-good-rules.md +214 -0
  106. package/skills/aw-adk/references/writing-good-skills.md +159 -0
  107. package/skills/aw-adk/scripts/aggregate-benchmark.py +190 -0
  108. package/skills/aw-adk/scripts/lint-artifact.sh +211 -0
  109. package/skills/aw-adk/scripts/score-artifact.sh +179 -0
  110. package/skills/aw-adk/scripts/trigger-eval.py +192 -0
  111. package/skills/aw-build/SKILL.md +19 -2
  112. package/skills/aw-deploy/SKILL.md +65 -3
  113. package/skills/aw-design/SKILL.md +156 -0
  114. package/skills/aw-design/references/highrise-tokens.md +394 -0
  115. package/skills/aw-design/references/micro-interactions.md +76 -0
  116. package/skills/aw-design/references/prompt-template.md +160 -0
  117. package/skills/aw-design/references/quality-checklist.md +70 -0
  118. package/skills/aw-design/references/self-review.md +497 -0
  119. package/skills/aw-design/references/stitch-workflow.md +127 -0
  120. package/skills/aw-feature/SKILL.md +293 -0
  121. package/skills/aw-investigate/SKILL.md +17 -0
  122. package/skills/aw-plan/SKILL.md +34 -3
  123. package/skills/aw-publish/SKILL.md +300 -0
  124. package/skills/aw-publish/evals/eval-confirmation-gate.md +60 -0
  125. package/skills/aw-publish/evals/eval-intent-detection.md +111 -0
  126. package/skills/aw-publish/evals/eval-push-modes.md +67 -0
  127. package/skills/aw-publish/evals/eval-rules-push.md +60 -0
  128. package/skills/aw-publish/evals/evals.json +29 -0
  129. package/skills/aw-publish/references/push-modes.md +38 -0
  130. package/skills/aw-review/SKILL.md +88 -9
  131. package/skills/aw-rules-review/SKILL.md +124 -0
  132. package/skills/aw-rules-review/agents/openai.yaml +3 -0
  133. package/skills/aw-rules-review/scripts/generate-review-template.mjs +323 -0
  134. package/skills/aw-ship/SKILL.md +16 -0
  135. package/skills/aw-spec/SKILL.md +15 -0
  136. package/skills/aw-tasks/SKILL.md +15 -0
  137. package/skills/aw-test/SKILL.md +16 -0
  138. package/skills/aw-yolo/SKILL.md +4 -0
  139. package/skills/diagnose/SKILL.md +121 -0
  140. package/skills/diagnose/scripts/hitl-loop.template.sh +41 -0
  141. package/skills/finish-only-when-green/SKILL.md +265 -0
  142. package/skills/grill-me/SKILL.md +24 -0
  143. package/skills/grill-with-docs/SKILL.md +92 -0
  144. package/skills/grill-with-docs/adr-format.md +47 -0
  145. package/skills/grill-with-docs/context-format.md +67 -0
  146. package/skills/improve-codebase-architecture/SKILL.md +75 -0
  147. package/skills/improve-codebase-architecture/deepening.md +37 -0
  148. package/skills/improve-codebase-architecture/interface-design.md +44 -0
  149. package/skills/improve-codebase-architecture/language.md +53 -0
  150. package/skills/local-ghl-setup-from-screenshot/SKILL.md +538 -0
  151. package/skills/tdd/SKILL.md +115 -0
  152. package/skills/tdd/deep-modules.md +33 -0
  153. package/skills/tdd/interface-design.md +31 -0
  154. package/skills/tdd/mocking.md +59 -0
  155. package/skills/tdd/refactoring.md +10 -0
  156. package/skills/tdd/tests.md +61 -0
  157. package/skills/to-issues/SKILL.md +62 -0
  158. package/skills/to-prd/SKILL.md +75 -0
  159. package/skills/using-aw-skills/SKILL.md +170 -237
  160. package/skills/using-aw-skills/hooks/session-start.sh +11 -41
  161. package/skills/zoom-out/SKILL.md +24 -0
  162. package/.cursor/rules/common-agents.md +0 -53
  163. package/.cursor/rules/common-aw-routing.md +0 -43
  164. package/.cursor/rules/common-coding-style.md +0 -52
  165. package/.cursor/rules/common-development-workflow.md +0 -33
  166. package/.cursor/rules/common-git-workflow.md +0 -28
  167. package/.cursor/rules/common-hooks.md +0 -34
  168. package/.cursor/rules/common-patterns.md +0 -35
  169. package/.cursor/rules/common-performance.md +0 -59
  170. package/.cursor/rules/common-security.md +0 -33
  171. package/.cursor/rules/common-testing.md +0 -33
  172. package/.cursor/skills/api-and-interface-design/SKILL.md +0 -75
  173. package/.cursor/skills/article-writing/SKILL.md +0 -85
  174. package/.cursor/skills/aw-brainstorm/SKILL.md +0 -115
  175. package/.cursor/skills/aw-build/SKILL.md +0 -152
  176. package/.cursor/skills/aw-build/evals/build-stage-cases.json +0 -28
  177. package/.cursor/skills/aw-debug/SKILL.md +0 -49
  178. package/.cursor/skills/aw-deploy/SKILL.md +0 -101
  179. package/.cursor/skills/aw-deploy/evals/deploy-stage-cases.json +0 -32
  180. package/.cursor/skills/aw-execute/SKILL.md +0 -47
  181. package/.cursor/skills/aw-execute/references/mode-code.md +0 -47
  182. package/.cursor/skills/aw-execute/references/mode-docs.md +0 -28
  183. package/.cursor/skills/aw-execute/references/mode-infra.md +0 -44
  184. package/.cursor/skills/aw-execute/references/mode-migration.md +0 -58
  185. package/.cursor/skills/aw-execute/references/worker-implementer.md +0 -26
  186. package/.cursor/skills/aw-execute/references/worker-parallel-worker.md +0 -23
  187. package/.cursor/skills/aw-execute/references/worker-quality-reviewer.md +0 -23
  188. package/.cursor/skills/aw-execute/references/worker-spec-reviewer.md +0 -23
  189. package/.cursor/skills/aw-execute/scripts/build-worker-bundle.js +0 -229
  190. package/.cursor/skills/aw-finish/SKILL.md +0 -111
  191. package/.cursor/skills/aw-investigate/SKILL.md +0 -109
  192. package/.cursor/skills/aw-plan/SKILL.md +0 -368
  193. package/.cursor/skills/aw-prepare/SKILL.md +0 -118
  194. package/.cursor/skills/aw-review/SKILL.md +0 -118
  195. package/.cursor/skills/aw-ship/SKILL.md +0 -115
  196. package/.cursor/skills/aw-spec/SKILL.md +0 -104
  197. package/.cursor/skills/aw-tasks/SKILL.md +0 -138
  198. package/.cursor/skills/aw-test/SKILL.md +0 -118
  199. package/.cursor/skills/aw-verify/SKILL.md +0 -51
  200. package/.cursor/skills/aw-yolo/SKILL.md +0 -111
  201. package/.cursor/skills/browser-testing-with-devtools/SKILL.md +0 -81
  202. package/.cursor/skills/bun-runtime/SKILL.md +0 -84
  203. package/.cursor/skills/ci-cd-and-automation/SKILL.md +0 -71
  204. package/.cursor/skills/code-simplification/SKILL.md +0 -74
  205. package/.cursor/skills/content-engine/SKILL.md +0 -88
  206. package/.cursor/skills/context-engineering/SKILL.md +0 -74
  207. package/.cursor/skills/deprecation-and-migration/SKILL.md +0 -75
  208. package/.cursor/skills/documentation-and-adrs/SKILL.md +0 -75
  209. package/.cursor/skills/documentation-lookup/SKILL.md +0 -90
  210. package/.cursor/skills/frontend-slides/SKILL.md +0 -184
  211. package/.cursor/skills/frontend-slides/STYLE_PRESETS.md +0 -330
  212. package/.cursor/skills/frontend-ui-engineering/SKILL.md +0 -68
  213. package/.cursor/skills/git-workflow-and-versioning/SKILL.md +0 -75
  214. package/.cursor/skills/idea-refine/SKILL.md +0 -84
  215. package/.cursor/skills/incremental-implementation/SKILL.md +0 -75
  216. package/.cursor/skills/investor-materials/SKILL.md +0 -96
  217. package/.cursor/skills/investor-outreach/SKILL.md +0 -76
  218. package/.cursor/skills/market-research/SKILL.md +0 -75
  219. package/.cursor/skills/mcp-server-patterns/SKILL.md +0 -67
  220. package/.cursor/skills/nextjs-turbopack/SKILL.md +0 -44
  221. package/.cursor/skills/performance-optimization/SKILL.md +0 -77
  222. package/.cursor/skills/security-and-hardening/SKILL.md +0 -70
  223. package/.cursor/skills/using-aw-skills/SKILL.md +0 -290
  224. package/.cursor/skills/using-aw-skills/evals/skill-trigger-cases.tsv +0 -25
  225. package/.cursor/skills/using-aw-skills/evals/test-skill-triggers.sh +0 -171
  226. package/.cursor/skills/using-aw-skills/hooks/hooks.json +0 -9
  227. package/.cursor/skills/using-aw-skills/hooks/session-start.sh +0 -67
  228. package/.cursor/skills/using-platform-skills/SKILL.md +0 -163
  229. package/.cursor/skills/using-platform-skills/evals/platform-selection-cases.json +0 -52
  230. /package/.cursor/rules/{golang-coding-style.md → golang-coding-style.mdc} +0 -0
  231. /package/.cursor/rules/{golang-hooks.md → golang-hooks.mdc} +0 -0
  232. /package/.cursor/rules/{golang-patterns.md → golang-patterns.mdc} +0 -0
  233. /package/.cursor/rules/{golang-security.md → golang-security.mdc} +0 -0
  234. /package/.cursor/rules/{golang-testing.md → golang-testing.mdc} +0 -0
  235. /package/.cursor/rules/{kotlin-coding-style.md → kotlin-coding-style.mdc} +0 -0
  236. /package/.cursor/rules/{kotlin-hooks.md → kotlin-hooks.mdc} +0 -0
  237. /package/.cursor/rules/{kotlin-patterns.md → kotlin-patterns.mdc} +0 -0
  238. /package/.cursor/rules/{kotlin-security.md → kotlin-security.mdc} +0 -0
  239. /package/.cursor/rules/{kotlin-testing.md → kotlin-testing.mdc} +0 -0
  240. /package/.cursor/rules/{php-coding-style.md → php-coding-style.mdc} +0 -0
  241. /package/.cursor/rules/{php-hooks.md → php-hooks.mdc} +0 -0
  242. /package/.cursor/rules/{php-patterns.md → php-patterns.mdc} +0 -0
  243. /package/.cursor/rules/{php-security.md → php-security.mdc} +0 -0
  244. /package/.cursor/rules/{php-testing.md → php-testing.mdc} +0 -0
  245. /package/.cursor/rules/{python-coding-style.md → python-coding-style.mdc} +0 -0
  246. /package/.cursor/rules/{python-hooks.md → python-hooks.mdc} +0 -0
  247. /package/.cursor/rules/{python-patterns.md → python-patterns.mdc} +0 -0
  248. /package/.cursor/rules/{python-security.md → python-security.mdc} +0 -0
  249. /package/.cursor/rules/{python-testing.md → python-testing.mdc} +0 -0
  250. /package/.cursor/rules/{swift-coding-style.md → swift-coding-style.mdc} +0 -0
  251. /package/.cursor/rules/{swift-hooks.md → swift-hooks.mdc} +0 -0
  252. /package/.cursor/rules/{swift-patterns.md → swift-patterns.mdc} +0 -0
  253. /package/.cursor/rules/{swift-security.md → swift-security.mdc} +0 -0
  254. /package/.cursor/rules/{swift-testing.md → swift-testing.mdc} +0 -0
  255. /package/.cursor/rules/{typescript-coding-style.md → typescript-coding-style.mdc} +0 -0
  256. /package/.cursor/rules/{typescript-hooks.md → typescript-hooks.mdc} +0 -0
  257. /package/.cursor/rules/{typescript-patterns.md → typescript-patterns.mdc} +0 -0
  258. /package/.cursor/rules/{typescript-security.md → typescript-security.mdc} +0 -0
  259. /package/.cursor/rules/{typescript-testing.md → typescript-testing.mdc} +0 -0
@@ -72,6 +72,19 @@ Capture at least:
  - ADR-needed decision when the change has durable architectural impact
  - rollout, migration, or environment constraints when relevant
 
+ ## Human HTML Companion
+
+ Markdown `spec.md` remains canonical for agents.
+ When this helper writes or materially updates `spec.md`, also create or refresh `.aw_docs/features/<feature_slug>/spec.html`. HTML sidecars are required stage outputs, not advisory metadata.
+
+ Delegate to the `aw:echo` subagent with the `technical-spec` profile.
+ Invoking `/aw:plan` or `aw-spec` in default `dual` mode is explicit authorization to spawn exactly one `aw:echo` subagent for HTML companion generation; do not skip HTML only because no direct command is available.
+ Resolve output mode as: explicit user request for Markdown-only -> Markdown-only; otherwise `dual`. `.aw_docs/config.json` and `AW_DOCS_OUTPUT_MODE` may request `dual` or `html`, but must not silently suppress required SDLC HTML sidecars.
+
+ Pass the approved direction, `spec.md`, relevant source paths, risks, alternatives, interfaces, rollout constraints, and validation strategy as the source bundle.
+ Record the colocated sidecar in `state.json` `html_companion_artifacts` with `source_path`, `html_path`, profile, status, `run_ref` when available, publish status, and any explicit Markdown-only skip or fallback reason.
+ Spawn exactly one `aw:echo` subagent and wait for the colocated `.html` sidecar before the final handoff unless the user explicitly asks not to wait. If the harness still cannot spawn `aw:echo`, create a conservative self-contained fallback HTML sidecar in the same turn using the `aw:echo` safety and design contract, record `generated_fallback` plus the blocker, and keep Markdown canonical.
+
  ## Common Rationalizations
 
  | Rationalization | Reality |
@@ -101,6 +114,7 @@ Before handoff, run this inline review:
  6. alternatives and decision-rationale check
  7. testing and operations completeness check
  8. ambiguity check
+ 9. HTML companion file exists, or the user explicitly requested Markdown-only
 
  Fix issues inline instead of carrying them into task planning.
 
@@ -124,5 +138,6 @@ Always end with:
  - `Testing Strategy`
  - `Assumptions & Constraints`
  - `Acceptance Criteria`
+ - `HTML Companion`
  - `Open Approval Needs`
  - `Recommended Next`
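The output-mode resolution stated in these hunks (explicit user request for Markdown-only wins; config and environment may request `dual` or `html` but cannot suppress required sidecars; default is `dual`) can be sketched as a tiny resolver. The function name and argument shape are assumptions; only the precedence order is from the text.

```javascript
// Hypothetical sketch of the output-mode resolution described above.
// Precedence: explicit user request > AW_DOCS_OUTPUT_MODE > config.json > "dual".
function resolveOutputMode({ userRequestedMarkdownOnly, configMode, envMode }) {
  if (userRequestedMarkdownOnly) return "markdown-only";
  // .aw_docs/config.json and AW_DOCS_OUTPUT_MODE may request "dual" or "html",
  // but must not silently suppress required SDLC HTML sidecars.
  const requested = envMode || configMode;
  if (requested === "dual" || requested === "html") return requested;
  return "dual";
}
```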
@@ -127,6 +127,19 @@ Never write:
 
  If a worker would have to guess, the task is not ready.
 
+ ## Human HTML Companion
+
+ Markdown `tasks.md` remains canonical for agents.
+ When this helper writes or materially updates `tasks.md`, also create or refresh `.aw_docs/features/<feature_slug>/tasks.html`. HTML sidecars are required stage outputs, not advisory metadata.
+
+ Delegate to the `aw:echo` subagent with the `implementation-plan` profile.
+ Invoking `/aw:plan` or `aw-tasks` in default `dual` mode is explicit authorization to spawn exactly one `aw:echo` subagent for HTML companion generation; do not skip HTML only because no direct command is available.
+ Resolve output mode as: explicit user request for Markdown-only -> Markdown-only; otherwise `dual`. `.aw_docs/config.json` and `AW_DOCS_OUTPUT_MODE` may request `dual` or `html`, but must not silently suppress required SDLC HTML sidecars.
+
+ Pass `spec.md`, `tasks.md`, phase order, file map, parallelization metadata, validation commands, save-point expectations, and handoff notes as the source bundle.
+ Record the colocated sidecar in `state.json` `html_companion_artifacts` with `source_path`, `html_path`, profile, status, `run_ref` when available, publish status, and any explicit Markdown-only skip or fallback reason.
+ Spawn exactly one `aw:echo` subagent and wait for the colocated `.html` sidecar before the final handoff unless the user explicitly asks not to wait. If the harness still cannot spawn `aw:echo`, create a conservative self-contained fallback HTML sidecar in the same turn using the `aw:echo` safety and design contract, record `generated_fallback` plus the blocker, and keep Markdown canonical.
+
  ## Verification
 
  Before handoff:
@@ -138,6 +151,7 @@ Before handoff:
  5. confirm behavior-changing slices use explicit `RED -> GREEN -> REFACTOR` wording or explicitly justify why test-first is not meaningful
  6. confirm the execution mode and review mode are clear when they can be known safely
  7. confirm execution can route straight to `/aw:build`
+ 8. confirm the HTML companion file exists, or that the user explicitly requested Markdown-only
 
  ## Final Output Shape
 
@@ -152,4 +166,5 @@ Always end with:
  - `Review Mode`
  - `Parallel Candidates`
  - `Review Result`
+ - `HTML Companion`
  - `Recommended Next`
@@ -92,9 +92,23 @@ Every testing handoff must make these things obvious:
  - evidence artifacts
  - failures
  - unavailable checks
+ - `html_companion_artifacts`
  - blockers
  - recommended next commands
 
+ ## Human HTML Companion
+
+ Markdown `verification.md` remains canonical for agents.
+ When the test stage writes or materially updates `verification.md`, also create or refresh `.aw_docs/features/<feature_slug>/verification.html`. HTML sidecars are required stage outputs, not advisory metadata.
+
+ Delegate to the `aw:echo` subagent with the `verification-report` profile.
+ Invoking `/aw:test` in default `dual` mode is explicit authorization to spawn exactly one `aw:echo` subagent for HTML companion generation; do not skip HTML only because no direct command is available.
+ Resolve output mode as: explicit user request for Markdown-only -> Markdown-only; otherwise `dual`. `.aw_docs/config.json` and `AW_DOCS_OUTPUT_MODE` may request `dual` or `html`, but must not silently suppress required SDLC HTML sidecars.
+
+ Pass QA scope, checks run, pass/fail/unavailable lanes, runtime evidence, screenshots or links when safe, failures, confidence, and next command as the source bundle.
+ Record the colocated sidecar in `state.json` `html_companion_artifacts` with `source_path`, `html_path`, profile, status, `run_ref` when available, publish status, and any explicit Markdown-only skip or fallback reason.
+ Spawn exactly one `aw:echo` subagent and wait for the colocated `.html` sidecar before the final handoff unless the user explicitly asks not to wait. If the harness still cannot spawn `aw:echo`, create a conservative self-contained fallback HTML sidecar in the same turn using the `aw:echo` safety and design contract, record `generated_fallback` plus the blocker, and keep Markdown canonical.
+
  ## Verification
 
  Before leaving test, confirm:
@@ -104,6 +118,7 @@ Before leaving test, confirm:
  - [ ] unavailable checks are marked unavailable, not silently passed
  - [ ] fresh evidence is written to `verification.md`
  - [ ] `state.json` is updated with checks, failures, and next action
+ - [ ] the HTML companion file exists, or the user explicitly requested Markdown-only
 
  ## Final Output Shape
 
@@ -115,4 +130,5 @@ Always end with:
  - `Evidence`
  - `Failures`
  - `Unavailable`
+ - `HTML Companion`
  - `Next`
@@ -53,6 +53,8 @@ If the user asked for one stage, stay in that stage.
  4. Preserve stage artifacts.
  Internal orchestration is not permission to skip `execution.md`, `verification.md`, `release.md`, or `state.json`.
  A stage is not done until its required artifacts are written.
+ HTML sidecars are required whenever the delegated stage writes a canonical Markdown artifact.
+ When a delegated stage writes a canonical Markdown artifact, preserve that stage's `aw:echo` obligation too: produce the colocated `.aw_docs/features/<feature_slug>/<artifact_basename>.html` companion before the stage handoff. Spawn exactly one `aw:echo` subagent in default `dual` mode; record `run_ref` when the harness exposes one. If the harness still cannot spawn `aw:echo`, create a conservative self-contained fallback HTML sidecar in the same turn, record `generated_fallback` plus the blocker, and keep Markdown canonical. Markdown-only is allowed only when the user explicitly requests it for the run.
  5. Respect stage boundaries.
  `aw-yolo` coordinates stages, but it does not collapse them together.
  Build still cannot self-certify.
@@ -80,6 +82,7 @@ Always end with:
  - `Current Stage`
  - `Completed Stages`
  - `Artifacts Written`
+ - `HTML Companions`
  - `Blockers`
  - `Recommended Next`
 
@@ -107,5 +110,6 @@ Always end with:
  - [ ] the selected flow is the smallest correct end-to-end sequence
  - [ ] the chosen starting stage matches the current repo/artifact state
  - [ ] each stage still writes its required artifacts
+ - [ ] each eligible stage generated or recorded its HTML companion status
  - [ ] failed stages stop the flow instead of being hand-waved away
  - [ ] blockers name the exact stage where the run stopped
@@ -0,0 +1,121 @@
+ ---
+ name: diagnose
+ description: Disciplined diagnosis loop for hard bugs and performance regressions. Reproduce → minimise → hypothesise → instrument → fix → regression-test. Use when user says "diagnose this" / "debug this", reports a bug, says something is broken/throwing/failing, or describes a performance regression.
+ ---
+
+ # Diagnose
+
+ A discipline for hard bugs. Skip phases only when explicitly justified.
+
+ When exploring the codebase, use the project's domain glossary to get a clear mental model of the relevant modules, and check ADRs in the area you're touching.
+
+ ## When To Use
+
+ Use this for unclear bugs, regressions, performance problems, flaky behavior, repeated failed fixes, or any investigation where the next safe move is to build a trustworthy feedback loop before patching.
+
+ ## Phase 1 — Build a feedback loop
+
+ **This is the skill.** Everything else is mechanical. If you have a fast, deterministic, agent-runnable pass/fail signal for the bug, you will find the cause — bisection, hypothesis-testing, and instrumentation all just consume that signal. If you don't have one, no amount of staring at code will save you.
+
+ Spend disproportionate effort here. **Be aggressive. Be creative. Refuse to give up.**
+
+ ### Ways to construct one — try them in roughly this order
+
+ 1. **Failing test** at whatever seam reaches the bug — unit, integration, e2e.
+ 2. **Curl / HTTP script** against a running dev server.
+ 3. **CLI invocation** with a fixture input, diffing stdout against a known-good snapshot.
+ 4. **Headless browser script** (Playwright / Puppeteer) — drives the UI, asserts on DOM/console/network.
+ 5. **Replay a captured trace.** Save a real network request / payload / event log to disk; replay it through the code path in isolation.
+ 6. **Throwaway harness.** Spin up a minimal subset of the system (one service, mocked deps) that exercises the bug code path with a single function call.
+ 7. **Property / fuzz loop.** If the bug is "sometimes wrong output", run 1000 random inputs and look for the failure mode.
+ 8. **Bisection harness.** If the bug appeared between two known states (commit, dataset, version), automate "boot at state X, check, repeat" so you can `git bisect run` it.
+ 9. **Differential loop.** Run the same input through old-version vs new-version (or two configs) and diff outputs.
+ 10. **HITL bash script.** Last resort. If a human must click, drive _them_ with `scripts/hitl-loop.template.sh` so the loop is still structured. Captured output feeds back to you.
+
+ Build the right feedback loop, and the bug is 90% fixed.
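The "differential loop" idea (item 9 above) can be sketched in a few lines: run the same inputs through two versions of a function and report the first divergence. `oldVersion` and `newVersion` here are hypothetical stand-ins for whatever two states are being compared.

```javascript
// Hypothetical sketch of a differential feedback loop: the first input where
// old and new outputs diverge is the pass/fail signal later phases consume.
function differentialLoop(inputs, oldVersion, newVersion) {
  for (const input of inputs) {
    // Serialize so structured outputs compare by value, not reference.
    const before = JSON.stringify(oldVersion(input));
    const after = JSON.stringify(newVersion(input));
    if (before !== after) {
      return { input, before, after }; // first divergence found
    }
  }
  return null; // no divergence: the loop passes
}
```

The same skeleton works for two configs or two commits, as long as both sides can be called from one process.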
+
+ ### Iterate on the loop itself
+
+ Treat the loop as a product. Once you have _a_ loop, ask:
+
+ - Can I make it faster? (Cache setup, skip unrelated init, narrow the test scope.)
+ - Can I make the signal sharper? (Assert on the specific symptom, not "didn't crash".)
+ - Can I make it more deterministic? (Pin time, seed RNG, isolate filesystem, freeze network.)
+
+ A 30-second flaky loop is barely better than no loop. A 2-second deterministic loop is a debugging superpower.
+
+ ### Non-deterministic bugs
+
+ The goal is not a clean repro but a **higher reproduction rate**. Loop the trigger 100×, parallelise, add stress, narrow timing windows, inject sleeps. A 50%-flake bug is debuggable; 1% is not — keep raising the rate until it's debuggable.
+
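Measuring the reproduction rate described above can itself be a tiny loop: run the trigger many times and count failures, so you can tell whether stress or timing changes are actually raising the rate. The function name is illustrative.

```javascript
// Hypothetical sketch of measuring a flake's reproduction rate: run the
// trigger repeatedly and return the observed failure fraction.
function reproductionRate(trigger, runs = 100) {
  let failures = 0;
  for (let i = 0; i < runs; i++) {
    try {
      trigger(); // the bug's trigger; a throw counts as a reproduction
    } catch {
      failures++;
    }
  }
  return failures / runs;
}
```

Re-run this after each stress or timing tweak; the rate going up means the tweak is working.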
+ ### When you genuinely cannot build a loop
+
+ Stop and say so explicitly. List what you tried. Ask the user for: (a) access to whatever environment reproduces it, (b) a captured artifact (HAR file, log dump, core dump, screen recording with timestamps), or (c) permission to add temporary production instrumentation. Do **not** proceed to hypothesise without a loop.
+
+ Do not proceed to Phase 2 until you have a loop you believe in.
+
+ ## Phase 2 — Reproduce
+
+ Run the loop. Watch the bug appear.
+
+ Confirm:
+
+ - [ ] The loop produces the failure mode the **user** described — not a different failure that happens to be nearby. Wrong bug = wrong fix.
+ - [ ] The failure is reproducible across multiple runs (or, for non-deterministic bugs, reproducible at a high enough rate to debug against).
+ - [ ] You have captured the exact symptom (error message, wrong output, slow timing) so later phases can verify the fix actually addresses it.
+
+ Do not proceed until you reproduce the bug.
+
+ ## Phase 3 — Hypothesise
+
+ Generate **3–5 ranked hypotheses** before testing any of them. Single-hypothesis generation anchors on the first plausible idea.
+
+ Each hypothesis must be **falsifiable**: state the prediction it makes.
+
+ > Format: "If <X> is the cause, then <changing Y> will make the bug disappear / <changing Z> will make it worse."
+
+ If you cannot state the prediction, the hypothesis is a vibe — discard or sharpen it.
+
+ **Show the ranked list to the user before testing.** They often have domain knowledge that re-ranks instantly ("we just deployed a change to #3"), or know hypotheses they've already ruled out. Cheap checkpoint, big time saver. Don't block on it — proceed with your ranking if the user is AFK.
+
+ ## Phase 4 — Instrument
+
+ Each probe must map to a specific prediction from Phase 3. **Change one variable at a time.**
+
+ Tool preference:
+
+ 1. **Debugger / REPL inspection** if the env supports it. One breakpoint beats ten logs.
+ 2. **Targeted logs** at the boundaries that distinguish hypotheses.
+ 3. Never "log everything and grep".
+
+ **Tag every debug log** with a unique prefix, e.g. `[DEBUG-a4f2]`. Cleanup at the end becomes a single grep. Untagged logs survive; tagged logs die.
+
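The tagged-log convention above can be sketched as a one-tag probe helper. The tag value and helper name are illustrative; the point is that every probe shares one greppable prefix.

```javascript
// Hypothetical sketch of tagged debug instrumentation: all probes share one
// unique prefix, so removing them later is a single grep for that prefix.
const DEBUG_TAG = "[DEBUG-a4f2]";

function probe(label, value) {
  // Write to stderr so probes never pollute the program's real stdout output.
  console.error(`${DEBUG_TAG} ${label}:`, JSON.stringify(value));
}
```

At cleanup time, something like `grep -rn 'DEBUG-a4f2' src/` finds every probe left behind.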
93
+ **Perf branch.** For performance regressions, logs are usually wrong. Instead: establish a baseline measurement (timing harness, `performance.now()`, profiler, query plan), then bisect. Measure first, fix second.
94
+
95
+ ## Phase 5 — Fix + regression test
96
+
97
+ Write the regression test **before the fix** — but only if there is a **correct seam** for it.
98
+
99
+ A correct seam is one where the test exercises the **real bug pattern** as it occurs at the call site. If the only available seam is too shallow (single-caller test when the bug needs multiple callers, unit test that can't replicate the chain that triggered the bug), a regression test there gives false confidence.
100
+
101
+ **If no correct seam exists, that itself is the finding.** Note it. The codebase architecture is preventing the bug from being locked down. Flag this for the next phase.
102
+
103
+ If a correct seam exists:
104
+
105
+ 1. Turn the minimised repro into a failing test at that seam.
106
+ 2. Watch it fail.
107
+ 3. Apply the fix.
108
+ 4. Watch it pass.
109
+ 5. Re-run the Phase 1 feedback loop against the original (un-minimised) scenario.
110
+
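In shell terms the discipline looks roughly like this — the test runner and fix are stand-in functions, purely illustrative:

```shell
# Stand-ins for the real test runner and fix (illustrative only).
fixed=0
run_regression_test() { [ "$fixed" = "1" ]; }   # red until the fix lands
apply_fix() { fixed=1; }

# Steps 1-2: the new regression test must be red against the unfixed code.
if run_regression_test; then
  echo "test passed before the fix — it is not exercising the bug" >&2
  exit 1
fi
echo "regression test is red before the fix"

# Steps 3-4: apply the fix; now the same test must go green.
apply_fix
run_regression_test && echo "regression test is green after the fix"
```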
111
+ ## Phase 6 — Cleanup + post-mortem
112
+
113
+ Required before declaring done:
114
+
115
+ - [ ] Original repro no longer reproduces (re-run the Phase 1 loop)
116
+ - [ ] Regression test passes (or absence of seam is documented)
117
+ - [ ] All `[DEBUG-...]` instrumentation removed (`grep` the prefix)
118
+ - [ ] Throwaway prototypes deleted (or moved to a clearly-marked debug location)
119
+ - [ ] The hypothesis that turned out correct is stated in the commit / PR message — so the next debugger learns
120
+
121
+ **Then ask: what would have prevented this bug?** If the answer involves architectural change (no good test seam, tangled callers, hidden coupling), hand off to the `improve-codebase-architecture` skill with the specifics. Make the recommendation **after** the fix is in, not before — you have more information now than when you started.
@@ -0,0 +1,41 @@
1
+ #!/usr/bin/env bash
2
+ # Human-in-the-loop reproduction loop.
3
+ # Copy this file, edit the steps below, and run it.
4
+ # The agent runs the script; the user follows prompts in their terminal.
5
+ #
6
+ # Usage:
7
+ # bash hitl-loop.template.sh
8
+ #
9
+ # Two helpers:
10
+ # step "<instruction>" → show instruction, wait for Enter
11
+ # capture VAR "<question>" → show question, read response into VAR
12
+ #
13
+ # At the end, captured values are printed as KEY=VALUE for the agent to parse.
14
+
15
+ set -euo pipefail
16
+
17
+ step() {
18
+ printf '\n>>> %s\n' "$1"
19
+ read -r -p " [Enter when done] " _
20
+ }
21
+
22
+ capture() {
23
+ local var="$1" question="$2" answer
24
+ printf '\n>>> %s\n' "$question"
25
+ read -r -p " > " answer
26
+ printf -v "$var" '%s' "$answer"
27
+ }
28
+
29
+ # --- edit below ---------------------------------------------------------
30
+
31
+ step "Open the app at http://localhost:3000 and sign in."
32
+
33
+ capture ERRORED "Click the 'Export' button. Did it throw an error? (y/n)"
34
+
35
+ capture ERROR_MSG "Paste the error message (or 'none'):"
36
+
37
+ # --- edit above ---------------------------------------------------------
38
+
39
+ printf '\n--- Captured ---\n'
40
+ printf 'ERRORED=%s\n' "$ERRORED"
41
+ printf 'ERROR_MSG=%s\n' "$ERROR_MSG"
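On the agent side, the trailing `KEY=VALUE` block can be pulled back out of the script's output. A sketch — the captured text is simulated here, and the key names match the template above:

```shell
# Simulated script output (in practice this is the hitl script's stdout).
out='--- Captured ---
ERRORED=y
ERROR_MSG=TypeError: rows is undefined'

# Extract individual keys; values may contain spaces, so avoid eval.
errored=$(printf '%s\n' "$out" | sed -n 's/^ERRORED=//p')
error_msg=$(printf '%s\n' "$out" | sed -n 's/^ERROR_MSG=//p')
echo "errored=$errored"
echo "error_msg=$error_msg"
```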
@@ -0,0 +1,265 @@
1
+ ---
2
+ name: finish-only-when-green
3
+ description: Use when the user says continue until done, do not stop early, loop until all tests pass, or wants artifact-based completion for package, release, setup, or validation workflows. Enforces an explicit done contract, repeated fix-and-rerun loops, blocker-only pauses, and a final proof artifact before claiming completion.
4
+ ---
5
+
6
+ # Finish Only When Green
7
+
8
+ Use this skill when the user wants persistent execution until the work is actually complete, especially for:
9
+ - release validation
10
+ - package publishing
11
+ - fresh environment setup
12
+ - end-to-end CLI smoke testing
13
+ - regression fixing loops
14
+ - “don’t stop until it’s done” requests
15
+
16
+ ## Core Rule
17
+
18
+ Do not stop at a status update if the explicit completion gates are not green.
19
+
20
+ Do not silently downgrade or replace the top-level completion contract mid-run.
21
+ If the final release gate is red, then the task is still red even if many sub-checks are green.
22
+
23
+ Do not enter the fix-and-rerun loop until `green` is defined clearly enough to be testable.
24
+
25
+ If the task has ambiguous success criteria, first tighten the goal, define the gates, and identify the proof artifact.
26
+
27
+ Only pause when one of these is true:
28
+ - a real external blocker exists
29
+ - credentials, permissions, or network access are missing
30
+ - the user must choose between materially different outcomes
31
+
32
+ Otherwise, continue the loop yourself.
33
+
34
+ ## Required Workflow
35
+
36
+ ### 1. Define Green First
37
+
38
 + Before substantial work, write down the exact completion contract in 3–6 bullets.
39
+
40
+ Use concrete gates such as:
41
+ - package published
42
+ - fresh workspace created
43
+ - init command succeeded
44
+ - Cursor/Codex/Claude smoke passed
45
+ - summary artifact written
46
+
47
+ If there is no proof artifact, the task is not done.
48
+
49
+ Prefer one final source-of-truth artifact whenever possible:
50
+ - `summary.json`
51
+ - `summary.txt`
52
+ - release-gate report
53
+
54
+ Do not substitute a collection of scattered green artifacts for the final gate unless the user explicitly changes the contract.
55
+
56
+ The definition of green must be:
57
+ - observable
58
+ - specific
59
+ - scoped
60
+ - rerunnable
61
+ - tied to one or more artifacts or commands
62
+
63
+ Bad:
64
+ - "works"
65
+ - "looks good"
66
+ - "mostly done"
67
+ - "migration seems safe"
68
+
69
+ Good:
70
+ - `npm run test:e2e:staging -- --grep @CONVERSATIONS_V2` passes
71
+ - visual diff report exists and is within threshold
72
+ - summary JSON says `GREEN`
73
+
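"Summary JSON says `GREEN`" can be enforced mechanically. A jq-free sketch — the artifact path and field shape are illustrative, and a real gate would parse the JSON properly rather than grep it:

```shell
# Write an example proof artifact (illustrative shape).
cat > /tmp/summary.json <<'EOF'
{ "state": "GREEN", "gates": { "publish": "GREEN", "smoke": "GREEN" } }
EOF

# The gate: completion is whatever the artifact says, not what memory says.
if grep -q '"state": "GREEN"' /tmp/summary.json; then
  echo "Overall state: GREEN"
else
  echo "Overall state: RED"
fi
```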
74
+ ### 2. Ask Sharp Questions Before Looping
75
+
76
+ If the user has not defined green well enough, ask the smallest set of questions needed before entering the loop.
77
+
78
 + Ask at most 1–3 short, high-value questions.
79
+
80
+ Good question types:
81
+ - Which environment is the source of truth: local, staging, or production-like?
82
+ - What exact flows are in scope for green, and what is explicitly out of scope?
83
+ - What proof artifact should exist at the end?
84
+ - Should visual checks be exact pixel match, perceptual tolerance, or layout-contract based?
85
+ - Are we blocking on all tests, only tagged tests, or only a specific suite?
86
+
87
+ Do not ask broad or deferring questions like:
88
+ - "What do you want me to do?"
89
+ - "How should I proceed?"
90
+ - "Anything else?"
91
+
92
+ If you can make a safe default assumption, do so and state it before starting the loop.
93
+
94
+ ### 3. Write the Done Contract Down
95
+
96
+ Before the loop starts, write down:
97
+ - the goal
98
+ - the exact green state
99
+ - the commands or checks that prove it
100
+ - the proof artifact path
101
+ - what is still red at the start
102
+
103
+ If this is missing, the loop has not started yet.
104
+
105
+ ### 4. Work in a Green Loop
106
+
107
+ Run this loop until all gates pass:
108
+ 1. execute the next highest-value check
109
+ 2. inspect the failure precisely
110
+ 3. patch the smallest correct fix
111
+ 4. rerun the affected check
112
+ 5. rerun the full gate when needed
113
+
114
+ Do not keep re-running the same broken step without changing anything.
115
+
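As a loop skeleton — the gate and fix are stand-in functions (here the gate goes green after two fixes), and the iteration cap is there so the loop cannot spin forever on an unchanged failure:

```shell
# Stand-ins: the gate stays red until two fixes land (illustrative only).
fixes=0
run_gate() { [ "$fixes" -ge 2 ] && echo ok || { echo "still red"; return 1; }; }
apply_next_fix() { fixes=$((fixes + 1)); }

attempt=0
until out=$(run_gate); do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge 10 ]; then
    echo "gate still red after $attempt attempts — escalate" >&2
    break
  fi
  echo "attempt $attempt: $out — fixing and rerunning"
  apply_next_fix                # must change something before the rerun
done
echo "Overall state: GREEN after $attempt fixes"
```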
116
+ ### 5. Prefer Narrow Reruns, Then Full Confirmation
117
+
118
+ After a fix:
119
+ - rerun the failing case first
120
+ - once it passes, rerun the broader suite that proves the whole contract
121
+
122
+ Do not stop after targeted reruns if the top-level suite has not been rerun successfully.
123
+
124
+ ### 6. Report Progress Without Pretending Done
125
+
126
 + Interim updates should state:
127
+ - what gate is green
128
+ - what gate is still red
129
+ - what you are doing next
130
+
131
+ Do not frame “partially working” as complete.
132
+
133
+ Every substantial status update should include one explicit overall state line:
134
+ - `Overall state: GREEN`
135
+ - `Overall state: RED`
136
+
137
+ If any required gate is red, the overall state is red.
138
+
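Deriving the overall-state line mechanically keeps it honest. A sketch over named gates — the gate names and `NAME=STATE` encoding are illustrative:

```shell
# Gate results as NAME=STATE pairs; any single RED forces overall RED.
gates="publish=GREEN fresh-init=GREEN smoke=RED"

overall=GREEN
for g in $gates; do
  case "$g" in
    *=RED) overall=RED ;;
  esac
done
echo "Overall state: $overall"
```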
139
+ ### 7. End With Proof
140
+
141
+ A task is complete only when you can point to the final proof artifact, for example:
142
+ - `summary.json`
143
+ - `summary.txt`
144
+ - published package version
145
+ - init log
146
+ - smoke verdict files
147
+
148
+ ## Default Completion Template
149
+
150
+ When the user has not given exact gates, use:
151
+ - explicit goal statement written down
152
+ - exact green state written down
153
+ - implementation or config change applied
154
+ - focused verification passed
155
+ - broader workflow rerun passed
156
+ - final artifact saved
157
+
158
+ ## Green State Template
159
+
160
+ Use this template at the start of the task whenever green is not already explicit:
161
+
162
+ - Goal:
163
+ - Green means:
164
+ - Required checks:
165
+ - Proof artifact:
166
+ - Initial red gates:
167
+
168
+ ## Pre-Loop Checklist
169
+
170
+ Before entering the loop, confirm:
171
+ - the success criteria are concrete
172
+ - the scope is bounded
173
+ - the proving command or commands are known
174
+ - the proof artifact is known
175
+ - any unclear tradeoff has been resolved or safely assumed
176
+
177
+ If one of these is missing, clarify first.
178
+
179
+ ## Packaging / E2E Variant
180
+
181
+ For package or release workflows, default gates are:
182
+ - package version created or published
183
+ - fresh isolated init succeeded
184
+ - local init succeeded
185
+ - real harness smoke passed for the required CLIs
186
+ - final summary artifact exists and is green
187
+
188
+ Preferred execution order:
189
+ 1. fix contract drift
190
+ 2. fix generated artifact drift
191
+ 3. fix narrow harness/case failures
192
+ 4. rerun aggregated release gate
193
+ 5. publish only after the aggregated release gate is green
194
+
195
+ For migration or regression programs, default gates are:
196
+ - scope inventory exists
197
+ - machine-readable coverage or gate file exists
198
+ - high-priority gaps are identified
199
+ - targeted regression suite passes
200
+ - differential or comparison checks pass if required
201
+ - final summary artifact exists and is green
202
+
203
+ ## Ralph Loop Additions
204
+
205
+ Borrow these behaviors from Ralph Loop when the work is release-critical, verification-heavy, or prone to false positives.
206
+
207
+ ### 1. One Red Gate Per Iteration
208
+
209
+ At any given moment, choose one primary red gate.
210
+
211
+ Examples:
212
+ - `hook-contracts` is red
213
+ - `cursor-generated-output` is red
214
+ - `published-package-init` is red
215
+ - `claude:review-security-risk` is red
216
+ - `release-gate summary artifact` is red
217
+
218
+ Use that gate to drive the next fix. Do not chase multiple unclear failures at once unless they are truly independent.
219
+
220
+ ### 2. Fresh-Context Iterations
221
+
222
+ After each meaningful fix, restate the current state from fresh evidence:
223
+ - what is green
224
+ - what is still red
225
+ - what exact artifact is missing or failing
226
+
227
+ Do not let stale assumptions from earlier attempts drive the next step.
228
+
229
+ ### 3. Backpressure Through the Top-Level Gate
230
+
231
+ Use narrow reruns for diagnosis, but keep pressure on the top-level gate that proves completion.
232
+
233
+ If the final gate is still red, the workflow is still red.
234
+
235
+ ### 4. Artifact-Only Exit
236
+
237
+ A loop iteration may look successful, but the workflow is not complete until the chosen final artifact says green.
238
+
239
+ Examples:
240
+ - release summary says all gates passed
241
+ - final smoke summary includes all required harnesses and all required cases
242
+ - package validation report is fully green
243
+
244
+ ### 5. Explicit Contract Changes Only
245
+
246
+ If you discover a better proving method mid-run:
247
+ - explicitly restate the updated contract
248
+ - explain why the old one is insufficient
249
+ - then continue
250
+
251
+ Do not quietly switch from a strict gate to a weaker gate.
252
+
253
+ ## Anti-Patterns
254
+
255
+ Do not:
256
+ - enter an infinite loop without first defining what exits the loop
257
+ - confuse activity with progress when green is vague
258
+ - claim coverage is complete without a written scope inventory
259
+ - stop after the first success if later gates remain red
260
+ - stop after publishing if install/init/smoke is still unverified
261
+ - stop after one harness passes if the release gate requires all harnesses
262
+ - claim success from memory when the latest rerun is missing
263
+ - replace the final release gate with a weaker ad hoc check set without explicitly updating the contract
264
+ - say “mostly green” when the final summary artifact is missing or still red
265
+ - leave a known red gate for later if you still have a clear next step and the user asked you to continue until done
@@ -0,0 +1,24 @@
1
+ ---
2
+ name: grill-me
3
+ description: Interview the user relentlessly about a plan or design until reaching shared understanding, resolving each branch of the decision tree. Use when user wants to stress-test a plan, get grilled on their design, or mentions "grill me".
4
+ ---
5
+
6
+ # Grill Me
7
+
8
+ ## When To Use
9
+
10
+ Use this when the user explicitly wants a plan, design, proposal, or decision to be challenged through questions before it becomes an artifact or implementation direction.
11
+
12
+ ## Workflow
13
+
14
+ Interview me relentlessly about every aspect of this plan until we reach a shared understanding. Walk down each branch of the design tree, resolving dependencies between decisions one-by-one. For each question, provide your recommended answer.
15
+
16
+ Ask the questions one at a time.
17
+
18
+ If a question can be answered by exploring the codebase, explore the codebase instead.
19
+
20
+ ## Guardrails
21
+
22
+ - Keep questions concrete and decision-oriented.
23
+ - Do not ask questions that local repo exploration can answer cheaply.
24
+ - Stop once the important decisions are clear enough for the next AW stage.
@@ -0,0 +1,92 @@
1
+ ---
2
+ name: grill-with-docs
3
+ description: Grilling session that challenges your plan against the existing domain model, sharpens terminology, and updates documentation (CONTEXT.md, ADRs) inline as decisions crystallise. Use when user wants to stress-test a plan against their project's language and documented decisions.
4
+ ---
5
+
6
+ ## When To Use
7
+
8
+ Use this inside planning when the problem is fuzzy, domain language is overloaded, acceptance criteria are under-specified, or existing docs/code may contradict the user's mental model.
9
+
10
+ <what-to-do>
11
+
12
+ Interview me relentlessly about every aspect of this plan until we reach a shared understanding. Walk down each branch of the design tree, resolving dependencies between decisions one-by-one. For each question, provide your recommended answer.
13
+
14
+ Ask the questions one at a time, waiting for feedback on each question before continuing.
15
+
16
+ If a question can be answered by exploring the codebase, explore the codebase instead.
17
+
18
+ </what-to-do>
19
+
20
+ <supporting-info>
21
+
22
+ ## Domain awareness
23
+
24
+ During codebase exploration, also look for existing documentation:
25
+
26
+ ### File structure
27
+
28
+ Most repos have a single context:
29
+
30
+ ```
31
+ /
32
+ ├── CONTEXT.md
33
+ ├── docs/
34
+ │ └── adr/
35
+ │ ├── 0001-event-sourced-orders.md
36
+ │ └── 0002-postgres-for-write-model.md
37
+ └── src/
38
+ ```
39
+
40
+ If a `CONTEXT-MAP.md` exists at the root, the repo has multiple contexts. The map points to where each one lives:
41
+
42
+ ```
43
+ /
44
+ ├── CONTEXT-MAP.md
45
+ ├── docs/
46
+ │ └── adr/ ← system-wide decisions
47
+ ├── src/
48
+ │ ├── ordering/
49
+ │ │ ├── CONTEXT.md
50
+ │ │ └── docs/adr/ ← context-specific decisions
51
+ │ └── billing/
52
+ │ ├── CONTEXT.md
53
+ │ └── docs/adr/
54
+ ```
55
+
56
+ Create files lazily — only when you have something to write. If no `CONTEXT.md` exists, create one when the first term is resolved. If no `docs/adr/` exists, create it when the first ADR is needed.
57
+
58
+ ## During the session
59
+
60
+ ### Challenge against the glossary
61
+
62
+ When the user uses a term that conflicts with the existing language in `CONTEXT.md`, call it out immediately. "Your glossary defines 'cancellation' as X, but you seem to mean Y — which is it?"
63
+
64
+ ### Sharpen fuzzy language
65
+
66
+ When the user uses vague or overloaded terms, propose a precise canonical term. "You're saying 'account' — do you mean the Customer or the User? Those are different things."
67
+
68
+ ### Discuss concrete scenarios
69
+
70
+ When domain relationships are being discussed, stress-test them with specific scenarios. Invent scenarios that probe edge cases and force the user to be precise about the boundaries between concepts.
71
+
72
+ ### Cross-reference with code
73
+
74
+ When the user states how something works, check whether the code agrees. If you find a contradiction, surface it: "Your code cancels entire Orders, but you just said partial cancellation is possible — which is right?"
75
+
76
+ ### Update CONTEXT.md inline
77
+
78
+ When a term is resolved, update `CONTEXT.md` right there. Don't batch these up — capture them as they happen. Use the format in [context-format.md](./context-format.md).
79
+
80
+ Don't couple `CONTEXT.md` to implementation details. Only include terms that are meaningful to domain experts.
81
+
82
+ ### Offer ADRs sparingly
83
+
84
+ Only offer to create an ADR when all three are true:
85
+
86
+ 1. **Hard to reverse** — the cost of changing your mind later is meaningful
87
+ 2. **Surprising without context** — a future reader will wonder "why did they do it this way?"
88
+ 3. **The result of a real trade-off** — there were genuine alternatives and you picked one for specific reasons
89
+
90
+ If any of the three is missing, skip the ADR. Use the format in [adr-format.md](./adr-format.md).
91
+
92
+ </supporting-info>