@zigrivers/scaffold 2.1.2 → 2.38.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (391)
  1. package/README.md +505 -119
  2. package/dist/cli/commands/build.d.ts.map +1 -1
  3. package/dist/cli/commands/build.js +94 -14
  4. package/dist/cli/commands/build.js.map +1 -1
  5. package/dist/cli/commands/build.test.js +30 -5
  6. package/dist/cli/commands/build.test.js.map +1 -1
  7. package/dist/cli/commands/check.d.ts +12 -0
  8. package/dist/cli/commands/check.d.ts.map +1 -0
  9. package/dist/cli/commands/check.js +311 -0
  10. package/dist/cli/commands/check.js.map +1 -0
  11. package/dist/cli/commands/check.test.d.ts +2 -0
  12. package/dist/cli/commands/check.test.d.ts.map +1 -0
  13. package/dist/cli/commands/check.test.js +412 -0
  14. package/dist/cli/commands/check.test.js.map +1 -0
  15. package/dist/cli/commands/complete.d.ts +12 -0
  16. package/dist/cli/commands/complete.d.ts.map +1 -0
  17. package/dist/cli/commands/complete.js +101 -0
  18. package/dist/cli/commands/complete.js.map +1 -0
  19. package/dist/cli/commands/complete.test.d.ts +2 -0
  20. package/dist/cli/commands/complete.test.d.ts.map +1 -0
  21. package/dist/cli/commands/complete.test.js +133 -0
  22. package/dist/cli/commands/complete.test.js.map +1 -0
  23. package/dist/cli/commands/dashboard.d.ts.map +1 -1
  24. package/dist/cli/commands/dashboard.js +12 -8
  25. package/dist/cli/commands/dashboard.js.map +1 -1
  26. package/dist/cli/commands/info.d.ts.map +1 -1
  27. package/dist/cli/commands/info.js +4 -0
  28. package/dist/cli/commands/info.js.map +1 -1
  29. package/dist/cli/commands/knowledge.d.ts.map +1 -1
  30. package/dist/cli/commands/knowledge.js +6 -2
  31. package/dist/cli/commands/knowledge.js.map +1 -1
  32. package/dist/cli/commands/knowledge.test.js +16 -11
  33. package/dist/cli/commands/knowledge.test.js.map +1 -1
  34. package/dist/cli/commands/next.d.ts.map +1 -1
  35. package/dist/cli/commands/next.js +41 -13
  36. package/dist/cli/commands/next.js.map +1 -1
  37. package/dist/cli/commands/next.test.js +3 -0
  38. package/dist/cli/commands/next.test.js.map +1 -1
  39. package/dist/cli/commands/reset.d.ts +1 -0
  40. package/dist/cli/commands/reset.d.ts.map +1 -1
  41. package/dist/cli/commands/reset.js +179 -67
  42. package/dist/cli/commands/reset.js.map +1 -1
  43. package/dist/cli/commands/reset.test.js +360 -0
  44. package/dist/cli/commands/reset.test.js.map +1 -1
  45. package/dist/cli/commands/rework.d.ts +20 -0
  46. package/dist/cli/commands/rework.d.ts.map +1 -0
  47. package/dist/cli/commands/rework.js +332 -0
  48. package/dist/cli/commands/rework.js.map +1 -0
  49. package/dist/cli/commands/rework.test.d.ts +2 -0
  50. package/dist/cli/commands/rework.test.d.ts.map +1 -0
  51. package/dist/cli/commands/rework.test.js +297 -0
  52. package/dist/cli/commands/rework.test.js.map +1 -0
  53. package/dist/cli/commands/run.d.ts.map +1 -1
  54. package/dist/cli/commands/run.js +59 -31
  55. package/dist/cli/commands/run.js.map +1 -1
  56. package/dist/cli/commands/run.test.js +288 -6
  57. package/dist/cli/commands/run.test.js.map +1 -1
  58. package/dist/cli/commands/skill.d.ts +12 -0
  59. package/dist/cli/commands/skill.d.ts.map +1 -0
  60. package/dist/cli/commands/skill.js +123 -0
  61. package/dist/cli/commands/skill.js.map +1 -0
  62. package/dist/cli/commands/skill.test.d.ts +2 -0
  63. package/dist/cli/commands/skill.test.d.ts.map +1 -0
  64. package/dist/cli/commands/skill.test.js +297 -0
  65. package/dist/cli/commands/skill.test.js.map +1 -0
  66. package/dist/cli/commands/skip.d.ts +1 -1
  67. package/dist/cli/commands/skip.d.ts.map +1 -1
  68. package/dist/cli/commands/skip.js +123 -57
  69. package/dist/cli/commands/skip.js.map +1 -1
  70. package/dist/cli/commands/skip.test.js +91 -0
  71. package/dist/cli/commands/skip.test.js.map +1 -1
  72. package/dist/cli/commands/status.d.ts +1 -0
  73. package/dist/cli/commands/status.d.ts.map +1 -1
  74. package/dist/cli/commands/status.js +57 -10
  75. package/dist/cli/commands/status.js.map +1 -1
  76. package/dist/cli/commands/status.test.js +81 -0
  77. package/dist/cli/commands/status.test.js.map +1 -1
  78. package/dist/cli/commands/update.test.js +252 -0
  79. package/dist/cli/commands/update.test.js.map +1 -1
  80. package/dist/cli/commands/version.test.js +171 -1
  81. package/dist/cli/commands/version.test.js.map +1 -1
  82. package/dist/cli/index.d.ts.map +1 -1
  83. package/dist/cli/index.js +8 -0
  84. package/dist/cli/index.js.map +1 -1
  85. package/dist/core/adapters/adapter.d.ts +14 -0
  86. package/dist/core/adapters/adapter.d.ts.map +1 -1
  87. package/dist/core/adapters/adapter.js.map +1 -1
  88. package/dist/core/adapters/adapter.test.js +10 -0
  89. package/dist/core/adapters/adapter.test.js.map +1 -1
  90. package/dist/core/adapters/claude-code.d.ts.map +1 -1
  91. package/dist/core/adapters/claude-code.js +47 -10
  92. package/dist/core/adapters/claude-code.js.map +1 -1
  93. package/dist/core/adapters/claude-code.test.js +41 -20
  94. package/dist/core/adapters/claude-code.test.js.map +1 -1
  95. package/dist/core/adapters/codex.d.ts.map +1 -1
  96. package/dist/core/adapters/codex.js +5 -1
  97. package/dist/core/adapters/codex.js.map +1 -1
  98. package/dist/core/adapters/codex.test.js +5 -0
  99. package/dist/core/adapters/codex.test.js.map +1 -1
  100. package/dist/core/adapters/universal.d.ts.map +1 -1
  101. package/dist/core/adapters/universal.js +0 -1
  102. package/dist/core/adapters/universal.js.map +1 -1
  103. package/dist/core/adapters/universal.test.js +5 -0
  104. package/dist/core/adapters/universal.test.js.map +1 -1
  105. package/dist/core/assembly/context-gatherer.d.ts.map +1 -1
  106. package/dist/core/assembly/context-gatherer.js +5 -2
  107. package/dist/core/assembly/context-gatherer.js.map +1 -1
  108. package/dist/core/assembly/engine.d.ts.map +1 -1
  109. package/dist/core/assembly/engine.js +10 -2
  110. package/dist/core/assembly/engine.js.map +1 -1
  111. package/dist/core/assembly/engine.test.js +19 -0
  112. package/dist/core/assembly/engine.test.js.map +1 -1
  113. package/dist/core/assembly/knowledge-loader.d.ts +25 -0
  114. package/dist/core/assembly/knowledge-loader.d.ts.map +1 -1
  115. package/dist/core/assembly/knowledge-loader.js +75 -2
  116. package/dist/core/assembly/knowledge-loader.js.map +1 -1
  117. package/dist/core/assembly/knowledge-loader.test.js +388 -1
  118. package/dist/core/assembly/knowledge-loader.test.js.map +1 -1
  119. package/dist/core/assembly/meta-prompt-loader.d.ts +6 -0
  120. package/dist/core/assembly/meta-prompt-loader.d.ts.map +1 -1
  121. package/dist/core/assembly/meta-prompt-loader.js +41 -25
  122. package/dist/core/assembly/meta-prompt-loader.js.map +1 -1
  123. package/dist/core/assembly/preset-loader.d.ts +10 -0
  124. package/dist/core/assembly/preset-loader.d.ts.map +1 -1
  125. package/dist/core/assembly/preset-loader.js +26 -1
  126. package/dist/core/assembly/preset-loader.js.map +1 -1
  127. package/dist/core/assembly/preset-loader.test.js +65 -1
  128. package/dist/core/assembly/preset-loader.test.js.map +1 -1
  129. package/dist/core/assembly/update-mode.d.ts.map +1 -1
  130. package/dist/core/assembly/update-mode.js +10 -4
  131. package/dist/core/assembly/update-mode.js.map +1 -1
  132. package/dist/core/assembly/update-mode.test.js +47 -0
  133. package/dist/core/assembly/update-mode.test.js.map +1 -1
  134. package/dist/core/dependency/dependency.d.ts.map +1 -1
  135. package/dist/core/dependency/dependency.js +3 -2
  136. package/dist/core/dependency/dependency.js.map +1 -1
  137. package/dist/core/dependency/dependency.test.js +2 -0
  138. package/dist/core/dependency/dependency.test.js.map +1 -1
  139. package/dist/core/dependency/eligibility.js +3 -3
  140. package/dist/core/dependency/eligibility.js.map +1 -1
  141. package/dist/core/dependency/eligibility.test.js +2 -0
  142. package/dist/core/dependency/eligibility.test.js.map +1 -1
  143. package/dist/core/dependency/graph.d.ts.map +1 -1
  144. package/dist/core/dependency/graph.js +4 -0
  145. package/dist/core/dependency/graph.js.map +1 -1
  146. package/dist/core/dependency/graph.test.d.ts +2 -0
  147. package/dist/core/dependency/graph.test.d.ts.map +1 -0
  148. package/dist/core/dependency/graph.test.js +262 -0
  149. package/dist/core/dependency/graph.test.js.map +1 -0
  150. package/dist/core/rework/phase-selector.d.ts +24 -0
  151. package/dist/core/rework/phase-selector.d.ts.map +1 -0
  152. package/dist/core/rework/phase-selector.js +98 -0
  153. package/dist/core/rework/phase-selector.js.map +1 -0
  154. package/dist/core/rework/phase-selector.test.d.ts +2 -0
  155. package/dist/core/rework/phase-selector.test.d.ts.map +1 -0
  156. package/dist/core/rework/phase-selector.test.js +138 -0
  157. package/dist/core/rework/phase-selector.test.js.map +1 -0
  158. package/dist/dashboard/generator.d.ts +48 -17
  159. package/dist/dashboard/generator.d.ts.map +1 -1
  160. package/dist/dashboard/generator.js +75 -5
  161. package/dist/dashboard/generator.js.map +1 -1
  162. package/dist/dashboard/generator.test.js +213 -5
  163. package/dist/dashboard/generator.test.js.map +1 -1
  164. package/dist/dashboard/template.d.ts +1 -1
  165. package/dist/dashboard/template.d.ts.map +1 -1
  166. package/dist/dashboard/template.js +755 -114
  167. package/dist/dashboard/template.js.map +1 -1
  168. package/dist/e2e/knowledge.test.js +4 -3
  169. package/dist/e2e/knowledge.test.js.map +1 -1
  170. package/dist/e2e/pipeline.test.js +2 -0
  171. package/dist/e2e/pipeline.test.js.map +1 -1
  172. package/dist/e2e/rework.test.d.ts +6 -0
  173. package/dist/e2e/rework.test.d.ts.map +1 -0
  174. package/dist/e2e/rework.test.js +226 -0
  175. package/dist/e2e/rework.test.js.map +1 -0
  176. package/dist/index.js +0 -0
  177. package/dist/project/adopt.test.js +2 -0
  178. package/dist/project/adopt.test.js.map +1 -1
  179. package/dist/project/claude-md.js +2 -2
  180. package/dist/project/claude-md.js.map +1 -1
  181. package/dist/project/claude-md.test.js +4 -4
  182. package/dist/project/claude-md.test.js.map +1 -1
  183. package/dist/project/detector.d.ts.map +1 -1
  184. package/dist/project/detector.js +4 -1
  185. package/dist/project/detector.js.map +1 -1
  186. package/dist/project/frontmatter.d.ts.map +1 -1
  187. package/dist/project/frontmatter.js +54 -15
  188. package/dist/project/frontmatter.js.map +1 -1
  189. package/dist/project/frontmatter.test.js +2 -2
  190. package/dist/project/frontmatter.test.js.map +1 -1
  191. package/dist/state/rework-manager.d.ts +16 -0
  192. package/dist/state/rework-manager.d.ts.map +1 -0
  193. package/dist/state/rework-manager.js +126 -0
  194. package/dist/state/rework-manager.js.map +1 -0
  195. package/dist/state/rework-manager.test.d.ts +2 -0
  196. package/dist/state/rework-manager.test.d.ts.map +1 -0
  197. package/dist/state/rework-manager.test.js +191 -0
  198. package/dist/state/rework-manager.test.js.map +1 -0
  199. package/dist/state/state-manager.d.ts +13 -0
  200. package/dist/state/state-manager.d.ts.map +1 -1
  201. package/dist/state/state-manager.js +39 -2
  202. package/dist/state/state-manager.js.map +1 -1
  203. package/dist/state/state-manager.test.js +74 -1
  204. package/dist/state/state-manager.test.js.map +1 -1
  205. package/dist/state/state-migration.d.ts +23 -0
  206. package/dist/state/state-migration.d.ts.map +1 -0
  207. package/dist/state/state-migration.js +144 -0
  208. package/dist/state/state-migration.js.map +1 -0
  209. package/dist/state/state-migration.test.d.ts +2 -0
  210. package/dist/state/state-migration.test.d.ts.map +1 -0
  211. package/dist/state/state-migration.test.js +451 -0
  212. package/dist/state/state-migration.test.js.map +1 -0
  213. package/dist/types/assembly.d.ts +2 -0
  214. package/dist/types/assembly.d.ts.map +1 -1
  215. package/dist/types/dependency.d.ts +2 -2
  216. package/dist/types/dependency.d.ts.map +1 -1
  217. package/dist/types/frontmatter.d.ts +100 -7
  218. package/dist/types/frontmatter.d.ts.map +1 -1
  219. package/dist/types/frontmatter.js +89 -1
  220. package/dist/types/frontmatter.js.map +1 -1
  221. package/dist/types/index.d.ts +1 -0
  222. package/dist/types/index.d.ts.map +1 -1
  223. package/dist/types/index.js +1 -0
  224. package/dist/types/index.js.map +1 -1
  225. package/dist/types/lock.d.ts +1 -1
  226. package/dist/types/lock.d.ts.map +1 -1
  227. package/dist/types/rework.d.ts +36 -0
  228. package/dist/types/rework.d.ts.map +1 -0
  229. package/dist/types/rework.js +2 -0
  230. package/dist/types/rework.js.map +1 -0
  231. package/dist/utils/errors.d.ts +1 -0
  232. package/dist/utils/errors.d.ts.map +1 -1
  233. package/dist/utils/errors.js +8 -0
  234. package/dist/utils/errors.js.map +1 -1
  235. package/dist/utils/fs.d.ts +6 -0
  236. package/dist/utils/fs.d.ts.map +1 -1
  237. package/dist/utils/fs.js +13 -0
  238. package/dist/utils/fs.js.map +1 -1
  239. package/dist/validation/config-validator.test.d.ts +2 -0
  240. package/dist/validation/config-validator.test.d.ts.map +1 -0
  241. package/dist/validation/config-validator.test.js +210 -0
  242. package/dist/validation/config-validator.test.js.map +1 -0
  243. package/dist/validation/dependency-validator.test.d.ts +2 -0
  244. package/dist/validation/dependency-validator.test.d.ts.map +1 -0
  245. package/dist/validation/dependency-validator.test.js +215 -0
  246. package/dist/validation/dependency-validator.test.js.map +1 -0
  247. package/dist/validation/frontmatter-validator.test.d.ts +2 -0
  248. package/dist/validation/frontmatter-validator.test.d.ts.map +1 -0
  249. package/dist/validation/frontmatter-validator.test.js +371 -0
  250. package/dist/validation/frontmatter-validator.test.js.map +1 -0
  251. package/dist/validation/state-validator.test.d.ts +2 -0
  252. package/dist/validation/state-validator.test.d.ts.map +1 -0
  253. package/dist/validation/state-validator.test.js +325 -0
  254. package/dist/validation/state-validator.test.js.map +1 -0
  255. package/dist/wizard/suggestion.test.d.ts +2 -0
  256. package/dist/wizard/suggestion.test.d.ts.map +1 -0
  257. package/dist/wizard/suggestion.test.js +115 -0
  258. package/dist/wizard/suggestion.test.js.map +1 -0
  259. package/dist/wizard/wizard.d.ts.map +1 -1
  260. package/dist/wizard/wizard.js +34 -1
  261. package/dist/wizard/wizard.js.map +1 -1
  262. package/knowledge/core/adr-craft.md +57 -0
  263. package/knowledge/core/ai-memory-management.md +246 -0
  264. package/knowledge/core/api-design.md +8 -0
  265. package/knowledge/core/automated-review-tooling.md +203 -0
  266. package/knowledge/core/claude-md-patterns.md +254 -0
  267. package/knowledge/core/coding-conventions.md +246 -0
  268. package/knowledge/core/database-design.md +8 -0
  269. package/knowledge/core/design-system-tokens.md +469 -0
  270. package/knowledge/core/dev-environment.md +223 -0
  271. package/knowledge/core/domain-modeling.md +8 -0
  272. package/knowledge/core/eval-craft.md +1008 -0
  273. package/knowledge/core/git-workflow-patterns.md +200 -0
  274. package/knowledge/core/multi-model-review-dispatch.md +250 -0
  275. package/knowledge/core/operations-runbook.md +40 -225
  276. package/knowledge/core/project-structure-patterns.md +231 -0
  277. package/knowledge/core/review-step-template.md +247 -0
  278. package/knowledge/core/{security-review.md → security-best-practices.md} +9 -1
  279. package/knowledge/core/system-architecture.md +5 -1
  280. package/knowledge/core/task-decomposition.md +174 -36
  281. package/knowledge/core/task-tracking.md +225 -0
  282. package/knowledge/core/tech-stack-selection.md +214 -0
  283. package/knowledge/core/testing-strategy.md +63 -70
  284. package/knowledge/core/user-stories.md +69 -60
  285. package/knowledge/core/user-story-innovation.md +70 -0
  286. package/knowledge/core/ux-specification.md +18 -148
  287. package/knowledge/execution/enhancement-workflow.md +201 -0
  288. package/knowledge/execution/task-claiming-strategy.md +130 -0
  289. package/knowledge/execution/tdd-execution-loop.md +172 -0
  290. package/knowledge/execution/worktree-management.md +205 -0
  291. package/knowledge/finalization/apply-fixes-and-freeze.md +177 -14
  292. package/knowledge/finalization/developer-onboarding.md +4 -0
  293. package/knowledge/finalization/implementation-playbook.md +83 -5
  294. package/knowledge/product/gap-analysis.md +5 -1
  295. package/knowledge/product/prd-craft.md +55 -34
  296. package/knowledge/product/prd-innovation.md +12 -0
  297. package/knowledge/product/vision-craft.md +213 -0
  298. package/knowledge/review/review-adr.md +44 -0
  299. package/knowledge/review/{review-api-contracts.md → review-api-design.md} +47 -1
  300. package/knowledge/review/{review-database-schema.md → review-database-design.md} +40 -1
  301. package/knowledge/review/review-domain-modeling.md +38 -1
  302. package/knowledge/review/review-implementation-tasks.md +108 -1
  303. package/knowledge/review/review-methodology.md +11 -0
  304. package/knowledge/review/review-operations.md +67 -0
  305. package/knowledge/review/review-prd.md +46 -0
  306. package/knowledge/review/review-security.md +65 -0
  307. package/knowledge/review/review-system-architecture.md +32 -2
  308. package/knowledge/review/review-testing-strategy.md +62 -0
  309. package/knowledge/review/review-user-stories.md +65 -0
  310. package/knowledge/review/{review-ux-spec.md → review-ux-specification.md} +50 -2
  311. package/knowledge/review/review-vision.md +255 -0
  312. package/knowledge/tools/release-management.md +222 -0
  313. package/knowledge/tools/session-analysis.md +215 -0
  314. package/knowledge/tools/version-strategy.md +200 -0
  315. package/knowledge/validation/critical-path-analysis.md +1 -1
  316. package/knowledge/validation/cross-phase-consistency.md +12 -0
  317. package/knowledge/validation/decision-completeness.md +13 -1
  318. package/knowledge/validation/dependency-validation.md +12 -0
  319. package/knowledge/validation/scope-management.md +12 -0
  320. package/knowledge/validation/traceability.md +12 -0
  321. package/methodology/README.md +37 -0
  322. package/methodology/custom-defaults.yml +44 -4
  323. package/methodology/deep.yml +43 -3
  324. package/methodology/mvp.yml +43 -3
  325. package/package.json +4 -3
  326. package/pipeline/architecture/review-architecture.md +36 -13
  327. package/pipeline/architecture/system-architecture.md +24 -9
  328. package/pipeline/build/multi-agent-resume.md +245 -0
  329. package/pipeline/build/multi-agent-start.md +236 -0
  330. package/pipeline/build/new-enhancement.md +456 -0
  331. package/pipeline/build/quick-task.md +381 -0
  332. package/pipeline/build/single-agent-resume.md +210 -0
  333. package/pipeline/build/single-agent-start.md +207 -0
  334. package/pipeline/consolidation/claude-md-optimization.md +76 -0
  335. package/pipeline/consolidation/workflow-audit.md +77 -0
  336. package/pipeline/decisions/adrs.md +21 -7
  337. package/pipeline/decisions/review-adrs.md +32 -11
  338. package/pipeline/environment/ai-memory-setup.md +76 -0
  339. package/pipeline/environment/automated-pr-review.md +76 -0
  340. package/pipeline/environment/design-system.md +75 -0
  341. package/pipeline/environment/dev-env-setup.md +68 -0
  342. package/pipeline/environment/git-workflow.md +73 -0
  343. package/pipeline/finalization/apply-fixes-and-freeze.md +17 -6
  344. package/pipeline/finalization/developer-onboarding-guide.md +23 -9
  345. package/pipeline/finalization/implementation-playbook.md +43 -14
  346. package/pipeline/foundation/beads.md +71 -0
  347. package/pipeline/foundation/coding-standards.md +71 -0
  348. package/pipeline/foundation/project-structure.md +73 -0
  349. package/pipeline/foundation/tdd.md +64 -0
  350. package/pipeline/foundation/tech-stack.md +74 -0
  351. package/pipeline/integration/add-e2e-testing.md +80 -0
  352. package/pipeline/modeling/domain-modeling.md +23 -8
  353. package/pipeline/modeling/review-domain-modeling.md +35 -11
  354. package/pipeline/parity/platform-parity-review.md +90 -0
  355. package/pipeline/planning/implementation-plan-review.md +67 -0
  356. package/pipeline/planning/implementation-plan.md +110 -0
  357. package/pipeline/pre/create-prd.md +34 -10
  358. package/pipeline/pre/innovate-prd.md +46 -15
  359. package/pipeline/pre/innovate-user-stories.md +47 -14
  360. package/pipeline/pre/review-prd.md +29 -8
  361. package/pipeline/pre/review-user-stories.md +34 -8
  362. package/pipeline/pre/user-stories.md +23 -8
  363. package/pipeline/quality/create-evals.md +106 -0
  364. package/pipeline/quality/operations.md +46 -17
  365. package/pipeline/quality/review-operations.md +32 -11
  366. package/pipeline/quality/review-security.md +34 -12
  367. package/pipeline/quality/review-testing.md +37 -14
  368. package/pipeline/quality/security.md +36 -10
  369. package/pipeline/quality/story-tests.md +75 -0
  370. package/pipeline/specification/api-contracts.md +28 -8
  371. package/pipeline/specification/database-schema.md +29 -8
  372. package/pipeline/specification/review-api.md +32 -11
  373. package/pipeline/specification/review-database.md +32 -11
  374. package/pipeline/specification/review-ux.md +34 -12
  375. package/pipeline/specification/ux-spec.md +35 -13
  376. package/pipeline/validation/critical-path-walkthrough.md +45 -11
  377. package/pipeline/validation/cross-phase-consistency.md +45 -11
  378. package/pipeline/validation/decision-completeness.md +45 -11
  379. package/pipeline/validation/dependency-graph-validation.md +46 -11
  380. package/pipeline/validation/implementability-dry-run.md +46 -11
  381. package/pipeline/validation/scope-creep-check.md +46 -11
  382. package/pipeline/validation/traceability-matrix.md +51 -11
  383. package/pipeline/vision/create-vision.md +267 -0
  384. package/pipeline/vision/innovate-vision.md +157 -0
  385. package/pipeline/vision/review-vision.md +149 -0
  386. package/skills/multi-model-dispatch/SKILL.md +326 -0
  387. package/skills/scaffold-pipeline/SKILL.md +210 -0
  388. package/skills/scaffold-runner/SKILL.md +619 -0
  389. package/pipeline/planning/implementation-tasks.md +0 -57
  390. package/pipeline/planning/review-tasks.md +0 -38
  391. package/pipeline/quality/testing-strategy.md +0 -42
@@ -10,6 +10,8 @@ The implementation playbook is the definitive reference for AI agents executing
 
  This is the most critical finalization document. If the onboarding guide tells agents "what this project is," the playbook tells them "how to do the work."
 
+ ## Summary
+
  ## Task Execution Protocol
 
  ### How Agents Pick Work
@@ -63,6 +65,21 @@ Read before starting:
 
  If a task does not have a context brief, the agent should create one from the specification artifacts before starting.
 
+ ### Minimum Context by Task Type
+
+ When a per-task context block is incomplete, agents should consult this taxonomy to ensure they have sufficient context:
+
+ | Task Type | Required Docs | Additional Context |
+ |-----------|--------------|-------------------|
+ | Backend API | `docs/api-contracts.md`, `docs/database-schema.md`, `docs/domain-models/`, `docs/coding-standards.md`, `docs/tdd-standards.md` | Relevant ADR for API style choices |
+ | Frontend UI | `docs/ux-spec.md`, `docs/design-system.md`, `docs/api-contracts.md`, `docs/coding-standards.md`, `docs/tdd-standards.md` | Component patterns from design system |
+ | Database migration | `docs/database-schema.md`, `docs/domain-models/`, `docs/operations-runbook.md` | Rollback strategy from ops runbook |
+ | Infrastructure/CI | `docs/dev-setup.md`, `docs/git-workflow.md`, `docs/operations-runbook.md` | Deployment pipeline stages |
+ | Bug fix | Relevant source code, `docs/tdd-standards.md`, `docs/coding-standards.md` | Related test files, reproduction steps |
+ | Security hardening | `docs/security-review.md`, `docs/api-contracts.md`, `docs/coding-standards.md` | OWASP checklist items from security review |
+
+ ## Deep Guidance
+
  ## Coding Standards
 
  Coding standards ensure consistency across agents. Every agent must follow these conventions without exception. Inconsistency between agents produces a codebase that feels like it was written by different teams — because it was.
@@ -288,8 +305,8 @@ Before a task is considered complete, all quality gates must pass.
  ### Gate 1: Tests Pass
 
  ```bash
- npm test # All tests pass
- npm run test:coverage # Coverage meets threshold
+ make test # All tests pass
+ make test-coverage # Coverage meets threshold
  ```
 
  Every task must include tests for the code it adds or modifies:
@@ -301,8 +318,8 @@ Every task must include tests for the code it adds or modifies:
  ### Gate 2: Lint and Type Check
 
  ```bash
- npm run lint # No lint errors (warnings are allowed but discouraged)
- npm run typecheck # No type errors
+ make lint # No lint errors (warnings are allowed but discouraged)
+ make typecheck # No type errors
  ```
 
  Do not disable lint rules with `eslint-disable` unless the rule is genuinely wrong for that specific case, and add a comment explaining why.
@@ -310,9 +327,11 @@ Do not disable lint rules with `eslint-disable` unless the rule is genuinely wro
  ### Gate 3: Build Succeeds
 
  ```bash
- npm run build # Production build succeeds
+ make build # Production build succeeds
  ```
 
+ > Commands shown here are examples. Use the actual commands from your project's CLAUDE.md Key Commands table.
+
  If the build fails with warnings, investigate. Warnings often become errors in stricter environments.
 
  ### Gate 4: Manual Verification
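The automated gates above (tests, lint/typecheck, build) run in sequence and stop at the first failure. A minimal sketch of that loop, with `true` as a placeholder for each project command — the `make` targets named in the comments are the illustrative examples from this diff, not a real scaffold API:

```shell
#!/bin/sh
# Hypothetical quality-gate runner. Each gate is a command; the first
# failure aborts with a non-zero status. A real project would substitute
# its own commands from the CLAUDE.md Key Commands table (e.g. make test,
# make lint, make typecheck, make build).
set -u

gates_passed=0

run_gate() {
  # $1 = gate name, remaining args = the command to run
  name=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS $name"
    gates_passed=$((gates_passed + 1))
  else
    echo "FAIL $name" >&2
    return 1
  fi
}

# Placeholders stand in for the project's real gate commands
run_gate tests     true &&
run_gate lint      true &&
run_gate typecheck true &&
run_gate build     true &&
echo "All $gates_passed gates passed"
```

Because the gates are chained with `&&`, a failing gate short-circuits everything after it, mirroring the "do not proceed past a failing gate" rule.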
@@ -325,6 +344,12 @@ Automated tests are necessary but not sufficient. Always verify the feature work
 
  Run the full test suite, not just the tests for the changed code. New code can break existing features through unexpected interactions.
 
+ ### Gate 6: Evals
+
+ **Gate: Evals** — Run `make eval` (or project-equivalent from CLAUDE.md Key Commands). All eval checks must pass. If a specific eval fails, consult `docs/eval-standards.md` for category descriptions and resolution guidance.
+
+ Evals run collectively via `make eval`. If a specific eval category fails, consult `docs/eval-standards.md` for the category description and resolution approach.
+
  ## Inter-Agent Handoff
 
  When one agent completes a task and another agent will build on it, the completing agent must communicate:
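The eval guidance added in this release maps each failing category name to the document that explains the fix. That mapping can be sketched as a tiny lookup; the category names and doc paths come from the playbook text in this diff, while the helper function itself is hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: given a failing eval category, print the document
# an agent should consult. Mappings follow the playbook's examples
# (adherence -> coding standards, structure -> project structure, etc.);
# unknown categories fall back to the general eval-standards doc.
eval_guidance() {
  case "$1" in
    adherence)   echo "docs/coding-standards.md" ;;
    consistency) echo "docs/eval-standards.md" ;;
    structure)   echo "docs/project-structure.md" ;;
    security)    echo "docs/security-review.md" ;;
    *)           echo "docs/eval-standards.md" ;;
  esac
}

eval_guidance adherence   # prints docs/coding-standards.md
eval_guidance security    # prints docs/security-review.md
```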
@@ -402,3 +427,56 @@ The playbook is a living document. Update it when:
  - Agent coordination issues arise (add to parallel work rules)
 
  The playbook should be the first document agents read before their first task, and the document they reference throughout implementation. If an agent asks a question that the playbook should answer, the answer goes in the playbook.
+
+ ### Error Recovery
+
+ > The depth and specificity of error recovery guidance in CLAUDE.md depends on the `workflow-audit` step's methodology depth. At MVP depth, error recovery may be minimal.
+
+ When quality gates fail during implementation:
+
+ **Test failures:**
+ 1. Read the failing test to understand the expected behavior
+ 2. Check if the test is testing your change or pre-existing functionality
+ 3. If your change broke the test: fix the implementation, not the test
+ 4. If the test is wrong: document why and update the test with the fix
+ 5. Re-run the full test suite, not just the failing test
+
+ **CI failures:**
+ 1. Pull latest main and rebase your branch
+ 2. Run `make check` locally to reproduce the failure
+ 3. If the failure is environment-specific: check dev-setup.md for requirements
+ 4. If the failure is a flaky test: document the flakiness and retry once
+
+ ### Eval Failure Recovery
+
+ When `make eval` fails during implementation:
+
+ 1. **Read the failing test name** — eval category names indicate what's wrong (e.g., `adherence` = coding standard violation, `consistency` = cross-document mismatch)
+ 2. **Check `docs/eval-standards.md`** (if it exists) for category-specific guidance
+ 3. **Common eval failures**:
+    - **Adherence evals**: Code doesn't match coding-standards.md patterns. Fix: read the specific standard and adjust code.
+    - **Consistency evals**: Document references are stale or contradictory. Fix: update the reference to match current state.
+    - **Structure evals**: File/directory doesn't match project-structure.md. Fix: move files to correct location.
+    - **Security evals**: Missing input validation or auth check. Fix: add the missing security control per security-review.md.
+ 4. **If eval seems wrong**: Check if the eval itself is outdated. Flag for upstream review rather than working around it.
+
+ **Spec gap discovered during implementation:**
+ 1. Document the gap with specific details (what's missing, what's needed)
+ 2. Check if an ADR or architecture decision covers the case
+ 3. If the gap is small: make a judgment call, document it in the commit message
+ 4. If the gap is significant: pause the task and flag it for upstream resolution
+
+ **Agent produces incorrect output:**
+ 1. Review the task description and acceptance criteria
+ 2. Diff the output against the expected behavior
+ 3. If the task description was ambiguous: improve it for future agents
+ 4. Roll back the incorrect changes and retry with clearer context
+
+ ### Dependency Failure
+
+ When a task's upstream dependency hasn't merged or has failed:
+
+ 1. **Check the dependency task status** in docs/implementation-plan.md
+ 2. **If in-progress**: Wait for it to merge. Do not start work that depends on uncommitted changes.
+ 3. **If failed/blocked**: Flag for human review. The task may need to be reworked, reordered, or its dependency removed.
+ 4. **If the dependency is in a different agent's worktree**: Coordinate via AGENTS.md or the task tracking system. Never duplicate work.
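The dependency-failure checklist starts with looking up the blocking task's status in docs/implementation-plan.md. A sketch of that lookup, assuming a purely illustrative plan format of one `<task-id> <status>` pair per line — the real plan file's format is whatever the project generated:

```shell
#!/bin/sh
# Hypothetical dependency status check. The plan format below ("T-101 done")
# is an assumption for illustration only; adapt the parsing to the actual
# docs/implementation-plan.md layout.
plan=$(mktemp)
cat > "$plan" <<'EOF'
T-101 done
T-102 in-progress
T-103 blocked
EOF

dep_status() {
  # $1 = plan file, $2 = task id; prints the recorded status or "unknown"
  status=$(awk -v id="$2" '$1 == id { print $2 }' "$1")
  echo "${status:-unknown}"
}

dep_status "$plan" T-102   # prints "in-progress": wait for it to merge
dep_status "$plan" T-103   # prints "blocked": flag for human review
```

A status of `in-progress` means wait, `blocked` or `unknown` means escalate, matching steps 2 and 3 of the checklist above.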
@@ -6,7 +6,11 @@ topics: [gap-analysis, requirements, completeness, ambiguity, edge-cases]
 
  # Gap Analysis
 
- Gap analysis is the systematic process of finding what is missing from a set of requirements or specifications. A gap is anything that an implementing team would need to know but that the document does not tell them. Gaps are not errors (things stated incorrectly) — they are omissions (things not stated at all).
+ ## Summary
+
+ Gap analysis is the systematic process of finding what is missing from a set of requirements or specifications. A gap is anything that an implementing team would need to know but that the document does not tell them. Gaps are not errors (things stated incorrectly) — they are omissions (things not stated at all). The process uses section-by-section review, cross-reference checking, edge case enumeration, ambiguity detection, and contradiction detection to surface omissions before they become expensive implementation surprises.
+
+ ## Deep Guidance
 
  ## Systematic Analysis Approaches
 
@@ -8,13 +8,36 @@ topics: [prd, requirements, product, scoping]

 A Product Requirements Document is the single source of truth for what is being built and why. It defines the problem, the users, the scope, and the success criteria. Everything in the pipeline flows from the PRD — domain models, architecture, implementation tasks. A weak PRD propagates weakness through every downstream artifact.

- This document covers what makes a good PRD, what makes a bad one, and how to tell the difference.
+ ## Summary

- ## Problem Statement
+ ### PRD Structure
+
+ A complete PRD includes these sections:
+ 1. **Problem Statement** — Specific, testable, grounded in observable reality. Names a user group, describes a pain point, includes quantitative evidence.
+ 2. **Target Users** — Personas with roles, needs, current behavior, constraints, and success criteria. Typically 2-4 meaningful personas.
+ 3. **Feature Scoping** — Three explicit lists: In Scope (v1), Out of Scope, and Deferred (future). Each in-scope feature detailed enough to estimate.
+ 4. **Success Criteria** — Measurable outcomes tied to the problem statement with target values and measurement methods.
+ 5. **Constraints** — Technical, timeline, budget, team, and regulatory constraints traceable to architectural decisions.
+ 6. **Non-Functional Requirements** — Quantified performance, scalability, availability, security, accessibility, data, i18n, browser/device support, and monitoring requirements.
+ 7. **Competitive Context** — What exists, how this differs, why users would switch.
+
+ ### Quality Criteria
+
+ - Problem statement is specific and testable
+ - Features are prioritized with MoSCoW (Must/Should/Could/Won't)
+ - Success criteria have target values and measurement methods
+ - NFRs are quantified (not "fast" but "p95 under 200ms")
+ - Error scenarios and edge cases are addressed
+ - The PRD says WHAT, not HOW
+ - Every feature is detailed enough for estimation without prescribing implementation
+
+ ## Deep Guidance
+
+ ### Problem Statement

 The problem statement is the foundation. If it is wrong, everything built on top of it is wrong.

- ### What Makes a Good Problem Statement
+ #### What Makes a Good Problem Statement

 A good problem statement is **specific**, **testable**, and **grounded in observable reality**.

@@ -29,7 +52,7 @@ A good problem statement is **specific**, **testable**, and **grounded in observ
 - "Users want a better dashboard." (Aspirational, not grounded. What is wrong with the current one? What does "better" mean?)
 - "We need to modernize our technology stack." (Technology is not a problem — what user-facing or business issue does the old stack cause?)

- ### Problem Statement Checklist
+ #### Problem Statement Checklist

 - [ ] Names a specific user group (not "users" or "everyone")
 - [ ] Describes an observable behavior or pain point (not a desired state)
@@ -37,9 +60,9 @@ A good problem statement is **specific**, **testable**, and **grounded in observ
 - [ ] Does not prescribe a solution (the problem is not "we need feature X")
 - [ ] Can be validated — you can measure whether the problem is solved

- ## Target Users
+ ### Target Users — Detailed Persona Methodology

- ### Personas with Needs
+ #### Personas with Needs

 Each persona should have:
 - **Role or description** — Who they are in relation to the product.
@@ -69,17 +92,17 @@ Each persona should have:

 The bad persona tells the implementation team nothing actionable. It does not constrain design decisions.

- ### How Many Personas
+ #### How Many Personas

 Most products have 2-4 meaningful personas. If a PRD lists more than 6, the product scope is likely too broad. If it lists only 1, secondary users (admins, support staff, integration partners) may be missing.

- ### Anti-pattern: The Everything User
+ #### Anti-pattern: The Everything User

 A persona that represents all users is no persona at all. "Power users who want advanced features AND casual users who want simplicity" describes a contradiction, not a persona. Different personas may have conflicting needs — that is fine, but the PRD must state which takes priority.

- ## Feature Scoping
+ ### Feature Scoping — Depth

- ### What Is In, What Is Out, What Is Deferred
+ #### What Is In, What Is Out, What Is Deferred

 Every PRD should have three explicit lists:

@@ -126,7 +149,7 @@ Every PRD should have three explicit lists:

 This tells you nothing about boundaries. Is "user management" basic registration or full RBAC with teams and permissions? Is "analytics" a page view counter or a business intelligence suite?

- ### MoSCoW Prioritization
+ #### MoSCoW Prioritization — In Depth

 When the in-scope list is large, use MoSCoW to further prioritize:

@@ -160,7 +183,7 @@ Won't Have:
 - Social login
 ```

- ### Feature Detail Level
+ #### Feature Detail Level

 Each in-scope feature needs enough detail to be estimable:

@@ -175,9 +198,9 @@ Each in-scope feature needs enough detail to be estimable:

 The PRD says WHAT, not HOW.

- ## Success Criteria
+ ### Success Criteria — Depth

- ### Measurable Outcomes
+ #### Measurable Outcomes

 Success criteria define how you will know the product works. They must be measurable, specific, and tied to the problem statement.

@@ -193,7 +216,7 @@ Success criteria define how you will know the product works. They must be measur
 - "Revenue increases." (Not tied to the problem. Revenue can increase for many reasons.)
 - "We ship on time." (Success criteria for the project, not the product)

- ### Types of Success Criteria
+ #### Types of Success Criteria

 1. **User behavior metrics** — Conversion rates, completion rates, time-on-task, error rates.
 2. **Business metrics** — Revenue impact, cost reduction, customer acquisition.
@@ -202,9 +225,9 @@ Success criteria define how you will know the product works. They must be measur

 Every success criterion should have a **target value** and a **measurement method**. "Checkout abandonment under 40% as measured by analytics funnel tracking" is complete. "Checkout abandonment decreases" is not.

- ## Constraints
+ ### Constraints — Detailed Categories

- ### Categories of Constraints
+ #### Categories of Constraints

 **Technical constraints:**
 - Existing systems that must be integrated with.
@@ -233,7 +256,7 @@ Every success criterion should have a **target value** and a **measurement metho
 - Accessibility mandates (ADA, WCAG requirements).
 - Industry-specific regulations.

- ### How Constraints Affect Downstream Artifacts
+ #### How Constraints Affect Downstream Artifacts

 Each constraint should be traceable to architectural decisions:
 - "Must use PostgreSQL" → ADR for database choice.
@@ -241,11 +264,9 @@ Each constraint should be traceable to architectural decisions:
 - "Team of 3 developers" → Implementation tasks sized for 3 parallel workers.
 - "Launch by March 1" → Feature scope fits within timeline.

- ## Non-Functional Requirements
-
- NFRs define HOW the system should behave, not WHAT it should do. They are frequently under-specified in PRDs, which leads to expensive rework.
+ ### NFR Quantification Patterns

- ### Quantified NFRs
+ #### Quantified NFRs

 **Good:**
 - "Page load time: p95 under 2 seconds on 4G mobile connection."
@@ -260,7 +281,7 @@ NFRs define HOW the system should behave, not WHAT it should do. They are freque
 - "Scalable." (To what? 100 users? 1 million users? What is the growth curve?)
 - "Secure." (Against what threats? To what standard?)
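A quantified NFR is mechanically checkable in a way "fast" or "scalable" never is. As a minimal sketch (the `percentile` helper and the sample data are hypothetical, using the nearest-rank method):

```typescript
// Nearest-rank percentile: sort ascending, take the value at ceil(p/100 * n) - 1.
function percentile(samplesMs: number[], p: number): number {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// "Response time: p95 under 200ms" becomes a pass/fail check.
const latenciesMs = [120, 95, 180, 190, 130, 88, 150, 170, 110, 140];
const p95 = percentile(latenciesMs, 95);
console.log(`p95=${p95}ms, meets target: ${p95 < 200}`);
```

The same shape works for p50 or p99 targets; the point is that a quantified NFR turns into an assertion a load test can actually run.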

- ### NFR Categories Checklist
+ #### NFR Categories Checklist

 - [ ] **Performance** — Response times (p50, p95, p99), throughput, page load times
 - [ ] **Scalability** — Concurrent users, data volume, growth rate
@@ -272,42 +293,42 @@ NFRs define HOW the system should behave, not WHAT it should do. They are freque
 - [ ] **Browser/device support** — Minimum browser versions, mobile support, responsive breakpoints
 - [ ] **Monitoring** — What needs to be observable? Alerting thresholds?

- ## Competitive Context
+ ### Competitive Context Analysis

- ### What to Include
+ #### What to Include

 - **What exists** — Name competing products and what they do well.
 - **How this is different** — Specific differentiators, not "we're better."
 - **Why users would switch** — What pain does this product solve that competitors do not?
 - **What to learn from** — Features or patterns from competitors worth adopting.

- ### What NOT to Include
+ #### What NOT to Include

 - Exhaustive competitor feature matrices (belongs in market research, not PRD).
 - Competitive strategy or positioning (belongs in business plan, not PRD).
 - Pricing comparisons (unless pricing is a product feature).

- ## Common PRD Failures
+ ### Common PRD Failures

- ### The "Requirements as Solutions" Failure
+ #### The "Requirements as Solutions" Failure
 PRD prescribes technical solutions instead of stating requirements. "Use Redis for caching" belongs in architecture, not the PRD. The PRD should say "response time under 200ms" — how to achieve that is an architectural decision.

- ### The "Missing Sad Path" Failure
+ #### The "Missing Sad Path" Failure
 PRD describes only happy paths. What happens when payment fails? When the user's session expires during checkout? When the network drops? When the form has invalid data? Every user action that can fail should have at least a sentence about what happens.

- ### The "Everyone Is a User" Failure
+ #### The "Everyone Is a User" Failure
 PRD addresses "users" as a monolith instead of identifying distinct personas with distinct needs. Admins, end users, API consumers, and support staff have different requirements.

- ### The "Implied API" Failure
+ #### The "Implied API" Failure
 PRD describes a UI but implies an API without stating it. "Users can view their order history" implies GET /orders, data model for orders, pagination, filtering, sorting. These implications should be explicit in the PRD.
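One way to surface those implications is to expand each UI-level feature into its implied surface area. A hypothetical expansion of the order-history example:

```markdown
Feature: Order history
- Users can view past orders, newest first (implies pagination)
- Implies: an order-listing endpoint (e.g., GET /orders)
- Implies: an Order data model (id, date, line items, total, status)
- Open question: are filtering and sorting required in v1, or deferred?
```

Writing the implications down converts a silent assumption into something reviewers can accept, trim, or defer.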

- ### The "No Boundaries" Failure
+ #### The "No Boundaries" Failure
 PRD states what is in scope but never states what is out. Every documentation phase becomes a scope negotiation.

- ### The "Success Is Shipping" Failure
+ #### The "Success Is Shipping" Failure
 PRD has no success criteria beyond "launch the product." Without measurable outcomes, there is no way to know if the product solved the problem.

- ## PRD Quality Checklist
+ ### PRD Quality Checklist

 Before considering a PRD complete:

@@ -10,6 +10,18 @@ This knowledge covers feature-level innovation — discovering new capabilities,

 This is distinct from user story innovation (`user-story-innovation.md`), which covers UX-level enhancements to existing features. If an idea doesn't require a new PRD section or feature entry, it belongs in user story innovation, not here.

+ ## Summary
+
+ - **Scope**: Feature-level innovation (new capabilities, competitive gaps, defensive improvements). UX polish on existing features belongs in user story innovation.
+ - **Competitive analysis**: Research direct competitors, adjacent products, and emerging patterns. Classify findings as table-stakes (must-have), differentiator (evaluate), or copied-feature (skip).
+ - **UX gap analysis**: Evaluate first-60-seconds experience, flow friction points, and missing flows that force workarounds.
+ - **Missing expected features**: Search/discovery, data management (bulk import/export, undo), communication (notifications), and personalization (settings, saved views).
+ - **AI-native opportunities**: Natural language interfaces, auto-categorization, predictive behavior, and content generation. Must pass the "magic vs. gimmick" test.
+ - **Defensive product thinking**: Write plausible 1-star reviews to identify gaps; analyze abandonment barriers (complexity, performance, trust, value, integration).
+ - **Evaluation framework**: Cost (trivial/moderate/significant) x Impact (nice-to-have/noticeable/differentiator). Must-have v1 = differentiator at any cost up to moderate, or noticeable at trivial cost.
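The must-have rule in the last bullet can be sketched as a small decision function (a hypothetical illustration; `Cost`, `Impact`, and `isMustHaveV1` are not part of the scaffold API):

```typescript
type Cost = "trivial" | "moderate" | "significant";
type Impact = "nice-to-have" | "noticeable" | "differentiator";

// Must-have v1: a differentiator at any cost up to moderate,
// or a noticeable improvement that is trivial to build.
function isMustHaveV1(cost: Cost, impact: Impact): boolean {
  if (impact === "differentiator") return cost !== "significant";
  if (impact === "noticeable") return cost === "trivial";
  return false; // nice-to-have ideas are deferred regardless of cost
}
```

Everything that fails the check lands in the deferred or rejected buckets, which keeps the v1 list defensible.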
+
+ ## Deep Guidance
+
 ## Scope Boundary

 **In scope:**
@@ -0,0 +1,213 @@
+ ---
+ name: vision-craft
+ description: What makes a good product vision — strategic framing, audience definition, competitive positioning, guiding principles
+ topics: [vision, strategy, product, positioning, competitive-analysis]
+ ---
+
+ # Vision Craft
+
+ A product vision document is the strategic North Star that guides all downstream product decisions. It answers "why does this product exist?" and "what positive change does it create in the world?" — questions that are upstream of the PRD's "what should we build?" Everything in the pipeline flows from the vision. A weak vision produces a PRD without strategic grounding, which produces features without coherent purpose.
+
+ ## Summary
+
+ ### Vision Document Structure
+
+ A complete product vision document includes these sections:
+ 1. **Vision Statement** — One inspiring sentence describing the positive change the product creates. Not a feature description. Concise enough to remember and repeat.
+ 2. **Elevator Pitch** — Geoffrey Moore template: For [target customer] who [need], [product] is a [category] that [key benefit]. Unlike [alternative], our product [differentiation].
+ 3. **Problem Space** — The pain in vivid detail: who feels it, how they cope, why existing solutions fail.
+ 4. **Target Audience** — Personas defined by behaviors and motivations, not demographics. Primary and secondary audiences with context of use.
+ 5. **Value Proposition** — Unique value framed as outcomes, not features. Why someone would choose this over alternatives.
+ 6. **Competitive Landscape** — Direct competitors, indirect alternatives, "do nothing" option. Honest about strengths and weaknesses.
+ 7. **Guiding Principles** — 3-5 design tenets framed as prioritization tradeoffs ("We choose X over Y").
+ 8. **Anti-Vision** — What the product is NOT. Traps to avoid. Directions that would dilute the vision.
+ 9. **Business Model Intuition** — Revenue model, unit economics assumptions, go-to-market direction.
+ 10. **Success Criteria** — Leading indicators, year 1 milestones, year 3 aspirations, what failure looks like.
+ 11. **Strategic Risks & Assumptions** — Key bets, what could invalidate them, severity and mitigation.
+ 12. **Open Questions** — Unresolved strategic questions for future consideration.
+
+ ### Quality Criteria
+
+ - Vision statement describes positive change, not a product feature
+ - Vision statement is concise enough to remember after hearing once
+ - Guiding principles create real tradeoffs (if nobody would disagree, it's not a principle)
+ - Competitive analysis is honest about the product's weaknesses, not just strengths
+ - Target audience describes behaviors and motivations, not demographics
+ - Business model section addresses sustainability without being a full business plan
+ - Anti-vision prevents real traps, not just vague disclaimers
+
+ ## Deep Guidance
+
+ ### Vision Statement
+
+ The vision statement is the foundation. If it fails, the entire document lacks a North Star.
+
+ #### What Makes a Good Vision Statement
+
+ A good vision statement is **inspiring**, **concise**, **enduring**, and **customer-centric**. It describes the positive change the product creates in the world — not a feature, not a business metric.
+
+ **Good examples:**
+ - "Accelerate the world's transition to sustainable energy" (Tesla)
+ - "Belong anywhere" (Airbnb)
+ - "Create economic opportunity for every member of the global workforce" (LinkedIn)
+ - "Every book ever printed, in any language, all available in 60 seconds" (Kindle)
+ - "Make work life simpler, more pleasant, and more productive" (Slack)
+ - "Increase the GDP of the internet" (Stripe)
+
+ **Bad examples:**
+ - "Be the #1 project management tool in the enterprise market" (business metric, not positive change)
+ - "Build an AI-powered platform for data analytics" (solution description, not vision)
+ - "Provide a seamless user experience for managing tasks" (vague, feature-level)
+ - "Disrupt the healthcare industry" (aspirational buzzword, says nothing specific)
+
+ #### Roman Pichler's Vision Quality Checklist
+
+ - **Inspiring** — Describes a positive change that motivates people
+ - **Shared** — Co-created with the team, not handed down from above
+ - **Ethical** — Does not cause harm to people or the planet
+ - **Concise** — Easy to understand, remember, and repeat
+ - **Ambitious** — A big, audacious goal (BHAG) that stretches beyond the comfortable
+ - **Enduring** — Guides for 5-10 years; free from solution-specific assumptions
+
+ ### Geoffrey Moore's Elevator Pitch Template
+
+ From *Crossing the Chasm* — the most widely used single-statement framework for articulating product positioning:
+
+ ```
+ For [target customer]
+ Who [statement of need or opportunity],
+ The [product name] is a [product category]
+ That [key benefit, reason to buy].
+ Unlike [primary competitive alternative],
+ Our product [statement of primary differentiation].
+ ```
+
+ **When to use:** As a structured exercise to force clarity about target customer, need, category, and differentiation. The output should feel like a natural sentence, not a fill-in-the-blank template.
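A filled-in example may help; the product and claims below are invented purely for illustration:

```
For solo consultants
Who lose billable hours chasing unpaid invoices,
InvoicePilot is an accounts-receivable assistant
That follows up on overdue invoices automatically.
Unlike general-purpose accounting suites,
Our product needs no bookkeeping knowledge to set up.
```

Read aloud, it should sound like something you would actually say to a prospect; if it still sounds like a template, keep editing.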
+
+ ### Guiding Principles
+
+ Guiding principles are design tenets that constrain decisions. They are NOT platitudes.
+
+ #### The Test
+
+ If nobody would disagree with a principle, it's not a principle — it's a platitude. "We value quality" is not a principle. "We choose correctness over speed" is a principle because it implies a real tradeoff (some teams would choose the opposite).
+
+ **Good principles (create real tradeoffs):**
+ - "We choose simplicity over power" (implies some features won't exist)
+ - "We choose transparency over control" (implies users see everything, even messy internals)
+ - "We choose speed of iteration over perfection" (implies shipping rough work)
+ - "We choose privacy over personalization" (implies less-tailored experiences)
+
+ **Bad principles (platitudes):**
+ - "We value user experience" (who wouldn't?)
+ - "We build reliable software" (this is table stakes, not a principle)
+ - "We care about security" (no one would say they don't)
+
+ ### Anti-Vision
+
+ The anti-vision explicitly names what the product is NOT. This is critical for preventing scope creep and maintaining strategic focus.
+
+ #### What to Include
+
+ - Features the team will be tempted to build but shouldn't
+ - Common traps in this product space that catch every competitor
+ - Directions that would dilute the core value proposition
+ - "If we find ourselves doing X, we've lost the plot"
+
+ #### Why It Matters
+
+ Without an anti-vision, the team defaults to "yes" for every reasonable-sounding feature request. The anti-vision gives explicit permission to say "no."
+
+ ### Competitive Landscape
+
+ #### Honest Competitive Analysis
+
+ The competitive landscape section must be honest — acknowledge what competitors do well, not just where they fall short. A dishonest competitive analysis ("all competitors are terrible") undermines credibility and leads to blind spots in product strategy.
+
+ **Structure:**
+ - **Direct competitors** — Products solving the same problem for the same users
+ - **Indirect alternatives** — Different approaches to the same underlying need
+ - **"Do nothing" option** — The status quo. Often the strongest competitor.
+
+ For each: what they do well, where they fall short, and why users would choose your product over them.
+
+ ### Common Anti-Patterns
+
+ 1. **Confusing Vision with Strategy** — The vision says what the world looks like when the product succeeds. The strategy says how you get there. Keep them separate.
+ 2. **Tying Vision to a Solution** — "Build the best X" references a specific product form. "Enable Y for Z people" survives pivots.
+ 3. **Failing to Inspire** — Corporate boilerplate doesn't motivate teams. Co-create the vision; don't hand it down.
+ 4. **Changing the Vision Frequently** — A vision should endure for years. If it changes quarterly, it's not a vision — it's a roadmap item.
+ 5. **Overly Broad Target Audience** — "Everyone" is not a target audience. Specificity enables focus.
+ 6. **Features as Needs** — Listing features (solution space) instead of needs (problem space) limits the team's design freedom.
+ 7. **Decorative Wall Statement** — A vision that hangs on the wall but never guides actual decisions is worse than no vision at all.
+
+ ### The Product Development Hierarchy
+
+ Vision sits at the top of this hierarchy:
+
+ ```
+ Company Mission / Purpose
+
+ Company Vision
+
+ Product Vision ← THIS DOCUMENT
+
+ Product Strategy
+
+ Product Requirements ← PRD (docs/plan.md)
+
+ Implementation
+ ```
+
+ The vision document should inform the PRD, not the other way around. When the PRD and vision conflict, revisit the vision first.
+
+ ### Success Criteria & Measurement
+
+ Success criteria in a vision document are directional — they define what "winning" looks like without the precision of a PRD's success metrics.
+
+ #### Levels of Success Measurement
+
+ - **Leading indicators** — Early signals that validate the vision direction. These are behavioral: are users doing the thing you expected? Are they coming back? Example: "Users who complete onboarding return within 48 hours."
+ - **Year 1 milestones** — Concrete, time-bound achievements that demonstrate market fit. Example: "1,000 active users creating at least one project per week."
+ - **Year 3 aspirations** — Ambitious but grounded targets that show the vision is being realized. These should feel like a stretch but not fantasy.
+ - **Failure indicators** — What would make this a failure even if it ships on time and works correctly? Example: "If users create an account but never return after day 1, the core value proposition is wrong."
+
+ #### Common Mistakes in Success Criteria
+
+ - Vanity metrics ("1 million downloads") instead of engagement metrics ("daily active usage")
+ - Unmeasurable aspirations ("users love the product") instead of observable behavior
+ - Missing the "failure despite shipping" scenario — the most dangerous blind spot
+ - Setting criteria so low they're guaranteed, removing the diagnostic value
+
+ ### Business Model Intuition
+
+ The vision document captures directional thinking about sustainability, not a financial model.
+
+ #### What to Include
+
+ - **Revenue model** — How does this make money? Subscription, freemium, marketplace commission, enterprise licensing, usage-based? Pick one primary model and explain why.
+ - **Unit economics direction** — What are the key cost drivers? What does a "unit" of value look like? Does the economics improve with scale?
+ - **Go-to-market intuition** — How do users discover this product? Product-led growth, sales-led, community-driven, partnership channels? The answer shapes everything from pricing to features.
+
+ #### What NOT to Include
+
+ - Detailed financial projections (that's a business plan)
+ - Multi-year revenue forecasts (that's a pitch deck)
+ - Competitive pricing analysis (that's market research — do it, but don't put it in the vision)
+
+ ### Vision-to-PRD Handoff
+
+ The vision document's primary downstream consumer is the PRD. A well-written vision makes PRD creation straightforward; a vague vision forces the PRD author to make strategic decisions that should have been settled upstream.
+
+ #### Handoff Checklist
+
+ Before declaring the vision ready for PRD creation, verify:
+
+ 1. **Problem Space** maps cleanly to PRD's Problem Statement
+ 2. **Target Audience** personas are specific enough for user stories
+ 3. **Guiding Principles** are concrete enough to resolve "should we build X?" questions
+ 4. **Competitive Landscape** provides enough context to differentiate features
+ 5. **Anti-Vision** is clear enough to reject out-of-scope feature requests
+ 6. **Open Questions** do not include anything that would block product definition
+
+ If any of these fail, the vision needs another pass before the PRD can begin.
@@ -10,6 +10,18 @@ ADRs encode the "why" behind the architecture. They must be complete (every sign

 Follows the review process defined in `review-methodology.md`.

+ ## Summary
+
+ - **Pass 1 — Decision Coverage**: Every significant architectural decision has an ADR; technology choices, pattern selections, and constraint trade-offs all recorded.
+ - **Pass 2 — Rationale Quality**: Alternatives are genuinely viable (not straw-manned); consequences are honest with both positives and negatives.
+ - **Pass 3 — Contradiction Detection**: No two ADRs make conflicting decisions without explicit acknowledgment; supersession relationships documented.
+ - **Pass 4 — Implied Decision Mining**: Decisions visible in artifacts but never formally recorded as ADRs are identified and flagged.
+ - **Pass 5 — Status Hygiene**: ADR statuses reflect reality; no stale "proposed" ADRs; supersession chains are clean.
+ - **Pass 6 — Cross-Reference Integrity**: Cross-references between ADRs are correct and bidirectional; no broken or circular reference chains.
+ - **Pass 7 — Downstream Readiness**: Technology and pattern decisions are finalized in "accepted" status so architecture can proceed without ambiguity.
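The mechanical half of Pass 6, broken references and circular supersession chains, is automatable. A sketch under stated assumptions (the `Adr` shape and `checkSupersession` are hypothetical, not part of this package):

```typescript
interface Adr {
  id: string;
  supersededBy?: string; // id of the superseding ADR, if any
}

// Walks each supersession chain, reporting references to missing ADRs
// and any cycle encountered along the way.
function checkSupersession(adrs: Adr[]): { broken: string[]; cycles: string[][] } {
  const byId = new Map(adrs.map((a) => [a.id, a] as const));
  const broken: string[] = [];
  const cycles: string[][] = [];
  for (const adr of adrs) {
    const seen = new Set<string>([adr.id]);
    let current = adr;
    while (current.supersededBy) {
      const next = byId.get(current.supersededBy);
      if (!next) {
        broken.push(current.id); // dangling reference
        break;
      }
      if (seen.has(next.id)) {
        cycles.push([...seen, next.id]); // chain loops back on itself
        break;
      }
      seen.add(next.id);
      current = next;
    }
  }
  return { broken, cycles };
}
```

Each ADR on a cycle reports the loop once, so a two-node cycle appears twice; de-duplication is left out to keep the sketch short. Bidirectionality of free-text cross-references still needs human review.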
+
+ ## Deep Guidance
+
 ---

 ## Pass 1: Decision Coverage
@@ -201,3 +213,35 @@ For each category, verify at least one accepted ADR covers it. If a category is
 - P0: "The monolith-vs-services question has two proposed ADRs (ADR-003, ADR-004) but neither is accepted. The system architecture step cannot define component boundaries."
 - P1: "Authentication approach is not covered by any ADR. The system architecture step needs to know the auth pattern to design the auth component."
 - P2: "Monitoring strategy has no ADR. This could be deferred to the operations step but should be noted."
+
+ ### Example Review Finding
+
+ ```markdown
+ ### Finding: Straw-man alternatives mask the real decision rationale
+
+ **Pass:** 2 — Rationale Quality
+ **Priority:** P0
+ **Location:** ADR-003 "Use React for Frontend Framework"
+
+ **Issue:** ADR-003 lists two alternatives: "Use jQuery" and "Build from scratch
+ with vanilla JS." Neither is a genuinely viable alternative for a 2024 SPA with
+ the complexity described in the PRD. The real alternatives — Vue, Svelte, Angular
+ — are not mentioned.
+
+ The consequences section lists four benefits and zero costs. React has well-known
+ trade-offs (large bundle size, JSX learning curve, frequent ecosystem churn) that
+ are absent.
+
+ **Impact:** When conditions change (e.g., bundle size becomes a priority, or the
+ team grows to include Vue-experienced developers), there is no documented rationale
+ for why React was chosen over comparable frameworks. The ADR cannot be meaningfully
+ re-evaluated because the real decision criteria were never recorded.
+
+ **Recommendation:** Replace alternatives with genuinely considered options (Vue 3,
+ Svelte/SvelteKit, Angular). For each, document honest pros and cons. Add negative
+ consequences to the React decision: bundle size overhead, ecosystem churn rate,
+ and dependency on the React team's architectural direction (Server Components,
+ compiler changes).
+
+ **Trace:** ADR-003 → blocks Architecture Phase component structure decisions
+ ```