@zigrivers/scaffold 2.28.1 → 2.38.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (375)
  1. package/README.md +309 -136
  2. package/dist/cli/commands/build.d.ts.map +1 -1
  3. package/dist/cli/commands/build.js +94 -14
  4. package/dist/cli/commands/build.js.map +1 -1
  5. package/dist/cli/commands/build.test.js +30 -5
  6. package/dist/cli/commands/build.test.js.map +1 -1
  7. package/dist/cli/commands/check.d.ts +12 -0
  8. package/dist/cli/commands/check.d.ts.map +1 -0
  9. package/dist/cli/commands/check.js +311 -0
  10. package/dist/cli/commands/check.js.map +1 -0
  11. package/dist/cli/commands/check.test.d.ts +2 -0
  12. package/dist/cli/commands/check.test.d.ts.map +1 -0
  13. package/dist/cli/commands/check.test.js +412 -0
  14. package/dist/cli/commands/check.test.js.map +1 -0
  15. package/dist/cli/commands/complete.d.ts +12 -0
  16. package/dist/cli/commands/complete.d.ts.map +1 -0
  17. package/dist/cli/commands/complete.js +103 -0
  18. package/dist/cli/commands/complete.js.map +1 -0
  19. package/dist/cli/commands/complete.test.d.ts +2 -0
  20. package/dist/cli/commands/complete.test.d.ts.map +1 -0
  21. package/dist/cli/commands/complete.test.js +133 -0
  22. package/dist/cli/commands/complete.test.js.map +1 -0
  23. package/dist/cli/commands/dashboard.d.ts.map +1 -1
  24. package/dist/cli/commands/dashboard.js +12 -8
  25. package/dist/cli/commands/dashboard.js.map +1 -1
  26. package/dist/cli/commands/info.d.ts.map +1 -1
  27. package/dist/cli/commands/info.js +4 -0
  28. package/dist/cli/commands/info.js.map +1 -1
  29. package/dist/cli/commands/knowledge.d.ts.map +1 -1
  30. package/dist/cli/commands/knowledge.js +6 -2
  31. package/dist/cli/commands/knowledge.js.map +1 -1
  32. package/dist/cli/commands/knowledge.test.js +16 -11
  33. package/dist/cli/commands/knowledge.test.js.map +1 -1
  34. package/dist/cli/commands/next.d.ts.map +1 -1
  35. package/dist/cli/commands/next.js +41 -13
  36. package/dist/cli/commands/next.js.map +1 -1
  37. package/dist/cli/commands/next.test.js +3 -0
  38. package/dist/cli/commands/next.test.js.map +1 -1
  39. package/dist/cli/commands/reset.d.ts +1 -0
  40. package/dist/cli/commands/reset.d.ts.map +1 -1
  41. package/dist/cli/commands/reset.js +179 -67
  42. package/dist/cli/commands/reset.js.map +1 -1
  43. package/dist/cli/commands/reset.test.js +360 -0
  44. package/dist/cli/commands/reset.test.js.map +1 -1
  45. package/dist/cli/commands/rework.d.ts +20 -0
  46. package/dist/cli/commands/rework.d.ts.map +1 -0
  47. package/dist/cli/commands/rework.js +332 -0
  48. package/dist/cli/commands/rework.js.map +1 -0
  49. package/dist/cli/commands/rework.test.d.ts +2 -0
  50. package/dist/cli/commands/rework.test.d.ts.map +1 -0
  51. package/dist/cli/commands/rework.test.js +297 -0
  52. package/dist/cli/commands/rework.test.js.map +1 -0
  53. package/dist/cli/commands/run.d.ts.map +1 -1
  54. package/dist/cli/commands/run.js +59 -31
  55. package/dist/cli/commands/run.js.map +1 -1
  56. package/dist/cli/commands/run.test.js +288 -6
  57. package/dist/cli/commands/run.test.js.map +1 -1
  58. package/dist/cli/commands/skill.d.ts +12 -0
  59. package/dist/cli/commands/skill.d.ts.map +1 -0
  60. package/dist/cli/commands/skill.js +123 -0
  61. package/dist/cli/commands/skill.js.map +1 -0
  62. package/dist/cli/commands/skill.test.d.ts +2 -0
  63. package/dist/cli/commands/skill.test.d.ts.map +1 -0
  64. package/dist/cli/commands/skill.test.js +297 -0
  65. package/dist/cli/commands/skill.test.js.map +1 -0
  66. package/dist/cli/commands/skip.d.ts +1 -1
  67. package/dist/cli/commands/skip.d.ts.map +1 -1
  68. package/dist/cli/commands/skip.js +123 -57
  69. package/dist/cli/commands/skip.js.map +1 -1
  70. package/dist/cli/commands/skip.test.js +91 -0
  71. package/dist/cli/commands/skip.test.js.map +1 -1
  72. package/dist/cli/commands/status.d.ts +1 -0
  73. package/dist/cli/commands/status.d.ts.map +1 -1
  74. package/dist/cli/commands/status.js +57 -10
  75. package/dist/cli/commands/status.js.map +1 -1
  76. package/dist/cli/commands/status.test.js +81 -0
  77. package/dist/cli/commands/status.test.js.map +1 -1
  78. package/dist/cli/commands/update.test.js +252 -0
  79. package/dist/cli/commands/update.test.js.map +1 -1
  80. package/dist/cli/commands/version.test.js +171 -1
  81. package/dist/cli/commands/version.test.js.map +1 -1
  82. package/dist/cli/index.d.ts.map +1 -1
  83. package/dist/cli/index.js +8 -0
  84. package/dist/cli/index.js.map +1 -1
  85. package/dist/core/adapters/adapter.d.ts +14 -0
  86. package/dist/core/adapters/adapter.d.ts.map +1 -1
  87. package/dist/core/adapters/adapter.js.map +1 -1
  88. package/dist/core/adapters/adapter.test.js +10 -0
  89. package/dist/core/adapters/adapter.test.js.map +1 -1
  90. package/dist/core/adapters/claude-code.d.ts.map +1 -1
  91. package/dist/core/adapters/claude-code.js +47 -10
  92. package/dist/core/adapters/claude-code.js.map +1 -1
  93. package/dist/core/adapters/claude-code.test.js +41 -20
  94. package/dist/core/adapters/claude-code.test.js.map +1 -1
  95. package/dist/core/adapters/codex.d.ts.map +1 -1
  96. package/dist/core/adapters/codex.js +5 -1
  97. package/dist/core/adapters/codex.js.map +1 -1
  98. package/dist/core/adapters/codex.test.js +5 -0
  99. package/dist/core/adapters/codex.test.js.map +1 -1
  100. package/dist/core/adapters/universal.d.ts.map +1 -1
  101. package/dist/core/adapters/universal.js +0 -1
  102. package/dist/core/adapters/universal.js.map +1 -1
  103. package/dist/core/adapters/universal.test.js +5 -0
  104. package/dist/core/adapters/universal.test.js.map +1 -1
  105. package/dist/core/assembly/context-gatherer.d.ts.map +1 -1
  106. package/dist/core/assembly/context-gatherer.js +5 -2
  107. package/dist/core/assembly/context-gatherer.js.map +1 -1
  108. package/dist/core/assembly/engine.d.ts.map +1 -1
  109. package/dist/core/assembly/engine.js +10 -2
  110. package/dist/core/assembly/engine.js.map +1 -1
  111. package/dist/core/assembly/engine.test.js +19 -0
  112. package/dist/core/assembly/engine.test.js.map +1 -1
  113. package/dist/core/assembly/knowledge-loader.d.ts +25 -0
  114. package/dist/core/assembly/knowledge-loader.d.ts.map +1 -1
  115. package/dist/core/assembly/knowledge-loader.js +75 -2
  116. package/dist/core/assembly/knowledge-loader.js.map +1 -1
  117. package/dist/core/assembly/knowledge-loader.test.js +388 -1
  118. package/dist/core/assembly/knowledge-loader.test.js.map +1 -1
  119. package/dist/core/assembly/meta-prompt-loader.d.ts +6 -0
  120. package/dist/core/assembly/meta-prompt-loader.d.ts.map +1 -1
  121. package/dist/core/assembly/meta-prompt-loader.js +41 -25
  122. package/dist/core/assembly/meta-prompt-loader.js.map +1 -1
  123. package/dist/core/assembly/preset-loader.d.ts +10 -0
  124. package/dist/core/assembly/preset-loader.d.ts.map +1 -1
  125. package/dist/core/assembly/preset-loader.js +26 -1
  126. package/dist/core/assembly/preset-loader.js.map +1 -1
  127. package/dist/core/assembly/preset-loader.test.js +65 -1
  128. package/dist/core/assembly/preset-loader.test.js.map +1 -1
  129. package/dist/core/assembly/update-mode.d.ts.map +1 -1
  130. package/dist/core/assembly/update-mode.js +10 -4
  131. package/dist/core/assembly/update-mode.js.map +1 -1
  132. package/dist/core/assembly/update-mode.test.js +47 -0
  133. package/dist/core/assembly/update-mode.test.js.map +1 -1
  134. package/dist/core/dependency/dependency.d.ts.map +1 -1
  135. package/dist/core/dependency/dependency.js +3 -2
  136. package/dist/core/dependency/dependency.js.map +1 -1
  137. package/dist/core/dependency/dependency.test.js +2 -0
  138. package/dist/core/dependency/dependency.test.js.map +1 -1
  139. package/dist/core/dependency/eligibility.js +3 -3
  140. package/dist/core/dependency/eligibility.js.map +1 -1
  141. package/dist/core/dependency/eligibility.test.js +2 -0
  142. package/dist/core/dependency/eligibility.test.js.map +1 -1
  143. package/dist/core/dependency/graph.d.ts.map +1 -1
  144. package/dist/core/dependency/graph.js +4 -0
  145. package/dist/core/dependency/graph.js.map +1 -1
  146. package/dist/core/dependency/graph.test.d.ts +2 -0
  147. package/dist/core/dependency/graph.test.d.ts.map +1 -0
  148. package/dist/core/dependency/graph.test.js +262 -0
  149. package/dist/core/dependency/graph.test.js.map +1 -0
  150. package/dist/core/rework/phase-selector.d.ts +24 -0
  151. package/dist/core/rework/phase-selector.d.ts.map +1 -0
  152. package/dist/core/rework/phase-selector.js +98 -0
  153. package/dist/core/rework/phase-selector.js.map +1 -0
  154. package/dist/core/rework/phase-selector.test.d.ts +2 -0
  155. package/dist/core/rework/phase-selector.test.d.ts.map +1 -0
  156. package/dist/core/rework/phase-selector.test.js +138 -0
  157. package/dist/core/rework/phase-selector.test.js.map +1 -0
  158. package/dist/dashboard/generator.d.ts +48 -17
  159. package/dist/dashboard/generator.d.ts.map +1 -1
  160. package/dist/dashboard/generator.js +75 -5
  161. package/dist/dashboard/generator.js.map +1 -1
  162. package/dist/dashboard/generator.test.js +213 -5
  163. package/dist/dashboard/generator.test.js.map +1 -1
  164. package/dist/dashboard/template.d.ts +1 -1
  165. package/dist/dashboard/template.d.ts.map +1 -1
  166. package/dist/dashboard/template.js +755 -114
  167. package/dist/dashboard/template.js.map +1 -1
  168. package/dist/e2e/knowledge.test.js +4 -3
  169. package/dist/e2e/knowledge.test.js.map +1 -1
  170. package/dist/e2e/pipeline.test.js +2 -0
  171. package/dist/e2e/pipeline.test.js.map +1 -1
  172. package/dist/e2e/rework.test.d.ts +6 -0
  173. package/dist/e2e/rework.test.d.ts.map +1 -0
  174. package/dist/e2e/rework.test.js +226 -0
  175. package/dist/e2e/rework.test.js.map +1 -0
  176. package/dist/index.js +0 -0
  177. package/dist/project/adopt.test.js +2 -0
  178. package/dist/project/adopt.test.js.map +1 -1
  179. package/dist/project/claude-md.js +2 -2
  180. package/dist/project/claude-md.js.map +1 -1
  181. package/dist/project/claude-md.test.js +4 -4
  182. package/dist/project/claude-md.test.js.map +1 -1
  183. package/dist/project/detector.d.ts.map +1 -1
  184. package/dist/project/detector.js +4 -1
  185. package/dist/project/detector.js.map +1 -1
  186. package/dist/project/frontmatter.d.ts.map +1 -1
  187. package/dist/project/frontmatter.js +54 -15
  188. package/dist/project/frontmatter.js.map +1 -1
  189. package/dist/project/frontmatter.test.js +2 -2
  190. package/dist/project/frontmatter.test.js.map +1 -1
  191. package/dist/state/rework-manager.d.ts +16 -0
  192. package/dist/state/rework-manager.d.ts.map +1 -0
  193. package/dist/state/rework-manager.js +126 -0
  194. package/dist/state/rework-manager.js.map +1 -0
  195. package/dist/state/rework-manager.test.d.ts +2 -0
  196. package/dist/state/rework-manager.test.d.ts.map +1 -0
  197. package/dist/state/rework-manager.test.js +191 -0
  198. package/dist/state/rework-manager.test.js.map +1 -0
  199. package/dist/state/state-manager.d.ts +13 -0
  200. package/dist/state/state-manager.d.ts.map +1 -1
  201. package/dist/state/state-manager.js +39 -2
  202. package/dist/state/state-manager.js.map +1 -1
  203. package/dist/state/state-manager.test.js +74 -1
  204. package/dist/state/state-manager.test.js.map +1 -1
  205. package/dist/state/state-migration.d.ts +23 -0
  206. package/dist/state/state-migration.d.ts.map +1 -0
  207. package/dist/state/state-migration.js +144 -0
  208. package/dist/state/state-migration.js.map +1 -0
  209. package/dist/state/state-migration.test.d.ts +2 -0
  210. package/dist/state/state-migration.test.d.ts.map +1 -0
  211. package/dist/state/state-migration.test.js +451 -0
  212. package/dist/state/state-migration.test.js.map +1 -0
  213. package/dist/types/assembly.d.ts +2 -0
  214. package/dist/types/assembly.d.ts.map +1 -1
  215. package/dist/types/dependency.d.ts +2 -2
  216. package/dist/types/dependency.d.ts.map +1 -1
  217. package/dist/types/frontmatter.d.ts +100 -7
  218. package/dist/types/frontmatter.d.ts.map +1 -1
  219. package/dist/types/frontmatter.js +89 -1
  220. package/dist/types/frontmatter.js.map +1 -1
  221. package/dist/types/index.d.ts +1 -0
  222. package/dist/types/index.d.ts.map +1 -1
  223. package/dist/types/index.js +1 -0
  224. package/dist/types/index.js.map +1 -1
  225. package/dist/types/lock.d.ts +1 -1
  226. package/dist/types/lock.d.ts.map +1 -1
  227. package/dist/types/rework.d.ts +36 -0
  228. package/dist/types/rework.d.ts.map +1 -0
  229. package/dist/types/rework.js +2 -0
  230. package/dist/types/rework.js.map +1 -0
  231. package/dist/utils/errors.d.ts +1 -0
  232. package/dist/utils/errors.d.ts.map +1 -1
  233. package/dist/utils/errors.js +8 -0
  234. package/dist/utils/errors.js.map +1 -1
  235. package/dist/utils/fs.d.ts +6 -0
  236. package/dist/utils/fs.d.ts.map +1 -1
  237. package/dist/utils/fs.js +13 -0
  238. package/dist/utils/fs.js.map +1 -1
  239. package/dist/validation/config-validator.test.d.ts +2 -0
  240. package/dist/validation/config-validator.test.d.ts.map +1 -0
  241. package/dist/validation/config-validator.test.js +210 -0
  242. package/dist/validation/config-validator.test.js.map +1 -0
  243. package/dist/validation/dependency-validator.test.d.ts +2 -0
  244. package/dist/validation/dependency-validator.test.d.ts.map +1 -0
  245. package/dist/validation/dependency-validator.test.js +215 -0
  246. package/dist/validation/dependency-validator.test.js.map +1 -0
  247. package/dist/validation/frontmatter-validator.test.d.ts +2 -0
  248. package/dist/validation/frontmatter-validator.test.d.ts.map +1 -0
  249. package/dist/validation/frontmatter-validator.test.js +371 -0
  250. package/dist/validation/frontmatter-validator.test.js.map +1 -0
  251. package/dist/validation/state-validator.test.d.ts +2 -0
  252. package/dist/validation/state-validator.test.d.ts.map +1 -0
  253. package/dist/validation/state-validator.test.js +325 -0
  254. package/dist/validation/state-validator.test.js.map +1 -0
  255. package/dist/wizard/suggestion.test.d.ts +2 -0
  256. package/dist/wizard/suggestion.test.d.ts.map +1 -0
  257. package/dist/wizard/suggestion.test.js +115 -0
  258. package/dist/wizard/suggestion.test.js.map +1 -0
  259. package/dist/wizard/wizard.d.ts.map +1 -1
  260. package/dist/wizard/wizard.js +34 -1
  261. package/dist/wizard/wizard.js.map +1 -1
  262. package/knowledge/core/adr-craft.md +4 -0
  263. package/knowledge/core/api-design.md +4 -0
  264. package/knowledge/core/automated-review-tooling.md +203 -0
  265. package/knowledge/core/coding-conventions.md +1 -1
  266. package/knowledge/core/database-design.md +4 -0
  267. package/knowledge/core/design-system-tokens.md +4 -0
  268. package/knowledge/core/domain-modeling.md +4 -0
  269. package/knowledge/core/git-workflow-patterns.md +200 -0
  270. package/knowledge/core/operations-runbook.md +5 -1
  271. package/knowledge/core/security-best-practices.md +4 -0
  272. package/knowledge/core/system-architecture.md +5 -1
  273. package/knowledge/core/task-decomposition.md +118 -3
  274. package/knowledge/core/user-story-innovation.md +13 -0
  275. package/knowledge/core/ux-specification.md +13 -0
  276. package/knowledge/execution/enhancement-workflow.md +201 -0
  277. package/knowledge/execution/task-claiming-strategy.md +130 -0
  278. package/knowledge/execution/tdd-execution-loop.md +172 -0
  279. package/knowledge/execution/worktree-management.md +205 -0
  280. package/knowledge/finalization/apply-fixes-and-freeze.md +12 -0
  281. package/knowledge/finalization/developer-onboarding.md +4 -0
  282. package/knowledge/finalization/implementation-playbook.md +83 -5
  283. package/knowledge/product/gap-analysis.md +5 -1
  284. package/knowledge/product/prd-innovation.md +12 -0
  285. package/knowledge/product/vision-craft.md +213 -0
  286. package/knowledge/review/review-adr.md +12 -0
  287. package/knowledge/review/review-api-design.md +13 -0
  288. package/knowledge/review/review-database-design.md +13 -0
  289. package/knowledge/review/review-domain-modeling.md +5 -1
  290. package/knowledge/review/review-implementation-tasks.md +58 -1
  291. package/knowledge/review/review-methodology.md +11 -0
  292. package/knowledge/review/review-operations.md +12 -0
  293. package/knowledge/review/review-prd.md +13 -0
  294. package/knowledge/review/review-security.md +12 -0
  295. package/knowledge/review/review-system-architecture.md +4 -2
  296. package/knowledge/review/review-testing-strategy.md +11 -0
  297. package/knowledge/review/review-user-stories.md +11 -0
  298. package/knowledge/review/review-ux-specification.md +13 -1
  299. package/knowledge/review/review-vision.md +255 -0
  300. package/knowledge/tools/release-management.md +222 -0
  301. package/knowledge/tools/session-analysis.md +215 -0
  302. package/knowledge/tools/version-strategy.md +200 -0
  303. package/knowledge/validation/critical-path-analysis.md +1 -1
  304. package/knowledge/validation/cross-phase-consistency.md +12 -0
  305. package/knowledge/validation/decision-completeness.md +13 -1
  306. package/knowledge/validation/dependency-validation.md +12 -0
  307. package/knowledge/validation/scope-management.md +12 -0
  308. package/knowledge/validation/traceability.md +12 -0
  309. package/methodology/README.md +37 -0
  310. package/methodology/custom-defaults.yml +12 -1
  311. package/methodology/deep.yml +11 -0
  312. package/methodology/mvp.yml +11 -0
  313. package/package.json +3 -3
  314. package/pipeline/architecture/review-architecture.md +18 -7
  315. package/pipeline/architecture/system-architecture.md +11 -8
  316. package/pipeline/build/multi-agent-resume.md +245 -0
  317. package/pipeline/build/multi-agent-start.md +236 -0
  318. package/pipeline/build/new-enhancement.md +456 -0
  319. package/pipeline/build/quick-task.md +381 -0
  320. package/pipeline/build/single-agent-resume.md +210 -0
  321. package/pipeline/build/single-agent-start.md +207 -0
  322. package/pipeline/consolidation/claude-md-optimization.md +11 -8
  323. package/pipeline/consolidation/workflow-audit.md +15 -11
  324. package/pipeline/decisions/adrs.md +7 -5
  325. package/pipeline/decisions/review-adrs.md +14 -6
  326. package/pipeline/environment/ai-memory-setup.md +18 -12
  327. package/pipeline/environment/automated-pr-review.md +10 -4
  328. package/pipeline/environment/design-system.md +9 -7
  329. package/pipeline/environment/dev-env-setup.md +8 -5
  330. package/pipeline/environment/git-workflow.md +3 -1
  331. package/pipeline/finalization/apply-fixes-and-freeze.md +16 -5
  332. package/pipeline/finalization/developer-onboarding-guide.md +22 -8
  333. package/pipeline/finalization/implementation-playbook.md +40 -11
  334. package/pipeline/foundation/beads.md +10 -7
  335. package/pipeline/foundation/coding-standards.md +6 -3
  336. package/pipeline/foundation/project-structure.md +5 -1
  337. package/pipeline/foundation/tdd.md +10 -6
  338. package/pipeline/foundation/tech-stack.md +9 -9
  339. package/pipeline/integration/add-e2e-testing.md +21 -6
  340. package/pipeline/modeling/domain-modeling.md +10 -7
  341. package/pipeline/modeling/review-domain-modeling.md +17 -6
  342. package/pipeline/parity/platform-parity-review.md +31 -11
  343. package/pipeline/planning/implementation-plan-review.md +21 -10
  344. package/pipeline/planning/implementation-plan.md +52 -19
  345. package/pipeline/pre/create-prd.md +22 -7
  346. package/pipeline/pre/innovate-prd.md +10 -8
  347. package/pipeline/pre/innovate-user-stories.md +9 -7
  348. package/pipeline/pre/review-prd.md +11 -2
  349. package/pipeline/pre/review-user-stories.md +12 -3
  350. package/pipeline/pre/user-stories.md +12 -7
  351. package/pipeline/quality/create-evals.md +10 -6
  352. package/pipeline/quality/operations.md +16 -12
  353. package/pipeline/quality/review-operations.md +19 -10
  354. package/pipeline/quality/review-security.md +21 -11
  355. package/pipeline/quality/review-testing.md +23 -12
  356. package/pipeline/quality/security.md +17 -13
  357. package/pipeline/quality/story-tests.md +6 -4
  358. package/pipeline/specification/api-contracts.md +11 -6
  359. package/pipeline/specification/database-schema.md +12 -6
  360. package/pipeline/specification/review-api.md +18 -9
  361. package/pipeline/specification/review-database.md +18 -9
  362. package/pipeline/specification/review-ux.md +20 -10
  363. package/pipeline/specification/ux-spec.md +8 -5
  364. package/pipeline/validation/critical-path-walkthrough.md +14 -7
  365. package/pipeline/validation/cross-phase-consistency.md +14 -7
  366. package/pipeline/validation/decision-completeness.md +14 -7
  367. package/pipeline/validation/dependency-graph-validation.md +15 -7
  368. package/pipeline/validation/implementability-dry-run.md +15 -7
  369. package/pipeline/validation/scope-creep-check.md +15 -7
  370. package/pipeline/validation/traceability-matrix.md +20 -7
  371. package/pipeline/vision/create-vision.md +267 -0
  372. package/pipeline/vision/innovate-vision.md +157 -0
  373. package/pipeline/vision/review-vision.md +149 -0
  374. package/skills/scaffold-pipeline/SKILL.md +33 -18
  375. package/skills/scaffold-runner/SKILL.md +172 -18
@@ -10,6 +10,19 @@ The database schema translates domain entities and their relationships into pers
 
  Follows the review process defined in `review-methodology.md`.
 
+ ## Summary
+
+ - **Pass 1 — Entity Coverage**: Every domain entity requiring persistence maps to a table; no domain concept is missing from the schema.
+ - **Pass 2 — Relationship Fidelity**: Schema relationships accurately reflect domain model cardinality and direction; no missing or fabricated foreign keys.
+ - **Pass 3 — Normalization Justification**: Normalization level of each table is justified; deliberate denormalization has documented rationale tied to access patterns.
+ - **Pass 4 — Index Coverage**: Indexes cover known query patterns from architecture data flows; no critical query requires a full table scan.
+ - **Pass 5 — Constraint Enforcement**: Database constraints (NOT NULL, UNIQUE, CHECK, FK) enforce domain invariants where possible.
+ - **Pass 6 — Migration Safety**: Migration plan handles rollbacks and data preservation; destructive operations identified; data migrations separated from schema migrations.
+ - **Pass 7 — Cross-Schema Consistency**: Multi-database naming conventions, shared identifiers, and cross-database references are consistent.
+ - **Pass 8 — Downstream Readiness**: Schema supports efficient CRUD, list/search queries, relationship traversal, and aggregates needed by API contracts.
+
+ ## Deep Guidance
+
  ---
 
  ## Pass 1: Entity Coverage
@@ -6,10 +6,14 @@ topics: [review, domain-modeling, ddd, bounded-contexts]
 
  # Review: Domain Modeling
 
- Domain models are the foundation of the entire pipeline. Every subsequent phase builds on them. A gap or error here compounds through ADRs, architecture, database schema, API contracts, and implementation tasks. This review uses 10 passes targeting the specific ways domain models fail.
+ ## Summary
+
+ Domain models are the foundation of the entire pipeline. Every subsequent phase builds on them. A gap or error here compounds through ADRs, architecture, database schema, API contracts, and implementation tasks. This review uses 10 passes targeting the specific ways domain models fail: (1) PRD coverage audit, (2) bounded context integrity, (3) entity vs value object classification, (4) aggregate boundary validation, (5) domain event completeness, (6) invariant specification, (7) ubiquitous language consistency, (8) cross-domain relationship clarity, (9) downstream readiness, and (10) internal consistency.
 
  Follows the review process defined in `review-methodology.md`.
 
+ ## Deep Guidance
+
  ---
 
  ## Pass 1: PRD Coverage Audit
@@ -6,10 +6,23 @@ topics: [review, tasks, planning, decomposition, agents]
 
  # Review: Implementation Tasks
 
- The implementation tasks document translates the architecture into discrete, actionable work items that AI agents can execute. Each task must be self-contained enough for a single agent session, correctly ordered by dependency, and clear enough to implement without asking questions. This review uses 7 passes targeting the specific ways implementation tasks fail.
+ The implementation tasks document translates the architecture into discrete, actionable work items that AI agents can execute. Each task must be self-contained enough for a single agent session, correctly ordered by dependency, and clear enough to implement without asking questions. This review uses 8 passes targeting the specific ways implementation tasks fail.
 
  Follows the review process defined in `review-methodology.md`.
 
+ ## Summary
+
+ - **Pass 1 — Architecture Coverage**: Every architectural component, module, and integration point has corresponding tasks; cross-cutting concerns and infrastructure included.
+ - **Pass 2 — Missing Dependencies**: Task dependencies are complete and correct; no circular dependencies; no implicit prerequisites left undeclared.
+ - **Pass 3 — Task Sizing**: No task too large for a single agent session (30-60 min) or too small to be meaningful; clear scope boundaries.
+ - **Pass 4 — Acceptance Criteria**: Every task has clear, testable criteria covering happy path and at least one error/edge case.
+ - **Pass 5 — Critical Path Accuracy**: The identified critical path is actually the longest dependency chain; near-critical paths identified.
+ - **Pass 6 — Parallelization Validity**: Tasks marked as parallel are truly independent; no shared state, files, or undeclared dependencies.
+ - **Pass 7 — Agent Context**: Each task specifies which documents/sections the implementing agent should read; context is sufficient and minimal.
+ - **Pass 8 — Agent Executability**: Every task complies with the 5 agent sizing rules (three-file, 150-line, single-concern, decision-free, test co-location); exceptions are justified.
+
+ ## Deep Guidance
+
  ---
 
  ## Pass 1: Architecture Coverage
@@ -203,6 +216,50 @@ AI agents have limited context windows. If a task does not specify what to read,
 
  ---
 
+ ## Pass 8: Agent Executability
+
+ ### What to Check
+
+ Every task complies with the five agent executability rules. Tasks exceeding limits without justification must be split.
+
+ - **Three-File Rule**: Count application files each task modifies (test files excluded). Flag any task touching 4+ files. Check for `<!-- agent-size-exception -->` annotations on flagged tasks.
+ - **150-Line Budget**: Estimate net-new lines per task based on the task description scope. Flag tasks likely to produce 200+ lines. Signals: "implement X with Y and Z", multiple acceptance criteria spanning different modules, multi-layer work.
+ - **Single-Concern Rule**: Check each task description for "and" connecting unrelated work. Flag tasks spanning multiple architectural layers or feature domains.
+ - **Decision-Free Execution**: Scan for unresolved design decisions. Red flags: "choose", "determine", "decide", "evaluate options", "select the best approach", "pick the right", "figure out". Every design choice must be resolved in the task description.
+ - **Test Co-location**: Verify every task that produces application code also includes test requirements. Flag any "write tests for tasks X-Y" aggregation pattern. Flag tasks with no test mention.
+
+ ### Why This Matters
+
+ Large tasks are the #1 cause of AI agent failure during implementation. When a task requires reading 5+ files, holding multiple abstractions in context, and writing 300+ lines — agents lose coherence, make inconsistent changes, or run out of context window. Tasks with unresolved design decisions cause agents to make architectural choices they shouldn't, producing inconsistent implementations across tasks. Deferred testing produces untestable code and violates TDD.
+
+ ### How to Check
+
+ 1. For each task, count the application files it modifies (exclude test files). Flag 4+ files.
+ 2. Estimate net-new application code lines from the task scope. Flag 200+ estimated lines.
+ 3. Read the task description. Does it contain "and" connecting distinct concerns? Flag it.
+ 4. Scan for decision language: "choose", "determine", "decide", "evaluate", "select", "figure out". Flag any unresolved decisions.
+ 5. Check test requirements. Does every code-producing task specify what to test? Flag tasks with no test mention or deferred testing.
+ 6. For flagged tasks, check for `<!-- agent-size-exception: reason -->`. Accept justified exceptions; flag unjustified ones.
+ 7. For each P0/P1 finding, provide a specific split recommendation: name the sub-tasks, list files each owns, specify dependencies between them.
+
+ ### Severity
+
+ - P0: Task exceeds 6+ files or 300+ estimated lines — must split immediately, no exceptions
+ - P1: Task violates three-file rule without justification — must split or add exception annotation
+ - P1: Task violates 150-line budget without justification — must split or justify
+ - P1: Task contains unresolved design decisions — must resolve in task description
+ - P2: Task has "and" connecting concerns but stays within limits — recommend split
+ - P2: Test requirements vague ("add appropriate tests") or deferred — strengthen with specifics
+ - P3: Task near limits (3 files, ~150 lines) — note as borderline, no action required
+
+ ### What a Finding Looks Like
+
+ - P1: "Task BD-15 'Implement order management API' modifies 5 files (routes, controller, service, validator, model). Violates three-file rule. Split into: BD-15a 'Create order model and migration' (1 file + migration), BD-15b 'Implement order service with validation' (2 files), BD-15c 'Add order routes and controller' (2 files, depends on BD-15a, BD-15b)."
+ - P1: "Task BD-22 'Build settings page' says 'determine whether to use tabs or accordion for organizing preferences.' This is an unresolved design decision. The task description must specify the layout pattern."
+ - P2: "Task BD-08 'Set up error handling AND configure logging' connects two concerns with 'and'. Recommend splitting into error handling task and logging task."
+
+ ---
+
  ## Common Review Anti-Patterns
 
  ### 1. Reviewing Tasks in Isolation
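The mechanical parts of the Pass 8 checks above (file counts, decision-language scan) lend themselves to a scripted pre-filter before human review. A minimal sketch in TypeScript, assuming a hypothetical `Task` shape with `description` and `files` fields — `flagTask` and `DECISION_WORDS` are illustrative names, not part of the @zigrivers/scaffold API:

```typescript
// Hypothetical shape for an implementation task; assumed for this sketch only.
interface Task {
  id: string;
  description: string;
  files: string[]; // application files the task modifies (test files excluded)
}

// Red-flag decision phrases, taken from the pass description above.
const DECISION_WORDS = [
  "choose", "determine", "decide", "evaluate options",
  "select the best", "pick the right", "figure out",
];

// Returns P1 findings for two of the five rules: the three-file rule
// and decision-free execution. The other rules need human judgment.
function flagTask(task: Task): string[] {
  const findings: string[] = [];
  if (task.files.length >= 4) {
    findings.push(
      `P1: ${task.id} touches ${task.files.length} files (three-file rule)`,
    );
  }
  const text = task.description.toLowerCase();
  for (const word of DECISION_WORDS) {
    if (text.includes(word)) {
      findings.push(`P1: ${task.id} contains decision language ("${word}")`);
      break; // one decision-language finding per task is enough
    }
  }
  return findings;
}
```

Such a script only surfaces candidates; justified `<!-- agent-size-exception -->` annotations and split recommendations still require a reviewer.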
@@ -8,6 +8,17 @@ topics: [review, methodology, quality-assurance, multi-pass]
 
  This document defines the shared process for reviewing pipeline artifacts. It covers HOW to review, not WHAT to check — each artifact type has its own review knowledge base document with domain-specific passes and failure modes. Every review phase (1a through 10a) follows this process.
 
+ ## Summary
+
+ - **Multi-pass review**: Each pass has a single focus (coverage, consistency, structure, downstream readiness). Passes are ordered broadest-to-most-specific.
+ - **Finding severity**: P0 blocks the next phase (must fix), P1 is a significant gap (should fix), P2 is an improvement opportunity (fix if time permits), P3 is nice-to-have (skip).
+ - **Fix planning**: Group findings by shared root cause, section, and severity. Fix all P0s first, then P1s. Never fix ad hoc.
+ - **Re-validation**: After applying fixes, re-run the specific passes that produced the findings. Stop when no new P0/P1 findings appear.
+ - **Downstream readiness gate**: Final check verifies the next phase can proceed with these artifacts. Outcomes: pass, conditional pass, or fail.
+ - **Review report**: Structured output with executive summary, findings by pass, fix plan, fix log, re-validation results, and downstream readiness assessment.
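The severity ordering and readiness gate described above can be sketched in TypeScript. This is an illustrative sketch, not this package's actual API; in particular, mapping open P1s to a conditional pass is an assumption inferred from the severity definitions:

```typescript
// Illustrative types; the real pipeline's data model may differ.
type Severity = "P0" | "P1" | "P2" | "P3";

interface Finding {
  severity: Severity;
  pass: string;
  description: string;
}

const fixOrder: Severity[] = ["P0", "P1", "P2", "P3"];

// Per the methodology: skip P3s, then fix all P0s before P1s before P2s.
function planFixes(findings: Finding[]): Finding[] {
  return findings
    .filter((f) => f.severity !== "P3")
    .sort((a, b) => fixOrder.indexOf(a.severity) - fixOrder.indexOf(b.severity));
}

// Downstream readiness gate: any open P0 fails the gate. Treating open P1s
// as a conditional pass is an assumption, not a rule stated by the methodology.
function readinessGate(open: Finding[]): "pass" | "conditional pass" | "fail" {
  if (open.some((f) => f.severity === "P0")) return "fail";
  if (open.some((f) => f.severity === "P1")) return "conditional pass";
  return "pass";
}
```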
+
+ ## Deep Guidance
+
  ## Multi-Pass Review Structure
 
  ### Why Multiple Passes
@@ -10,6 +10,18 @@ The operations runbook defines how the system is deployed, monitored, and mainta
 
  Follows the review process defined in `review-methodology.md`.
 
+ ## Summary
+
+ - **Pass 1 — Deployment Strategy Completeness**: Full deploy lifecycle documented from merged PR to running production, including build, test, stage, deploy, verify, and rollback stages.
+ - **Pass 2 — Rollback Procedures**: Every deployment type has a corresponding rollback procedure; database rollbacks addressed separately from code rollbacks.
+ - **Pass 3 — Monitoring Coverage**: Infrastructure, application, and business metrics identified, with dashboards defined for all critical system components.
+ - **Pass 4 — Alerting Thresholds**: Alerts have justified thresholds based on baselines, severity levels map to response expectations, and alert fatigue is considered.
+ - **Pass 5 — Runbook Scenarios**: Common failure scenarios have step-by-step runbook entries covering symptoms, diagnosis, resolution, verification, and escalation.
+ - **Pass 6 — Dev Environment Parity**: Local development environment reasonably matches production behavior; deviations are documented with their implications.
+ - **Pass 7 — DR/Backup Coverage**: Disaster recovery approach documented with RTO/RPO targets; backup strategy covers all persistent data stores.
+
+ ## Deep Guidance
+
  ---
 
  ## Pass 1: Deployment Strategy Completeness
@@ -10,6 +10,19 @@ The PRD is the foundation of the entire pipeline. Every subsequent phase builds
 
  Follows the review process defined in `review-methodology.md`.
 
+ ## Summary
+
+ - **Pass 1 — Problem Statement Rigor**: Verify the problem is specific, testable, grounded in evidence, and names a specific user group without prescribing solutions.
+ - **Pass 2 — Persona & Stakeholder Coverage**: Ensure personas are goal-driven with constraints and context; 2-4 meaningful personas covering all stakeholder groups.
+ - **Pass 3 — Feature Scoping Completeness**: Confirm in-scope, out-of-scope, and deferred lists exist; features are specific enough to estimate with prioritization applied.
+ - **Pass 4 — Success Criteria Measurability**: Every criterion needs a target value, measurement method, and tie-back to the problem statement.
+ - **Pass 5 — NFR Quantification**: All NFR categories addressed with quantified targets and conditions, not adjectives.
+ - **Pass 6 — Constraint & Dependency Documentation**: Technical, timeline, budget, team, and regulatory constraints present with traceable downstream impact.
+ - **Pass 7 — Error & Edge Case Coverage**: Sad paths for every feature with user input or external dependencies; failure modes for third-party integrations.
+ - **Pass 8 — Downstream Readiness for User Stories**: Features specific enough to map to stories, personas specific enough to be actors, business rules explicit enough for acceptance criteria.
+
+ ## Deep Guidance
+
  ---
 
  ## Pass 1: Problem Statement Rigor
@@ -10,6 +10,18 @@ The security review document assesses the system's security posture across authe
 
  Follows the review process defined in `review-methodology.md`.
 
+ ## Summary
+
+ - **Pass 1 — OWASP Coverage**: Every OWASP Top 10 category addressed with project-specific analysis, not generic checklist advice.
+ - **Pass 2 — Auth/AuthZ Boundary Alignment**: Security boundaries align with API contract auth requirements; no access control gaps between security review and API enforcement.
+ - **Pass 3 — Secrets Management**: No secrets in code or version control; rotation strategy exists; vault/secrets manager integration specified for all secret categories.
+ - **Pass 4 — Dependency Audit Coverage**: Vulnerability scanning integrated into CI covering direct and transitive dependencies; response policy for discovered vulnerabilities.
+ - **Pass 5 — Threat Model Scenarios**: Structured threat model (STRIDE/PASTA) covering all trust boundaries with specific, project-relevant threat scenarios and mapped mitigations.
+ - **Pass 6 — Data Classification**: Data categorized by sensitivity level with handling requirements per category; regulatory compliance addressed.
+ - **Pass 7 — Input Validation**: Validation at all system boundaries (not just frontend) covering type, format, range, and business rules.
+
+ ## Deep Guidance
+
  ---
 
  ## Pass 1: OWASP Coverage
@@ -6,12 +6,14 @@ topics: [review, architecture, components, data-flow, modules]
 
  # Review: System Architecture
 
- The system architecture document translates domain models and ADR decisions into a concrete component structure, data flows, and module organization. It is the primary reference for all subsequent phases — database schema, API contracts, UX spec, and implementation tasks all derive from it. Errors here propagate everywhere.
+ ## Summary
 
- This review uses 10 passes targeting the specific ways architecture documents fail.
+ The system architecture document translates domain models and ADR decisions into a concrete component structure, data flows, and module organization. It is the primary reference for all subsequent phases — database schema, API contracts, UX spec, and implementation tasks all derive from it. Errors here propagate everywhere. This review uses 10 passes targeting the specific ways architecture documents fail: (1) domain model coverage, (2) ADR constraint compliance, (3) data flow completeness, (4) module structure integrity, (5) state consistency, (6) diagram/prose consistency, (7) extension point integrity, (8) invariant verification, (9) downstream readiness, and (10) internal consistency.
 
  Follows the review process defined in `review-methodology.md`.
 
+ ## Deep Guidance
+
  ---
 
  ## Pass 1: Domain Model Coverage
@@ -10,6 +10,17 @@ The testing strategy defines how the system will be verified at every layer. It
 
  Follows the review process defined in `review-methodology.md`.
 
+ ## Summary
+
+ - **Pass 1 — Coverage Gaps by Layer**: Each architectural layer has test coverage defined; test pyramid is balanced (not top-heavy or bottom-heavy).
+ - **Pass 2 — Domain Invariant Test Cases**: Every domain invariant has at least one corresponding test scenario covering positive and negative cases.
+ - **Pass 3 — Test Environment Assumptions**: Test environment matches production constraints; database engines, service configurations, and test data are realistic.
+ - **Pass 4 — Performance Test Coverage**: Performance-critical paths have benchmarks with specific thresholds; load and stress testing scenarios defined.
+ - **Pass 5 — Integration Boundary Coverage**: All component integration points have integration tests using real (not mocked) dependencies.
+ - **Pass 6 — Quality Gate Completeness**: CI pipeline gates cover linting, type checking, tests, and security scanning; gates block deployment on failure.
+
+ ## Deep Guidance
+
  ---
 
  ## Pass 1: Coverage Gaps by Layer
@@ -10,6 +10,17 @@ User stories translate PRD requirements into user-facing behavior with testable
 
  Follows the review process defined in `review-methodology.md`.
 
+ ## Summary
+
+ - **Pass 1 — PRD Coverage**: Every PRD feature, flow, and requirement has at least one corresponding user story; no silent coverage gaps.
+ - **Pass 2 — Acceptance Criteria Quality**: Every story has testable, unambiguous Given/When/Then criteria covering happy path and at least one error/edge case.
+ - **Pass 3 — Story Independence**: Stories can be implemented independently; dependencies are explicit, not hidden; no circular dependencies.
+ - **Pass 4 — Persona Coverage**: Every PRD-defined persona has stories; every story maps to a valid, defined persona.
+ - **Pass 5 — Sizing & Splittability**: No story too large for 1-3 agent sessions or too small to be meaningful; oversized stories have clear split points.
+ - **Pass 6 — Downstream Readiness**: Domain entities, events, aggregate boundaries, and business rules are discoverable from acceptance criteria for domain modeling.
+
+ ## Deep Guidance
+
  ---
 
  ## Pass 1: PRD Coverage
@@ -1,7 +1,7 @@
  ---
  name: review-ux-specification
  description: Failure modes and review passes specific to UI/UX specification artifacts
- topics: [review, ux, design, accessibility, responsive]
+ topics: [review, ux, design, accessibility, responsive-design]
  ---
 
  # Review: UX Specification
@@ -10,6 +10,18 @@ The UX specification translates user journeys from the PRD and component archite
 
  Follows the review process defined in `review-methodology.md`.
 
+ ## Summary
+
+ - **Pass 1 — User Journey Coverage vs PRD**: Every user-facing PRD feature has a corresponding screen, flow, or interaction; non-happy-path journeys covered.
+ - **Pass 2 — Accessibility Compliance**: WCAG level stated; keyboard navigation, screen reader support, color contrast, and focus management specified.
+ - **Pass 3 — Interaction State Completeness**: Every component has all states defined: empty, loading, populated, error, disabled, and edge states.
+ - **Pass 4 — Design System Consistency**: Colors, spacing, typography reference design system tokens, not one-off values.
+ - **Pass 5 — Responsive Breakpoint Coverage**: Behavior defined for all breakpoints; navigation, data tables, and forms adapt appropriately.
+ - **Pass 6 — Error State Handling**: Every user action that can fail has a designed error state with user-friendly messages and clear recovery paths.
+ - **Pass 7 — Component Hierarchy vs Architecture**: Frontend components in the UX spec align with architecture component boundaries and state management approach.
+
+ ## Deep Guidance
+
  ---
 
  ## Pass 1: User Journey Coverage vs PRD
@@ -0,0 +1,255 @@
+ ---
+ name: review-vision
+ description: Vision-specific review passes, failure modes, and quality criteria for product vision documents
+ topics: [review, vision, product-strategy, validation]
+ ---
+
+ # Review: Product Vision
+
+ The product vision document sets the strategic direction for everything downstream. It defines why the product exists, who it serves, what makes it different, and what traps to avoid. A weak vision produces a PRD that lacks focus, user stories that lack purpose, and an architecture that lacks guiding constraints. This review uses 5 passes targeting the specific ways vision artifacts fail.
+
+ Follows the review process defined in `review-methodology.md`.
+
+ ## Summary
+
+ Vision review validates that the product vision is specific enough to guide decisions, inspiring enough to align a team, and honest enough to withstand scrutiny. The 5 passes target: (1) vision clarity -- is the vision statement specific, inspiring, and actionable; (2) target audience -- are users defined by behaviors and motivations rather than demographics; (3) competitive landscape -- is the analysis honest about competitors' strengths, not just their weaknesses; (4) guiding principles -- do they create real trade-offs in X-over-Y format; and (5) anti-vision -- does it name specific traps rather than vague disclaimers.
+
+ ## Deep Guidance
+
+ ---
+
+ ## Pass 1: Vision Clarity
+
+ ### What to Check
+
+ - Is the vision statement specific to THIS product, not a generic mission statement?
+ - Does it inspire action, not just describe a category?
+ - Is it actionable -- could a team use it to make a yes/no decision about a feature?
+ - Does it avoid jargon, buzzwords, and empty superlatives ("best-in-class," "world-class," "revolutionary")?
+ - Is it short enough to remember (1-3 sentences)?
+
+ ### Why This Matters
+
+ The vision statement is the single most referenced artifact in the pipeline. It appears in PRD context, guides user story prioritization, and informs architecture trade-offs. A generic vision like "make the best project management tool" provides zero signal -- it cannot distinguish between features to build and features to skip. A specific vision like "help 2-person freelance teams track client work without learning project management" makes every downstream decision easier.
+
+ ### How to Check
+
+ 1. Read the vision statement in isolation -- does it name a specific outcome for a specific group?
+ 2. Try the "swap test" -- could you replace the product name with a competitor's name and have the vision still be true? If yes, it is not specific enough
+ 3. Try the "decision test" -- present two hypothetical features and ask whether the vision helps you choose between them. If it does not, the vision is too vague
+ 4. Check for buzzwords: "leverage," "synergy," "best-in-class," "end-to-end," "seamless" -- these add words without adding meaning
+ 5. Check length -- if the vision takes more than 30 seconds to read aloud, it is too long to internalize
+
+ ### What a Finding Looks Like
+
+ - P0: "Vision statement is 'To be the leading platform for enterprise collaboration.' This could describe Slack, Teams, Notion, or Confluence. It names no specific user group, no specific problem, and no specific differentiation."
+ - P1: "Vision statement is specific but contains 'seamless end-to-end experience' -- this phrase adds no decision-making value. Replace with the specific experience being described."
+ - P2: "Vision is 4 paragraphs long. Distill to 1-3 sentences that a team member could recite from memory."
+
+ ### Common Failure Modes
+
+ - **Category description**: The vision describes a market category, not a product direction ("We build developer tools")
+ - **Aspiration without specificity**: The vision is inspiring but cannot guide decisions ("Empower teams to do their best work")
+ - **Solution masquerading as vision**: The vision describes a technology choice, not a user outcome ("AI-powered analytics platform")
+
+ ---
+
+ ## Pass 2: Target Audience
+
+ ### What to Check
+
+ - Is the target audience defined by behaviors, motivations, and constraints -- not demographics?
+ - Does the audience description create clear inclusion/exclusion criteria?
+ - Are there signs of the "everyone" trap (audience so broad it provides no prioritization signal)?
+ - Does the audience description explain WHY these people need this product specifically?
+
+ ### Why This Matters
+
+ Demographics (age, location, job title) do not predict product needs. Behaviors and motivations do. "Marketing managers aged 30-45" tells you nothing about what to build. "Solo marketers who manage 5+ channels without a team and need to appear more capable than they are" tells you everything. The audience definition flows directly into PRD personas -- vague audiences produce vague personas produce vague user stories.
+
+ ### How to Check
+
+ 1. Check whether the audience is defined by observable behaviors ("currently uses spreadsheets to track...") versus demographics ("25-40 year old professionals")
+ 2. Check for motivations -- WHY does this audience need the product? What is the underlying drive?
+ 3. Check for constraints -- what limits this audience? Budget? Time? Technical skill? Team size?
+ 4. Apply the "exclusion test" -- does the audience definition clearly exclude some potential users? If not, it is too broad
+ 5. Check that the audience connects to the vision -- is this the audience that the vision serves?
+
+ ### What a Finding Looks Like
+
+ - P0: "Target audience is 'businesses of all sizes.' This excludes nobody and provides no prioritization signal. The PRD cannot write meaningful personas from this."
+ - P1: "Target audience mentions 'small business owners' but defines them only by company size (<50 employees), not by behaviors, pain points, or motivations."
+ - P2: "Audience description is behavior-based but does not explain why existing solutions fail this group."
+
+ ### Common Failure Modes
+
+ - **Demographic-only**: Defined by who they are, not what they do ("SMB owners aged 25-45")
+ - **Too broad**: Audience includes everyone ("teams of any size in any industry")
+ - **Missing motivation**: Describes the audience but not why they need THIS product
+ - **No exclusion criteria**: Cannot determine who is NOT the target audience
+
+ ---
+
+ ## Pass 3: Competitive Landscape
+
+ ### What to Check
+
+ - Does the competitive analysis honestly assess competitors' strengths, not just their weaknesses?
+ - Are competitors named specifically, not referred to generically ("existing solutions")?
+ - Is the differentiation based on substance (different approach, different audience, different trade-offs), not superficial claims ("better UX")?
+ - Does the analysis acknowledge what competitors do well that this product will NOT try to replicate?
+
+ ### Why This Matters
+
+ A competitive landscape that only lists competitor weaknesses produces false confidence. Competitors have strengths -- users chose them for reasons. Understanding those reasons prevents building a product that is strictly worse in dimensions users care about. Differentiation based on "we'll just do it better" is not differentiation -- it is a bet that the team is more competent than established competitors with more resources.
+
+ ### How to Check
+
+ 1. For each named competitor, check that at least one genuine strength is acknowledged
+ 2. Check that differentiation is structural (different trade-off, different audience segment, different approach) not aspirational ("better design")
+ 3. Verify competitors are named specifically -- "Competitor X" or "the market" provides no signal
+ 4. Check whether the analysis acknowledges what the product will NOT compete on (conceding dimensions to competitors)
+ 5. Look for the "better at everything" anti-pattern -- if the product claims superiority in every dimension, the analysis is dishonest
+
+ ### What a Finding Looks Like
+
+ - P0: "Competitive section lists 4 competitors but only describes their weaknesses. No competitor strengths are acknowledged. This produces a false picture of the market and prevents honest differentiation."
+ - P1: "Differentiation claim is 'better user experience.' This is not structural differentiation -- every product claims this. What specific design trade-off creates a different experience?"
+ - P2: "Competitors are referred to as 'existing solutions' and 'current tools' without naming them. Specific names enable specific analysis."
+
+ ### Common Failure Modes
+
+ - **Weakness-only analysis**: Lists only what competitors do poorly, creating false confidence
+ - **Aspirational differentiation**: Claims superiority without structural basis ("we'll be faster, simpler, and more powerful")
+ - **Generic competitors**: References "the market" or "existing solutions" without naming specific products
+ - **Missing concessions**: Does not acknowledge what the product will deliberately NOT compete on
+
+ ---
+
+ ## Pass 4: Guiding Principles
+
+ ### What to Check
+
+ - Are principles in X-over-Y format, creating real trade-offs?
+ - Does each principle rule out a specific, tempting alternative?
+ - Could a reasonable person disagree with the principle (i.e., the "over Y" option is genuinely attractive)?
+ - Are principles specific enough to resolve a real product decision?
+
+ ### Why This Matters
+
+ Guiding principles that do not create trade-offs are platitudes. "We value quality" is not a principle -- nobody advocates for poor quality. "We value correctness over speed-to-market" is a principle because speed-to-market is genuinely valuable and someone could reasonably choose it. X-over-Y format forces the vision author to name what the product will sacrifice, which is the only way principles become useful for downstream decision-making.
+
+ ### How to Check
+
+ 1. For each principle, check for X-over-Y structure -- is something being chosen OVER something else?
+ 2. Apply the "reasonable disagreement" test -- would a smart, well-intentioned person choose Y over X? If not, the principle is a platitude
+ 3. Construct a hypothetical product decision and check whether the principle resolves it
+ 4. Check that the set of principles covers the most common trade-off dimensions for this product type (simplicity vs. power, speed vs. correctness, flexibility vs. consistency, etc.)
+ 5. Verify no two principles contradict each other
+
+ ### What a Finding Looks Like
+
+ - P0: "Principles include 'We value simplicity, quality, and user delight.' These are not trade-offs -- they are universally desirable attributes. No team would advocate for complexity, poor quality, or user frustration."
+ - P1: "Principle 'Convention over configuration' is in X-over-Y format but does not specify what conventions or what configuration options are sacrificed. Too abstract to resolve a real decision."
+ - P2: "Principles are well-formed but do not cover the speed-vs-correctness dimension, which is a common tension for this product type."
+
+ ### Common Failure Modes
+
+ - **Platitudes**: Principles everyone agrees with ("we value quality") that rule out nothing
+ - **Missing sacrifice**: X-over-Y format but Y is not genuinely attractive ("quality over bugs")
+ - **Too abstract**: Principles are directionally correct but too vague to resolve specific decisions
+ - **Contradictory pairs**: Two principles that cannot both be followed ("move fast" and "never ship bugs")
+
+ ---
+
+ ## Pass 5: Anti-Vision
+
+ ### What to Check
+
+ - Does the anti-vision name specific, tempting traps -- not vague disclaimers?
+ - Are the anti-vision items things the team could plausibly drift into (not absurd strawmen)?
+ - Does each item explain WHY it is tempting and HOW to recognize the drift?
+ - Is the anti-vision specific to THIS product, not generic warnings?
+
+ ### Why This Matters
+
+ The anti-vision is the vision's immune system. It names the specific failure modes that are most likely given the product's domain, team, and competitive landscape. Without it, teams drift toward common traps without recognizing the drift. A good anti-vision makes the team uncomfortable because it names things they might actually do -- not things no reasonable team would do.
+
+ ### How to Check
+
+ 1. For each anti-vision item, check specificity -- does it name a concrete behavior or outcome, not a vague category?
+ 2. Apply the "temptation test" -- is this something the team could plausibly drift into? If the answer is "obviously not," the anti-vision item is a strawman
+ 3. Check whether each item explains the mechanism: why is this trap tempting, and what are the early warning signs?
+ 4. Verify the anti-vision items connect to the product domain -- are they specific to THIS type of product?
+ 5. Check that anti-vision items complement guiding principles -- if a principle says "simplicity over power," the anti-vision should name a specific way the product might become complex
+
+ ### What a Finding Looks Like
+
+ - P0: "Anti-vision section says 'We will not build a bad product.' This is not an anti-vision -- it is a tautology. Name specific traps: 'We will not become a feature-comparison checklist tool that matches competitors feature-for-feature while losing our core simplicity advantage.'"
+ - P1: "Anti-vision names 'scope creep' as a trap but does not explain which specific scope expansion is most tempting for this product or how to recognize it early."
+ - P2: "Anti-vision items are specific but do not connect to the guiding principles. Each principle's 'Y' (the sacrificed value) should have a corresponding anti-vision item that names the drift toward Y."
+
+ ### Common Failure Modes
+
+ - **Vague disclaimers**: "We won't lose focus" -- too generic to be actionable
+ - **Absurd strawmen**: Names failures no team would pursue ("we won't build an insecure product")
+ - **Missing mechanism**: Names the trap but not why it is tempting or how to detect drift
+ - **Generic warnings**: Anti-vision items apply to any product, not THIS product specifically
+
+ ---
+
+ ## Finding Report Template
+
+ ```markdown
+ ## Vision Review Report
+
+ ### Pass 1: Vision Clarity
+ - **P1**: Vision statement "Build the best project management tool" is a category description, not a product vision. It cannot guide feature trade-offs. Recommendation: rewrite as a specific change statement.
+
+ ### Pass 2: Target Audience
+ - No findings
+
+ ### Pass 3: Competitive Landscape
+ - **P2**: Competitor "Acme" is described by weaknesses only. Add at least one acknowledged strength.
+
+ ### Pass 4: Guiding Principles
+ - **P0**: Principles are platitudes ("quality", "simplicity") without X-over-Y trade-offs. Cannot resolve downstream decisions.
+
+ ### Pass 5: Anti-Vision
+ - **P1**: Anti-vision says "avoid scope creep" without naming which specific scope expansion is tempting.
+
+ ### Summary
+ - P0: 1 | P1: 2 | P2: 1 | P3: 0
+ - Blocks downstream: Yes (P0 in guiding principles)
+ ```
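The Summary tallies in the template above can be generated rather than hand-counted. A minimal TypeScript sketch (the `Finding` shape is an illustrative assumption, not this package's actual report format):

```typescript
// Illustrative finding shape; the real report format may differ.
type Severity = "P0" | "P1" | "P2" | "P3";

interface Finding {
  severity: Severity;
  pass: string;
}

// Tally findings by severity and decide whether downstream phases are blocked.
// Per the methodology, only P0 findings block the next phase.
function summarize(findings: Finding[]): string {
  const counts: Record<Severity, number> = { P0: 0, P1: 0, P2: 0, P3: 0 };
  for (const f of findings) counts[f.severity]++;
  const blocks = counts.P0 > 0 ? "Yes" : "No";
  return `- P0: ${counts.P0} | P1: ${counts.P1} | P2: ${counts.P2} | P3: ${counts.P3}\n- Blocks downstream: ${blocks}`;
}
```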
+
+ ## Severity Examples for Vision Documents
+
+ ### P0 (Blocks downstream phases)
+
+ - Vision statement is a category description that cannot guide any decision
+ - Target audience is "everyone" -- PRD cannot write meaningful personas
+ - No guiding principles exist -- all downstream trade-offs are unresolved
+ - Anti-vision is absent entirely
+
+ ### P1 (Causes significant downstream quality issues)
+
+ - Vision is specific but contains unfalsifiable claims
+ - Target audience is demographic-only with no behavioral definition
+ - Competitive analysis lists only competitor weaknesses
+ - Principles exist but are platitudes without real trade-offs
+
+ ### P2 (Minor issues, fix during iteration)
+
+ - Vision is slightly too long to memorize
+ - One competitor is described generically rather than by name
+ - One principle is well-formed but could be more specific
+ - Anti-vision items are specific but miss one common trap for this product type
+
+ ### P3 (Observations for future improvement)
+
+ - Competitive landscape could include an emerging competitor
+ - Anti-vision could add early warning indicators for each trap
+ - Principles could be ordered by frequency of application