@zigrivers/scaffold 2.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +477 -0
- package/dist/cli/commands/adopt.d.ts +12 -0
- package/dist/cli/commands/adopt.d.ts.map +1 -0
- package/dist/cli/commands/adopt.js +107 -0
- package/dist/cli/commands/adopt.js.map +1 -0
- package/dist/cli/commands/adopt.test.d.ts +2 -0
- package/dist/cli/commands/adopt.test.d.ts.map +1 -0
- package/dist/cli/commands/adopt.test.js +277 -0
- package/dist/cli/commands/adopt.test.js.map +1 -0
- package/dist/cli/commands/build.d.ts +12 -0
- package/dist/cli/commands/build.d.ts.map +1 -0
- package/dist/cli/commands/build.js +105 -0
- package/dist/cli/commands/build.js.map +1 -0
- package/dist/cli/commands/build.test.d.ts +2 -0
- package/dist/cli/commands/build.test.d.ts.map +1 -0
- package/dist/cli/commands/build.test.js +272 -0
- package/dist/cli/commands/build.test.js.map +1 -0
- package/dist/cli/commands/dashboard.d.ts +14 -0
- package/dist/cli/commands/dashboard.d.ts.map +1 -0
- package/dist/cli/commands/dashboard.js +102 -0
- package/dist/cli/commands/dashboard.js.map +1 -0
- package/dist/cli/commands/dashboard.test.d.ts +2 -0
- package/dist/cli/commands/dashboard.test.d.ts.map +1 -0
- package/dist/cli/commands/dashboard.test.js +142 -0
- package/dist/cli/commands/dashboard.test.js.map +1 -0
- package/dist/cli/commands/decisions.d.ts +13 -0
- package/dist/cli/commands/decisions.d.ts.map +1 -0
- package/dist/cli/commands/decisions.js +62 -0
- package/dist/cli/commands/decisions.js.map +1 -0
- package/dist/cli/commands/decisions.test.d.ts +2 -0
- package/dist/cli/commands/decisions.test.d.ts.map +1 -0
- package/dist/cli/commands/decisions.test.js +154 -0
- package/dist/cli/commands/decisions.test.js.map +1 -0
- package/dist/cli/commands/info.d.ts +12 -0
- package/dist/cli/commands/info.d.ts.map +1 -0
- package/dist/cli/commands/info.js +110 -0
- package/dist/cli/commands/info.js.map +1 -0
- package/dist/cli/commands/info.test.d.ts +2 -0
- package/dist/cli/commands/info.test.d.ts.map +1 -0
- package/dist/cli/commands/info.test.js +392 -0
- package/dist/cli/commands/info.test.js.map +1 -0
- package/dist/cli/commands/init.d.ts +13 -0
- package/dist/cli/commands/init.d.ts.map +1 -0
- package/dist/cli/commands/init.js +46 -0
- package/dist/cli/commands/init.js.map +1 -0
- package/dist/cli/commands/init.test.d.ts +2 -0
- package/dist/cli/commands/init.test.d.ts.map +1 -0
- package/dist/cli/commands/init.test.js +156 -0
- package/dist/cli/commands/init.test.js.map +1 -0
- package/dist/cli/commands/knowledge.d.ts +4 -0
- package/dist/cli/commands/knowledge.d.ts.map +1 -0
- package/dist/cli/commands/knowledge.js +346 -0
- package/dist/cli/commands/knowledge.js.map +1 -0
- package/dist/cli/commands/knowledge.test.d.ts +2 -0
- package/dist/cli/commands/knowledge.test.d.ts.map +1 -0
- package/dist/cli/commands/knowledge.test.js +293 -0
- package/dist/cli/commands/knowledge.test.js.map +1 -0
- package/dist/cli/commands/list.d.ts +12 -0
- package/dist/cli/commands/list.d.ts.map +1 -0
- package/dist/cli/commands/list.js +73 -0
- package/dist/cli/commands/list.js.map +1 -0
- package/dist/cli/commands/list.test.d.ts +2 -0
- package/dist/cli/commands/list.test.d.ts.map +1 -0
- package/dist/cli/commands/list.test.js +166 -0
- package/dist/cli/commands/list.test.js.map +1 -0
- package/dist/cli/commands/next.d.ts +12 -0
- package/dist/cli/commands/next.d.ts.map +1 -0
- package/dist/cli/commands/next.js +75 -0
- package/dist/cli/commands/next.js.map +1 -0
- package/dist/cli/commands/next.test.d.ts +2 -0
- package/dist/cli/commands/next.test.d.ts.map +1 -0
- package/dist/cli/commands/next.test.js +236 -0
- package/dist/cli/commands/next.test.js.map +1 -0
- package/dist/cli/commands/reset.d.ts +13 -0
- package/dist/cli/commands/reset.d.ts.map +1 -0
- package/dist/cli/commands/reset.js +105 -0
- package/dist/cli/commands/reset.js.map +1 -0
- package/dist/cli/commands/reset.test.d.ts +2 -0
- package/dist/cli/commands/reset.test.d.ts.map +1 -0
- package/dist/cli/commands/reset.test.js +211 -0
- package/dist/cli/commands/reset.test.js.map +1 -0
- package/dist/cli/commands/run.d.ts +14 -0
- package/dist/cli/commands/run.d.ts.map +1 -0
- package/dist/cli/commands/run.js +379 -0
- package/dist/cli/commands/run.js.map +1 -0
- package/dist/cli/commands/run.test.d.ts +2 -0
- package/dist/cli/commands/run.test.d.ts.map +1 -0
- package/dist/cli/commands/run.test.js +535 -0
- package/dist/cli/commands/run.test.js.map +1 -0
- package/dist/cli/commands/skip.d.ts +13 -0
- package/dist/cli/commands/skip.d.ts.map +1 -0
- package/dist/cli/commands/skip.js +123 -0
- package/dist/cli/commands/skip.js.map +1 -0
- package/dist/cli/commands/skip.test.d.ts +2 -0
- package/dist/cli/commands/skip.test.d.ts.map +1 -0
- package/dist/cli/commands/skip.test.js +339 -0
- package/dist/cli/commands/skip.test.js.map +1 -0
- package/dist/cli/commands/status.d.ts +12 -0
- package/dist/cli/commands/status.d.ts.map +1 -0
- package/dist/cli/commands/status.js +79 -0
- package/dist/cli/commands/status.js.map +1 -0
- package/dist/cli/commands/status.test.d.ts +2 -0
- package/dist/cli/commands/status.test.d.ts.map +1 -0
- package/dist/cli/commands/status.test.js +245 -0
- package/dist/cli/commands/status.test.js.map +1 -0
- package/dist/cli/commands/update.d.ts +11 -0
- package/dist/cli/commands/update.d.ts.map +1 -0
- package/dist/cli/commands/update.js +159 -0
- package/dist/cli/commands/update.js.map +1 -0
- package/dist/cli/commands/update.test.d.ts +2 -0
- package/dist/cli/commands/update.test.d.ts.map +1 -0
- package/dist/cli/commands/update.test.js +140 -0
- package/dist/cli/commands/update.test.js.map +1 -0
- package/dist/cli/commands/validate.d.ts +12 -0
- package/dist/cli/commands/validate.d.ts.map +1 -0
- package/dist/cli/commands/validate.js +65 -0
- package/dist/cli/commands/validate.js.map +1 -0
- package/dist/cli/commands/validate.test.d.ts +2 -0
- package/dist/cli/commands/validate.test.d.ts.map +1 -0
- package/dist/cli/commands/validate.test.js +159 -0
- package/dist/cli/commands/validate.test.js.map +1 -0
- package/dist/cli/commands/version.d.ts +13 -0
- package/dist/cli/commands/version.d.ts.map +1 -0
- package/dist/cli/commands/version.js +89 -0
- package/dist/cli/commands/version.js.map +1 -0
- package/dist/cli/commands/version.test.d.ts +2 -0
- package/dist/cli/commands/version.test.d.ts.map +1 -0
- package/dist/cli/commands/version.test.js +63 -0
- package/dist/cli/commands/version.test.js.map +1 -0
- package/dist/cli/index.d.ts +4 -0
- package/dist/cli/index.d.ts.map +1 -0
- package/dist/cli/index.js +72 -0
- package/dist/cli/index.js.map +1 -0
- package/dist/cli/index.test.d.ts +2 -0
- package/dist/cli/index.test.d.ts.map +1 -0
- package/dist/cli/index.test.js +8 -0
- package/dist/cli/index.test.js.map +1 -0
- package/dist/cli/middleware/output-mode.d.ts +21 -0
- package/dist/cli/middleware/output-mode.d.ts.map +1 -0
- package/dist/cli/middleware/output-mode.js +27 -0
- package/dist/cli/middleware/output-mode.js.map +1 -0
- package/dist/cli/middleware/output-mode.test.d.ts +2 -0
- package/dist/cli/middleware/output-mode.test.d.ts.map +1 -0
- package/dist/cli/middleware/output-mode.test.js +41 -0
- package/dist/cli/middleware/output-mode.test.js.map +1 -0
- package/dist/cli/middleware/project-root.d.ts +21 -0
- package/dist/cli/middleware/project-root.d.ts.map +1 -0
- package/dist/cli/middleware/project-root.js +54 -0
- package/dist/cli/middleware/project-root.js.map +1 -0
- package/dist/cli/middleware/project-root.test.d.ts +2 -0
- package/dist/cli/middleware/project-root.test.d.ts.map +1 -0
- package/dist/cli/middleware/project-root.test.js +112 -0
- package/dist/cli/middleware/project-root.test.js.map +1 -0
- package/dist/cli/output/auto.d.ts +18 -0
- package/dist/cli/output/auto.d.ts.map +1 -0
- package/dist/cli/output/auto.js +43 -0
- package/dist/cli/output/auto.js.map +1 -0
- package/dist/cli/output/context.d.ts +19 -0
- package/dist/cli/output/context.d.ts.map +1 -0
- package/dist/cli/output/context.js +15 -0
- package/dist/cli/output/context.js.map +1 -0
- package/dist/cli/output/context.test.d.ts +2 -0
- package/dist/cli/output/context.test.d.ts.map +1 -0
- package/dist/cli/output/context.test.js +335 -0
- package/dist/cli/output/context.test.js.map +1 -0
- package/dist/cli/output/error-display.d.ts +31 -0
- package/dist/cli/output/error-display.d.ts.map +1 -0
- package/dist/cli/output/error-display.js +79 -0
- package/dist/cli/output/error-display.js.map +1 -0
- package/dist/cli/output/error-display.test.d.ts +2 -0
- package/dist/cli/output/error-display.test.d.ts.map +1 -0
- package/dist/cli/output/error-display.test.js +230 -0
- package/dist/cli/output/error-display.test.js.map +1 -0
- package/dist/cli/output/interactive.d.ts +22 -0
- package/dist/cli/output/interactive.d.ts.map +1 -0
- package/dist/cli/output/interactive.js +126 -0
- package/dist/cli/output/interactive.js.map +1 -0
- package/dist/cli/output/json.d.ts +17 -0
- package/dist/cli/output/json.d.ts.map +1 -0
- package/dist/cli/output/json.js +62 -0
- package/dist/cli/output/json.js.map +1 -0
- package/dist/cli/types.d.ts +11 -0
- package/dist/cli/types.d.ts.map +1 -0
- package/dist/cli/types.js +2 -0
- package/dist/cli/types.js.map +1 -0
- package/dist/config/loader.d.ts +22 -0
- package/dist/config/loader.d.ts.map +1 -0
- package/dist/config/loader.js +159 -0
- package/dist/config/loader.js.map +1 -0
- package/dist/config/loader.test.d.ts +2 -0
- package/dist/config/loader.test.d.ts.map +1 -0
- package/dist/config/loader.test.js +226 -0
- package/dist/config/loader.test.js.map +1 -0
- package/dist/config/migration.d.ts +15 -0
- package/dist/config/migration.d.ts.map +1 -0
- package/dist/config/migration.js +39 -0
- package/dist/config/migration.js.map +1 -0
- package/dist/config/migration.test.d.ts +2 -0
- package/dist/config/migration.test.d.ts.map +1 -0
- package/dist/config/migration.test.js +44 -0
- package/dist/config/migration.test.js.map +1 -0
- package/dist/config/schema.d.ts +121 -0
- package/dist/config/schema.d.ts.map +1 -0
- package/dist/config/schema.js +22 -0
- package/dist/config/schema.js.map +1 -0
- package/dist/config/schema.test.d.ts +2 -0
- package/dist/config/schema.test.d.ts.map +1 -0
- package/dist/config/schema.test.js +126 -0
- package/dist/config/schema.test.js.map +1 -0
- package/dist/core/adapters/adapter.d.ts +64 -0
- package/dist/core/adapters/adapter.d.ts.map +1 -0
- package/dist/core/adapters/adapter.js +25 -0
- package/dist/core/adapters/adapter.js.map +1 -0
- package/dist/core/adapters/adapter.test.d.ts +2 -0
- package/dist/core/adapters/adapter.test.d.ts.map +1 -0
- package/dist/core/adapters/adapter.test.js +175 -0
- package/dist/core/adapters/adapter.test.js.map +1 -0
- package/dist/core/adapters/claude-code.d.ts +9 -0
- package/dist/core/adapters/claude-code.d.ts.map +1 -0
- package/dist/core/adapters/claude-code.js +34 -0
- package/dist/core/adapters/claude-code.js.map +1 -0
- package/dist/core/adapters/claude-code.test.d.ts +2 -0
- package/dist/core/adapters/claude-code.test.d.ts.map +1 -0
- package/dist/core/adapters/claude-code.test.js +100 -0
- package/dist/core/adapters/claude-code.test.js.map +1 -0
- package/dist/core/adapters/codex.d.ts +10 -0
- package/dist/core/adapters/codex.d.ts.map +1 -0
- package/dist/core/adapters/codex.js +61 -0
- package/dist/core/adapters/codex.js.map +1 -0
- package/dist/core/adapters/codex.test.d.ts +2 -0
- package/dist/core/adapters/codex.test.d.ts.map +1 -0
- package/dist/core/adapters/codex.test.js +122 -0
- package/dist/core/adapters/codex.test.js.map +1 -0
- package/dist/core/adapters/universal.d.ts +10 -0
- package/dist/core/adapters/universal.d.ts.map +1 -0
- package/dist/core/adapters/universal.js +45 -0
- package/dist/core/adapters/universal.js.map +1 -0
- package/dist/core/adapters/universal.test.d.ts +2 -0
- package/dist/core/adapters/universal.test.d.ts.map +1 -0
- package/dist/core/adapters/universal.test.js +121 -0
- package/dist/core/adapters/universal.test.js.map +1 -0
- package/dist/core/assembly/context-gatherer.d.ts +17 -0
- package/dist/core/assembly/context-gatherer.d.ts.map +1 -0
- package/dist/core/assembly/context-gatherer.js +49 -0
- package/dist/core/assembly/context-gatherer.js.map +1 -0
- package/dist/core/assembly/context-gatherer.test.d.ts +2 -0
- package/dist/core/assembly/context-gatherer.test.d.ts.map +1 -0
- package/dist/core/assembly/context-gatherer.test.js +252 -0
- package/dist/core/assembly/context-gatherer.test.js.map +1 -0
- package/dist/core/assembly/depth-resolver.d.ts +11 -0
- package/dist/core/assembly/depth-resolver.d.ts.map +1 -0
- package/dist/core/assembly/depth-resolver.js +23 -0
- package/dist/core/assembly/depth-resolver.js.map +1 -0
- package/dist/core/assembly/depth-resolver.test.d.ts +2 -0
- package/dist/core/assembly/depth-resolver.test.d.ts.map +1 -0
- package/dist/core/assembly/depth-resolver.test.js +100 -0
- package/dist/core/assembly/depth-resolver.test.js.map +1 -0
- package/dist/core/assembly/engine.d.ts +22 -0
- package/dist/core/assembly/engine.d.ts.map +1 -0
- package/dist/core/assembly/engine.js +215 -0
- package/dist/core/assembly/engine.js.map +1 -0
- package/dist/core/assembly/engine.test.d.ts +2 -0
- package/dist/core/assembly/engine.test.d.ts.map +1 -0
- package/dist/core/assembly/engine.test.js +462 -0
- package/dist/core/assembly/engine.test.js.map +1 -0
- package/dist/core/assembly/instruction-loader.d.ts +16 -0
- package/dist/core/assembly/instruction-loader.d.ts.map +1 -0
- package/dist/core/assembly/instruction-loader.js +40 -0
- package/dist/core/assembly/instruction-loader.js.map +1 -0
- package/dist/core/assembly/instruction-loader.test.d.ts +2 -0
- package/dist/core/assembly/instruction-loader.test.d.ts.map +1 -0
- package/dist/core/assembly/instruction-loader.test.js +109 -0
- package/dist/core/assembly/instruction-loader.test.js.map +1 -0
- package/dist/core/assembly/knowledge-loader.d.ts +34 -0
- package/dist/core/assembly/knowledge-loader.d.ts.map +1 -0
- package/dist/core/assembly/knowledge-loader.js +204 -0
- package/dist/core/assembly/knowledge-loader.js.map +1 -0
- package/dist/core/assembly/knowledge-loader.test.d.ts +2 -0
- package/dist/core/assembly/knowledge-loader.test.d.ts.map +1 -0
- package/dist/core/assembly/knowledge-loader.test.js +205 -0
- package/dist/core/assembly/knowledge-loader.test.js.map +1 -0
- package/dist/core/assembly/meta-prompt-loader.d.ts +13 -0
- package/dist/core/assembly/meta-prompt-loader.d.ts.map +1 -0
- package/dist/core/assembly/meta-prompt-loader.js +91 -0
- package/dist/core/assembly/meta-prompt-loader.js.map +1 -0
- package/dist/core/assembly/meta-prompt-loader.test.d.ts +2 -0
- package/dist/core/assembly/meta-prompt-loader.test.d.ts.map +1 -0
- package/dist/core/assembly/meta-prompt-loader.test.js +232 -0
- package/dist/core/assembly/meta-prompt-loader.test.js.map +1 -0
- package/dist/core/assembly/methodology-change.d.ts +27 -0
- package/dist/core/assembly/methodology-change.d.ts.map +1 -0
- package/dist/core/assembly/methodology-change.js +41 -0
- package/dist/core/assembly/methodology-change.js.map +1 -0
- package/dist/core/assembly/methodology-change.test.d.ts +2 -0
- package/dist/core/assembly/methodology-change.test.d.ts.map +1 -0
- package/dist/core/assembly/methodology-change.test.js +145 -0
- package/dist/core/assembly/methodology-change.test.js.map +1 -0
- package/dist/core/assembly/methodology-resolver.d.ts +11 -0
- package/dist/core/assembly/methodology-resolver.d.ts.map +1 -0
- package/dist/core/assembly/methodology-resolver.js +19 -0
- package/dist/core/assembly/methodology-resolver.js.map +1 -0
- package/dist/core/assembly/methodology-resolver.test.d.ts +2 -0
- package/dist/core/assembly/methodology-resolver.test.d.ts.map +1 -0
- package/dist/core/assembly/methodology-resolver.test.js +87 -0
- package/dist/core/assembly/methodology-resolver.test.js.map +1 -0
- package/dist/core/assembly/preset-loader.d.ts +26 -0
- package/dist/core/assembly/preset-loader.d.ts.map +1 -0
- package/dist/core/assembly/preset-loader.js +146 -0
- package/dist/core/assembly/preset-loader.js.map +1 -0
- package/dist/core/assembly/preset-loader.test.d.ts +2 -0
- package/dist/core/assembly/preset-loader.test.d.ts.map +1 -0
- package/dist/core/assembly/preset-loader.test.js +107 -0
- package/dist/core/assembly/preset-loader.test.js.map +1 -0
- package/dist/core/assembly/update-mode.d.ts +25 -0
- package/dist/core/assembly/update-mode.d.ts.map +1 -0
- package/dist/core/assembly/update-mode.js +70 -0
- package/dist/core/assembly/update-mode.js.map +1 -0
- package/dist/core/assembly/update-mode.test.d.ts +2 -0
- package/dist/core/assembly/update-mode.test.d.ts.map +1 -0
- package/dist/core/assembly/update-mode.test.js +235 -0
- package/dist/core/assembly/update-mode.test.js.map +1 -0
- package/dist/core/dependency/dependency.d.ts +20 -0
- package/dist/core/dependency/dependency.d.ts.map +1 -0
- package/dist/core/dependency/dependency.js +104 -0
- package/dist/core/dependency/dependency.js.map +1 -0
- package/dist/core/dependency/dependency.test.d.ts +2 -0
- package/dist/core/dependency/dependency.test.d.ts.map +1 -0
- package/dist/core/dependency/dependency.test.js +166 -0
- package/dist/core/dependency/dependency.test.js.map +1 -0
- package/dist/core/dependency/eligibility.d.ts +17 -0
- package/dist/core/dependency/eligibility.d.ts.map +1 -0
- package/dist/core/dependency/eligibility.js +60 -0
- package/dist/core/dependency/eligibility.js.map +1 -0
- package/dist/core/dependency/eligibility.test.d.ts +2 -0
- package/dist/core/dependency/eligibility.test.d.ts.map +1 -0
- package/dist/core/dependency/eligibility.test.js +198 -0
- package/dist/core/dependency/eligibility.test.js.map +1 -0
- package/dist/core/dependency/graph.d.ts +12 -0
- package/dist/core/dependency/graph.d.ts.map +1 -0
- package/dist/core/dependency/graph.js +34 -0
- package/dist/core/dependency/graph.js.map +1 -0
- package/dist/core/knowledge/knowledge-update-assembler.d.ts +24 -0
- package/dist/core/knowledge/knowledge-update-assembler.d.ts.map +1 -0
- package/dist/core/knowledge/knowledge-update-assembler.js +46 -0
- package/dist/core/knowledge/knowledge-update-assembler.js.map +1 -0
- package/dist/core/knowledge/knowledge-update-assembler.test.d.ts +2 -0
- package/dist/core/knowledge/knowledge-update-assembler.test.d.ts.map +1 -0
- package/dist/core/knowledge/knowledge-update-assembler.test.js +93 -0
- package/dist/core/knowledge/knowledge-update-assembler.test.js.map +1 -0
- package/dist/core/knowledge/knowledge-update-template.md +55 -0
- package/dist/dashboard/generator.d.ts +37 -0
- package/dist/dashboard/generator.d.ts.map +1 -0
- package/dist/dashboard/generator.js +42 -0
- package/dist/dashboard/generator.js.map +1 -0
- package/dist/dashboard/generator.test.d.ts +2 -0
- package/dist/dashboard/generator.test.d.ts.map +1 -0
- package/dist/dashboard/generator.test.js +186 -0
- package/dist/dashboard/generator.test.js.map +1 -0
- package/dist/dashboard/template.d.ts +4 -0
- package/dist/dashboard/template.d.ts.map +1 -0
- package/dist/dashboard/template.js +190 -0
- package/dist/dashboard/template.js.map +1 -0
- package/dist/e2e/commands.test.d.ts +9 -0
- package/dist/e2e/commands.test.d.ts.map +1 -0
- package/dist/e2e/commands.test.js +499 -0
- package/dist/e2e/commands.test.js.map +1 -0
- package/dist/e2e/init.test.d.ts +10 -0
- package/dist/e2e/init.test.d.ts.map +1 -0
- package/dist/e2e/init.test.js +180 -0
- package/dist/e2e/init.test.js.map +1 -0
- package/dist/e2e/knowledge.test.d.ts +2 -0
- package/dist/e2e/knowledge.test.d.ts.map +1 -0
- package/dist/e2e/knowledge.test.js +103 -0
- package/dist/e2e/knowledge.test.js.map +1 -0
- package/dist/e2e/pipeline.test.d.ts +8 -0
- package/dist/e2e/pipeline.test.d.ts.map +1 -0
- package/dist/e2e/pipeline.test.js +295 -0
- package/dist/e2e/pipeline.test.js.map +1 -0
- package/dist/index.d.ts +3 -0
- package/dist/index.d.ts.map +1 -0
- package/dist/index.js +7 -0
- package/dist/index.js.map +1 -0
- package/dist/project/adopt.d.ts +28 -0
- package/dist/project/adopt.d.ts.map +1 -0
- package/dist/project/adopt.js +49 -0
- package/dist/project/adopt.js.map +1 -0
- package/dist/project/adopt.test.d.ts +2 -0
- package/dist/project/adopt.test.d.ts.map +1 -0
- package/dist/project/adopt.test.js +220 -0
- package/dist/project/adopt.test.js.map +1 -0
- package/dist/project/claude-md.d.ts +33 -0
- package/dist/project/claude-md.d.ts.map +1 -0
- package/dist/project/claude-md.js +112 -0
- package/dist/project/claude-md.js.map +1 -0
- package/dist/project/claude-md.test.d.ts +2 -0
- package/dist/project/claude-md.test.d.ts.map +1 -0
- package/dist/project/claude-md.test.js +151 -0
- package/dist/project/claude-md.test.js.map +1 -0
- package/dist/project/detector.d.ts +7 -0
- package/dist/project/detector.d.ts.map +1 -0
- package/dist/project/detector.js +78 -0
- package/dist/project/detector.js.map +1 -0
- package/dist/project/detector.test.d.ts +2 -0
- package/dist/project/detector.test.d.ts.map +1 -0
- package/dist/project/detector.test.js +137 -0
- package/dist/project/detector.test.js.map +1 -0
- package/dist/project/frontmatter.d.ts +17 -0
- package/dist/project/frontmatter.d.ts.map +1 -0
- package/dist/project/frontmatter.js +236 -0
- package/dist/project/frontmatter.js.map +1 -0
- package/dist/project/frontmatter.test.d.ts +2 -0
- package/dist/project/frontmatter.test.d.ts.map +1 -0
- package/dist/project/frontmatter.test.js +218 -0
- package/dist/project/frontmatter.test.js.map +1 -0
- package/dist/project/signals.d.ts +12 -0
- package/dist/project/signals.d.ts.map +1 -0
- package/dist/project/signals.js +2 -0
- package/dist/project/signals.js.map +1 -0
- package/dist/state/completion.d.ts +22 -0
- package/dist/state/completion.d.ts.map +1 -0
- package/dist/state/completion.js +82 -0
- package/dist/state/completion.js.map +1 -0
- package/dist/state/completion.test.d.ts +2 -0
- package/dist/state/completion.test.d.ts.map +1 -0
- package/dist/state/completion.test.js +246 -0
- package/dist/state/completion.test.js.map +1 -0
- package/dist/state/decision-logger.d.ts +16 -0
- package/dist/state/decision-logger.d.ts.map +1 -0
- package/dist/state/decision-logger.js +80 -0
- package/dist/state/decision-logger.js.map +1 -0
- package/dist/state/decision-logger.test.d.ts +2 -0
- package/dist/state/decision-logger.test.d.ts.map +1 -0
- package/dist/state/decision-logger.test.js +182 -0
- package/dist/state/decision-logger.test.js.map +1 -0
- package/dist/state/lock-manager.d.ts +18 -0
- package/dist/state/lock-manager.d.ts.map +1 -0
- package/dist/state/lock-manager.js +134 -0
- package/dist/state/lock-manager.js.map +1 -0
- package/dist/state/lock-manager.test.d.ts +2 -0
- package/dist/state/lock-manager.test.d.ts.map +1 -0
- package/dist/state/lock-manager.test.js +190 -0
- package/dist/state/lock-manager.test.js.map +1 -0
- package/dist/state/state-manager.d.ts +37 -0
- package/dist/state/state-manager.d.ts.map +1 -0
- package/dist/state/state-manager.js +125 -0
- package/dist/state/state-manager.js.map +1 -0
- package/dist/state/state-manager.test.d.ts +2 -0
- package/dist/state/state-manager.test.d.ts.map +1 -0
- package/dist/state/state-manager.test.js +240 -0
- package/dist/state/state-manager.test.js.map +1 -0
- package/dist/types/adapter.d.ts +24 -0
- package/dist/types/adapter.d.ts.map +1 -0
- package/dist/types/adapter.js +2 -0
- package/dist/types/adapter.js.map +1 -0
- package/dist/types/assembly.d.ts +89 -0
- package/dist/types/assembly.d.ts.map +1 -0
- package/dist/types/assembly.js +2 -0
- package/dist/types/assembly.js.map +1 -0
- package/dist/types/claude-md.d.ts +11 -0
- package/dist/types/claude-md.d.ts.map +1 -0
- package/dist/types/claude-md.js +2 -0
- package/dist/types/claude-md.js.map +1 -0
- package/dist/types/cli.d.ts +15 -0
- package/dist/types/cli.d.ts.map +1 -0
- package/dist/types/cli.js +2 -0
- package/dist/types/cli.js.map +1 -0
- package/dist/types/config.d.ts +40 -0
- package/dist/types/config.d.ts.map +1 -0
- package/dist/types/config.js +2 -0
- package/dist/types/config.js.map +1 -0
- package/dist/types/decision.d.ts +14 -0
- package/dist/types/decision.d.ts.map +1 -0
- package/dist/types/decision.js +2 -0
- package/dist/types/decision.js.map +1 -0
- package/dist/types/dependency.d.ts +12 -0
- package/dist/types/dependency.d.ts.map +1 -0
- package/dist/types/dependency.js +2 -0
- package/dist/types/dependency.js.map +1 -0
- package/dist/types/enums.d.ts +23 -0
- package/dist/types/enums.d.ts.map +1 -0
- package/dist/types/enums.js +11 -0
- package/dist/types/enums.js.map +1 -0
- package/dist/types/enums.test.d.ts +2 -0
- package/dist/types/enums.test.d.ts.map +1 -0
- package/dist/types/enums.test.js +13 -0
- package/dist/types/enums.test.js.map +1 -0
- package/dist/types/errors.d.ts +24 -0
- package/dist/types/errors.d.ts.map +1 -0
- package/dist/types/errors.js +2 -0
- package/dist/types/errors.js.map +1 -0
- package/dist/types/frontmatter.d.ts +43 -0
- package/dist/types/frontmatter.d.ts.map +1 -0
- package/dist/types/frontmatter.js +2 -0
- package/dist/types/frontmatter.js.map +1 -0
- package/dist/types/index.d.ts +14 -0
- package/dist/types/index.d.ts.map +1 -0
- package/dist/types/index.js +14 -0
- package/dist/types/index.js.map +1 -0
- package/dist/types/lock.d.ts +10 -0
- package/dist/types/lock.d.ts.map +1 -0
- package/dist/types/lock.js +2 -0
- package/dist/types/lock.js.map +1 -0
- package/dist/types/state.d.ts +49 -0
- package/dist/types/state.d.ts.map +1 -0
- package/dist/types/state.js +2 -0
- package/dist/types/state.js.map +1 -0
- package/dist/types/wizard.d.ts +14 -0
- package/dist/types/wizard.d.ts.map +1 -0
- package/dist/types/wizard.js +2 -0
- package/dist/types/wizard.js.map +1 -0
- package/dist/utils/errors.d.ts +42 -0
- package/dist/utils/errors.d.ts.map +1 -0
- package/dist/utils/errors.js +232 -0
- package/dist/utils/errors.js.map +1 -0
- package/dist/utils/errors.test.d.ts +2 -0
- package/dist/utils/errors.test.d.ts.map +1 -0
- package/dist/utils/errors.test.js +91 -0
- package/dist/utils/errors.test.js.map +1 -0
- package/dist/utils/fs.d.ts +11 -0
- package/dist/utils/fs.d.ts.map +1 -0
- package/dist/utils/fs.js +20 -0
- package/dist/utils/fs.js.map +1 -0
- package/dist/utils/fs.test.d.ts +2 -0
- package/dist/utils/fs.test.d.ts.map +1 -0
- package/dist/utils/fs.test.js +93 -0
- package/dist/utils/fs.test.js.map +1 -0
- package/dist/utils/index.d.ts +4 -0
- package/dist/utils/index.d.ts.map +1 -0
- package/dist/utils/index.js +4 -0
- package/dist/utils/index.js.map +1 -0
- package/dist/utils/levenshtein.d.ts +11 -0
- package/dist/utils/levenshtein.d.ts.map +1 -0
- package/dist/utils/levenshtein.js +37 -0
- package/dist/utils/levenshtein.js.map +1 -0
- package/dist/utils/levenshtein.test.d.ts +2 -0
- package/dist/utils/levenshtein.test.d.ts.map +1 -0
- package/dist/utils/levenshtein.test.js +34 -0
- package/dist/utils/levenshtein.test.js.map +1 -0
- package/dist/validation/config-validator.d.ts +10 -0
- package/dist/validation/config-validator.d.ts.map +1 -0
- package/dist/validation/config-validator.js +11 -0
- package/dist/validation/config-validator.js.map +1 -0
- package/dist/validation/dependency-validator.d.ts +10 -0
- package/dist/validation/dependency-validator.d.ts.map +1 -0
- package/dist/validation/dependency-validator.js +34 -0
- package/dist/validation/dependency-validator.js.map +1 -0
- package/dist/validation/frontmatter-validator.d.ts +12 -0
- package/dist/validation/frontmatter-validator.d.ts.map +1 -0
- package/dist/validation/frontmatter-validator.js +50 -0
- package/dist/validation/frontmatter-validator.js.map +1 -0
- package/dist/validation/index.d.ts +19 -0
- package/dist/validation/index.d.ts.map +1 -0
- package/dist/validation/index.js +64 -0
- package/dist/validation/index.js.map +1 -0
- package/dist/validation/index.test.d.ts +2 -0
- package/dist/validation/index.test.d.ts.map +1 -0
- package/dist/validation/index.test.js +241 -0
- package/dist/validation/index.test.js.map +1 -0
- package/dist/validation/state-validator.d.ts +15 -0
- package/dist/validation/state-validator.d.ts.map +1 -0
- package/dist/validation/state-validator.js +104 -0
- package/dist/validation/state-validator.js.map +1 -0
- package/dist/wizard/questions.d.ts +18 -0
- package/dist/wizard/questions.d.ts.map +1 -0
- package/dist/wizard/questions.js +46 -0
- package/dist/wizard/questions.js.map +1 -0
- package/dist/wizard/suggestion.d.ts +10 -0
- package/dist/wizard/suggestion.d.ts.map +1 -0
- package/dist/wizard/suggestion.js +17 -0
- package/dist/wizard/suggestion.js.map +1 -0
- package/dist/wizard/wizard.d.ts +19 -0
- package/dist/wizard/wizard.d.ts.map +1 -0
- package/dist/wizard/wizard.js +104 -0
- package/dist/wizard/wizard.js.map +1 -0
- package/dist/wizard/wizard.test.d.ts +2 -0
- package/dist/wizard/wizard.test.d.ts.map +1 -0
- package/dist/wizard/wizard.test.js +167 -0
- package/dist/wizard/wizard.test.js.map +1 -0
- package/knowledge/core/adr-craft.md +281 -0
- package/knowledge/core/api-design.md +501 -0
- package/knowledge/core/database-design.md +380 -0
- package/knowledge/core/domain-modeling.md +317 -0
- package/knowledge/core/operations-runbook.md +513 -0
- package/knowledge/core/security-review.md +523 -0
- package/knowledge/core/system-architecture.md +402 -0
- package/knowledge/core/task-decomposition.md +372 -0
- package/knowledge/core/testing-strategy.md +409 -0
- package/knowledge/core/user-stories.md +337 -0
- package/knowledge/core/user-story-innovation.md +171 -0
- package/knowledge/core/ux-specification.md +380 -0
- package/knowledge/finalization/apply-fixes-and-freeze.md +93 -0
- package/knowledge/finalization/developer-onboarding.md +376 -0
- package/knowledge/finalization/implementation-playbook.md +404 -0
- package/knowledge/product/gap-analysis.md +305 -0
- package/knowledge/product/prd-craft.md +324 -0
- package/knowledge/product/prd-innovation.md +204 -0
- package/knowledge/review/review-adr.md +203 -0
- package/knowledge/review/review-api-contracts.md +233 -0
- package/knowledge/review/review-database-schema.md +229 -0
- package/knowledge/review/review-domain-modeling.md +288 -0
- package/knowledge/review/review-implementation-tasks.md +202 -0
- package/knowledge/review/review-methodology.md +215 -0
- package/knowledge/review/review-operations.md +212 -0
- package/knowledge/review/review-prd.md +235 -0
- package/knowledge/review/review-security.md +213 -0
- package/knowledge/review/review-system-architecture.md +296 -0
- package/knowledge/review/review-testing-strategy.md +176 -0
- package/knowledge/review/review-user-stories.md +172 -0
- package/knowledge/review/review-ux-spec.md +208 -0
- package/knowledge/validation/critical-path-analysis.md +203 -0
- package/knowledge/validation/cross-phase-consistency.md +181 -0
- package/knowledge/validation/decision-completeness.md +218 -0
- package/knowledge/validation/dependency-validation.md +233 -0
- package/knowledge/validation/implementability-review.md +252 -0
- package/knowledge/validation/scope-management.md +223 -0
- package/knowledge/validation/traceability.md +198 -0
- package/methodology/custom-defaults.yml +43 -0
- package/methodology/deep.yml +42 -0
- package/methodology/mvp.yml +42 -0
- package/package.json +58 -0
- package/pipeline/architecture/review-architecture.md +44 -0
- package/pipeline/architecture/system-architecture.md +45 -0
- package/pipeline/decisions/adrs.md +45 -0
- package/pipeline/decisions/review-adrs.md +39 -0
- package/pipeline/finalization/apply-fixes-and-freeze.md +39 -0
- package/pipeline/finalization/developer-onboarding-guide.md +36 -0
- package/pipeline/finalization/implementation-playbook.md +45 -0
- package/pipeline/modeling/domain-modeling.md +57 -0
- package/pipeline/modeling/review-domain-modeling.md +41 -0
- package/pipeline/planning/implementation-tasks.md +57 -0
- package/pipeline/planning/review-tasks.md +38 -0
- package/pipeline/pre/create-prd.md +45 -0
- package/pipeline/pre/innovate-prd.md +47 -0
- package/pipeline/pre/innovate-user-stories.md +47 -0
- package/pipeline/pre/review-prd.md +44 -0
- package/pipeline/pre/review-user-stories.md +43 -0
- package/pipeline/pre/user-stories.md +48 -0
- package/pipeline/quality/operations.md +42 -0
- package/pipeline/quality/review-operations.md +37 -0
- package/pipeline/quality/review-security.md +40 -0
- package/pipeline/quality/review-testing.md +39 -0
- package/pipeline/quality/security.md +44 -0
- package/pipeline/quality/testing-strategy.md +42 -0
- package/pipeline/specification/api-contracts.md +44 -0
- package/pipeline/specification/database-schema.md +41 -0
- package/pipeline/specification/review-api.md +40 -0
- package/pipeline/specification/review-database.md +39 -0
- package/pipeline/specification/review-ux.md +38 -0
- package/pipeline/specification/ux-spec.md +43 -0
- package/pipeline/validation/critical-path-walkthrough.md +37 -0
- package/pipeline/validation/cross-phase-consistency.md +35 -0
- package/pipeline/validation/decision-completeness.md +36 -0
- package/pipeline/validation/dependency-graph-validation.md +36 -0
- package/pipeline/validation/implementability-dry-run.md +36 -0
- package/pipeline/validation/scope-creep-check.md +38 -0
- package/pipeline/validation/traceability-matrix.md +36 -0
@@ -0,0 +1,296 @@
---
name: review-system-architecture
description: Failure modes and review passes specific to system architecture documents
topics: [review, architecture, components, data-flow, modules]
---

# Review: System Architecture

The system architecture document translates domain models and ADR decisions into a concrete component structure, data flows, and module organization. It is the primary reference for all subsequent phases — database schema, API contracts, UX spec, and implementation tasks all derive from it. Errors here propagate everywhere.

This review uses 10 passes targeting the specific ways architecture documents fail.

Follows the review process defined in `review-methodology.md`.

---

## Pass 1: Domain Model Coverage

### What to Check

Every domain model (entity, aggregate, bounded context) maps to at least one component or module in the architecture. No domain concept is left without an architectural home.

### Why This Matters

Unmapped domain concepts are features that have nowhere to live. When an implementing agent encounters a domain entity with no architectural home, it either creates an ad hoc module (fragmenting the architecture) or shoehorns it into an existing module (creating a god module). Both outcomes degrade system structure.

### How to Check

1. List every bounded context from domain models
2. For each context, verify there is a corresponding module, service, or component in the architecture
3. List every aggregate root within each context
4. For each aggregate, verify its data and behavior are housed in the identified component
5. Check that domain relationships (context map) are reflected in component interactions
6. Verify that domain events map to communication channels between components

### What a Finding Looks Like

- P0: "Bounded context 'Notifications' from domain models has no corresponding component in the architecture. Six domain events reference notification delivery but no component handles them."
- P1: "Aggregate 'SubscriptionPlan' is in domain models but its behavior is split between 'BillingService' and 'UserService' without clear ownership."
- P2: "Domain event 'InventoryReserved' is documented but the architecture does not show which component publishes it."
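Steps 1 and 2 of this pass reduce to a set difference, which makes the check easy to automate once the names have been extracted. A minimal sketch, assuming the bounded contexts and a component-to-context mapping are already available as plain Python structures (all names below are illustrative, not part of any real project):

```python
def find_unmapped_contexts(bounded_contexts, component_map):
    """Return bounded contexts with no architectural home.

    bounded_contexts: set of context names from the domain models.
    component_map: dict mapping component name -> context it implements.
    """
    covered = set(component_map.values())
    return sorted(bounded_contexts - covered)

# Hypothetical inputs mirroring the P0 example above:
contexts = {"Ordering", "Inventory", "Notifications"}
components = {"OrderService": "Ordering", "InventoryService": "Inventory"}

# 'Notifications' has no component, so it surfaces as a finding candidate.
print(find_unmapped_contexts(contexts, components))  # ['Notifications']
```

The same shape works for step 4 by swapping aggregates in for contexts.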

---

## Pass 2: ADR Constraint Compliance

### What to Check

The architecture respects every accepted ADR decision. Technology choices, pattern decisions, and constraints documented in ADRs are reflected in the architecture.

### Why This Matters

ADRs are binding decisions. An architecture that ignores an ADR creates a contradiction that implementing agents must resolve on the fly. If ADR-005 says "PostgreSQL for all persistent data" but the architecture shows a MongoDB component, agents face a contradiction with no resolution path.

### How to Check

1. List every accepted ADR and its core decision
2. For each ADR, trace its impact on the architecture: which components, data flows, or patterns does it constrain?
3. Verify the architecture conforms to each constraint
4. For ADRs with negative consequences, verify the architecture accounts for mitigation strategies
5. Check that architectural patterns match ADR decisions (if ADR says "hexagonal architecture," verify port/adapter structure)
6. Verify technology selections in the architecture match ADR technology decisions

### What a Finding Looks Like

- P0: "ADR-007 decides 'event-driven communication between bounded contexts' but the architecture shows synchronous REST calls between Order and Inventory services."
- P1: "ADR-003 specifies 'monolith-first approach' but the architecture describes five separate services without explaining the planned extraction path."
- P2: "ADR-011 notes 'caching adds invalidation complexity' as a negative consequence, but the architecture's caching component does not address invalidation strategy."

---

## Pass 3: Data Flow Completeness

### What to Check

Every component appears in at least one data flow. All data flows have a clear source, destination, protocol, and payload description. No orphaned components exist.

### Why This Matters

Components that appear in no data flow are either unnecessary (dead architecture) or have undocumented interactions (hidden coupling). Both are problems. Missing data flows mean the implementing agent does not know how data gets into or out of a component — they must invent the integration at implementation time.

### How to Check

1. List every component in the architecture
2. For each component, verify it appears as source or destination in at least one data flow
3. For each data flow, verify: source is a real component, destination is a real component, protocol/mechanism is specified (HTTP, events, database, file), data shape or payload is described
4. Check for bidirectional flows that are only documented in one direction
5. Verify error flows: what happens when a data flow fails? Is the error path documented?
6. Check for external system interactions: are third-party APIs, external databases, or external services documented as data flow endpoints?

### What a Finding Looks Like

- P0: "Component 'AnalyticsEngine' appears in the component diagram but is not referenced in any data flow. It has no documented inputs or outputs."
- P1: "Data flow from 'OrderService' to 'NotificationService' does not specify the communication mechanism. Is it synchronous HTTP, async events, or direct function calls?"
- P2: "Error flow for payment processing failure is missing. What happens when the payment gateway returns an error? Where does the error propagate?"
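Steps 1 and 2 above (orphan detection) can be sketched mechanically: collect every component that appears as a source or destination in some flow, then diff against the full component list. Component and flow names here are illustrative:

```python
def find_orphan_components(components, data_flows):
    """Components that appear in no data flow (step 2 of the check).

    components: iterable of component names from the architecture.
    data_flows: iterable of (source, destination, protocol) tuples.
    """
    referenced = set()
    for source, destination, _protocol in data_flows:
        referenced.add(source)
        referenced.add(destination)
    return sorted(set(components) - referenced)

flows = [
    ("Frontend", "OrderService", "HTTP"),
    ("OrderService", "OrdersDB", "SQL"),
]
print(find_orphan_components(
    ["Frontend", "OrderService", "OrdersDB", "AnalyticsEngine"], flows))
# ['AnalyticsEngine']
```

An orphan in this output corresponds directly to the P0 example above.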

---

## Pass 4: Module Structure Integrity

### What to Check

The module/directory structure has no circular dependencies, reasonable sizes, clear boundaries, and follows the patterns specified in ADRs.

### Why This Matters

Circular module dependencies make the system impossible to build, test, or deploy independently. Overly large modules become maintenance nightmares. Unclear boundaries lead to feature leakage, where functionality drifts into the wrong module because the right one is ambiguous.

### How to Check

1. Trace the import/dependency direction between modules — draw the dependency graph
2. Check for cycles in the dependency graph (A depends on B depends on C depends on A)
3. Verify the dependency direction aligns with the domain model's upstream/downstream relationships
4. Check module sizes: are any modules housing too many responsibilities? (More than one bounded context worth of functionality)
5. Verify that shared/common modules are minimal — they tend to become dumping grounds
6. Check that the file/directory structure matches the module boundaries (not split across directories or merged into one)

### What a Finding Looks Like

- P0: "'auth' module imports from 'orders' module, and 'orders' module imports from 'auth'. This circular dependency must be broken — introduce an interface or event."
- P1: "The 'core' module contains entities from three different bounded contexts. It should be split to maintain domain boundaries."
- P2: "The 'utils' module has grown to 15 files. Consider whether these utilities belong in the modules that use them."
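Step 2 (cycle detection) is a standard depth-first search over the dependency graph. A self-contained sketch, with module names taken from the P0 example above purely for illustration:

```python
def find_cycle(deps):
    """Depth-first search for a cycle in a module dependency graph.

    deps: dict mapping module -> list of modules it depends on.
    Returns one cycle as a list of modules (first == last), or None.
    """
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {m: WHITE for m in deps}
    stack = []

    def visit(m):
        color[m] = GRAY
        stack.append(m)
        for dep in deps.get(m, []):
            if color.get(dep, WHITE) == GRAY:      # back edge -> cycle found
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                found = visit(dep)
                if found:
                    return found
        stack.pop()
        color[m] = BLACK
        return None

    for m in deps:
        if color[m] == WHITE:
            cycle = visit(m)
            if cycle:
                return cycle
    return None

# The P0 example from this pass: auth <-> orders.
print(find_cycle({"auth": ["orders"], "orders": ["auth"], "billing": ["auth"]}))
# ['auth', 'orders', 'auth']
```

In practice a build tool or linter usually reports this, but the check is cheap enough to run against the architecture document's declared dependencies before any code exists.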

---

## Pass 5: State Consistency

### What to Check

State management design covers all identified state stores and their interactions. State transitions are consistent with domain events. No state is managed in two places without synchronization.

### Why This Matters

Inconsistent state is the source of the most difficult-to-debug production issues. When the same conceptual state is managed in two places (database and cache, two services, client and server), they drift apart. State management must be explicit about what is the source of truth and how consistency is maintained.

### How to Check

1. List every state store in the architecture (databases, caches, session stores, client-side state, queues)
2. For each state store, identify what data it holds and which component owns it
3. Check for the same data appearing in multiple stores — is synchronization documented?
4. Verify that state transitions correspond to domain events
5. Check for derived state: is it cached? How is it invalidated?
6. Look for implicit state: component memory, local files, environment variables that hold state between requests

### What a Finding Looks Like

- P0: "User preferences are stored in both the UserService database and a client-side cache with no documented synchronization mechanism. These will drift."
- P1: "Order status is derived from the last OrderEvent, but the architecture also shows an 'order_status' column in the orders table. Two sources of truth."
- P2: "Session state is described as 'in-memory' but the deployment section mentions multiple instances. In-memory session state does not work with horizontal scaling."
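Check 5 (derived state and its invalidation) is easiest to reason about with a concrete shape in mind. A toy sketch of the pattern the P1 example violates: one authoritative store (the event list) plus a derived cache that is invalidated on every write, so the two can never silently disagree. All names are illustrative:

```python
class OrderStatusStore:
    """One source of truth (events) with an invalidated derived cache.

    The event log is authoritative; the cached status is derived from it
    and dropped on every write, so reads never serve stale state.
    """

    def __init__(self):
        self._events = []            # source of truth
        self._cached_status = None   # derived state

    def append_event(self, status):
        self._events.append(status)
        self._cached_status = None   # invalidate derived state on write

    def status(self):
        if self._cached_status is None and self._events:
            self._cached_status = self._events[-1]  # recompute from truth
        return self._cached_status

store = OrderStatusStore()
store.append_event("placed")
store.append_event("shipped")
print(store.status())  # shipped
```

Contrast this with the P1 finding, where an `order_status` column is written independently of the events: that design has two writable sources and nothing forcing them back into agreement.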

---

## Pass 6: Diagram/Prose Consistency

### What to Check

Architecture diagrams and narrative prose describe the same system. Component names match. Relationships match. No components appear in diagrams but not prose, or vice versa.

### Why This Matters

Diagrams and prose inevitably drift when maintained independently. Implementing agents read both and expect them to agree. When a diagram shows four services but the prose describes three, the agent does not know which is correct. Consistent diagram/prose is the minimum bar for a trustworthy architecture document.

### How to Check

1. List every component named in diagrams
2. List every component named in prose
3. Verify 1:1 correspondence — every diagrammed component has a prose description, every prose-described component appears in a diagram
4. Check component names: do diagrams and prose use the same names?
5. Check relationships: do diagrams and prose describe the same connections between components?
6. Check directionality: do arrows in diagrams match the dependency/data flow direction described in prose?

### What a Finding Looks Like

- P0: "The component diagram shows a 'Gateway' component that is not mentioned anywhere in the prose sections. Is this the API Gateway described in the 'Request Routing' section under a different name?"
- P1: "The prose describes data flowing from Frontend to Backend to Database, but the data flow diagram shows Frontend connecting directly to Database for reads."
- P2: "Component is called 'AuthService' in the diagram and 'Authentication Module' in the prose. Use one name."
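Steps 1 through 3 can be partially automated when diagrams are text-based (Mermaid, PlantUML). A deliberately simple sketch that extracts node names from `A --> B` Mermaid edges and flags any that never occur in the prose; the regex only handles single-word names and is an illustration, not a parser:

```python
import re

def diagram_components(mermaid_source):
    """Extract node names from simple 'A --> B' Mermaid edges."""
    names = set()
    for left, right in re.findall(r"(\w+)\s*-->\s*(\w+)", mermaid_source):
        names.update((left, right))
    return names

def missing_from_prose(mermaid_source, prose):
    """Diagrammed components never mentioned in the prose (step 3)."""
    return sorted(n for n in diagram_components(mermaid_source)
                  if n not in prose)

diagram = "Frontend --> Gateway\nGateway --> OrderService"
prose = "The Frontend calls the OrderService through the public API."
print(missing_from_prose(diagram, prose))  # ['Gateway']
```

The output reproduces the P0 example above: a 'Gateway' node with no prose description. The reverse direction (prose names absent from the diagram) needs a curated component list rather than raw text, since prose mentions are not machine-extractable with the same reliability.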

---

## Pass 7: Extension Point Integrity

### What to Check

Extension points are designed with concrete interfaces, not merely listed. Each extension point specifies what can be extended, how to extend it, and what the constraints are.

### Why This Matters

"This is extensible" without design details is useless to implementing agents. They need to know the extension mechanism (plugin interface, middleware chain, event hooks, configuration), the contract (what an extension receives, what it returns, what it must not do), and examples of intended extensions.

### How to Check

1. List all claimed extension points in the architecture
2. For each, check: is there a concrete interface or contract? (Not just "this module is extensible")
3. Verify the extension mechanism is specified: plugin pattern, event hooks, middleware, strategy pattern, etc.
4. Check that the extension contract is clear: what inputs, what outputs, what side effects are allowed?
5. Verify at least one example use case for each extension point
6. Check that extension points align with likely future requirements from the PRD

### What a Finding Looks Like

- P1: "Architecture says 'the payment system supports multiple payment providers via plugins' but does not define the plugin interface, lifecycle, or registration mechanism."
- P1: "Authentication extension point lists 'social login providers can be added' but does not specify the provider interface or token exchange contract."
- P2: "Notification extension point is well-designed but lacks an example of how a new channel (e.g., SMS) would be added."

---

## Pass 8: Invariant Verification

### What to Check

The architecture preserves domain invariants. Invariants identified in domain models are enforceable within the architecture's component and transaction boundaries.

### Why This Matters

An invariant that requires two components to coordinate atomically cannot be enforced if those components are separate services with no transaction mechanism. The architecture must ensure that invariant enforcement boundaries match component boundaries — or provide explicit mechanisms (sagas, compensating transactions) for cross-boundary invariants.

### How to Check

1. List every domain invariant from domain models
2. For each invariant, identify which architectural component(s) are responsible for enforcing it
3. If the invariant spans a single component, verify the component has access to all required state
4. If the invariant spans multiple components, verify a coordination mechanism is documented (distributed transaction, saga, event-driven consistency)
5. For cross-component invariants, check the consistency model: strong consistency (must be atomic) or eventual consistency (can tolerate temporary violations)?
6. Verify that the consistency model aligns with the business tolerance stated in domain models

### What a Finding Looks Like

- P0: "Invariant 'order total must equal sum of line items minus discounts' requires Order and Pricing data, but these are in separate services with no documented coordination mechanism."
- P1: "Invariant 'user cannot have duplicate email' spans UserService and AuthService. Which service enforces this? What happens if both create a user simultaneously?"
- P2: "Invariant enforcement is documented but the consistency model (strong vs. eventual) is not specified."

---

## Pass 9: Downstream Readiness

### What to Check

Downstream steps (database schema, API contracts, UX spec, implementation tasks) can proceed with this architecture document.

### Why This Matters

Four phases consume the architecture document simultaneously or in rapid succession. Gaps in the architecture create cascading ambiguity across all four downstream phases.

### How to Check

The database schema step needs:
1. Data storage components identified with their technology and role
2. Entity-to-storage mapping clear enough to design tables/collections
3. Data relationships explicit enough to define foreign keys or references

The API contracts step needs:
1. Component interfaces defined at operation level
2. Communication protocols specified (REST, GraphQL, gRPC)
3. Auth/authz architecture clear enough to define per-endpoint requirements

The UX spec step needs:
1. Frontend component architecture defined
2. State management approach specified
3. API integration points identified from the frontend perspective

The implementation tasks step needs:
1. Module boundaries clear enough to define task scope
2. Dependencies between modules explicit enough to define task ordering
3. Component complexity visible enough to estimate task sizing

### What a Finding Looks Like

- P0: "No data storage architecture section. The database schema step cannot begin database design without knowing what databases exist and what data each holds."
- P1: "Frontend architecture section describes 'a React app' without component structure. The UX spec step needs at least a high-level component hierarchy."
- P2: "Module dependencies are clear but not explicitly listed in a format that the implementation tasks step can directly use for task dependency ordering."

---

## Pass 10: Internal Consistency

### What to Check

Terminology, cross-references, and structural claims are internally consistent. The document does not contradict itself.

### Why This Matters

Architecture documents are long. Inconsistencies between early and late sections indicate that the document was written incrementally without reconciliation passes. Each inconsistency is a potential source of confusion for implementing agents.

### How to Check

1. Build a terminology list from the document — every component name, pattern name, and technology reference
2. Check for variant names (same component called different things in different sections)
3. Verify cross-references: when one section says "as described in the Data Flow section," check that the Data Flow section actually describes it
4. Check for quantitative consistency: if Section 2 says "three services" and Section 5 describes four, which is correct?
5. Verify that the module structure section and the component diagram describe the same set of modules
6. Check that technology versions, library names, and tool references are consistent throughout

### What a Finding Looks Like

- P1: "Section 3 describes the system as having five microservices, but the component diagram shows six. The 'Scheduler' component appears in the diagram but not in the prose."
- P1: "The architecture uses 'API Gateway' in sections 2-4 and 'Reverse Proxy' in section 6 for what appears to be the same component."
- P2: "Node.js version is stated as 18 in section 1 and 20 in the deployment section."

@@ -0,0 +1,176 @@
---
name: review-testing-strategy
description: Failure modes and review passes specific to testing and quality strategy artifacts
topics: [review, testing, quality, coverage, test-pyramid]
---

# Review: Testing Strategy

The testing strategy defines how the system will be verified at every layer. It must cover unit tests through end-to-end tests, address domain-specific invariants, align with the architecture's component boundaries, and define quality gates for CI/CD. This review uses 6 passes targeting the specific ways testing strategies fail.

Follows the review process defined in `review-methodology.md`.

---

## Pass 1: Coverage Gaps by Layer

### What to Check

Each architectural layer (domain logic, application services, API, database, frontend, integration points) has test coverage defined. The test pyramid is balanced — not top-heavy (too many E2E tests) or bottom-heavy (unit tests only, no integration).

### Why This Matters

Missing test coverage at any layer creates a blind spot where bugs hide. Too many E2E tests create slow, flaky test suites that developers disable. Too few integration tests mean components work in isolation but fail when connected. The testing strategy must specify what gets tested at which level with clear rationale.

### How to Check

1. List every architectural layer from the system architecture
2. For each layer, verify the testing strategy specifies: what types of tests, what the tests verify, approximate coverage expectations
3. Check the test pyramid balance: unit tests should be most numerous, integration tests fewer, E2E tests fewest
4. Verify that each layer's test scope matches its architectural responsibility
5. Check for layers with no test coverage defined — these are the blind spots
6. Verify that the testing strategy addresses external dependencies: how are third-party APIs, databases, and services handled in tests? (Mocks, test doubles, contract tests, testcontainers)

### What a Finding Looks Like

- P0: "The database layer has no test coverage defined. No mention of schema validation tests, migration tests, or query correctness tests."
- P1: "Testing strategy defines unit tests and E2E tests but no integration tests. Components are tested in isolation and in full-system context, but never at the boundary — this misses component integration bugs."
- P1: "External API dependencies (payment gateway, email service) have no test approach defined. Are they mocked? Stubbed? Is there a contract test?"
- P2: "Test pyramid is inverted: 50 E2E tests, 20 integration tests, 15 unit tests. This will produce a slow, flaky test suite."
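Check 3 (pyramid balance) is a pair of comparisons once the test counts are known. A minimal sketch, using the counts from the P2 example above; the wording of the findings is illustrative:

```python
def pyramid_findings(unit, integration, e2e):
    """Flag an inverted test pyramid: no level should outnumber
    the level below it (check 3)."""
    findings = []
    if e2e > integration:
        findings.append("E2E tests outnumber integration tests")
    if integration > unit:
        findings.append("Integration tests outnumber unit tests")
    return findings

# The inverted pyramid from the P2 example: 50 E2E, 20 integration, 15 unit.
print(pyramid_findings(unit=15, integration=20, e2e=50))
```

Exact counts matter less than the ordering; strategies that state only ratios ("roughly 70/20/10") can be checked the same way.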

---

## Pass 2: Domain Invariant Test Cases

### What to Check

Every domain invariant from the domain models has at least one corresponding test scenario defined. Invariants are the highest-value test targets because they define business correctness.

### Why This Matters

Domain invariants are the rules the business cannot tolerate being violated. If the invariant "order total must equal sum of line items minus discounts" is not tested, a calculation bug could ship to production. Invariant violations are often the most expensive production bugs — they corrupt data, break financial calculations, or violate regulatory requirements.

### How to Check

1. List every domain invariant from the domain models
2. For each invariant, find at least one test scenario in the testing strategy
3. Check that test scenarios cover both the positive case (invariant holds) and negative case (invariant is violated — system rejects the operation)
4. Verify that edge cases are considered: boundary values, null/empty inputs, concurrent modifications
5. Check for invariants that span multiple aggregates — these need integration-level tests, not just unit tests
6. Verify that invariant tests are classified at the correct pyramid level: aggregate-internal invariants at unit level, cross-aggregate invariants at integration level

### What a Finding Looks Like

- P0: "Domain invariant 'account balance cannot be negative' has no corresponding test scenario. This is a financial correctness requirement that must be tested."
- P1: "Invariant 'email must be unique per tenant' has a unit test scenario but no integration test. The unit test mocks the uniqueness check — only an integration test against the real database can verify the constraint."
- P2: "Invariant 'order must have at least one line item' has a positive test but no negative test (what happens when creating an order with zero items)."
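Check 3 (positive and negative cases) for the P2 example above might look like the following sketch. The `Order` class is a toy stand-in; the point is the pair of tests, one asserting the invariant holds and one asserting the violating operation is rejected:

```python
class Order:
    """Toy aggregate enforcing 'order must have at least one line item'."""

    def __init__(self, line_items):
        if not line_items:
            raise ValueError("order must have at least one line item")
        self.line_items = list(line_items)

def test_invariant_positive():
    # Invariant holds: a valid order is accepted unchanged.
    assert Order([("widget", 2)]).line_items == [("widget", 2)]

def test_invariant_negative():
    # Invariant violated: the system must reject the operation.
    try:
        Order([])
    except ValueError:
        return
    raise AssertionError("empty order was accepted")

test_invariant_positive()
test_invariant_negative()
print("invariant holds in both directions")
```

A strategy that lists only the positive scenario leaves the rejection path, the half that actually guards the data, untested.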

---

## Pass 3: Test Environment Assumptions

### What to Check

The test environment described in the strategy matches production constraints. Database versions, service configurations, and external dependency behavior in tests reflect what will exist in production.

### Why This Matters

Tests that pass against SQLite but fail against PostgreSQL. Tests that mock a payment gateway's happy path but never test the timeout behavior that will happen in production. Test environment mismatches are the primary reason "tests pass, production breaks." The testing strategy must be explicit about how the test environment relates to production.

### How to Check

1. Compare the test database to the production database: same engine? Same version? Same configuration?
2. Check external service test doubles: do they replicate the real service's behavior, including errors, latency, and edge cases?
3. Verify that test data represents realistic production conditions: data volumes, data shapes, edge case values
4. Check for environment-specific behavior: timezone handling, locale, file system paths, network configuration
5. Verify that CI/CD test environment is specified and matches local test environment
6. Check for assumptions about test ordering or test isolation — are tests truly independent?

### What a Finding Looks Like

- P0: "Tests run against SQLite but production uses PostgreSQL. SQLite and PostgreSQL have different type systems, different locking behavior, and different SQL dialect support. Tests will pass locally and fail in production."
- P1: "Payment gateway is mocked to always return success. No test scenario covers timeout, network error, or declined payment — all common production scenarios."
- P2: "Test data uses 5 records per table but production will have millions. Performance-sensitive queries are not tested at scale."
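Checks 1 and 4 amount to comparing a handful of environment settings side by side, which can be automated if the strategy records them as data. A sketch; the key names (`db_engine`, `db_version`, `timezone`) are illustrative choices, not a standard:

```python
def parity_findings(test_env, prod_env,
                    keys=("db_engine", "db_version", "timezone")):
    """Report test/production environment mismatches (checks 1 and 4)."""
    findings = []
    for key in keys:
        if test_env.get(key) != prod_env.get(key):
            findings.append(
                f"{key}: test={test_env.get(key)!r} prod={prod_env.get(key)!r}")
    return findings

# The P0 scenario above, expressed as configuration:
test_env = {"db_engine": "sqlite", "db_version": "3.45", "timezone": "UTC"}
prod_env = {"db_engine": "postgresql", "db_version": "16", "timezone": "UTC"}
for finding in parity_findings(test_env, prod_env):
    print(finding)
```

Running this in CI turns a silent assumption ("the test database is close enough") into an explicit, reviewable finding.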

---

## Pass 4: Performance Test Coverage

### What to Check

Performance-critical paths identified in the PRD or architecture have benchmarks defined. Load testing, stress testing, and latency requirements have corresponding test scenarios.

### Why This Matters

Performance requirements stated in the PRD ("sub-200ms API response," "handle 1000 concurrent users") are meaningless without tests that verify them. Performance regressions are invisible to functional tests — the system still returns correct results, just slowly. By the time performance issues are discovered in production, the fix often requires architectural changes.

### How to Check

1. List performance requirements from the PRD and architecture (response time, throughput, concurrent users, data volume)
2. For each requirement, find a corresponding performance test scenario
3. Verify that benchmarks have specific thresholds (not "should be fast" but "95th percentile response time < 200ms")
4. Check for load testing: is the expected concurrent user load tested?
5. Check for stress testing: what happens beyond expected load? (Graceful degradation vs. crash)
6. Verify that performance tests run in an environment representative of production (not a developer laptop)
7. Check for performance regression detection: are benchmarks tracked over time?

### What a Finding Looks Like

- P0: "PRD requires 'sub-200ms response time for search queries' but no performance test scenario exists. There is no way to verify this requirement is met."
- P1: "Load testing scenario exists for 100 concurrent users, but the PRD targets 10,000. The test does not verify the actual requirement."
- P2: "Performance tests exist but have no regression tracking. A performance degradation from a code change will not be detected until it reaches production."
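Check 3 asks for thresholds like "95th percentile < 200ms", and it is worth being precise about what that computation is. A minimal sketch using the nearest-rank percentile method; the sample values are invented:

```python
def p95_ms(samples_ms):
    """95th-percentile latency using the nearest-rank method."""
    ordered = sorted(samples_ms)
    rank = max(1, -(-95 * len(ordered) // 100))  # ceil(0.95 * n), 1-based
    return ordered[rank - 1]

# 100 hypothetical response times: 90 fast requests, 10 slow outliers.
samples = [120] * 90 + [450] * 10
p95 = p95_ms(samples)
print(p95, "OK" if p95 < 200 else "FAIL: p95 >= 200ms")
```

Note what the example demonstrates: a mean of these samples (153ms) would pass a naive "average under 200ms" check, while the p95 (450ms) correctly fails, which is why check 3 insists on percentile thresholds rather than averages.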
|
|
122
|
+
|
|
123
|
+
---
|
|
124
|
+
|
|
125
|
+
## Pass 5: Integration Boundary Coverage
|
|
126
|
+
|
|
127
|
+
### What to Check
|
|
128
|
+
|
|
129
|
+
All component integration points have integration tests defined. Every API call between services, every database query pattern, every message queue interaction has a test at the integration level.
|
|
130
|
+
|
|
131
|
+
### Why This Matters
|
|
132
|
+
|
|
133
|
+
Integration boundaries are where bugs hide. Each component may work perfectly in isolation (unit tests pass) but fail when connected to another component due to serialization mismatches, protocol errors, authentication failures, or contract violations. Integration tests catch these by testing real component interactions.
|
|
134
|
+
|
|
135
|
+
### How to Check
|
|
136
|
+
|
|
137
|
+
1. List every integration point from the architecture: service-to-service calls, database queries, message queue producers/consumers, external API integrations
|
|
138
|
+
2. For each integration point, verify a test scenario exists at the integration level
|
|
139
|
+
3. Check that integration tests use real (or realistic) dependencies, not mocks (that is what unit tests are for)
|
|
140
|
+
4. Verify that contract tests exist for external APIs: when the external API changes, do tests catch the break?
|
|
141
|
+
5. Check for async integration points (message queues, webhooks): are these tested with real async behavior, including ordering, retry, and failure scenarios?
|
|
142
|
+
6. Verify that database integration tests cover actual query execution (not mocked repositories)
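
Check 6 can be demonstrated with stdlib tools alone. A sketch using Python's built-in `sqlite3` as the "real" database; the `orders` table and its data are illustrative:

```python
import sqlite3

# Integration-style repository test that executes real SQL against a real
# (in-memory) database instead of mocking the connection.
def order_round_trip() -> tuple:
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT NOT NULL)"
    )
    conn.execute("INSERT INTO orders (status) VALUES (?)", ("pending",))
    conn.commit()
    # Real execution surfaces schema errors, SQL syntax errors, and
    # constraint violations that a mocked repository can never raise.
    return conn.execute("SELECT status FROM orders WHERE id = 1").fetchone()

assert order_round_trip() == ("pending",)
```

Pointing the same test at the engine the service actually ships with (via a test container or dedicated test instance) keeps the coverage honest.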

### What a Finding Looks Like

- P0: "OrderService calls InventoryService to reserve stock, but no integration test verifies this interaction. If the request format changes, the break is undetected."
- P1: "Database repository tests mock the database connection. There are no tests that execute actual SQL against a real database — schema errors, query syntax errors, and constraint violations are invisible."
- P2: "Event consumer integration tests verify message processing but not message ordering or duplicate handling."

---

## Pass 6: Quality Gate Completeness

### What to Check

The CI pipeline quality gates catch all intended issues before code reaches production. Gates cover linting, type checking, unit tests, integration tests, security scanning, and any project-specific checks.

### Why This Matters

A quality gate that exists in documentation but not in CI is not a gate. If the testing strategy says "all code must pass linting" but the CI pipeline does not run a linter, the gate is aspirational. Quality gates must be concrete: what tool, what configuration, what threshold, what happens on failure.

### How to Check

1. List every quality requirement from the testing strategy (code coverage thresholds, linting rules, type checking, security scanning)
2. For each requirement, verify it maps to a specific CI pipeline step
3. Check that gate failure blocks deployment (not just warns)
4. Verify code coverage thresholds are specified and enforced: what percentage? Per file or overall? Is it a gate or a report?
5. Check for security scanning: dependency vulnerability scanning, static analysis, secrets detection
6. Verify that the gate order is correct: fast checks first (lint, type check), slow checks later (integration tests, E2E tests)
7. Check for missing gates: database migration validation, API contract validation (schema against implementation), documentation generation
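
As an illustration of check 3 (block, don't warn), a gate can read the coverage report and refuse to pass. The sample report and the 80% threshold are assumptions; Cobertura-style reports expose overall coverage as a `line-rate` attribute in [0, 1]:

```python
import xml.etree.ElementTree as ET

# Sample Cobertura-style report (illustrative -- a real one comes from
# the coverage tool's XML output).
SAMPLE_REPORT = '<coverage line-rate="0.74"></coverage>'

def coverage_gate(report_xml: str, threshold: float = 80.0) -> bool:
    pct = float(ET.fromstring(report_xml).attrib["line-rate"]) * 100
    if pct < threshold:
        print(f"FAIL: coverage {pct:.1f}% is below the {threshold:.0f}% gate")
        return False  # in CI, exit non-zero here so the pipeline is blocked
    return True

assert coverage_gate(SAMPLE_REPORT) is False      # 74% < 80%: gate blocks
assert coverage_gate(SAMPLE_REPORT, 70) is True   # lower threshold passes
```

The key property is the non-zero exit on failure; a gate that only prints a warning is a report, not a gate.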

### What a Finding Looks Like

- P0: "Testing strategy requires 80% code coverage but the CI pipeline has no coverage reporting or enforcement. The requirement is unverifiable."
- P1: "Security scanning is listed as a quality requirement but no specific tool or CI pipeline step implements it."
- P2: "Quality gates run linting, unit tests, and integration tests, but do not validate database migrations. A broken migration would pass all gates and fail in production."

---
name: review-user-stories
description: Failure modes and review passes specific to user story artifacts
topics: [review, user-stories, coverage, acceptance-criteria, INVEST, testability]
---

# Review: User Stories

User stories translate PRD requirements into user-facing behavior with testable acceptance criteria. Each story must be traceable back to the PRD, specific enough to implement, and consumable by downstream phases (domain modeling, UX specification, task decomposition). This review uses 6 passes targeting the specific ways user story artifacts fail.

Follows the review process defined in `review-methodology.md`.

---

## Pass 1: PRD Coverage

### What to Check

Every PRD feature, flow, and requirement has at least one corresponding user story. No PRD requirement is left without a story to implement it.

### Why This Matters

Missing stories mean missing implementation tasks downstream. A PRD feature with no story will not appear in the implementation tasks, will not be implemented, and will be discovered only during validation or user testing. Coverage gaps are the highest-severity story failure because they propagate silently through the entire pipeline.

### How to Check

1. Extract every distinct feature and requirement from the PRD (including implicit requirements like error handling, validation, accessibility)
2. For each requirement, find the corresponding user story or stories
3. Check that every PRD user persona has at least one story
4. Check that every PRD user journey/flow has stories covering the complete path (not just the happy path)
5. Flag any PRD requirement with no matching story
6. Flag compound PRD requirements that should have been split into multiple stories
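
Steps 2 and 5 reduce to a set difference, which makes the coverage check easy to script. A toy sketch with invented requirement and story IDs:

```python
# PRD requirement IDs vs. the requirements each story claims to cover.
# All IDs here are invented for illustration.
prd_requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
story_coverage = {
    "US-001": {"REQ-1"},
    "US-002": {"REQ-2", "REQ-3"},
}

covered = set().union(*story_coverage.values())
uncovered = sorted(prd_requirements - covered)
# REQ-4 has no matching story -- the P0-style coverage gap this pass hunts.
assert uncovered == ["REQ-4"]
```

The hard part of this pass is building the requirement list in step 1 (including implicit requirements); once it exists, the gap detection itself is mechanical.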

### What a Finding Looks Like

- P0: "PRD Section 4.2 describes a 'Team Invitation' feature with 3 user flows (invite by email, invite by link, bulk invite). No user stories exist for any of these flows."
- P1: "PRD describes both SSO and email/password authentication. Stories exist for email/password only — SSO has no coverage."
- P2: "PRD mentions 'accessibility compliance' as a requirement. No stories have accessibility-specific acceptance criteria."

---

## Pass 2: Acceptance Criteria Quality

### What to Check

Every story has testable, unambiguous acceptance criteria. Criteria should be specific enough that two different agents implementing the same story would produce functionally equivalent results.

### Why This Matters

Vague acceptance criteria produce vague tasks and untestable implementations. An agent reading "the feature should work correctly" has no way to know when it's done. Clear Given/When/Then criteria become test cases during implementation, ensuring the story is verifiably complete.

### How to Check

1. For each story, check that acceptance criteria exist (not blank or placeholder)
2. Check that criteria use Given/When/Then format (at depth ≥ 3) or are otherwise structured and testable
3. Check that criteria are specific — no subjective language ("intuitive," "fast," "user-friendly")
4. Check that criteria cover the primary success path AND at least one error/edge case
5. Check that criteria include boundary conditions where applicable (max lengths, empty states, concurrent access)
6. Verify each criterion has a clear pass/fail condition — could an agent write an automated test for it?
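
Check 3 can be partially automated as a word screen before a human (or agent) reads for deeper ambiguity. A toy sketch; the banned-word list and criteria strings are illustrative:

```python
# Flag subjective language in acceptance criteria (check 3).
BANNED = {"fast", "intuitive", "user-friendly", "correctly"}

def subjective_terms(criterion: str) -> set[str]:
    words = {w.strip(".,").lower() for w in criterion.split()}
    return words & BANNED

# A vague criterion trips the screen; a measurable one does not.
assert subjective_terms("The search must feel fast and intuitive") == {"fast", "intuitive"}
assert subjective_terms("p95 search latency is under 200ms") == set()
```

A screen like this only catches the obvious cases; check 6 (could an agent write an automated test for it?) remains the real bar.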

### What a Finding Looks Like

- P0: "US-005 has no acceptance criteria at all. Cannot verify when this story is complete."
- P1: "US-012 acceptance criteria say 'works correctly and is user-friendly' — not testable. Needs Given/When/Then scenarios."
- P1: "US-018 covers only the happy path. No criteria for: invalid input, network failure, duplicate submission, session timeout."
- P2: "US-031 criteria mention 'fast response time' without defining what fast means. Add a specific threshold (e.g., < 500ms at p95)."

---

## Pass 3: Story Independence

### What to Check

Stories can be implemented independently without hidden coupling. Dependencies between stories are explicit, not assumed.

### Why This Matters

Coupled stories create false parallelization opportunities. If two stories secretly share state or assume a specific implementation order, assigning them to parallel agents causes conflicts, rework, or subtle bugs. Explicit dependencies flow into task decomposition; hidden dependencies create surprises during implementation.

### How to Check

1. For each story, check if its acceptance criteria reference behavior defined in another story
2. Check for shared state assumptions — two stories that both read or write the same data entity without acknowledging the overlap
3. Check for implicit ordering — Story B's acceptance criteria assume Story A's output exists, but no dependency is documented
4. Check for circular dependencies — Story A depends on B, and B depends on A
5. Verify that documented dependencies are necessary (not just thematic grouping)
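
Check 4 is a cycle search over the story dependency graph. A small depth-first sketch; the dependency map is invented for illustration:

```python
# Story -> list of stories it depends on (illustrative).
deps = {"US-A": ["US-B"], "US-B": ["US-C"], "US-C": ["US-A"], "US-D": []}

def has_cycle(graph: dict[str, list[str]]) -> bool:
    visiting, done = set(), set()

    def visit(node: str) -> bool:
        if node in done:
            return False
        if node in visiting:
            return True  # back-edge: we re-entered a node on the current path
        visiting.add(node)
        if any(visit(dep) for dep in graph.get(node, [])):
            return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(visit(n) for n in graph)

assert has_cycle(deps)                              # A -> B -> C -> A
assert not has_cycle({"US-A": ["US-B"], "US-B": []})
```

Checks 1-3 (hidden, undocumented coupling) are harder precisely because the edges are missing from any such graph; those require reading the criteria themselves.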

### What a Finding Looks Like

- P1: "US-008 (edit user profile) and US-009 (upload profile photo) both modify user profile state. Neither acknowledges the other. If implemented in parallel, they may conflict on the profile data model."
- P1: "US-015 acceptance criteria say 'Given the user has completed onboarding' — this implicitly requires US-014 (onboarding flow) to be complete, but no dependency is documented."
- P2: "US-025 and US-026 are listed as independent but share a 'notification preferences' data structure. Consider documenting the shared dependency."

---

## Pass 4: Persona Coverage

### What to Check

Every PRD-defined persona has stories representing their goals and workflows. Every story maps to a valid, defined persona.

### Why This Matters

Missing persona coverage means entire user segments have no stories, no tasks, and no implementation. Stories referencing undefined personas create confusion — agents don't know who they're building for, and acceptance criteria lack context.

### How to Check

1. List all personas defined in the PRD
2. For each persona, count the stories attributed to them
3. Flag personas with zero stories
4. Flag stories that reference a persona not defined in the PRD
5. Check that high-priority personas (primary users) have proportionally more stories than secondary personas
6. Verify that each persona's PRD-defined goals are addressed by their stories
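
Steps 2-4 reduce to a tally plus two set differences. A sketch with an illustrative persona set and story attributions:

```python
from collections import Counter

# Personas defined in the PRD, and the persona each story claims (illustrative).
personas = {"Student", "Teacher", "Admin", "Parent"}
story_personas = ["Student", "Student", "Teacher", "Admin", "Power user"]

counts = Counter(story_personas)
zero_coverage = sorted(personas - counts.keys())   # defined but unrepresented
undefined = sorted(counts.keys() - personas)       # referenced but undefined

assert zero_coverage == ["Parent"]       # a P1-style finding
assert undefined == ["Power user"]       # a P2-style finding
```

Steps 5 and 6 (proportionality and goal coverage) stay judgment calls; the tally only tells you where to look.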

### What a Finding Looks Like

- P1: "PRD defines 4 personas: Student, Teacher, Admin, Parent. The 'Parent' persona has zero stories — their entire journey (viewing student progress, receiving notifications) is unaddressed."
- P2: "US-020 is written as 'As a power user, I want...' but 'power user' is not a defined persona. Should this be 'Admin' or 'Teacher'?"
- P2: "The 'Admin' persona has 2 stories, but the PRD describes 6 admin-specific features. 4 features have no admin story."

---

## Pass 5: Sizing & Splittability

### What to Check

No story is too large for a single agent session (1-3 focused sessions). No story is so small it adds unnecessary overhead. Stories that are too large should have obvious split points.

### Why This Matters

Oversized stories produce oversized tasks that exceed an agent's context window or session length. The agent either produces incomplete work or loses context partway through. Undersized stories create coordination overhead — each story needs its own review, testing, and integration cycle.

### How to Check

1. Count acceptance criteria per story — more than 8 suggests the story is too large
2. Check if a story spans multiple workflows or user journeys — each journey should be its own story
3. Check if a story covers multiple data variations that could be split (e.g., "create any type of post" → text, image, video)
4. Check if a story handles both happy path and all error cases in one — consider splitting error handling into its own story
5. Flag stories with only 1 trivial acceptance criterion — consider combining with a related story
6. For oversized stories, identify the split heuristic that applies (workflow step, data variation, CRUD operation, user role, happy/sad path)
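
Checks 1 and 5 can be screened mechanically from acceptance-criteria counts; the counts and the >8 / ==1 cutoffs below mirror the heuristics above but are otherwise illustrative:

```python
# Acceptance-criteria count per story (illustrative).
criteria_counts = {"US-003": 12, "US-010": 5, "US-022": 1}

too_large = [s for s, n in criteria_counts.items() if n > 8]
too_small = [s for s, n in criteria_counts.items() if n == 1]

assert too_large == ["US-003"]   # candidate for splitting (check 6)
assert too_small == ["US-022"]   # candidate for combining (check 5)
```

The count is only a tripwire: checks 2-4 (workflows, data variations, happy/sad path) decide whether a flagged story actually needs splitting.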

### What a Finding Looks Like

- P1: "US-003 has 12 acceptance criteria spanning 3 distinct workflows (create, edit, and archive projects). Split by operation into 3 stories."
- P1: "US-017 covers the entire checkout flow (cart review, address, payment, confirmation) in one story. Split by workflow step."
- P2: "US-022 ('update display name') and US-023 ('update email address') are trivially small and share the same profile editing context. Consider combining into 'edit profile fields.'"

---

## Pass 6: Downstream Readiness

### What to Check

The domain modeling step can consume these stories productively. Entities, events, and aggregate boundaries should be discoverable from story acceptance criteria without guesswork.

### Why This Matters

Stories are the primary input to domain discovery in the domain modeling step. If acceptance criteria are written at too high a level (no mention of data entities, no state transitions, no business rules), domain modeling has to infer the domain model from vague descriptions. This produces weaker domain models and increases the chance of misalignment between stories and architecture.

### How to Check

1. Sample 3-5 representative stories from different epics
2. For each, attempt to identify: entities (nouns), domain events (state changes), and aggregate boundaries (transactional consistency requirements) from the acceptance criteria alone
3. Check that the same entity is named consistently across stories (not "User" in one story and "Account" in another and "Member" in a third)
4. Check that state transitions are explicit in the acceptance criteria — "when X happens, the order status changes from pending to confirmed" rather than "the order is processed"
5. Check that business rules (invariants) appear in acceptance criteria — "a class cannot have more than 30 students" is discoverable; "class size is managed" is not
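
Check 4's bar ("pending to confirmed", not "processed") is easiest to see as a transition table: if the acceptance criteria are explicit, a table like this can be read straight out of them. The order lifecycle below is illustrative:

```python
# (state, event) -> next state, as stated by explicit acceptance criteria.
TRANSITIONS = {
    ("pending", "payment_confirmed"): "confirmed",
    ("confirmed", "shipped"): "in_transit",
    ("in_transit", "delivered"): "completed",
}

def next_state(state: str, event: str) -> str:
    key = (state, event)
    if key not in TRANSITIONS:
        # Criteria that never say which transitions are legal leave this
        # branch (and the whole table) to guesswork in domain modeling.
        raise ValueError(f"illegal transition: {event!r} in state {state!r}")
    return TRANSITIONS[key]

assert next_state("pending", "payment_confirmed") == "confirmed"
```

"The order is processed" gives domain modeling no rows for this table; "the order status changes from pending to confirmed" gives it one.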

### What a Finding Looks Like

- P1: "US-007 ('As a teacher, I want to manage my classes') — acceptance criteria say 'classes are managed correctly.' No mention of what entities are involved (Class, Enrollment, Student?), what state transitions occur, or what business rules apply. Domain modeling will have to guess."
- P2: "Cross-story entity naming is inconsistent: US-003 uses 'User,' US-008 uses 'Account,' US-015 uses 'Member.' These may be different bounded context terms or may be accidental inconsistency — clarify before domain modeling."
- P2: "Stories in the 'Payments' epic mention 'processing a payment' but no acceptance criteria describe the payment lifecycle states (pending → processing → completed/failed). Domain events cannot be discovered from these stories."