@zigrivers/scaffold 2.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +477 -0
- package/dist/cli/commands/adopt.d.ts +12 -0
- package/dist/cli/commands/adopt.d.ts.map +1 -0
- package/dist/cli/commands/adopt.js +107 -0
- package/dist/cli/commands/adopt.js.map +1 -0
- package/dist/cli/commands/adopt.test.d.ts +2 -0
- package/dist/cli/commands/adopt.test.d.ts.map +1 -0
- package/dist/cli/commands/adopt.test.js +277 -0
- package/dist/cli/commands/adopt.test.js.map +1 -0
- package/dist/cli/commands/build.d.ts +12 -0
- package/dist/cli/commands/build.d.ts.map +1 -0
- package/dist/cli/commands/build.js +105 -0
- package/dist/cli/commands/build.js.map +1 -0
- package/dist/cli/commands/build.test.d.ts +2 -0
- package/dist/cli/commands/build.test.d.ts.map +1 -0
- package/dist/cli/commands/build.test.js +272 -0
- package/dist/cli/commands/build.test.js.map +1 -0
- package/dist/cli/commands/dashboard.d.ts +14 -0
- package/dist/cli/commands/dashboard.d.ts.map +1 -0
- package/dist/cli/commands/dashboard.js +102 -0
- package/dist/cli/commands/dashboard.js.map +1 -0
- package/dist/cli/commands/dashboard.test.d.ts +2 -0
- package/dist/cli/commands/dashboard.test.d.ts.map +1 -0
- package/dist/cli/commands/dashboard.test.js +142 -0
- package/dist/cli/commands/dashboard.test.js.map +1 -0
- package/dist/cli/commands/decisions.d.ts +13 -0
- package/dist/cli/commands/decisions.d.ts.map +1 -0
- package/dist/cli/commands/decisions.js +62 -0
- package/dist/cli/commands/decisions.js.map +1 -0
- package/dist/cli/commands/decisions.test.d.ts +2 -0
- package/dist/cli/commands/decisions.test.d.ts.map +1 -0
- package/dist/cli/commands/decisions.test.js +154 -0
- package/dist/cli/commands/decisions.test.js.map +1 -0
- package/dist/cli/commands/info.d.ts +12 -0
- package/dist/cli/commands/info.d.ts.map +1 -0
- package/dist/cli/commands/info.js +110 -0
- package/dist/cli/commands/info.js.map +1 -0
- package/dist/cli/commands/info.test.d.ts +2 -0
- package/dist/cli/commands/info.test.d.ts.map +1 -0
- package/dist/cli/commands/info.test.js +392 -0
- package/dist/cli/commands/info.test.js.map +1 -0
- package/dist/cli/commands/init.d.ts +13 -0
- package/dist/cli/commands/init.d.ts.map +1 -0
- package/dist/cli/commands/init.js +46 -0
- package/dist/cli/commands/init.js.map +1 -0
- package/dist/cli/commands/init.test.d.ts +2 -0
- package/dist/cli/commands/init.test.d.ts.map +1 -0
- package/dist/cli/commands/init.test.js +156 -0
- package/dist/cli/commands/init.test.js.map +1 -0
- package/dist/cli/commands/knowledge.d.ts +4 -0
- package/dist/cli/commands/knowledge.d.ts.map +1 -0
- package/dist/cli/commands/knowledge.js +346 -0
- package/dist/cli/commands/knowledge.js.map +1 -0
- package/dist/cli/commands/knowledge.test.d.ts +2 -0
- package/dist/cli/commands/knowledge.test.d.ts.map +1 -0
- package/dist/cli/commands/knowledge.test.js +293 -0
- package/dist/cli/commands/knowledge.test.js.map +1 -0
- package/dist/cli/commands/list.d.ts +12 -0
- package/dist/cli/commands/list.d.ts.map +1 -0
- package/dist/cli/commands/list.js +73 -0
- package/dist/cli/commands/list.js.map +1 -0
- package/dist/cli/commands/list.test.d.ts +2 -0
- package/dist/cli/commands/list.test.d.ts.map +1 -0
- package/dist/cli/commands/list.test.js +166 -0
- package/dist/cli/commands/list.test.js.map +1 -0
- package/dist/cli/commands/next.d.ts +12 -0
- package/dist/cli/commands/next.d.ts.map +1 -0
- package/dist/cli/commands/next.js +75 -0
- package/dist/cli/commands/next.js.map +1 -0
- package/dist/cli/commands/next.test.d.ts +2 -0
- package/dist/cli/commands/next.test.d.ts.map +1 -0
- package/dist/cli/commands/next.test.js +236 -0
- package/dist/cli/commands/next.test.js.map +1 -0
- package/dist/cli/commands/reset.d.ts +13 -0
- package/dist/cli/commands/reset.d.ts.map +1 -0
- package/dist/cli/commands/reset.js +105 -0
- package/dist/cli/commands/reset.js.map +1 -0
- package/dist/cli/commands/reset.test.d.ts +2 -0
- package/dist/cli/commands/reset.test.d.ts.map +1 -0
- package/dist/cli/commands/reset.test.js +211 -0
- package/dist/cli/commands/reset.test.js.map +1 -0
- package/dist/cli/commands/run.d.ts +14 -0
- package/dist/cli/commands/run.d.ts.map +1 -0
- package/dist/cli/commands/run.js +379 -0
- package/dist/cli/commands/run.js.map +1 -0
- package/dist/cli/commands/run.test.d.ts +2 -0
- package/dist/cli/commands/run.test.d.ts.map +1 -0
- package/dist/cli/commands/run.test.js +535 -0
- package/dist/cli/commands/run.test.js.map +1 -0
- package/dist/cli/commands/skip.d.ts +13 -0
- package/dist/cli/commands/skip.d.ts.map +1 -0
- package/dist/cli/commands/skip.js +123 -0
- package/dist/cli/commands/skip.js.map +1 -0
- package/dist/cli/commands/skip.test.d.ts +2 -0
- package/dist/cli/commands/skip.test.d.ts.map +1 -0
- package/dist/cli/commands/skip.test.js +339 -0
- package/dist/cli/commands/skip.test.js.map +1 -0
- package/dist/cli/commands/status.d.ts +12 -0
- package/dist/cli/commands/status.d.ts.map +1 -0
- package/dist/cli/commands/status.js +79 -0
- package/dist/cli/commands/status.js.map +1 -0
- package/dist/cli/commands/status.test.d.ts +2 -0
- package/dist/cli/commands/status.test.d.ts.map +1 -0
- package/dist/cli/commands/status.test.js +245 -0
- package/dist/cli/commands/status.test.js.map +1 -0
- package/dist/cli/commands/update.d.ts +11 -0
- package/dist/cli/commands/update.d.ts.map +1 -0
- package/dist/cli/commands/update.js +159 -0
- package/dist/cli/commands/update.js.map +1 -0
- package/dist/cli/commands/update.test.d.ts +2 -0
- package/dist/cli/commands/update.test.d.ts.map +1 -0
- package/dist/cli/commands/update.test.js +140 -0
- package/dist/cli/commands/update.test.js.map +1 -0
- package/dist/cli/commands/validate.d.ts +12 -0
- package/dist/cli/commands/validate.d.ts.map +1 -0
- package/dist/cli/commands/validate.js +65 -0
- package/dist/cli/commands/validate.js.map +1 -0
- package/dist/cli/commands/validate.test.d.ts +2 -0
- package/dist/cli/commands/validate.test.d.ts.map +1 -0
- package/dist/cli/commands/validate.test.js +159 -0
- package/dist/cli/commands/validate.test.js.map +1 -0
- package/dist/cli/commands/version.d.ts +13 -0
- package/dist/cli/commands/version.d.ts.map +1 -0
- package/dist/cli/commands/version.js +89 -0
- package/dist/cli/commands/version.js.map +1 -0
- package/dist/cli/commands/version.test.d.ts +2 -0
- package/dist/cli/commands/version.test.d.ts.map +1 -0
- package/dist/cli/commands/version.test.js +63 -0
- package/dist/cli/commands/version.test.js.map +1 -0
- package/dist/cli/index.d.ts +4 -0
- package/dist/cli/index.d.ts.map +1 -0
- package/dist/cli/index.js +72 -0
- package/dist/cli/index.js.map +1 -0
- package/dist/cli/index.test.d.ts +2 -0
- package/dist/cli/index.test.d.ts.map +1 -0
- package/dist/cli/index.test.js +8 -0
- package/dist/cli/index.test.js.map +1 -0
- package/dist/cli/middleware/output-mode.d.ts +21 -0
- package/dist/cli/middleware/output-mode.d.ts.map +1 -0
- package/dist/cli/middleware/output-mode.js +27 -0
- package/dist/cli/middleware/output-mode.js.map +1 -0
- package/dist/cli/middleware/output-mode.test.d.ts +2 -0
- package/dist/cli/middleware/output-mode.test.d.ts.map +1 -0
- package/dist/cli/middleware/output-mode.test.js +41 -0
- package/dist/cli/middleware/output-mode.test.js.map +1 -0
- package/dist/cli/middleware/project-root.d.ts +21 -0
- package/dist/cli/middleware/project-root.d.ts.map +1 -0
- package/dist/cli/middleware/project-root.js +54 -0
- package/dist/cli/middleware/project-root.js.map +1 -0
- package/dist/cli/middleware/project-root.test.d.ts +2 -0
- package/dist/cli/middleware/project-root.test.d.ts.map +1 -0
- package/dist/cli/middleware/project-root.test.js +112 -0
- package/dist/cli/middleware/project-root.test.js.map +1 -0
- package/dist/cli/output/auto.d.ts +18 -0
- package/dist/cli/output/auto.d.ts.map +1 -0
- package/dist/cli/output/auto.js +43 -0
- package/dist/cli/output/auto.js.map +1 -0
- package/dist/cli/output/context.d.ts +19 -0
- package/dist/cli/output/context.d.ts.map +1 -0
- package/dist/cli/output/context.js +15 -0
- package/dist/cli/output/context.js.map +1 -0
- package/dist/cli/output/context.test.d.ts +2 -0
- package/dist/cli/output/context.test.d.ts.map +1 -0
- package/dist/cli/output/context.test.js +335 -0
- package/dist/cli/output/context.test.js.map +1 -0
- package/dist/cli/output/error-display.d.ts +31 -0
- package/dist/cli/output/error-display.d.ts.map +1 -0
- package/dist/cli/output/error-display.js +79 -0
- package/dist/cli/output/error-display.js.map +1 -0
- package/dist/cli/output/error-display.test.d.ts +2 -0
- package/dist/cli/output/error-display.test.d.ts.map +1 -0
- package/dist/cli/output/error-display.test.js +230 -0
- package/dist/cli/output/error-display.test.js.map +1 -0
- package/dist/cli/output/interactive.d.ts +22 -0
- package/dist/cli/output/interactive.d.ts.map +1 -0
- package/dist/cli/output/interactive.js +126 -0
- package/dist/cli/output/interactive.js.map +1 -0
- package/dist/cli/output/json.d.ts +17 -0
- package/dist/cli/output/json.d.ts.map +1 -0
- package/dist/cli/output/json.js +62 -0
- package/dist/cli/output/json.js.map +1 -0
- package/dist/cli/types.d.ts +11 -0
- package/dist/cli/types.d.ts.map +1 -0
- package/dist/cli/types.js +2 -0
- package/dist/cli/types.js.map +1 -0
- package/dist/config/loader.d.ts +22 -0
- package/dist/config/loader.d.ts.map +1 -0
- package/dist/config/loader.js +159 -0
- package/dist/config/loader.js.map +1 -0
- package/dist/config/loader.test.d.ts +2 -0
- package/dist/config/loader.test.d.ts.map +1 -0
- package/dist/config/loader.test.js +226 -0
- package/dist/config/loader.test.js.map +1 -0
- package/dist/config/migration.d.ts +15 -0
- package/dist/config/migration.d.ts.map +1 -0
- package/dist/config/migration.js +39 -0
- package/dist/config/migration.js.map +1 -0
- package/dist/config/migration.test.d.ts +2 -0
- package/dist/config/migration.test.d.ts.map +1 -0
- package/dist/config/migration.test.js +44 -0
- package/dist/config/migration.test.js.map +1 -0
- package/dist/config/schema.d.ts +121 -0
- package/dist/config/schema.d.ts.map +1 -0
- package/dist/config/schema.js +22 -0
- package/dist/config/schema.js.map +1 -0
- package/dist/config/schema.test.d.ts +2 -0
- package/dist/config/schema.test.d.ts.map +1 -0
- package/dist/config/schema.test.js +126 -0
- package/dist/config/schema.test.js.map +1 -0
- package/dist/core/adapters/adapter.d.ts +64 -0
- package/dist/core/adapters/adapter.d.ts.map +1 -0
- package/dist/core/adapters/adapter.js +25 -0
- package/dist/core/adapters/adapter.js.map +1 -0
- package/dist/core/adapters/adapter.test.d.ts +2 -0
- package/dist/core/adapters/adapter.test.d.ts.map +1 -0
- package/dist/core/adapters/adapter.test.js +175 -0
- package/dist/core/adapters/adapter.test.js.map +1 -0
- package/dist/core/adapters/claude-code.d.ts +9 -0
- package/dist/core/adapters/claude-code.d.ts.map +1 -0
- package/dist/core/adapters/claude-code.js +34 -0
- package/dist/core/adapters/claude-code.js.map +1 -0
- package/dist/core/adapters/claude-code.test.d.ts +2 -0
- package/dist/core/adapters/claude-code.test.d.ts.map +1 -0
- package/dist/core/adapters/claude-code.test.js +100 -0
- package/dist/core/adapters/claude-code.test.js.map +1 -0
- package/dist/core/adapters/codex.d.ts +10 -0
- package/dist/core/adapters/codex.d.ts.map +1 -0
- package/dist/core/adapters/codex.js +61 -0
- package/dist/core/adapters/codex.js.map +1 -0
- package/dist/core/adapters/codex.test.d.ts +2 -0
- package/dist/core/adapters/codex.test.d.ts.map +1 -0
- package/dist/core/adapters/codex.test.js +122 -0
- package/dist/core/adapters/codex.test.js.map +1 -0
- package/dist/core/adapters/universal.d.ts +10 -0
- package/dist/core/adapters/universal.d.ts.map +1 -0
- package/dist/core/adapters/universal.js +45 -0
- package/dist/core/adapters/universal.js.map +1 -0
- package/dist/core/adapters/universal.test.d.ts +2 -0
- package/dist/core/adapters/universal.test.d.ts.map +1 -0
- package/dist/core/adapters/universal.test.js +121 -0
- package/dist/core/adapters/universal.test.js.map +1 -0
- package/dist/core/assembly/context-gatherer.d.ts +17 -0
- package/dist/core/assembly/context-gatherer.d.ts.map +1 -0
- package/dist/core/assembly/context-gatherer.js +49 -0
- package/dist/core/assembly/context-gatherer.js.map +1 -0
- package/dist/core/assembly/context-gatherer.test.d.ts +2 -0
- package/dist/core/assembly/context-gatherer.test.d.ts.map +1 -0
- package/dist/core/assembly/context-gatherer.test.js +252 -0
- package/dist/core/assembly/context-gatherer.test.js.map +1 -0
- package/dist/core/assembly/depth-resolver.d.ts +11 -0
- package/dist/core/assembly/depth-resolver.d.ts.map +1 -0
- package/dist/core/assembly/depth-resolver.js +23 -0
- package/dist/core/assembly/depth-resolver.js.map +1 -0
- package/dist/core/assembly/depth-resolver.test.d.ts +2 -0
- package/dist/core/assembly/depth-resolver.test.d.ts.map +1 -0
- package/dist/core/assembly/depth-resolver.test.js +100 -0
- package/dist/core/assembly/depth-resolver.test.js.map +1 -0
- package/dist/core/assembly/engine.d.ts +22 -0
- package/dist/core/assembly/engine.d.ts.map +1 -0
- package/dist/core/assembly/engine.js +215 -0
- package/dist/core/assembly/engine.js.map +1 -0
- package/dist/core/assembly/engine.test.d.ts +2 -0
- package/dist/core/assembly/engine.test.d.ts.map +1 -0
- package/dist/core/assembly/engine.test.js +462 -0
- package/dist/core/assembly/engine.test.js.map +1 -0
- package/dist/core/assembly/instruction-loader.d.ts +16 -0
- package/dist/core/assembly/instruction-loader.d.ts.map +1 -0
- package/dist/core/assembly/instruction-loader.js +40 -0
- package/dist/core/assembly/instruction-loader.js.map +1 -0
- package/dist/core/assembly/instruction-loader.test.d.ts +2 -0
- package/dist/core/assembly/instruction-loader.test.d.ts.map +1 -0
- package/dist/core/assembly/instruction-loader.test.js +109 -0
- package/dist/core/assembly/instruction-loader.test.js.map +1 -0
- package/dist/core/assembly/knowledge-loader.d.ts +34 -0
- package/dist/core/assembly/knowledge-loader.d.ts.map +1 -0
- package/dist/core/assembly/knowledge-loader.js +204 -0
- package/dist/core/assembly/knowledge-loader.js.map +1 -0
- package/dist/core/assembly/knowledge-loader.test.d.ts +2 -0
- package/dist/core/assembly/knowledge-loader.test.d.ts.map +1 -0
- package/dist/core/assembly/knowledge-loader.test.js +205 -0
- package/dist/core/assembly/knowledge-loader.test.js.map +1 -0
- package/dist/core/assembly/meta-prompt-loader.d.ts +13 -0
- package/dist/core/assembly/meta-prompt-loader.d.ts.map +1 -0
- package/dist/core/assembly/meta-prompt-loader.js +91 -0
- package/dist/core/assembly/meta-prompt-loader.js.map +1 -0
- package/dist/core/assembly/meta-prompt-loader.test.d.ts +2 -0
- package/dist/core/assembly/meta-prompt-loader.test.d.ts.map +1 -0
- package/dist/core/assembly/meta-prompt-loader.test.js +232 -0
- package/dist/core/assembly/meta-prompt-loader.test.js.map +1 -0
- package/dist/core/assembly/methodology-change.d.ts +27 -0
- package/dist/core/assembly/methodology-change.d.ts.map +1 -0
- package/dist/core/assembly/methodology-change.js +41 -0
- package/dist/core/assembly/methodology-change.js.map +1 -0
- package/dist/core/assembly/methodology-change.test.d.ts +2 -0
- package/dist/core/assembly/methodology-change.test.d.ts.map +1 -0
- package/dist/core/assembly/methodology-change.test.js +145 -0
- package/dist/core/assembly/methodology-change.test.js.map +1 -0
- package/dist/core/assembly/methodology-resolver.d.ts +11 -0
- package/dist/core/assembly/methodology-resolver.d.ts.map +1 -0
- package/dist/core/assembly/methodology-resolver.js +19 -0
- package/dist/core/assembly/methodology-resolver.js.map +1 -0
- package/dist/core/assembly/methodology-resolver.test.d.ts +2 -0
- package/dist/core/assembly/methodology-resolver.test.d.ts.map +1 -0
- package/dist/core/assembly/methodology-resolver.test.js +87 -0
- package/dist/core/assembly/methodology-resolver.test.js.map +1 -0
- package/dist/core/assembly/preset-loader.d.ts +26 -0
- package/dist/core/assembly/preset-loader.d.ts.map +1 -0
- package/dist/core/assembly/preset-loader.js +146 -0
- package/dist/core/assembly/preset-loader.js.map +1 -0
- package/dist/core/assembly/preset-loader.test.d.ts +2 -0
- package/dist/core/assembly/preset-loader.test.d.ts.map +1 -0
- package/dist/core/assembly/preset-loader.test.js +107 -0
- package/dist/core/assembly/preset-loader.test.js.map +1 -0
- package/dist/core/assembly/update-mode.d.ts +25 -0
- package/dist/core/assembly/update-mode.d.ts.map +1 -0
- package/dist/core/assembly/update-mode.js +70 -0
- package/dist/core/assembly/update-mode.js.map +1 -0
- package/dist/core/assembly/update-mode.test.d.ts +2 -0
- package/dist/core/assembly/update-mode.test.d.ts.map +1 -0
- package/dist/core/assembly/update-mode.test.js +235 -0
- package/dist/core/assembly/update-mode.test.js.map +1 -0
- package/dist/core/dependency/dependency.d.ts +20 -0
- package/dist/core/dependency/dependency.d.ts.map +1 -0
- package/dist/core/dependency/dependency.js +104 -0
- package/dist/core/dependency/dependency.js.map +1 -0
- package/dist/core/dependency/dependency.test.d.ts +2 -0
- package/dist/core/dependency/dependency.test.d.ts.map +1 -0
- package/dist/core/dependency/dependency.test.js +166 -0
- package/dist/core/dependency/dependency.test.js.map +1 -0
- package/dist/core/dependency/eligibility.d.ts +17 -0
- package/dist/core/dependency/eligibility.d.ts.map +1 -0
- package/dist/core/dependency/eligibility.js +60 -0
- package/dist/core/dependency/eligibility.js.map +1 -0
- package/dist/core/dependency/eligibility.test.d.ts +2 -0
- package/dist/core/dependency/eligibility.test.d.ts.map +1 -0
- package/dist/core/dependency/eligibility.test.js +198 -0
- package/dist/core/dependency/eligibility.test.js.map +1 -0
- package/dist/core/dependency/graph.d.ts +12 -0
- package/dist/core/dependency/graph.d.ts.map +1 -0
- package/dist/core/dependency/graph.js +34 -0
- package/dist/core/dependency/graph.js.map +1 -0
- package/dist/core/knowledge/knowledge-update-assembler.d.ts +24 -0
- package/dist/core/knowledge/knowledge-update-assembler.d.ts.map +1 -0
- package/dist/core/knowledge/knowledge-update-assembler.js +46 -0
- package/dist/core/knowledge/knowledge-update-assembler.js.map +1 -0
- package/dist/core/knowledge/knowledge-update-assembler.test.d.ts +2 -0
- package/dist/core/knowledge/knowledge-update-assembler.test.d.ts.map +1 -0
- package/dist/core/knowledge/knowledge-update-assembler.test.js +93 -0
- package/dist/core/knowledge/knowledge-update-assembler.test.js.map +1 -0
- package/dist/core/knowledge/knowledge-update-template.md +55 -0
- package/dist/dashboard/generator.d.ts +37 -0
- package/dist/dashboard/generator.d.ts.map +1 -0
- package/dist/dashboard/generator.js +42 -0
- package/dist/dashboard/generator.js.map +1 -0
- package/dist/dashboard/generator.test.d.ts +2 -0
- package/dist/dashboard/generator.test.d.ts.map +1 -0
- package/dist/dashboard/generator.test.js +186 -0
- package/dist/dashboard/generator.test.js.map +1 -0
- package/dist/dashboard/template.d.ts +4 -0
- package/dist/dashboard/template.d.ts.map +1 -0
- package/dist/dashboard/template.js +190 -0
- package/dist/dashboard/template.js.map +1 -0
- package/dist/e2e/commands.test.d.ts +9 -0
- package/dist/e2e/commands.test.d.ts.map +1 -0
- package/dist/e2e/commands.test.js +499 -0
- package/dist/e2e/commands.test.js.map +1 -0
- package/dist/e2e/init.test.d.ts +10 -0
- package/dist/e2e/init.test.d.ts.map +1 -0
- package/dist/e2e/init.test.js +180 -0
- package/dist/e2e/init.test.js.map +1 -0
- package/dist/e2e/knowledge.test.d.ts +2 -0
- package/dist/e2e/knowledge.test.d.ts.map +1 -0
- package/dist/e2e/knowledge.test.js +103 -0
- package/dist/e2e/knowledge.test.js.map +1 -0
- package/dist/e2e/pipeline.test.d.ts +8 -0
- package/dist/e2e/pipeline.test.d.ts.map +1 -0
- package/dist/e2e/pipeline.test.js +295 -0
- package/dist/e2e/pipeline.test.js.map +1 -0
- package/dist/index.d.ts +3 -0
- package/dist/index.d.ts.map +1 -0
- package/dist/index.js +7 -0
- package/dist/index.js.map +1 -0
- package/dist/project/adopt.d.ts +28 -0
- package/dist/project/adopt.d.ts.map +1 -0
- package/dist/project/adopt.js +49 -0
- package/dist/project/adopt.js.map +1 -0
- package/dist/project/adopt.test.d.ts +2 -0
- package/dist/project/adopt.test.d.ts.map +1 -0
- package/dist/project/adopt.test.js +220 -0
- package/dist/project/adopt.test.js.map +1 -0
- package/dist/project/claude-md.d.ts +33 -0
- package/dist/project/claude-md.d.ts.map +1 -0
- package/dist/project/claude-md.js +112 -0
- package/dist/project/claude-md.js.map +1 -0
- package/dist/project/claude-md.test.d.ts +2 -0
- package/dist/project/claude-md.test.d.ts.map +1 -0
- package/dist/project/claude-md.test.js +151 -0
- package/dist/project/claude-md.test.js.map +1 -0
- package/dist/project/detector.d.ts +7 -0
- package/dist/project/detector.d.ts.map +1 -0
- package/dist/project/detector.js +78 -0
- package/dist/project/detector.js.map +1 -0
- package/dist/project/detector.test.d.ts +2 -0
- package/dist/project/detector.test.d.ts.map +1 -0
- package/dist/project/detector.test.js +137 -0
- package/dist/project/detector.test.js.map +1 -0
- package/dist/project/frontmatter.d.ts +17 -0
- package/dist/project/frontmatter.d.ts.map +1 -0
- package/dist/project/frontmatter.js +236 -0
- package/dist/project/frontmatter.js.map +1 -0
- package/dist/project/frontmatter.test.d.ts +2 -0
- package/dist/project/frontmatter.test.d.ts.map +1 -0
- package/dist/project/frontmatter.test.js +218 -0
- package/dist/project/frontmatter.test.js.map +1 -0
- package/dist/project/signals.d.ts +12 -0
- package/dist/project/signals.d.ts.map +1 -0
- package/dist/project/signals.js +2 -0
- package/dist/project/signals.js.map +1 -0
- package/dist/state/completion.d.ts +22 -0
- package/dist/state/completion.d.ts.map +1 -0
- package/dist/state/completion.js +82 -0
- package/dist/state/completion.js.map +1 -0
- package/dist/state/completion.test.d.ts +2 -0
- package/dist/state/completion.test.d.ts.map +1 -0
- package/dist/state/completion.test.js +246 -0
- package/dist/state/completion.test.js.map +1 -0
- package/dist/state/decision-logger.d.ts +16 -0
- package/dist/state/decision-logger.d.ts.map +1 -0
- package/dist/state/decision-logger.js +80 -0
- package/dist/state/decision-logger.js.map +1 -0
- package/dist/state/decision-logger.test.d.ts +2 -0
- package/dist/state/decision-logger.test.d.ts.map +1 -0
- package/dist/state/decision-logger.test.js +182 -0
- package/dist/state/decision-logger.test.js.map +1 -0
- package/dist/state/lock-manager.d.ts +18 -0
- package/dist/state/lock-manager.d.ts.map +1 -0
- package/dist/state/lock-manager.js +134 -0
- package/dist/state/lock-manager.js.map +1 -0
- package/dist/state/lock-manager.test.d.ts +2 -0
- package/dist/state/lock-manager.test.d.ts.map +1 -0
- package/dist/state/lock-manager.test.js +190 -0
- package/dist/state/lock-manager.test.js.map +1 -0
- package/dist/state/state-manager.d.ts +37 -0
- package/dist/state/state-manager.d.ts.map +1 -0
- package/dist/state/state-manager.js +125 -0
- package/dist/state/state-manager.js.map +1 -0
- package/dist/state/state-manager.test.d.ts +2 -0
- package/dist/state/state-manager.test.d.ts.map +1 -0
- package/dist/state/state-manager.test.js +240 -0
- package/dist/state/state-manager.test.js.map +1 -0
- package/dist/types/adapter.d.ts +24 -0
- package/dist/types/adapter.d.ts.map +1 -0
- package/dist/types/adapter.js +2 -0
- package/dist/types/adapter.js.map +1 -0
- package/dist/types/assembly.d.ts +89 -0
- package/dist/types/assembly.d.ts.map +1 -0
- package/dist/types/assembly.js +2 -0
- package/dist/types/assembly.js.map +1 -0
- package/dist/types/claude-md.d.ts +11 -0
- package/dist/types/claude-md.d.ts.map +1 -0
- package/dist/types/claude-md.js +2 -0
- package/dist/types/claude-md.js.map +1 -0
- package/dist/types/cli.d.ts +15 -0
- package/dist/types/cli.d.ts.map +1 -0
- package/dist/types/cli.js +2 -0
- package/dist/types/cli.js.map +1 -0
- package/dist/types/config.d.ts +40 -0
- package/dist/types/config.d.ts.map +1 -0
- package/dist/types/config.js +2 -0
- package/dist/types/config.js.map +1 -0
- package/dist/types/decision.d.ts +14 -0
- package/dist/types/decision.d.ts.map +1 -0
- package/dist/types/decision.js +2 -0
- package/dist/types/decision.js.map +1 -0
- package/dist/types/dependency.d.ts +12 -0
- package/dist/types/dependency.d.ts.map +1 -0
- package/dist/types/dependency.js +2 -0
- package/dist/types/dependency.js.map +1 -0
- package/dist/types/enums.d.ts +23 -0
- package/dist/types/enums.d.ts.map +1 -0
- package/dist/types/enums.js +11 -0
- package/dist/types/enums.js.map +1 -0
- package/dist/types/enums.test.d.ts +2 -0
- package/dist/types/enums.test.d.ts.map +1 -0
- package/dist/types/enums.test.js +13 -0
- package/dist/types/enums.test.js.map +1 -0
- package/dist/types/errors.d.ts +24 -0
- package/dist/types/errors.d.ts.map +1 -0
- package/dist/types/errors.js +2 -0
- package/dist/types/errors.js.map +1 -0
- package/dist/types/frontmatter.d.ts +43 -0
- package/dist/types/frontmatter.d.ts.map +1 -0
- package/dist/types/frontmatter.js +2 -0
- package/dist/types/frontmatter.js.map +1 -0
- package/dist/types/index.d.ts +14 -0
- package/dist/types/index.d.ts.map +1 -0
- package/dist/types/index.js +14 -0
- package/dist/types/index.js.map +1 -0
- package/dist/types/lock.d.ts +10 -0
- package/dist/types/lock.d.ts.map +1 -0
- package/dist/types/lock.js +2 -0
- package/dist/types/lock.js.map +1 -0
- package/dist/types/state.d.ts +49 -0
- package/dist/types/state.d.ts.map +1 -0
- package/dist/types/state.js +2 -0
- package/dist/types/state.js.map +1 -0
- package/dist/types/wizard.d.ts +14 -0
- package/dist/types/wizard.d.ts.map +1 -0
- package/dist/types/wizard.js +2 -0
- package/dist/types/wizard.js.map +1 -0
- package/dist/utils/errors.d.ts +42 -0
- package/dist/utils/errors.d.ts.map +1 -0
- package/dist/utils/errors.js +232 -0
- package/dist/utils/errors.js.map +1 -0
- package/dist/utils/errors.test.d.ts +2 -0
- package/dist/utils/errors.test.d.ts.map +1 -0
- package/dist/utils/errors.test.js +91 -0
- package/dist/utils/errors.test.js.map +1 -0
- package/dist/utils/fs.d.ts +11 -0
- package/dist/utils/fs.d.ts.map +1 -0
- package/dist/utils/fs.js +20 -0
- package/dist/utils/fs.js.map +1 -0
- package/dist/utils/fs.test.d.ts +2 -0
- package/dist/utils/fs.test.d.ts.map +1 -0
- package/dist/utils/fs.test.js +93 -0
- package/dist/utils/fs.test.js.map +1 -0
- package/dist/utils/index.d.ts +4 -0
- package/dist/utils/index.d.ts.map +1 -0
- package/dist/utils/index.js +4 -0
- package/dist/utils/index.js.map +1 -0
- package/dist/utils/levenshtein.d.ts +11 -0
- package/dist/utils/levenshtein.d.ts.map +1 -0
- package/dist/utils/levenshtein.js +37 -0
- package/dist/utils/levenshtein.js.map +1 -0
- package/dist/utils/levenshtein.test.d.ts +2 -0
- package/dist/utils/levenshtein.test.d.ts.map +1 -0
- package/dist/utils/levenshtein.test.js +34 -0
- package/dist/utils/levenshtein.test.js.map +1 -0
- package/dist/validation/config-validator.d.ts +10 -0
- package/dist/validation/config-validator.d.ts.map +1 -0
- package/dist/validation/config-validator.js +11 -0
- package/dist/validation/config-validator.js.map +1 -0
- package/dist/validation/dependency-validator.d.ts +10 -0
- package/dist/validation/dependency-validator.d.ts.map +1 -0
- package/dist/validation/dependency-validator.js +34 -0
- package/dist/validation/dependency-validator.js.map +1 -0
- package/dist/validation/frontmatter-validator.d.ts +12 -0
- package/dist/validation/frontmatter-validator.d.ts.map +1 -0
- package/dist/validation/frontmatter-validator.js +50 -0
- package/dist/validation/frontmatter-validator.js.map +1 -0
- package/dist/validation/index.d.ts +19 -0
- package/dist/validation/index.d.ts.map +1 -0
- package/dist/validation/index.js +64 -0
- package/dist/validation/index.js.map +1 -0
- package/dist/validation/index.test.d.ts +2 -0
- package/dist/validation/index.test.d.ts.map +1 -0
- package/dist/validation/index.test.js +241 -0
- package/dist/validation/index.test.js.map +1 -0
- package/dist/validation/state-validator.d.ts +15 -0
- package/dist/validation/state-validator.d.ts.map +1 -0
- package/dist/validation/state-validator.js +104 -0
- package/dist/validation/state-validator.js.map +1 -0
- package/dist/wizard/questions.d.ts +18 -0
- package/dist/wizard/questions.d.ts.map +1 -0
- package/dist/wizard/questions.js +46 -0
- package/dist/wizard/questions.js.map +1 -0
- package/dist/wizard/suggestion.d.ts +10 -0
- package/dist/wizard/suggestion.d.ts.map +1 -0
- package/dist/wizard/suggestion.js +17 -0
- package/dist/wizard/suggestion.js.map +1 -0
- package/dist/wizard/wizard.d.ts +19 -0
- package/dist/wizard/wizard.d.ts.map +1 -0
- package/dist/wizard/wizard.js +104 -0
- package/dist/wizard/wizard.js.map +1 -0
- package/dist/wizard/wizard.test.d.ts +2 -0
- package/dist/wizard/wizard.test.d.ts.map +1 -0
- package/dist/wizard/wizard.test.js +167 -0
- package/dist/wizard/wizard.test.js.map +1 -0
- package/knowledge/core/adr-craft.md +281 -0
- package/knowledge/core/api-design.md +501 -0
- package/knowledge/core/database-design.md +380 -0
- package/knowledge/core/domain-modeling.md +317 -0
- package/knowledge/core/operations-runbook.md +513 -0
- package/knowledge/core/security-review.md +523 -0
- package/knowledge/core/system-architecture.md +402 -0
- package/knowledge/core/task-decomposition.md +372 -0
- package/knowledge/core/testing-strategy.md +409 -0
- package/knowledge/core/user-stories.md +337 -0
- package/knowledge/core/user-story-innovation.md +171 -0
- package/knowledge/core/ux-specification.md +380 -0
- package/knowledge/finalization/apply-fixes-and-freeze.md +93 -0
- package/knowledge/finalization/developer-onboarding.md +376 -0
- package/knowledge/finalization/implementation-playbook.md +404 -0
- package/knowledge/product/gap-analysis.md +305 -0
- package/knowledge/product/prd-craft.md +324 -0
- package/knowledge/product/prd-innovation.md +204 -0
- package/knowledge/review/review-adr.md +203 -0
- package/knowledge/review/review-api-contracts.md +233 -0
- package/knowledge/review/review-database-schema.md +229 -0
- package/knowledge/review/review-domain-modeling.md +288 -0
- package/knowledge/review/review-implementation-tasks.md +202 -0
- package/knowledge/review/review-methodology.md +215 -0
- package/knowledge/review/review-operations.md +212 -0
- package/knowledge/review/review-prd.md +235 -0
- package/knowledge/review/review-security.md +213 -0
- package/knowledge/review/review-system-architecture.md +296 -0
- package/knowledge/review/review-testing-strategy.md +176 -0
- package/knowledge/review/review-user-stories.md +172 -0
- package/knowledge/review/review-ux-spec.md +208 -0
- package/knowledge/validation/critical-path-analysis.md +203 -0
- package/knowledge/validation/cross-phase-consistency.md +181 -0
- package/knowledge/validation/decision-completeness.md +218 -0
- package/knowledge/validation/dependency-validation.md +233 -0
- package/knowledge/validation/implementability-review.md +252 -0
- package/knowledge/validation/scope-management.md +223 -0
- package/knowledge/validation/traceability.md +198 -0
- package/methodology/custom-defaults.yml +43 -0
- package/methodology/deep.yml +42 -0
- package/methodology/mvp.yml +42 -0
- package/package.json +58 -0
- package/pipeline/architecture/review-architecture.md +44 -0
- package/pipeline/architecture/system-architecture.md +45 -0
- package/pipeline/decisions/adrs.md +45 -0
- package/pipeline/decisions/review-adrs.md +39 -0
- package/pipeline/finalization/apply-fixes-and-freeze.md +39 -0
- package/pipeline/finalization/developer-onboarding-guide.md +36 -0
- package/pipeline/finalization/implementation-playbook.md +45 -0
- package/pipeline/modeling/domain-modeling.md +57 -0
- package/pipeline/modeling/review-domain-modeling.md +41 -0
- package/pipeline/planning/implementation-tasks.md +57 -0
- package/pipeline/planning/review-tasks.md +38 -0
- package/pipeline/pre/create-prd.md +45 -0
- package/pipeline/pre/innovate-prd.md +47 -0
- package/pipeline/pre/innovate-user-stories.md +47 -0
- package/pipeline/pre/review-prd.md +44 -0
- package/pipeline/pre/review-user-stories.md +43 -0
- package/pipeline/pre/user-stories.md +48 -0
- package/pipeline/quality/operations.md +42 -0
- package/pipeline/quality/review-operations.md +37 -0
- package/pipeline/quality/review-security.md +40 -0
- package/pipeline/quality/review-testing.md +39 -0
- package/pipeline/quality/security.md +44 -0
- package/pipeline/quality/testing-strategy.md +42 -0
- package/pipeline/specification/api-contracts.md +44 -0
- package/pipeline/specification/database-schema.md +41 -0
- package/pipeline/specification/review-api.md +40 -0
- package/pipeline/specification/review-database.md +39 -0
- package/pipeline/specification/review-ux.md +38 -0
- package/pipeline/specification/ux-spec.md +43 -0
- package/pipeline/validation/critical-path-walkthrough.md +37 -0
- package/pipeline/validation/cross-phase-consistency.md +35 -0
- package/pipeline/validation/decision-completeness.md +36 -0
- package/pipeline/validation/dependency-graph-validation.md +36 -0
- package/pipeline/validation/implementability-dry-run.md +36 -0
- package/pipeline/validation/scope-creep-check.md +38 -0
- package/pipeline/validation/traceability-matrix.md +36 -0

@@ -0,0 +1,202 @@
---
name: review-implementation-tasks
description: Failure modes and review passes specific to implementation tasks artifacts
topics: [review, tasks, planning, decomposition, agents]
---

# Review: Implementation Tasks

The implementation tasks document translates the architecture into discrete, actionable work items that AI agents can execute. Each task must be self-contained enough for a single agent session, correctly ordered by dependency, and clear enough to implement without asking questions. This review uses 7 passes targeting the specific ways implementation tasks fail.

Follows the review process defined in `review-methodology.md`.

---

## Pass 1: Architecture Coverage

### What to Check

Every architectural component, module, and integration point has corresponding implementation tasks. No part of the architecture is left without work items.

### Why This Matters

Uncovered components are discovered during implementation when an agent realizes a dependency has no task. This blocks the agent, creates unplanned work, and disrupts the critical path. Coverage gaps typically occur in cross-cutting concerns (logging, error handling, auth middleware) and infrastructure (CI/CD, deployment, database migrations).

### How to Check

1. List every component from the system architecture document
2. For each component, find implementation tasks that cover it
3. Flag components with no corresponding tasks
4. Check cross-cutting concerns: logging, error handling, authentication/authorization middleware, configuration management, health checks
5. Check infrastructure tasks: database migration scripts, CI/CD pipeline setup, deployment configuration, environment setup
6. Check integration tasks: component-to-component wiring, API client generation, event bus configuration
7. Verify that testing tasks exist alongside implementation tasks (not deferred to "later")

### What a Finding Looks Like

- P0: "Architecture describes an 'API Gateway' component with routing, rate limiting, and auth validation, but no implementation tasks exist for it. Five downstream tasks assume it exists."
- P1: "Database migration tasks cover schema creation but no task covers seed data or test fixtures. The testing strategy requires test data."
- P2: "Logging infrastructure is mentioned in architecture but has no dedicated task. Individual component tasks may handle it ad hoc, creating inconsistent logging."
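
The coverage walk in steps 1-3 can be mechanized. A minimal sketch, assuming each task declares which architecture components it covers (the `covers` field is an illustrative assumption, not part of any mandated schema):

```typescript
interface Task {
  id: string;
  covers: string[]; // architecture component names this task implements (assumed field)
}

// Return the architecture components that no task claims to cover.
function uncoveredComponents(components: string[], tasks: Task[]): string[] {
  const covered = new Set(tasks.flatMap((t) => t.covers));
  return components.filter((c) => !covered.has(c));
}
```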

---

## Pass 2: Missing Dependencies

### What to Check

Task dependencies are complete and correct. No task assumes a prerequisite that is not listed as a dependency. No circular dependencies exist.

### Why This Matters

Missing dependencies cause agents to start work that immediately blocks — the agent picks up a task, discovers it depends on something not yet built, and wastes a session. Circular dependencies make it impossible to determine a valid execution order. Both destroy parallelization efficiency.

### How to Check

1. For each task, read its description and acceptance criteria
2. Identify everything the task needs to exist before it can start (database tables, API endpoints, shared libraries, configuration)
3. Verify each prerequisite is listed as a dependency
4. Check for implicit dependencies: "implement user dashboard" implicitly depends on "implement user authentication" — is this explicit?
5. Build the full dependency graph and check for cycles
6. Verify that the dependency graph has at least one task with no dependencies (the starting point)
7. Check for over-specified dependencies: tasks blocked on things they do not actually need, creating artificial bottlenecks

### What a Finding Looks Like

- P0: "Task 'Implement order API endpoints' has no dependency on 'Create database schema.' The API task cannot start without tables to query."
- P1: "Tasks 'Implement user service' and 'Implement auth middleware' depend on each other. Circular dependency — determine which can be built first with a mock."
- P2: "Task 'Build product listing page' lists 'Deploy staging environment' as a dependency. This is over-specified — the page can be built and tested locally."
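
Steps 5 and 6 are mechanical graph checks. A minimal sketch in the spirit of Kahn's algorithm (the `deps` shape is an assumption about how dependencies are recorded): repeatedly schedule tasks whose prerequisites are all done; anything left over is stuck in, or behind, a cycle.

```typescript
// deps maps each task id to the ids it depends on.
// Returns task ids that can never be scheduled; an empty result means the graph is acyclic.
function unschedulable(deps: Map<string, string[]>): string[] {
  // Pending prerequisites per task, consumed as tasks become schedulable.
  const pending = new Map([...deps].map(([id, d]) => [id, new Set(d)] as const));
  let progressed = true;
  while (progressed) {
    progressed = false;
    for (const [id, prereqs] of [...pending]) {
      if (prereqs.size === 0) {
        pending.delete(id); // id can be scheduled
        for (const rest of pending.values()) rest.delete(id); // unblock its dependents
        progressed = true;
      }
    }
  }
  return [...pending.keys()]; // tasks stuck in or behind a cycle
}
```

If the result is non-empty but no direct cycle is obvious, the stuck tasks may simply depend on something in the cycle; inspect the smallest stuck subset first.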

---

## Pass 3: Task Sizing

### What to Check

No task is too large for a single agent session (typically 30-60 minutes of focused work). No task is too small to be meaningful (trivial one-line changes should be grouped). Tasks have a clear scope boundary.

### Why This Matters

Too-large tasks exceed agent context windows and session limits. The agent runs out of context mid-task, produces incomplete work, and the next session must understand and continue partial progress — which is error-prone. Too-small tasks create overhead (setup, context loading, validation) that exceeds the actual work.

### How to Check

1. For each task, estimate the implementation scope: how many files touched, how many functions written, how much logic?
2. Flag tasks that involve more than one major component or module — these are likely too large
3. Flag tasks that involve more than 5-7 files — these may exceed agent context
4. Flag tasks that are trivial (rename a variable, update a config value) — these should be grouped into a larger task
5. Check that each task has a clear boundary: when does the agent stop? "Implement the order module" has no clear boundary; "Implement order creation endpoint with validation" does
6. Verify that tasks do not mix concerns: a single task should not be "implement auth AND set up database"

### What a Finding Looks Like

- P0: "Task 'Implement the entire backend' is a single task covering 15 architectural components, 40+ files, and hundreds of functions. This must be decomposed into component-level tasks."
- P1: "Task 'Set up user service with authentication, authorization, profile management, and email verification' covers four distinct features. Split into separate tasks."
- P2: "Task 'Update README with API documentation link' is a one-line change. Group with other documentation tasks."
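
The file-count heuristics in steps 3-4 are easy to automate once tasks estimate the files they touch. A rough sketch (the `files` field is an assumed annotation, and the thresholds are the heuristics above, not hard rules):

```typescript
interface SizedTask {
  id: string;
  files: string[]; // files the task is expected to touch (assumed annotation)
}

// Flag tasks that look too large (> maxFiles) or trivially small (<= 1 file).
function sizingFlags(tasks: SizedTask[], maxFiles = 7): { id: string; flag: string }[] {
  const flags: { id: string; flag: string }[] = [];
  for (const t of tasks) {
    if (t.files.length > maxFiles) flags.push({ id: t.id, flag: "too-large" });
    else if (t.files.length <= 1) flags.push({ id: t.id, flag: "possibly-trivial" });
  }
  return flags;
}
```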

---

## Pass 4: Acceptance Criteria

### What to Check

Every task has clear, testable acceptance criteria that define "done." Criteria are specific enough that an agent can verify its own work.

### Why This Matters

Without acceptance criteria, agents do not know when to stop. They either under-deliver (missing edge cases, skipping error handling) or over-deliver (adding features not asked for, over-engineering). Clear criteria also enable automated verification — if the criteria are testable, CI can validate them.

### How to Check

1. For each task, read the acceptance criteria
2. Check that criteria are testable assertions, not vague goals: "user can log in" is vague; "POST /auth/login returns 200 with JWT token when given valid credentials, 401 with error message when given invalid credentials" is testable
3. Verify criteria cover the happy path AND at least one error/edge case
4. Check that criteria reference specific inputs and expected outputs
5. Look for criteria that say "should work correctly" or "handle errors properly" — these are not actionable
6. Verify that criteria align with the API contract, database schema, and UX spec (no contradictions with upstream artifacts)

### What a Finding Looks Like

- P0: "Task 'Implement payment processing' has acceptance criteria: 'Payments should work.' This is untestable. Specify: which payment methods, what validation, what error responses, what idempotency behavior."
- P1: "Task 'Build user registration' criteria say 'user can register' but do not specify validation rules (password requirements, email format, duplicate handling)."
- P2: "Acceptance criteria reference 'standard error format' without specifying what that format is. Link to the error contract in the API spec."
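
The login criterion from step 2 shows what "testable" means in practice: it can be encoded as a check over an observed response. A hedged sketch (the response shape is an illustrative assumption, not the package's contract):

```typescript
interface LoginResponse {
  status: number;
  body: { token?: string; error?: string };
}

// Criterion: 200 plus a JWT on valid credentials; 401 plus an error message otherwise.
function meetsLoginCriterion(validCredentials: boolean, res: LoginResponse): boolean {
  if (validCredentials) return res.status === 200 && typeof res.body.token === "string";
  return res.status === 401 && typeof res.body.error === "string";
}
```

A criterion that cannot be written down this way ("payments should work") is the signal that the task needs sharper acceptance criteria.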

---

## Pass 5: Critical Path Accuracy

### What to Check

The identified critical path is actually the longest dependency chain. Moving tasks on/off the critical path would not shorten total project duration.

### Why This Matters

An incorrect critical path means optimization effort is misdirected. If the team parallelizes work on the perceived critical path but the actual bottleneck is elsewhere, total project duration does not improve. The critical path determines the minimum project duration — optimizing anything else has zero impact on delivery date.

### How to Check

1. Trace the longest dependency chain from start to finish — this is the critical path
2. Compare with the documented critical path — do they match?
3. Check for hidden long chains: integration tasks, end-to-end testing, deployment setup — these are often on the actual critical path but not recognized
4. Verify that critical path tasks are not blocked by non-critical tasks (this would extend the critical path)
5. Check for near-critical paths: chains that are only 1-2 tasks shorter than the critical path. These become the critical path if any task slips.
6. Verify that critical path tasks have clear owners and no ambiguity — these are the tasks that cannot afford delays

### What a Finding Looks Like

- P0: "The documented critical path is: schema -> API -> frontend. But the actual longest chain is: schema -> API -> integration tests -> deployment pipeline -> end-to-end tests, which is 2 tasks longer."
- P1: "Critical path task 'Implement auth service' depends on non-critical task 'Design admin dashboard.' This dependency makes the admin dashboard silently critical."
- P2: "Two dependency chains are within one task of the critical path length. These near-critical paths should be identified to guide resource allocation."
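
Tracing the longest chain in step 1 is a longest-path computation over the dependency DAG. A minimal sketch, assuming the graph is already cycle-free (each task counts as one unit of length here; per-task duration estimates would slot into the same recursion):

```typescript
// deps maps each task id to the ids it depends on (assumed acyclic).
// Returns the longest dependency chain, from a starting task to a terminal task.
function criticalPath(deps: Map<string, string[]>): string[] {
  const memo = new Map<string, string[]>();
  const chainTo = (id: string): string[] => {
    if (!memo.has(id)) {
      let best: string[] = [];
      for (const p of deps.get(id) ?? []) {
        const c = chainTo(p); // longest chain ending at prerequisite p
        if (c.length > best.length) best = c;
      }
      memo.set(id, [...best, id]);
    }
    return memo.get(id)!;
  };
  let longest: string[] = [];
  for (const id of deps.keys()) {
    const c = chainTo(id);
    if (c.length > longest.length) longest = c;
  }
  return longest;
}
```

Chains within 1-2 tasks of the returned length are the near-critical paths that step 5 asks about.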

---

## Pass 6: Parallelization Validity

### What to Check

Tasks marked as parallelizable are truly independent. They do not share state, modify the same files, or have undeclared dependencies on each other's output.

### Why This Matters

False parallelization causes merge conflicts, race conditions, and wasted work. If two agents build features that both modify the same shared module, their changes conflict at merge time. One agent's work may need to be redone. Worse, if both agents assume they own a shared resource, they may produce incompatible implementations.

### How to Check

1. For each set of tasks marked as parallel, check: do they modify the same files?
2. Check for shared state: do parallel tasks both write to the same database tables, configuration files, or shared modules?
3. Check for shared dependencies: if both tasks depend on a shared library, will one task's changes to that library affect the other?
4. Verify that parallel tasks produce independent outputs that can be merged without conflict
5. Check for ordering assumptions: does parallel task A assume parallel task B has or has not completed?
6. Look for shared infrastructure: if both tasks need to modify CI/CD configuration, they will conflict

### What a Finding Looks Like

- P0: "Tasks 'Implement user service' and 'Implement auth middleware' are marked as parallel, but both modify 'src/middleware/index.ts'. These will produce merge conflicts."
- P1: "Tasks 'Build order API' and 'Build inventory API' are parallel but both need to modify the shared database connection configuration. Sequence the config setup first."
- P2: "Parallel tasks 'Build feature A' and 'Build feature B' both add entries to the routing table. Minor merge conflict risk — document the resolution strategy."
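
Step 1's file-overlap check is the easiest to automate, again assuming tasks list the files they expect to modify (an assumed annotation, same caveat as in Pass 1):

```typescript
interface ParallelTask {
  id: string;
  files: string[]; // files the task expects to modify (assumed annotation)
}

// Return (taskA, taskB, sharedFile) triples for "parallel" tasks that touch a common file.
function conflictingPairs(tasks: ParallelTask[]): [string, string, string][] {
  const conflicts: [string, string, string][] = [];
  for (let i = 0; i < tasks.length; i++) {
    for (let j = i + 1; j < tasks.length; j++) {
      const shared = tasks[i].files.filter((f) => tasks[j].files.includes(f));
      for (const f of shared) conflicts.push([tasks[i].id, tasks[j].id, f]);
    }
  }
  return conflicts;
}
```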

---

## Pass 7: Agent Context

### What to Check

Each task specifies which documents and artifacts the implementing agent should read before starting. The context is sufficient for the agent to complete the task without hunting for information.

### Why This Matters

AI agents have limited context windows. If a task does not specify what to read, the agent either loads too much context (wasting tokens, risking truncation) or too little (missing crucial design decisions). Explicit context references are the difference between an agent that executes efficiently and one that spends half its session discovering what it needs to know.

### How to Check

1. For each task, verify a context section lists the specific documents/sections to read
2. Check that the listed context is sufficient: does it cover the relevant architecture section, API contract, database schema, and UX spec for this task?
3. Check that the listed context is minimal: does it include only what is needed for this specific task, not the entire project documentation?
4. Verify that context references are specific: "docs/system-architecture.md, Section 3.2: Order Service" not just "docs/system-architecture.md"
5. Check for missing context: does the task require knowledge that is not in any listed document? (This may indicate a documentation gap)
6. Verify that coding standards, testing strategy, and git workflow references are included where relevant

### What a Finding Looks Like

- P0: "Task 'Implement order creation endpoint' lists no context documents. The agent needs the API contract (endpoint spec), database schema (orders table), domain model (Order aggregate invariants), and architecture section (Order Service design)."
- P1: "Task 'Build user dashboard' references the architecture document but not the UX spec. The agent will build the component structure correctly but not the visual design."
- P2: "Task context references 'docs/system-architecture.md' without specifying which section. The agent will load the entire 2000-line document instead of the relevant 100-line section."
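
Step 4's specificity check can be approximated mechanically: a context reference that names only a file, with no section, is a candidate P2 finding. A rough sketch (the "path, Section N" reference format is an assumption taken from the example in step 4):

```typescript
// A reference like "docs/system-architecture.md, Section 3.2: Order Service" is
// specific; a bare "docs/system-architecture.md" is not.
function vagueContextRefs(refs: string[]): string[] {
  return refs.filter((r) => !/section\s+[\d.]+/i.test(r));
}
```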

@@ -0,0 +1,215 @@
---
name: review-methodology
description: Shared process for conducting multi-pass reviews of documentation artifacts
topics: [review, methodology, quality-assurance, multi-pass]
---

# Review Methodology

This document defines the shared process for reviewing pipeline artifacts. It covers HOW to review, not WHAT to check — each artifact type has its own review knowledge base document with domain-specific passes and failure modes. Every review phase (1a through 10a) follows this process.

## Multi-Pass Review Structure

### Why Multiple Passes

A single read-through catches surface errors but misses structural problems. The human tendency (and the AI tendency) is to get anchored on the first issue found and lose track of the broader picture. Multi-pass review forces systematic coverage by constraining each pass to one failure mode category.

Each pass has a single focus: coverage, consistency, structural integrity, or downstream readiness. The reviewer re-reads the artifact with fresh eyes each time, looking for one thing. This is slower than a single pass but catches 3-5x more issues in practice.

### Pass Ordering

Order passes from broadest to most specific:

1. **Coverage passes first** — Is everything present that should be? Missing content is the highest-impact failure mode because it means entire aspects of the system are unspecified. Coverage gaps compound downstream: a missing domain in the domain modeling step means missing ADRs in the decisions step, missing components in the architecture step, missing tables in the specification step, and so on.

2. **Consistency passes second** — Does everything agree with itself and with upstream artifacts? Inconsistencies are the second-highest-impact failure because they create ambiguity for implementing agents. When two documents disagree, the agent guesses — and guesses wrong.

3. **Structural integrity passes third** — Is the artifact well-formed? Are relationships explicit? Are boundaries clean? Structural issues cause implementation friction: circular dependencies, unclear ownership, ambiguous boundaries.

4. **Downstream readiness last** — Can the next phase proceed? This pass validates that the artifact provides everything its consumers need. It is the gate that determines whether to proceed or iterate.

### Pass Execution

For each pass:

1. State the pass name and what you are looking for
2. Re-read the entire artifact (or the relevant sections) with only that lens
3. Record every finding, even if minor — categorize later
4. Do not fix anything during a pass — record only
5. After completing all findings for this pass, move to the next pass

Do not combine passes. The discipline of single-focus reading is the mechanism that catches issues a general-purpose review misses.
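
The record-only discipline in steps 3-4 can be enforced structurally: model a pass as a function that reads the artifact and returns findings, with no way to mutate the text. A minimal sketch (the type names are illustrative, not part of the package's API):

```typescript
type Severity = "P0" | "P1" | "P2" | "P3";

interface Finding {
  pass: string;
  severity: Severity;
  description: string;
  location: string;
}

// A pass reads the artifact and returns findings; it cannot modify the text.
type ReviewPass = (artifact: string) => Finding[];

// Run passes one at a time, in order, accumulating findings; no fixing between passes.
function runReview(artifact: string, passes: Map<string, ReviewPass>): Finding[] {
  const findings: Finding[] = [];
  for (const [name, pass] of passes) {
    findings.push(...pass(artifact).map((f) => ({ ...f, pass: name })));
  }
  return findings;
}
```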

## Finding Categorization

Every finding gets a severity level. Severity determines whether the finding blocks progress or gets deferred.

### P0: Blocks Next Phase

The artifact cannot be consumed by the next pipeline phase in its current state. The next phase would produce incorrect output or be unable to proceed.

**Examples:**
- A domain entity referenced by three other models is completely undefined
- An ADR contradicts another ADR with no acknowledgment, and the architecture depends on both
- A database schema is missing tables for an entire bounded context
- An API endpoint references a data type that does not exist in any domain model

**Action:** Must fix before proceeding. No exceptions.

### P1: Significant Gap

The artifact is usable but has a meaningful gap that will cause rework downstream. The next phase can proceed but will need to make assumptions that may be wrong.

**Examples:**
- An aggregate is missing one invariant that affects validation logic
- An ADR lists alternatives but does not evaluate them
- A data flow diagram omits error paths
- An API endpoint is missing error response definitions

**Action:** Should fix before proceeding. Fix unless the cost of fixing now significantly exceeds the cost of fixing during the downstream phase (rare).

### P2: Improvement Opportunity

The artifact is correct and usable but could be clearer, more precise, or better organized. The next phase can proceed without issue.

**Examples:**
- A domain model uses informal language where a precise definition would help
- An ADR's consequences section is vague but the decision is clear
- A diagram uses inconsistent notation but the meaning is unambiguous
- An API contract could benefit from more examples

**Action:** Fix if time permits. Log for future improvement.

### P3: Nice-to-Have

Stylistic, formatting, or polish issues. No impact on correctness or downstream consumption.

**Examples:**
- Inconsistent heading capitalization
- A diagram could be reformatted for readability
- A section could be reordered for flow
- Minor wording improvements

**Action:** Fix during finalization phase if at all. Do not spend review time on these.
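
The four levels reduce to one gating rule: P0 and P1 block the phase transition, P2 and P3 do not. A minimal sketch (this treats the rare deferred P1 as out of scope):

```typescript
type Severity = "P0" | "P1" | "P2" | "P3";

// P0 must be fixed and P1 should be fixed before the next phase; P2/P3 are logged.
function blocksNextPhase(severity: Severity): boolean {
  return severity === "P0" || severity === "P1";
}

function gatePasses(findings: { severity: Severity }[]): boolean {
  return findings.every((f) => !blocksNextPhase(f.severity));
}
```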

## Fix Planning

After all passes are complete and findings are categorized, create a fix plan before making any changes. Ad hoc fixing (fixing issues as you find them) risks:

- Introducing new issues while fixing old ones
- Fixing a symptom instead of a root cause (two findings may share one fix)
- Spending time on P2/P3 issues before P0/P1 are resolved

### Grouping Findings

Group related findings into fix batches:

1. **Same root cause** — Multiple findings that stem from a single missing concept, incorrect assumption, or structural issue. Fix the root cause once.
2. **Same section** — Findings in the same part of the artifact that can be addressed in a single editing pass.
3. **Same severity** — Process all P0s first, then P1s. Do not interleave.

### Prioritizing by Downstream Impact

Within the same severity level, prioritize fixes that have the most downstream impact:

- Fixes that affect multiple downstream phases rank higher than single-phase impacts
- Fixes that change structure (adding entities, changing boundaries) rank higher than fixes that change details (clarifying descriptions, adding examples)
- Fixes to artifacts consumed by many later phases rank higher (domain models affect everything; API contracts affect fewer phases)
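
The ordering rules above amount to a comparator: severity first, then downstream impact within a level. A minimal sketch (the numeric `impact` field is an assumption; in practice it might be the count of downstream phases a fix touches):

```typescript
interface FixBatch {
  theme: string;
  severity: "P0" | "P1" | "P2" | "P3";
  impact: number; // e.g. number of downstream phases affected (assumed field)
}

const SEVERITY_RANK = { P0: 0, P1: 1, P2: 2, P3: 3 };

// All P0 batches before P1, and within a level, higher downstream impact first.
function orderBatches(batches: FixBatch[]): FixBatch[] {
  return [...batches].sort(
    (a, b) => SEVERITY_RANK[a.severity] - SEVERITY_RANK[b.severity] || b.impact - a.impact,
  );
}
```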

### Fix Plan Format

```markdown
## Fix Plan

### Batch 1: [Root cause or theme] (P0)
- Finding 1.1: [description]
- Finding 1.3: [description]
- Fix approach: [what to change and why]
- Affected sections: [list]

### Batch 2: [Root cause or theme] (P0)
- Finding 2.1: [description]
- Fix approach: [what to change and why]
- Affected sections: [list]

### Batch 3: [Root cause or theme] (P1)
...
```

## Re-Validation

After applying all fixes in a batch, re-run the specific passes that produced the findings in that batch. This is not optional — fixes routinely introduce new issues.

### What to Check

1. The original findings are resolved (the specific issues no longer exist)
2. The fix did not break anything checked by the same pass (re-read the full pass scope, not just the fixed section)
3. The fix did not introduce inconsistencies with other parts of the artifact (quick consistency check)

### When to Stop

Re-validation is complete when:
- All P0 and P1 findings are resolved
- Re-validation produced no new P0 or P1 findings
- Any new P2/P3 findings are logged but do not block progress

If re-validation produces new P0/P1 findings, create a new fix batch and repeat. If this cycle repeats more than twice, the artifact likely has a structural problem that requires rethinking a section rather than patching individual issues.
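
The loop above, including the two-cycle escalation rule, can be sketched as control flow (the `applyFixes` and `reRunPasses` hooks are placeholders for the manual work, not real APIs):

```typescript
interface Finding {
  severity: "P0" | "P1" | "P2" | "P3";
}

type Outcome = "clean" | "needs-structural-rethink";

// Fix and re-validate until no blocking (P0/P1) findings remain,
// escalating once the fix/re-validate cycle has repeated more than twice.
function fixUntilClean(
  applyFixes: (findings: Finding[]) => void,
  reRunPasses: () => Finding[],
  initial: Finding[],
): Outcome {
  const blocking = (fs: Finding[]) => fs.filter((f) => f.severity === "P0" || f.severity === "P1");
  let open = blocking(initial);
  for (let cycle = 0; open.length > 0; cycle++) {
    if (cycle >= 2) return "needs-structural-rethink"; // a third cycle: rethink the section
    applyFixes(open);
    open = blocking(reRunPasses());
  }
  return "clean";
}
```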

## Downstream Readiness Gate

The final check in every review: can the next phase proceed with these artifacts?

### How to Evaluate

1. Read the meta-prompt for the next phase — what inputs does it require?
2. For each required input, verify the current artifact provides it with sufficient detail and clarity
3. For each quality criterion in the next phase's meta-prompt, verify the current artifact supports it
4. Identify any questions the next phase's author would need to ask — each question is a gap

### Gate Outcomes

- **Pass** — The next phase can proceed. All required information is present and unambiguous.
- **Conditional pass** — The next phase can proceed but should be aware of specific limitations or assumptions. Document these as handoff notes.
- **Fail** — The next phase cannot produce correct output. Specific gaps must be addressed first.

A conditional pass is the most common outcome. Document the conditions clearly so the next phase knows what assumptions it is inheriting.

## Review Report Format

Every review produces a structured report. This format ensures consistency across all review phases and makes it possible to track review quality over time.

```markdown
# Review Report: [Artifact Name]

## Executive Summary
[2-3 sentences: overall artifact quality, number of findings by severity,
whether downstream gate passed]

## Findings by Pass

### Pass N: [Pass Name]
| # | Severity | Finding | Location |
|---|----------|---------|----------|
| 1 | P0 | [description] | [section/line] |
| 2 | P1 | [description] | [section/line] |

### Pass N+1: [Pass Name]
...

## Fix Plan
[Grouped fix batches as described above]

## Fix Log
| Batch | Findings Addressed | Changes Made | New Issues |
|-------|-------------------|--------------|------------|
| 1 | 1.1, 1.3 | [summary] | None |
| 2 | 2.1 | [summary] | 2.1a (P2) |

## Re-Validation Results
[Which passes were re-run, what was found]

## Downstream Readiness Assessment
- **Gate result:** Pass | Conditional Pass | Fail
- **Handoff notes:** [specific items the next phase should be aware of]
- **Remaining P2/P3 items:** [count and brief summary, for future reference]
```

@@ -0,0 +1,212 @@
---
name: review-operations
description: Failure modes and review passes specific to operations and deployment runbook artifacts
topics: [review, operations, deployment, monitoring, runbooks]
---

# Review: Operations & Deployment

The operations runbook defines how the system is deployed, monitored, and maintained in production. It must cover the full deployment lifecycle, provide runbook procedures for common failure scenarios, and ensure the development environment reasonably mirrors production. This review uses 7 passes targeting the specific ways operations documentation fails.

Follows the review process defined in `review-methodology.md`.

---

## Pass 1: Deployment Strategy Completeness

### What to Check

The full deploy lifecycle is documented: how code gets from a merged PR to running in production. Build, test, stage, deploy, verify, and rollback stages are all covered.

### Why This Matters

An incomplete deployment strategy means the team is one configuration error away from a production outage with no recovery plan. Every gap in the deployment pipeline is a place where a deployment can fail silently — code that passes CI but is never actually deployed, deployments that succeed but skip health checks, environments that drift from the documented configuration.

### How to Check

1. Trace the deployment pipeline from commit to production: build step, test step, staging deployment, production deployment, post-deploy verification
2. Verify each stage has a clear trigger (manual, automatic), success criteria, and failure behavior
3. Check for environment progression: does code move through dev -> staging -> production? Can environments be skipped?
4. Verify that deployment artifacts are specified: Docker images, serverless packages, compiled binaries — what gets deployed?
5. Check for blue-green or canary deployment patterns if mentioned in ADRs — are they fully designed or just named?
6. Verify that deployment credentials, access controls, and approval requirements are documented (who can deploy, who can approve)
7. Check for database migration integration: when do migrations run relative to code deployment?

### What a Finding Looks Like

- P0: "Deployment pipeline shows build -> test -> production with no staging environment. There is no way to verify a deployment before it reaches production."
- P1: "Database migrations are not mentioned in the deployment pipeline. When do migrations run? Before or after the new code deploys? What happens if a migration fails mid-deploy?"
- P1: "Post-deploy verification is missing. After deployment, how is the team notified that the new version is healthy? No health check, no smoke test, no monitoring check."
- P2: "Deployment approvals are not specified. Can any developer deploy to production, or is approval required?"
|
|
41
|
+
|
|
42
|
+
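The stage-by-stage checks above can be sketched as a small completeness checker. This is a minimal illustration, not part of the methodology itself; the stage names, the required fields, and the pipeline dictionary shape are all assumptions:

```python
# Minimal sketch of Pass 1: verify every lifecycle stage is present and
# that each stage declares a trigger, success criteria, and failure
# behavior. Stage and field names are illustrative assumptions.

REQUIRED_STAGES = ["build", "test", "stage", "deploy", "verify", "rollback"]
REQUIRED_FIELDS = {"trigger", "success_criteria", "on_failure"}

def review_pipeline(pipeline: dict) -> list[str]:
    """Return a list of findings; an empty list means the strategy is complete."""
    findings = []
    for stage in REQUIRED_STAGES:
        if stage not in pipeline:
            findings.append(f"P0: stage '{stage}' is missing from the pipeline")
            continue
        missing = REQUIRED_FIELDS - pipeline[stage].keys()
        for field in sorted(missing):
            findings.append(f"P1: stage '{stage}' does not define '{field}'")
    return findings
```

A pipeline that stops at a manual deploy with no staging, verify, or rollback stages would surface the P0 findings described above.
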
---

## Pass 2: Rollback Procedures

### What to Check

Every deployment type has a corresponding rollback procedure. Rollback is tested (or at least testable), not just documented. Database rollbacks are addressed separately from code rollbacks.

### Why This Matters

Rollback is the emergency brake. When a deployment causes a production incident, the first response is to roll back. If the rollback procedure is untested, incomplete, or does not exist, the team is stuck debugging a production issue under pressure instead of reverting to a known-good state. Database rollbacks are especially critical — code can be swapped instantly, but data changes cannot.

### How to Check

1. For each deployment type (code deploy, database migration, configuration change, infrastructure change), verify a rollback procedure exists
2. Check that code rollback specifies the mechanism: redeploy previous version, revert container tag, infrastructure-as-code rollback
3. Check that database rollback addresses: can migrations be reversed? What about data migrations (not just schema)?
4. Verify rollback has a time estimate: how long does a rollback take?
5. Check for rollback testing: is the rollback procedure tested periodically, or only discovered during an incident?
6. Verify that partial deployment rollback is addressed: what if only 2 of 5 services deployed before the failure?
7. Check for data consistency during rollback: if the new code wrote data in a new format, does the old code handle it?

### What a Finding Looks Like

- P0: "No rollback procedure exists. If a deployment causes a production issue, the team has no documented way to revert."
- P0: "Database migration rollback says 'reverse the migration' but the migration drops a column. Column data is lost — rollback is impossible without a backup."
- P1: "Code rollback procedure exists but does not address database schema compatibility. Rolling back code to version N-1 while the database is at schema version N will cause errors."
- P2: "Rollback time estimate is missing. The team does not know whether rollback takes 30 seconds or 30 minutes."

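One way to mechanize checks 1, 3, and 4 is a sketch that flags deployment types with no rollback procedure and migrations that cannot be reversed without a backup. The deployment types, field names, and `reversible` flag are illustrative assumptions, not the shape of any specific migration tool:

```python
# Minimal sketch of Pass 2: every deployment type needs a rollback
# procedure with a time estimate, and irreversible migrations need to be
# called out explicitly. All field names here are assumptions.

DEPLOY_TYPES = ["code", "database", "configuration", "infrastructure"]

def review_rollback(procedures: dict, migrations: list[dict]) -> list[str]:
    findings = []
    for deploy_type in DEPLOY_TYPES:
        proc = procedures.get(deploy_type)
        if proc is None:
            findings.append(f"P0: no rollback procedure for {deploy_type} deployments")
        elif "time_estimate" not in proc:
            findings.append(f"P2: rollback for {deploy_type} has no time estimate")
    for migration in migrations:
        if not migration.get("reversible", False):
            findings.append(f"P0: migration '{migration['name']}' is irreversible; "
                            "rollback requires a backup restore")
    return findings
```
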
---

## Pass 3: Monitoring Coverage

### What to Check

All critical system metrics are identified, dashboards are defined, and monitoring covers infrastructure, application, and business metrics.

### Why This Matters

Without monitoring, production issues are discovered by users, not by the team. The time between "something breaks" and "the team knows about it" determines the blast radius of every incident. Monitoring must cover three layers: infrastructure (servers, containers, network), application (response times, error rates, throughput), and business (transaction volume, conversion rates, revenue).

### How to Check

1. Verify infrastructure metrics are specified: CPU, memory, disk, network, container health
2. Verify application metrics are specified: request rate, error rate, response time (p50, p95, p99), active connections
3. Check for business metrics: transaction volume, user signups, conversion rates — metrics that indicate the system is functioning correctly from a business perspective
4. Verify that every component from the architecture has at least one monitored metric
5. Check for dependency monitoring: are external services (databases, third-party APIs, message queues) monitored for availability and latency?
6. Verify that monitoring covers error categorization: not just "errors happened" but "what type of errors" (4xx vs. 5xx, timeout vs. validation)
7. Check for dashboard specifications: what dashboards exist, what do they show, who uses them?

### What a Finding Looks Like

- P0: "No application-level metrics are defined. The operations runbook mentions 'monitoring' but does not specify what is monitored."
- P1: "Infrastructure metrics (CPU, memory) are monitored but application error rates are not. A bug causing 100% 500 errors would not trigger an alert."
- P1: "External database monitoring is not mentioned. If the database becomes slow or unavailable, the monitoring system will not detect it until application health checks fail."
- P2: "Business metrics (order volume, revenue) are not monitored. The system could be returning empty results for all product queries without triggering any alert."

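The component-coverage check (step 4) and the three-layer check reduce to set operations. A minimal sketch, with invented component and metric names:

```python
# Minimal sketch of Pass 3: find architecture components with no monitored
# metric, and find monitoring layers with no metrics at all. Names are
# invented for illustration.

def unmonitored_components(components: set[str], metrics: dict[str, str]) -> set[str]:
    """`metrics` maps metric name -> the component it observes."""
    return components - set(metrics.values())

def missing_layers(metrics_by_layer: dict[str, list[str]]) -> list[str]:
    """Report which of the three layers has no metrics defined."""
    return [layer for layer in ("infrastructure", "application", "business")
            if not metrics_by_layer.get(layer)]
```

Any non-empty result from either function is a finding: an unmonitored component or an entire blind layer.
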
---

## Pass 4: Alerting Thresholds

### What to Check

Alerts have justified thresholds (not arbitrary values). Alert severity levels map to response expectations. Alert fatigue is considered — not everything is a page.

### Why This Matters

Arbitrary thresholds cause two problems. Set too low, they create alert storms — the on-call engineer gets paged for normal traffic spikes and learns to ignore alerts. Set too high, they let real incidents go undetected. Thresholds justified by baseline behavior and business impact keep alerts both actionable and timely.

### How to Check

1. For each alert, check that the threshold has a rationale: why this number? Based on baseline data, SLA requirements, or capacity limits?
2. Verify alert severity levels are defined: page (wake someone up), warn (investigate next business day), info (log for review)
3. Check that page-level alerts are reserved for conditions that affect users or revenue — not internal metrics that can wait
4. Verify de-duplication and grouping: if a server flaps, does it generate one alert or hundreds?
5. Check for missing alerts: are there monitored metrics that have no corresponding alert? (Monitoring without alerting means no one is watching the dashboard)
6. Verify alert routing: who gets which alerts? Is the on-call rotation documented?
7. Check for alert testing: are alerts tested (fire a synthetic failure and verify the alert triggers)?

### What a Finding Looks Like

- P0: "Error rate alert threshold is 'greater than 0' — any single error triggers a page. This will cause alert fatigue within the first day of production."
- P1: "CPU usage alert threshold is 80% with no justification. Is 80% normal during peak traffic? Is 60% already a problem? The threshold needs to be based on baseline behavior."
- P1: "Alerts exist but no on-call rotation or escalation path is documented. When an alert fires at 3 AM, who receives it?"
- P2: "Alert for disk usage exists but no alert for disk growth rate. A slow disk leak will only trigger when the disk is nearly full, leaving little time to respond."

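One common way to justify a threshold from baseline data (step 1) is mean plus a few standard deviations over a baseline window. This is a sketch of one technique, not the methodology's prescribed approach; the three-sigma multiplier is an assumption to tune per metric:

```python
# Minimal sketch for Pass 4: derive an alert threshold from observed
# baseline samples instead of picking a round number. The 3-sigma default
# is an illustrative assumption, not a recommendation for every metric.
from statistics import mean, stdev

def baseline_threshold(samples: list[float], sigmas: float = 3.0) -> float:
    """Alert when a metric exceeds its baseline by `sigmas` standard deviations."""
    return mean(samples) + sigmas * stdev(samples)
```

A reviewer does not need the team to use this exact formula; the finding is the absence of *any* stated rationale linking the threshold to baseline data, SLAs, or capacity.
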
---

## Pass 5: Runbook Scenarios

### What to Check

Common failure scenarios have runbook entries with step-by-step resolution procedures. Scenarios cover the failures most likely to occur and most impactful when they do.

### Why This Matters

During an incident, the on-call engineer is under stress, possibly working at 3 AM, and possibly unfamiliar with the subsystem that failed. A runbook provides step-by-step guidance so they do not need to debug from first principles. Missing runbook scenarios mean the engineer improvises under pressure — increasing resolution time and risk of making things worse.

### How to Check

1. List the most likely failure scenarios: database connection loss, external API outage, out-of-memory, certificate expiration, disk full, deployment failure, high latency
2. For each scenario, verify a runbook entry exists
3. Check that each runbook entry includes: symptoms (how to recognize this failure), diagnosis steps (how to confirm the root cause), resolution steps (how to fix it), verification (how to confirm it is fixed), post-mortem (what to document after the incident)
4. Verify that runbook steps are specific and actionable: "check the logs" is too vague; "run `kubectl logs deployment/order-service -n production --tail=100` and look for `ConnectionRefused` errors" is actionable
5. Check for escalation paths: when should the on-call engineer escalate to a senior engineer or the team lead?
6. Verify that runbook entries reference the correct tools, dashboards, and access paths

### What a Finding Looks Like

- P0: "No runbook entries exist. The operations runbook discusses monitoring and alerting but provides no incident response procedures."
- P1: "Database connection failure runbook says 'restart the database connection pool.' How? What command? What service? What if it does not recover after restart?"
- P2: "Runbook entries exist for infrastructure failures but not for application-level failures (e.g., a bug causing 500 errors on a specific endpoint)."

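The five required sections from step 3 can be enforced with a small completeness check. The entry shape and scenario names are assumptions for illustration:

```python
# Minimal sketch of Pass 5: every runbook entry must carry symptoms,
# diagnosis, resolution, verification, and post-mortem sections, and each
# must be non-empty. The dictionary shape is an illustrative assumption.

REQUIRED_SECTIONS = ["symptoms", "diagnosis", "resolution", "verification", "post_mortem"]

def review_runbook(runbook: dict[str, dict]) -> list[str]:
    findings = []
    for scenario, entry in runbook.items():
        for section in REQUIRED_SECTIONS:
            if not entry.get(section):
                findings.append(f"P1: '{scenario}' is missing '{section}'")
    return findings
```

This catches structural gaps only; whether a resolution step is actionable (step 4) still needs human judgment.
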
---

## Pass 6: Dev Environment Parity

### What to Check

The local development environment reasonably matches production. Developers can run the full system locally with realistic behavior. Environment differences are documented.

### Why This Matters

When the development environment diverges from production, "works on my machine" becomes the default. Bugs that only appear in production are impossible to reproduce locally, increasing debugging time from hours to days. Dev environment parity is not about identical hardware — it is about identical behavior for application-level concerns.

### How to Check

1. Compare the dev environment stack to production: same database engine? Same message queue? Same cache? Same auth provider?
2. Check for documented deviations: if production uses AWS SQS but dev uses a local queue, is this documented with its implications?
3. Verify that local setup instructions exist and are complete: can a new developer go from clone to running system?
4. Check that seed data or test fixtures exist for local development
5. Verify that environment variables, configuration, and secrets management for local development are documented
6. Check for containerization: if production runs in containers, does local development also use containers?
7. Verify that local SSL/TLS handling matches production if HTTPS is required

### What a Finding Looks Like

- P0: "No local development setup instructions exist. A new developer cannot run the system locally."
- P1: "Production uses PostgreSQL 15 but local development uses SQLite. SQL dialect differences will cause bugs that only appear in production."
- P1: "Production uses Redis for session storage but local development stores sessions in memory. Multi-instance behavior cannot be tested locally."
- P2: "Local development uses mock email service but production uses SendGrid. Email formatting and delivery behavior differences are not documented."

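Steps 1 and 2 reduce to diffing the two stack declarations against a documented-deviations list. Component names and version strings here are invented:

```python
# Minimal sketch of Pass 6: report production components whose dev
# counterpart differs (or is absent) and whose deviation is not on the
# documented list. All names are illustrative assumptions.

def undocumented_deviations(dev: dict[str, str], prod: dict[str, str],
                            documented: set[str]) -> list[str]:
    findings = []
    for component in prod:
        if dev.get(component) != prod[component] and component not in documented:
            findings.append(
                f"P1: '{component}' is {prod[component]} in prod but "
                f"{dev.get(component, 'absent')} in dev, and the deviation is undocumented")
    return findings
```

A documented deviation is not automatically acceptable; it just moves the discussion from "hidden gap" to "known trade-off".
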
---

## Pass 7: DR/Backup Coverage

### What to Check

Disaster recovery approach is documented. Backup strategy covers all persistent data. Recovery time objectives (RTO) and recovery point objectives (RPO) are specified.

### Why This Matters

Without a backup strategy, data loss is permanent. Without a disaster recovery plan, a region outage or infrastructure failure takes the system offline indefinitely. RTO and RPO define the business tolerance for downtime and data loss — without them, the team does not know whether their backup strategy is sufficient.

### How to Check

1. Verify backup strategy covers all persistent data stores: primary database, file storage, message queues (if persistent), configuration stores
2. Check that backup frequency is specified and aligns with RPO: if RPO is 1 hour, backups must run at least hourly
3. Verify backup retention policy: how long are backups kept? Is there a legal/compliance requirement?
4. Check that backup restoration is documented and tested: can the team actually restore from a backup?
5. Verify DR strategy: multi-region, failover, warm standby, or cold recovery? What is the expected RTO?
6. Check for data encryption at rest: are backups encrypted? Where are encryption keys stored?
7. Verify that DR testing is planned: is there a schedule for testing recovery procedures?

### What a Finding Looks Like

- P0: "No backup strategy is documented. If the primary database is corrupted or lost, data recovery is impossible."
- P0: "Backups run daily but the RPO is 15 minutes. Up to 24 hours of data could be lost, far exceeding the business tolerance."
- P1: "Backup restoration procedure says 'restore from backup' with no specifics. What tool? What command? How long does it take? What is the verification step?"
- P2: "DR strategy exists but has never been tested. The team does not know if recovery actually works within the stated RTO."

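The frequency-versus-RPO check (step 2) follows from the fact that the worst-case data loss under interval-based backups is one full backup interval, so the interval must not exceed the RPO. A minimal sketch with illustrative store names and numbers:

```python
# Minimal sketch of Pass 7, step 2: flag any data store whose backup
# interval exceeds its RPO. Store names and values are illustrative
# assumptions; real configs would come from the DR document under review.

def rpo_violations(stores: list[dict]) -> list[str]:
    findings = []
    for store in stores:
        if store["backup_interval_min"] > store["rpo_min"]:
            findings.append(
                f"P0: {store['name']} is backed up every {store['backup_interval_min']} min "
                f"but its RPO is {store['rpo_min']} min")
    return findings
```

This mirrors the first P0 example above: daily backups (1440 min) against a 15-minute RPO is a two-orders-of-magnitude gap.
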