@zigrivers/scaffold 2.1.2 → 2.38.0
This diff shows the changes between two publicly released versions of the package, as they appear in the supported public registries. It is provided for informational purposes only.
- package/README.md +505 -119
- package/dist/cli/commands/build.d.ts.map +1 -1
- package/dist/cli/commands/build.js +94 -14
- package/dist/cli/commands/build.js.map +1 -1
- package/dist/cli/commands/build.test.js +30 -5
- package/dist/cli/commands/build.test.js.map +1 -1
- package/dist/cli/commands/check.d.ts +12 -0
- package/dist/cli/commands/check.d.ts.map +1 -0
- package/dist/cli/commands/check.js +311 -0
- package/dist/cli/commands/check.js.map +1 -0
- package/dist/cli/commands/check.test.d.ts +2 -0
- package/dist/cli/commands/check.test.d.ts.map +1 -0
- package/dist/cli/commands/check.test.js +412 -0
- package/dist/cli/commands/check.test.js.map +1 -0
- package/dist/cli/commands/complete.d.ts +12 -0
- package/dist/cli/commands/complete.d.ts.map +1 -0
- package/dist/cli/commands/complete.js +101 -0
- package/dist/cli/commands/complete.js.map +1 -0
- package/dist/cli/commands/complete.test.d.ts +2 -0
- package/dist/cli/commands/complete.test.d.ts.map +1 -0
- package/dist/cli/commands/complete.test.js +133 -0
- package/dist/cli/commands/complete.test.js.map +1 -0
- package/dist/cli/commands/dashboard.d.ts.map +1 -1
- package/dist/cli/commands/dashboard.js +12 -8
- package/dist/cli/commands/dashboard.js.map +1 -1
- package/dist/cli/commands/info.d.ts.map +1 -1
- package/dist/cli/commands/info.js +4 -0
- package/dist/cli/commands/info.js.map +1 -1
- package/dist/cli/commands/knowledge.d.ts.map +1 -1
- package/dist/cli/commands/knowledge.js +6 -2
- package/dist/cli/commands/knowledge.js.map +1 -1
- package/dist/cli/commands/knowledge.test.js +16 -11
- package/dist/cli/commands/knowledge.test.js.map +1 -1
- package/dist/cli/commands/next.d.ts.map +1 -1
- package/dist/cli/commands/next.js +41 -13
- package/dist/cli/commands/next.js.map +1 -1
- package/dist/cli/commands/next.test.js +3 -0
- package/dist/cli/commands/next.test.js.map +1 -1
- package/dist/cli/commands/reset.d.ts +1 -0
- package/dist/cli/commands/reset.d.ts.map +1 -1
- package/dist/cli/commands/reset.js +179 -67
- package/dist/cli/commands/reset.js.map +1 -1
- package/dist/cli/commands/reset.test.js +360 -0
- package/dist/cli/commands/reset.test.js.map +1 -1
- package/dist/cli/commands/rework.d.ts +20 -0
- package/dist/cli/commands/rework.d.ts.map +1 -0
- package/dist/cli/commands/rework.js +332 -0
- package/dist/cli/commands/rework.js.map +1 -0
- package/dist/cli/commands/rework.test.d.ts +2 -0
- package/dist/cli/commands/rework.test.d.ts.map +1 -0
- package/dist/cli/commands/rework.test.js +297 -0
- package/dist/cli/commands/rework.test.js.map +1 -0
- package/dist/cli/commands/run.d.ts.map +1 -1
- package/dist/cli/commands/run.js +59 -31
- package/dist/cli/commands/run.js.map +1 -1
- package/dist/cli/commands/run.test.js +288 -6
- package/dist/cli/commands/run.test.js.map +1 -1
- package/dist/cli/commands/skill.d.ts +12 -0
- package/dist/cli/commands/skill.d.ts.map +1 -0
- package/dist/cli/commands/skill.js +123 -0
- package/dist/cli/commands/skill.js.map +1 -0
- package/dist/cli/commands/skill.test.d.ts +2 -0
- package/dist/cli/commands/skill.test.d.ts.map +1 -0
- package/dist/cli/commands/skill.test.js +297 -0
- package/dist/cli/commands/skill.test.js.map +1 -0
- package/dist/cli/commands/skip.d.ts +1 -1
- package/dist/cli/commands/skip.d.ts.map +1 -1
- package/dist/cli/commands/skip.js +123 -57
- package/dist/cli/commands/skip.js.map +1 -1
- package/dist/cli/commands/skip.test.js +91 -0
- package/dist/cli/commands/skip.test.js.map +1 -1
- package/dist/cli/commands/status.d.ts +1 -0
- package/dist/cli/commands/status.d.ts.map +1 -1
- package/dist/cli/commands/status.js +57 -10
- package/dist/cli/commands/status.js.map +1 -1
- package/dist/cli/commands/status.test.js +81 -0
- package/dist/cli/commands/status.test.js.map +1 -1
- package/dist/cli/commands/update.test.js +252 -0
- package/dist/cli/commands/update.test.js.map +1 -1
- package/dist/cli/commands/version.test.js +171 -1
- package/dist/cli/commands/version.test.js.map +1 -1
- package/dist/cli/index.d.ts.map +1 -1
- package/dist/cli/index.js +8 -0
- package/dist/cli/index.js.map +1 -1
- package/dist/core/adapters/adapter.d.ts +14 -0
- package/dist/core/adapters/adapter.d.ts.map +1 -1
- package/dist/core/adapters/adapter.js.map +1 -1
- package/dist/core/adapters/adapter.test.js +10 -0
- package/dist/core/adapters/adapter.test.js.map +1 -1
- package/dist/core/adapters/claude-code.d.ts.map +1 -1
- package/dist/core/adapters/claude-code.js +47 -10
- package/dist/core/adapters/claude-code.js.map +1 -1
- package/dist/core/adapters/claude-code.test.js +41 -20
- package/dist/core/adapters/claude-code.test.js.map +1 -1
- package/dist/core/adapters/codex.d.ts.map +1 -1
- package/dist/core/adapters/codex.js +5 -1
- package/dist/core/adapters/codex.js.map +1 -1
- package/dist/core/adapters/codex.test.js +5 -0
- package/dist/core/adapters/codex.test.js.map +1 -1
- package/dist/core/adapters/universal.d.ts.map +1 -1
- package/dist/core/adapters/universal.js +0 -1
- package/dist/core/adapters/universal.js.map +1 -1
- package/dist/core/adapters/universal.test.js +5 -0
- package/dist/core/adapters/universal.test.js.map +1 -1
- package/dist/core/assembly/context-gatherer.d.ts.map +1 -1
- package/dist/core/assembly/context-gatherer.js +5 -2
- package/dist/core/assembly/context-gatherer.js.map +1 -1
- package/dist/core/assembly/engine.d.ts.map +1 -1
- package/dist/core/assembly/engine.js +10 -2
- package/dist/core/assembly/engine.js.map +1 -1
- package/dist/core/assembly/engine.test.js +19 -0
- package/dist/core/assembly/engine.test.js.map +1 -1
- package/dist/core/assembly/knowledge-loader.d.ts +25 -0
- package/dist/core/assembly/knowledge-loader.d.ts.map +1 -1
- package/dist/core/assembly/knowledge-loader.js +75 -2
- package/dist/core/assembly/knowledge-loader.js.map +1 -1
- package/dist/core/assembly/knowledge-loader.test.js +388 -1
- package/dist/core/assembly/knowledge-loader.test.js.map +1 -1
- package/dist/core/assembly/meta-prompt-loader.d.ts +6 -0
- package/dist/core/assembly/meta-prompt-loader.d.ts.map +1 -1
- package/dist/core/assembly/meta-prompt-loader.js +41 -25
- package/dist/core/assembly/meta-prompt-loader.js.map +1 -1
- package/dist/core/assembly/preset-loader.d.ts +10 -0
- package/dist/core/assembly/preset-loader.d.ts.map +1 -1
- package/dist/core/assembly/preset-loader.js +26 -1
- package/dist/core/assembly/preset-loader.js.map +1 -1
- package/dist/core/assembly/preset-loader.test.js +65 -1
- package/dist/core/assembly/preset-loader.test.js.map +1 -1
- package/dist/core/assembly/update-mode.d.ts.map +1 -1
- package/dist/core/assembly/update-mode.js +10 -4
- package/dist/core/assembly/update-mode.js.map +1 -1
- package/dist/core/assembly/update-mode.test.js +47 -0
- package/dist/core/assembly/update-mode.test.js.map +1 -1
- package/dist/core/dependency/dependency.d.ts.map +1 -1
- package/dist/core/dependency/dependency.js +3 -2
- package/dist/core/dependency/dependency.js.map +1 -1
- package/dist/core/dependency/dependency.test.js +2 -0
- package/dist/core/dependency/dependency.test.js.map +1 -1
- package/dist/core/dependency/eligibility.js +3 -3
- package/dist/core/dependency/eligibility.js.map +1 -1
- package/dist/core/dependency/eligibility.test.js +2 -0
- package/dist/core/dependency/eligibility.test.js.map +1 -1
- package/dist/core/dependency/graph.d.ts.map +1 -1
- package/dist/core/dependency/graph.js +4 -0
- package/dist/core/dependency/graph.js.map +1 -1
- package/dist/core/dependency/graph.test.d.ts +2 -0
- package/dist/core/dependency/graph.test.d.ts.map +1 -0
- package/dist/core/dependency/graph.test.js +262 -0
- package/dist/core/dependency/graph.test.js.map +1 -0
- package/dist/core/rework/phase-selector.d.ts +24 -0
- package/dist/core/rework/phase-selector.d.ts.map +1 -0
- package/dist/core/rework/phase-selector.js +98 -0
- package/dist/core/rework/phase-selector.js.map +1 -0
- package/dist/core/rework/phase-selector.test.d.ts +2 -0
- package/dist/core/rework/phase-selector.test.d.ts.map +1 -0
- package/dist/core/rework/phase-selector.test.js +138 -0
- package/dist/core/rework/phase-selector.test.js.map +1 -0
- package/dist/dashboard/generator.d.ts +48 -17
- package/dist/dashboard/generator.d.ts.map +1 -1
- package/dist/dashboard/generator.js +75 -5
- package/dist/dashboard/generator.js.map +1 -1
- package/dist/dashboard/generator.test.js +213 -5
- package/dist/dashboard/generator.test.js.map +1 -1
- package/dist/dashboard/template.d.ts +1 -1
- package/dist/dashboard/template.d.ts.map +1 -1
- package/dist/dashboard/template.js +755 -114
- package/dist/dashboard/template.js.map +1 -1
- package/dist/e2e/knowledge.test.js +4 -3
- package/dist/e2e/knowledge.test.js.map +1 -1
- package/dist/e2e/pipeline.test.js +2 -0
- package/dist/e2e/pipeline.test.js.map +1 -1
- package/dist/e2e/rework.test.d.ts +6 -0
- package/dist/e2e/rework.test.d.ts.map +1 -0
- package/dist/e2e/rework.test.js +226 -0
- package/dist/e2e/rework.test.js.map +1 -0
- package/dist/index.js +0 -0
- package/dist/project/adopt.test.js +2 -0
- package/dist/project/adopt.test.js.map +1 -1
- package/dist/project/claude-md.js +2 -2
- package/dist/project/claude-md.js.map +1 -1
- package/dist/project/claude-md.test.js +4 -4
- package/dist/project/claude-md.test.js.map +1 -1
- package/dist/project/detector.d.ts.map +1 -1
- package/dist/project/detector.js +4 -1
- package/dist/project/detector.js.map +1 -1
- package/dist/project/frontmatter.d.ts.map +1 -1
- package/dist/project/frontmatter.js +54 -15
- package/dist/project/frontmatter.js.map +1 -1
- package/dist/project/frontmatter.test.js +2 -2
- package/dist/project/frontmatter.test.js.map +1 -1
- package/dist/state/rework-manager.d.ts +16 -0
- package/dist/state/rework-manager.d.ts.map +1 -0
- package/dist/state/rework-manager.js +126 -0
- package/dist/state/rework-manager.js.map +1 -0
- package/dist/state/rework-manager.test.d.ts +2 -0
- package/dist/state/rework-manager.test.d.ts.map +1 -0
- package/dist/state/rework-manager.test.js +191 -0
- package/dist/state/rework-manager.test.js.map +1 -0
- package/dist/state/state-manager.d.ts +13 -0
- package/dist/state/state-manager.d.ts.map +1 -1
- package/dist/state/state-manager.js +39 -2
- package/dist/state/state-manager.js.map +1 -1
- package/dist/state/state-manager.test.js +74 -1
- package/dist/state/state-manager.test.js.map +1 -1
- package/dist/state/state-migration.d.ts +23 -0
- package/dist/state/state-migration.d.ts.map +1 -0
- package/dist/state/state-migration.js +144 -0
- package/dist/state/state-migration.js.map +1 -0
- package/dist/state/state-migration.test.d.ts +2 -0
- package/dist/state/state-migration.test.d.ts.map +1 -0
- package/dist/state/state-migration.test.js +451 -0
- package/dist/state/state-migration.test.js.map +1 -0
- package/dist/types/assembly.d.ts +2 -0
- package/dist/types/assembly.d.ts.map +1 -1
- package/dist/types/dependency.d.ts +2 -2
- package/dist/types/dependency.d.ts.map +1 -1
- package/dist/types/frontmatter.d.ts +100 -7
- package/dist/types/frontmatter.d.ts.map +1 -1
- package/dist/types/frontmatter.js +89 -1
- package/dist/types/frontmatter.js.map +1 -1
- package/dist/types/index.d.ts +1 -0
- package/dist/types/index.d.ts.map +1 -1
- package/dist/types/index.js +1 -0
- package/dist/types/index.js.map +1 -1
- package/dist/types/lock.d.ts +1 -1
- package/dist/types/lock.d.ts.map +1 -1
- package/dist/types/rework.d.ts +36 -0
- package/dist/types/rework.d.ts.map +1 -0
- package/dist/types/rework.js +2 -0
- package/dist/types/rework.js.map +1 -0
- package/dist/utils/errors.d.ts +1 -0
- package/dist/utils/errors.d.ts.map +1 -1
- package/dist/utils/errors.js +8 -0
- package/dist/utils/errors.js.map +1 -1
- package/dist/utils/fs.d.ts +6 -0
- package/dist/utils/fs.d.ts.map +1 -1
- package/dist/utils/fs.js +13 -0
- package/dist/utils/fs.js.map +1 -1
- package/dist/validation/config-validator.test.d.ts +2 -0
- package/dist/validation/config-validator.test.d.ts.map +1 -0
- package/dist/validation/config-validator.test.js +210 -0
- package/dist/validation/config-validator.test.js.map +1 -0
- package/dist/validation/dependency-validator.test.d.ts +2 -0
- package/dist/validation/dependency-validator.test.d.ts.map +1 -0
- package/dist/validation/dependency-validator.test.js +215 -0
- package/dist/validation/dependency-validator.test.js.map +1 -0
- package/dist/validation/frontmatter-validator.test.d.ts +2 -0
- package/dist/validation/frontmatter-validator.test.d.ts.map +1 -0
- package/dist/validation/frontmatter-validator.test.js +371 -0
- package/dist/validation/frontmatter-validator.test.js.map +1 -0
- package/dist/validation/state-validator.test.d.ts +2 -0
- package/dist/validation/state-validator.test.d.ts.map +1 -0
- package/dist/validation/state-validator.test.js +325 -0
- package/dist/validation/state-validator.test.js.map +1 -0
- package/dist/wizard/suggestion.test.d.ts +2 -0
- package/dist/wizard/suggestion.test.d.ts.map +1 -0
- package/dist/wizard/suggestion.test.js +115 -0
- package/dist/wizard/suggestion.test.js.map +1 -0
- package/dist/wizard/wizard.d.ts.map +1 -1
- package/dist/wizard/wizard.js +34 -1
- package/dist/wizard/wizard.js.map +1 -1
- package/knowledge/core/adr-craft.md +57 -0
- package/knowledge/core/ai-memory-management.md +246 -0
- package/knowledge/core/api-design.md +8 -0
- package/knowledge/core/automated-review-tooling.md +203 -0
- package/knowledge/core/claude-md-patterns.md +254 -0
- package/knowledge/core/coding-conventions.md +246 -0
- package/knowledge/core/database-design.md +8 -0
- package/knowledge/core/design-system-tokens.md +469 -0
- package/knowledge/core/dev-environment.md +223 -0
- package/knowledge/core/domain-modeling.md +8 -0
- package/knowledge/core/eval-craft.md +1008 -0
- package/knowledge/core/git-workflow-patterns.md +200 -0
- package/knowledge/core/multi-model-review-dispatch.md +250 -0
- package/knowledge/core/operations-runbook.md +40 -225
- package/knowledge/core/project-structure-patterns.md +231 -0
- package/knowledge/core/review-step-template.md +247 -0
- package/knowledge/core/{security-review.md → security-best-practices.md} +9 -1
- package/knowledge/core/system-architecture.md +5 -1
- package/knowledge/core/task-decomposition.md +174 -36
- package/knowledge/core/task-tracking.md +225 -0
- package/knowledge/core/tech-stack-selection.md +214 -0
- package/knowledge/core/testing-strategy.md +63 -70
- package/knowledge/core/user-stories.md +69 -60
- package/knowledge/core/user-story-innovation.md +70 -0
- package/knowledge/core/ux-specification.md +18 -148
- package/knowledge/execution/enhancement-workflow.md +201 -0
- package/knowledge/execution/task-claiming-strategy.md +130 -0
- package/knowledge/execution/tdd-execution-loop.md +172 -0
- package/knowledge/execution/worktree-management.md +205 -0
- package/knowledge/finalization/apply-fixes-and-freeze.md +177 -14
- package/knowledge/finalization/developer-onboarding.md +4 -0
- package/knowledge/finalization/implementation-playbook.md +83 -5
- package/knowledge/product/gap-analysis.md +5 -1
- package/knowledge/product/prd-craft.md +55 -34
- package/knowledge/product/prd-innovation.md +12 -0
- package/knowledge/product/vision-craft.md +213 -0
- package/knowledge/review/review-adr.md +44 -0
- package/knowledge/review/{review-api-contracts.md → review-api-design.md} +47 -1
- package/knowledge/review/{review-database-schema.md → review-database-design.md} +40 -1
- package/knowledge/review/review-domain-modeling.md +38 -1
- package/knowledge/review/review-implementation-tasks.md +108 -1
- package/knowledge/review/review-methodology.md +11 -0
- package/knowledge/review/review-operations.md +67 -0
- package/knowledge/review/review-prd.md +46 -0
- package/knowledge/review/review-security.md +65 -0
- package/knowledge/review/review-system-architecture.md +32 -2
- package/knowledge/review/review-testing-strategy.md +62 -0
- package/knowledge/review/review-user-stories.md +65 -0
- package/knowledge/review/{review-ux-spec.md → review-ux-specification.md} +50 -2
- package/knowledge/review/review-vision.md +255 -0
- package/knowledge/tools/release-management.md +222 -0
- package/knowledge/tools/session-analysis.md +215 -0
- package/knowledge/tools/version-strategy.md +200 -0
- package/knowledge/validation/critical-path-analysis.md +1 -1
- package/knowledge/validation/cross-phase-consistency.md +12 -0
- package/knowledge/validation/decision-completeness.md +13 -1
- package/knowledge/validation/dependency-validation.md +12 -0
- package/knowledge/validation/scope-management.md +12 -0
- package/knowledge/validation/traceability.md +12 -0
- package/methodology/README.md +37 -0
- package/methodology/custom-defaults.yml +44 -4
- package/methodology/deep.yml +43 -3
- package/methodology/mvp.yml +43 -3
- package/package.json +4 -3
- package/pipeline/architecture/review-architecture.md +36 -13
- package/pipeline/architecture/system-architecture.md +24 -9
- package/pipeline/build/multi-agent-resume.md +245 -0
- package/pipeline/build/multi-agent-start.md +236 -0
- package/pipeline/build/new-enhancement.md +456 -0
- package/pipeline/build/quick-task.md +381 -0
- package/pipeline/build/single-agent-resume.md +210 -0
- package/pipeline/build/single-agent-start.md +207 -0
- package/pipeline/consolidation/claude-md-optimization.md +76 -0
- package/pipeline/consolidation/workflow-audit.md +77 -0
- package/pipeline/decisions/adrs.md +21 -7
- package/pipeline/decisions/review-adrs.md +32 -11
- package/pipeline/environment/ai-memory-setup.md +76 -0
- package/pipeline/environment/automated-pr-review.md +76 -0
- package/pipeline/environment/design-system.md +75 -0
- package/pipeline/environment/dev-env-setup.md +68 -0
- package/pipeline/environment/git-workflow.md +73 -0
- package/pipeline/finalization/apply-fixes-and-freeze.md +17 -6
- package/pipeline/finalization/developer-onboarding-guide.md +23 -9
- package/pipeline/finalization/implementation-playbook.md +43 -14
- package/pipeline/foundation/beads.md +71 -0
- package/pipeline/foundation/coding-standards.md +71 -0
- package/pipeline/foundation/project-structure.md +73 -0
- package/pipeline/foundation/tdd.md +64 -0
- package/pipeline/foundation/tech-stack.md +74 -0
- package/pipeline/integration/add-e2e-testing.md +80 -0
- package/pipeline/modeling/domain-modeling.md +23 -8
- package/pipeline/modeling/review-domain-modeling.md +35 -11
- package/pipeline/parity/platform-parity-review.md +90 -0
- package/pipeline/planning/implementation-plan-review.md +67 -0
- package/pipeline/planning/implementation-plan.md +110 -0
- package/pipeline/pre/create-prd.md +34 -10
- package/pipeline/pre/innovate-prd.md +46 -15
- package/pipeline/pre/innovate-user-stories.md +47 -14
- package/pipeline/pre/review-prd.md +29 -8
- package/pipeline/pre/review-user-stories.md +34 -8
- package/pipeline/pre/user-stories.md +23 -8
- package/pipeline/quality/create-evals.md +106 -0
- package/pipeline/quality/operations.md +46 -17
- package/pipeline/quality/review-operations.md +32 -11
- package/pipeline/quality/review-security.md +34 -12
- package/pipeline/quality/review-testing.md +37 -14
- package/pipeline/quality/security.md +36 -10
- package/pipeline/quality/story-tests.md +75 -0
- package/pipeline/specification/api-contracts.md +28 -8
- package/pipeline/specification/database-schema.md +29 -8
- package/pipeline/specification/review-api.md +32 -11
- package/pipeline/specification/review-database.md +32 -11
- package/pipeline/specification/review-ux.md +34 -12
- package/pipeline/specification/ux-spec.md +35 -13
- package/pipeline/validation/critical-path-walkthrough.md +45 -11
- package/pipeline/validation/cross-phase-consistency.md +45 -11
- package/pipeline/validation/decision-completeness.md +45 -11
- package/pipeline/validation/dependency-graph-validation.md +46 -11
- package/pipeline/validation/implementability-dry-run.md +46 -11
- package/pipeline/validation/scope-creep-check.md +46 -11
- package/pipeline/validation/traceability-matrix.md +51 -11
- package/pipeline/vision/create-vision.md +267 -0
- package/pipeline/vision/innovate-vision.md +157 -0
- package/pipeline/vision/review-vision.md +149 -0
- package/skills/multi-model-dispatch/SKILL.md +326 -0
- package/skills/scaffold-pipeline/SKILL.md +210 -0
- package/skills/scaffold-runner/SKILL.md +619 -0
- package/pipeline/planning/implementation-tasks.md +0 -57
- package/pipeline/planning/review-tasks.md +0 -38
- package/pipeline/quality/testing-strategy.md +0 -42
@@ -10,6 +10,17 @@ The testing strategy defines how the system will be verified at every layer. It
 
 Follows the review process defined in `review-methodology.md`.
 
+## Summary
+
+- **Pass 1 — Coverage Gaps by Layer**: Each architectural layer has test coverage defined; test pyramid is balanced (not top-heavy or bottom-heavy).
+- **Pass 2 — Domain Invariant Test Cases**: Every domain invariant has at least one corresponding test scenario covering positive and negative cases.
+- **Pass 3 — Test Environment Assumptions**: Test environment matches production constraints; database engines, service configurations, and test data are realistic.
+- **Pass 4 — Performance Test Coverage**: Performance-critical paths have benchmarks with specific thresholds; load and stress testing scenarios defined.
+- **Pass 5 — Integration Boundary Coverage**: All component integration points have integration tests using real (not mocked) dependencies.
+- **Pass 6 — Quality Gate Completeness**: CI pipeline gates cover linting, type checking, tests, and security scanning; gates block deployment on failure.
+
+## Deep Guidance
+
 ---
 
 ## Pass 1: Coverage Gaps by Layer
@@ -174,3 +185,54 @@ A quality gate that exists in documentation but not in CI is not a gate. If the
 - P0: "Testing strategy requires 80% code coverage but the CI pipeline has no coverage reporting or enforcement. The requirement is unverifiable."
 - P1: "Security scanning is listed as a quality requirement but no specific tool or CI pipeline step implements it."
 - P2: "Quality gates run linting, unit tests, and integration tests, but do not validate database migrations. A broken migration would pass all gates and fail in production."
+
+---
+
+## Common Review Anti-Patterns
+
+### 1. Copy-Pasted Generic Strategy
+
+The testing strategy is a boilerplate document that says "we will have unit tests, integration tests, and E2E tests" without connecting to the actual architecture. No mention of specific components, no mapping of test types to architectural layers, no project-specific invariants.
+
+**How to spot it:** The strategy could be copy-pasted into any other project and still read correctly. No component names, no domain terms, no architecture-specific decisions.
+
+### 2. Testing Strategy Disconnected from Architecture
+
+The strategy defines test types and coverage goals but does not reference the system architecture. Tests are organized by test framework (Jest unit tests, Playwright E2E tests) rather than by architectural component. This makes it impossible to verify coverage — you cannot tell which components are tested and which are not.
+
+**How to spot it:** Search for component names from the architecture document. If none appear in the testing strategy, the two documents are disconnected.
+
+### 3. Mock-Everything Mentality
+
+Every external dependency is mocked, including the database. Unit test coverage is high, but no test ever executes a real query, a real HTTP call, or a real message queue interaction. The test suite provides confidence that the mocking layer works, not that the system works.
+
+**Example finding:**
+
+```markdown
+## Finding: TSR-009
+
+**Priority:** P1
+**Pass:** Integration Boundary Coverage (Pass 5)
+**Document:** docs/testing-strategy.md, Section 4.2
+
+**Issue:** All database tests use an in-memory mock repository. The repository interface
+is tested, but no test ever executes SQL against a real PostgreSQL instance. The following
+risks are untested: query syntax errors, constraint violations, transaction isolation
+behavior, migration correctness.
+
+**Recommendation:** Add integration tests using testcontainers or a CI-managed PostgreSQL
+instance for at least the OrderRepository and UserRepository (the two repositories with
+complex queries).
+```
+
+### 4. No Negative Test Scenarios
+
+The strategy defines tests for the happy path but never specifies what happens when things fail. No test scenarios for invalid input, network timeouts, concurrent modification, or resource exhaustion. The system is verified to work when everything goes right — the most uninteresting case.
+
+**How to spot it:** Scan test scenario descriptions for words like "invalid," "timeout," "failure," "error," "reject," "concurrent," "duplicate." If these are absent, negative scenarios are missing.
+
+### 5. Coverage Percentage as the Only Quality Metric
+
+The strategy defines 80% code coverage as the quality gate but specifies no other quality criteria. High coverage with no assertion quality means tests that execute code paths without verifying behavior — "tests" that call functions and ignore the return value. Coverage measures how much code was run, not whether it was tested correctly.
+
+**How to spot it:** The quality gates section mentions only code coverage. No mention of mutation testing, assertion density, test execution time budgets, or flakiness tracking.
@@ -10,6 +10,17 @@ User stories translate PRD requirements into user-facing behavior with testable
 
 Follows the review process defined in `review-methodology.md`.
 
+## Summary
+
+- **Pass 1 — PRD Coverage**: Every PRD feature, flow, and requirement has at least one corresponding user story; no silent coverage gaps.
+- **Pass 2 — Acceptance Criteria Quality**: Every story has testable, unambiguous Given/When/Then criteria covering happy path and at least one error/edge case.
+- **Pass 3 — Story Independence**: Stories can be implemented independently; dependencies are explicit, not hidden; no circular dependencies.
+- **Pass 4 — Persona Coverage**: Every PRD-defined persona has stories; every story maps to a valid, defined persona.
+- **Pass 5 — Sizing & Splittability**: No story too large for 1-3 agent sessions or too small to be meaningful; oversized stories have clear split points.
+- **Pass 6 — Downstream Readiness**: Domain entities, events, aggregate boundaries, and business rules are discoverable from acceptance criteria for domain modeling.
+
+## Deep Guidance
+
 ---
 
 ## Pass 1: PRD Coverage
@@ -170,3 +181,57 @@ Stories are the primary input to domain discovery in the domain modeling step. I
|
|
|
170
181
|
- P1: "US-007 ('As a teacher, I want to manage my classes') — acceptance criteria say 'classes are managed correctly.' No mention of what entities are involved (Class, Enrollment, Student?), what state transitions occur, or what business rules apply. Domain modeling will have to guess."
|
|
171
182
|
- P2: "Cross-story entity naming is inconsistent: US-003 uses 'User,' US-008 uses 'Account,' US-015 uses 'Member.' These may be different bounded context terms or may be accidental inconsistency — clarify before domain modeling."
|
|
172
183
|
- P2: "Stories in the 'Payments' epic mention 'processing a payment' but no acceptance criteria describe the payment lifecycle states (pending → processing → completed/failed). Domain events cannot be discovered from these stories."
|
|
184
|
+
|
|
185
|
+
---
|
|
186
|
+
|
|
187
|
+
## Common Review Anti-Patterns
|
|
188
|
+
|
|
189
|
+
### 1. Reviewing Against a Generic Checklist Instead of the PRD
|
|
190
|
+
|
|
191
|
+
The reviewer checks whether stories have acceptance criteria and follow INVEST principles, but never opens the PRD to verify coverage. The stories could be missing entire PRD features and this review would not catch it. Reviews must cross-reference the PRD — checking story quality without checking story completeness misses the highest-severity failure mode.
|
|
192
|
+
|
|
193
|
+
**How to spot it:** The review report contains no references to specific PRD sections. Findings are all about story quality (vague criteria, poor sizing) and none about story coverage (missing features, missing flows).
|
|
194
|
+
|
|
195
|
+
### 2. Accepting Vague Acceptance Criteria as "Good Enough"
|
|
196
|
+
|
|
197
|
+
The reviewer sees acceptance criteria like "user can manage their profile" and does not flag it because the intent is clear. But intent is not implementation guidance. Two agents reading "manage their profile" will implement different field sets, different validation rules, and different UX flows. Acceptance criteria must be testable — if you cannot write an automated test directly from the criterion, it is too vague.
|
|
198
|
+
|
|
199
|
+
**Example finding:**
|
|
200
|
+
|
|
201
|
+
```markdown
|
|
202
|
+
## Finding: USR-014
|
|
203
|
+
|
|
204
|
+
**Priority:** P1
|
|
205
|
+
**Pass:** Acceptance Criteria Quality (Pass 2)
|
|
206
|
+
**Document:** docs/user-stories.md, US-008
|
|
207
|
+
|
|
208
|
+
+**Issue:** Acceptance criteria for US-008 ("As a user, I want to manage my profile"):
+- "Given I am logged in, when I update my profile, then my changes are saved"
+
+This criterion does not specify: which fields are editable, what validation rules apply,
+whether partial updates are supported, what happens on validation failure, or whether
+changes require re-authentication (e.g., email change).
+
+**Recommendation:** Replace with specific Given/When/Then scenarios:
+- Given I am logged in, when I change my display name to a valid name (1-100 chars), then my display name is updated
+- Given I am logged in, when I change my email, then a verification email is sent to the new address and the email is not changed until verified
+- Given I am logged in, when I submit a display name longer than 100 characters, then I see a validation error
+```
+
+### 3. Ignoring Story Dependencies
+
+The reviewer checks each story in isolation but never maps dependencies between stories. Stories that secretly depend on each other are not flagged. This creates false parallelization opportunities downstream — the implementation tasks phase will mark these as parallel, and agents will produce conflicting work.
+
+**How to spot it:** The review report has no findings from Pass 3 (Story Independence). Dependencies are only discovered later during implementation tasks or during actual implementation.
+
+### 4. Persona Name Drift Without Flagging
+
+The PRD defines personas as "Teacher," "Student," and "Admin." Stories reference "Instructor," "Learner," and "Administrator." The reviewer does not flag the terminology mismatch because the mapping is obvious to a human. But downstream, the domain model and implementation tasks may use either set of terms inconsistently, creating confusion.
+
+**How to spot it:** Compare persona names in the PRD with persona names in story "As a..." statements. Any mismatch is a finding, even if the intent is obvious.
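That comparison can be mechanized as a quick pre-check. A minimal sketch, assuming stories are plain strings and the persona list comes from the PRD; the regex and function name are illustrative, not part of this package:

```python
import re

def find_persona_drift(prd_personas, story_texts):
    """Return persona names used in stories that are not defined in the PRD."""
    # Capture the role in "As a(n) <role>, ..." statements.
    pattern = re.compile(r"As an? ([\w -]+?),", re.IGNORECASE)
    defined = {p.lower() for p in prd_personas}
    drifted = set()
    for text in story_texts:
        for role in pattern.findall(text):
            if role.strip().lower() not in defined:
                drifted.add(role.strip())
    return sorted(drifted)

stories = [
    "As a Teacher, I want to create assignments",
    "As an Instructor, I want to grade submissions",
    "As a Learner, I want to view my grades",
]
print(find_persona_drift(["Teacher", "Student", "Admin"], stories))
# Flags "Instructor" and "Learner" as terminology drift
```

A hit from a script like this is only a candidate finding; a human still judges whether the mismatch is drift or a deliberate rename.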
+
+### 5. Reviewing Only Happy-Path Stories
+
+The reviewer verifies that the main user flows have stories but does not check for error handling, edge cases, or administrative workflows. Stories exist for "user creates an account" and "user places an order" but not for "user enters invalid payment info," "user tries to order an out-of-stock item," or "admin resolves a disputed transaction." These missing stories become missing tasks and missing implementations.
+
+**How to spot it:** Count the ratio of happy-path stories to error/edge-case stories. If the ratio is heavily skewed (e.g., 20 happy-path stories and 2 error stories), error handling is systematically under-specified.
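The ratio heuristic can be approximated with a keyword scan before counting by hand. A rough sketch, illustrative only; the keyword list is an assumption and will miss error stories phrased without these words:

```python
# Classify story titles as error/edge-case vs happy-path by keyword,
# then report the ratio the reviewer should eyeball.
ERROR_KEYWORDS = ("invalid", "fails", "error", "out-of-stock", "disputed",
                  "expired", "denied", "timeout")

def happy_path_ratio(story_titles):
    errors = [t for t in story_titles
              if any(k in t.lower() for k in ERROR_KEYWORDS)]
    happy = len(story_titles) - len(errors)
    return happy, len(errors)

titles = [
    "user creates an account",
    "user places an order",
    "user enters invalid payment info",
]
happy, errors = happy_path_ratio(titles)
print(f"{happy} happy-path vs {errors} error/edge-case stories")
```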
@@ -1,7 +1,7 @@
 ---
-name: review-ux-
+name: review-ux-specification
 description: Failure modes and review passes specific to UI/UX specification artifacts
-topics: [review, ux, design, accessibility, responsive]
+topics: [review, ux, design, accessibility, responsive-design]
 ---
 
 # Review: UX Specification
@@ -10,6 +10,18 @@ The UX specification translates user journeys from the PRD and component archite
 
 Follows the review process defined in `review-methodology.md`.
 
+## Summary
+
+- **Pass 1 — User Journey Coverage vs PRD**: Every user-facing PRD feature has a corresponding screen, flow, or interaction; non-happy-path journeys covered.
+- **Pass 2 — Accessibility Compliance**: WCAG level stated; keyboard navigation, screen reader support, color contrast, and focus management specified.
+- **Pass 3 — Interaction State Completeness**: Every component has all states defined: empty, loading, populated, error, disabled, and edge states.
+- **Pass 4 — Design System Consistency**: Colors, spacing, typography reference design system tokens, not one-off values.
+- **Pass 5 — Responsive Breakpoint Coverage**: Behavior defined for all breakpoints; navigation, data tables, and forms adapt appropriately.
+- **Pass 6 — Error State Handling**: Every user action that can fail has a designed error state with user-friendly messages and clear recovery paths.
+- **Pass 7 — Component Hierarchy vs Architecture**: Frontend components in UX spec align with architecture component boundaries and state management approach.
+
+## Deep Guidance
+
 ---
 
 ## Pass 1: User Journey Coverage vs PRD
@@ -206,3 +218,39 @@ When the UX spec designs components that do not match the architecture's compone
 - P1: "The UX spec designs an 'OrderSummaryWidget' that combines order details, customer info, and payment status. The architecture separates these into three independent components (OrderComponent, CustomerComponent, PaymentComponent) with separate data sources."
 - P1: "The UX spec assumes global state for user preferences (accessible from any component), but the architecture specifies component-local state with prop drilling."
 - P2: "The UX spec's 'ProductCard' component bundles product image, price, and add-to-cart button. The architecture models 'ProductDisplay' and 'CartAction' as separate concerns."
+
+### Example Review Finding
+
+```markdown
+### Finding: Dashboard has no empty state or loading state design
+
+**Pass:** 3 — Interaction State Completeness
+**Priority:** P0
+**Location:** UX Spec Section 4.1 "User Dashboard"
+
+**Issue:** The dashboard screen shows charts (order volume, revenue trend) and
+summary metrics (total orders, account balance, recent activity). The spec provides
+only the populated state — what the screen looks like with data.
+
+Missing states:
+- **Empty state:** A new user with zero orders sees empty chart containers with
+  no axes, no labels, and no guidance. The metrics show "$0" and "0 orders" with
+  no context.
+- **Loading state:** When dashboard data is being fetched (3 separate API calls
+  per the API contract), what does the user see? No skeleton, spinner, or
+  progressive loading is specified.
+- **Partial error state:** If the revenue chart API fails but the orders API
+  succeeds, does the entire dashboard show an error, or just the revenue widget?
+
+**Impact:** Implementing agents will either show blank containers (confusing for
+new users), a full-page spinner (poor perceived performance), or nothing at all
+while loading. The first-time user experience — which is critical for activation
+metrics in the PRD — is completely undesigned.
+
+**Recommendation:** Design three additional states:
+1. Empty state with onboarding CTA ("Create your first order to see analytics here")
+2. Skeleton loading state with placeholder shapes matching the populated layout
+3. Per-widget error state with retry button, so partial failures are isolated
+
+**Trace:** UX Spec 4.1 → PRD Success Metric "70% user activation within 7 days"
+```
@@ -0,0 +1,255 @@
+---
+name: review-vision
+description: Vision-specific review passes, failure modes, and quality criteria for product vision documents
+topics: [review, vision, product-strategy, validation]
+---
+
+# Review: Product Vision
+
+The product vision document sets the strategic direction for everything downstream. It defines why the product exists, who it serves, what makes it different, and what traps to avoid. A weak vision produces a PRD that lacks focus, user stories that lack purpose, and an architecture that lacks guiding constraints. This review uses 5 passes targeting the specific ways vision artifacts fail.
+
+Follows the review process defined in `review-methodology.md`.
+
+---
+
+## Summary
+
+Vision review validates that the product vision is specific enough to guide decisions, inspiring enough to align a team, and honest enough to withstand scrutiny. The 5 passes target: (1) vision clarity -- is the vision statement specific, inspiring, and actionable, (2) target audience -- are users defined by behaviors and motivations rather than demographics, (3) competitive landscape -- is the analysis honest about strengths and not just weaknesses, (4) guiding principles -- do they create real tradeoffs with X-over-Y format, and (5) anti-vision -- does it name specific traps rather than vague disclaimers.
+
+---
+
+## Deep Guidance
+
+## Pass 1: Vision Clarity
+
+### What to Check
+
+- Is the vision statement specific to THIS product, not a generic mission statement?
+- Does it inspire action, not just describe a category?
+- Is it actionable -- could a team use it to make a yes/no decision about a feature?
+- Does it avoid jargon, buzzwords, and empty superlatives ("best-in-class," "world-class," "revolutionary")?
+- Is it short enough to remember (1-3 sentences)?
+
+### Why This Matters
+
+The vision statement is the single most referenced artifact in the pipeline. It appears in PRD context, guides user story prioritization, and informs architecture trade-offs. A generic vision like "make the best project management tool" provides zero signal -- it cannot distinguish between features to build and features to skip. A specific vision like "help 2-person freelance teams track client work without learning project management" makes every downstream decision easier.
+
+### How to Check
+
+1. Read the vision statement in isolation -- does it name a specific outcome for a specific group?
+2. Try the "swap test" -- could you replace the product name with a competitor's name and have the vision still be true? If yes, it is not specific enough
+3. Try the "decision test" -- present two hypothetical features and ask whether the vision helps you choose between them. If it does not, the vision is too vague
+4. Check for buzzwords: "leverage," "synergy," "best-in-class," "end-to-end," "seamless" -- these add words without adding meaning
+5. Check length -- if the vision takes more than 30 seconds to read aloud, it is too long to internalize
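Step 4's buzzword check is trivial to mechanize. A minimal sketch, using the word list from the checklist above; the function name is invented and a hit is a prompt for human judgment, not an automatic failure:

```python
BUZZWORDS = ["leverage", "synergy", "best-in-class", "world-class",
             "end-to-end", "seamless", "revolutionary"]

def buzzword_hits(vision_statement):
    """Return the buzzwords found in a vision statement."""
    lowered = vision_statement.lower()
    return [w for w in BUZZWORDS if w in lowered]

vision = "A seamless, best-in-class platform to leverage team synergy."
print(buzzword_hits(vision))
# ['leverage', 'synergy', 'best-in-class', 'seamless']
```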
+
+### What a Finding Looks Like
+
+- P0: "Vision statement is 'To be the leading platform for enterprise collaboration.' This could describe Slack, Teams, Notion, or Confluence. It names no specific user group, no specific problem, and no specific differentiation."
+- P1: "Vision statement is specific but contains 'seamless end-to-end experience' -- this phrase adds no decision-making value. Replace with the specific experience being described."
+- P2: "Vision is 4 paragraphs long. Distill to 1-3 sentences that a team member could recite from memory."
+
+### Common Failure Modes
+
+- **Category description**: The vision describes a market category, not a product direction ("We build developer tools")
+- **Aspiration without specificity**: The vision is inspiring but cannot guide decisions ("Empower teams to do their best work")
+- **Solution masquerading as vision**: The vision describes a technology choice, not a user outcome ("AI-powered analytics platform")
+
+---
+
+## Pass 2: Target Audience
+
+### What to Check
+
+- Is the target audience defined by behaviors, motivations, and constraints -- not demographics?
+- Does the audience description create clear inclusion/exclusion criteria?
+- Are there signs of the "everyone" trap (audience so broad it provides no prioritization signal)?
+- Does the audience description explain WHY these people need this product specifically?
+
+### Why This Matters
+
+Demographics (age, location, job title) do not predict product needs. Behaviors and motivations do. "Marketing managers aged 30-45" tells you nothing about what to build. "Solo marketers who manage 5+ channels without a team and need to appear more capable than they are" tells you everything. The audience definition flows directly into PRD personas -- vague audiences produce vague personas, which produce vague user stories.
+
+### How to Check
+
+1. Check whether the audience is defined by observable behaviors ("currently uses spreadsheets to track...") versus demographics ("25-40 year old professionals")
+2. Check for motivations -- WHY does this audience need the product? What is the underlying drive?
+3. Check for constraints -- what limits this audience? Budget? Time? Technical skill? Team size?
+4. Apply the "exclusion test" -- does the audience definition clearly exclude some potential users? If not, it is too broad
+5. Check that the audience connects to the vision -- is this the audience that the vision serves?
+
+### What a Finding Looks Like
+
+- P0: "Target audience is 'businesses of all sizes.' This excludes nobody and provides no prioritization signal. The PRD cannot write meaningful personas from this."
+- P1: "Target audience mentions 'small business owners' but defines them only by company size (<50 employees), not by behaviors, pain points, or motivations."
+- P2: "Audience description is behavior-based but does not explain why existing solutions fail this group."
+
+### Common Failure Modes
+
+- **Demographic-only**: Defined by who they are, not what they do ("SMB owners aged 25-45")
+- **Too broad**: Audience includes everyone ("teams of any size in any industry")
+- **Missing motivation**: Describes the audience but not why they need THIS product
+- **No exclusion criteria**: Cannot determine who is NOT the target audience
+
+---
+
+## Pass 3: Competitive Landscape
+
+### What to Check
+
+- Does the competitive analysis honestly assess competitors' strengths, not just their weaknesses?
+- Are competitors named specifically, not referred to generically ("existing solutions")?
+- Is the differentiation based on substance (different approach, different audience, different trade-offs), not superficiality ("better UX")?
+- Does the analysis acknowledge what competitors do well that this product will NOT try to replicate?
+
+### Why This Matters
+
+A competitive landscape that only lists competitor weaknesses produces false confidence. Competitors have strengths -- users chose them for reasons. Understanding those reasons prevents building a product that is strictly worse in dimensions users care about. Differentiation based on "we'll just do it better" is not differentiation -- it is a bet that the team is more competent than established competitors with more resources.
+
+### How to Check
+
+1. For each named competitor, check that at least one genuine strength is acknowledged
+2. Check that differentiation is structural (different trade-off, different audience segment, different approach), not aspirational ("better design")
+3. Verify competitors are named specifically -- "Competitor X" or "the market" provides no signal
+4. Check whether the analysis acknowledges what the product will NOT compete on (conceding dimensions to competitors)
+5. Look for the "better at everything" anti-pattern -- if the product claims superiority in every dimension, the analysis is dishonest
+
+### What a Finding Looks Like
+
+- P0: "Competitive section lists 4 competitors but only describes their weaknesses. No competitor strengths are acknowledged. This produces a false picture of the market and prevents honest differentiation."
+- P1: "Differentiation claim is 'better user experience.' This is not structural differentiation -- every product claims this. What specific design trade-off creates a different experience?"
+- P2: "Competitors are referred to as 'existing solutions' and 'current tools' without naming them. Specific names enable specific analysis."
+
+### Common Failure Modes
+
+- **Weakness-only analysis**: Lists only what competitors do poorly, creating false confidence
+- **Aspirational differentiation**: Claims superiority without structural basis ("we'll be faster, simpler, and more powerful")
+- **Generic competitors**: References "the market" or "existing solutions" without naming specific products
+- **Missing concessions**: Does not acknowledge what the product will deliberately NOT compete on
+
+---
+
+## Pass 4: Guiding Principles
+
+### What to Check
+
+- Are principles in X-over-Y format, creating real trade-offs?
+- Does each principle rule out a specific, tempting alternative?
+- Could a reasonable person disagree with the principle (i.e., the "over Y" option is genuinely attractive)?
+- Are principles specific enough to resolve a real product decision?
+
+### Why This Matters
+
+Guiding principles that do not create trade-offs are platitudes. "We value quality" is not a principle -- nobody advocates for poor quality. "We value correctness over speed-to-market" is a principle because speed-to-market is genuinely valuable and someone could reasonably choose it. X-over-Y format forces the vision author to name what the product will sacrifice, which is the only way principles become useful for downstream decision-making.
+
+### How to Check
+
+1. For each principle, check for X-over-Y structure -- is something being chosen OVER something else?
+2. Apply the "reasonable disagreement" test -- would a smart, well-intentioned person choose Y over X? If not, the principle is a platitude
+3. Construct a hypothetical product decision and check whether the principle resolves it
+4. Check that the set of principles covers the most common trade-off dimensions for this product type (simplicity vs. power, speed vs. correctness, flexibility vs. consistency, etc.)
+5. Verify no two principles contradict each other
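The structural part of step 1 can be screened automatically before the judgment steps. A sketch, illustrative only; the "reasonable disagreement" test still requires a human, since no script can tell whether Y is genuinely attractive:

```python
def lacks_tradeoff(principle):
    """Flag principles that are not in 'X over Y' form."""
    return " over " not in principle.lower()

principles = [
    "Correctness over speed-to-market",
    "We value simplicity, quality, and user delight",
]
flagged = [p for p in principles if lacks_tradeoff(p)]
print(flagged)  # only the platitude is flagged; the X-over-Y principle passes
```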
+
+### What a Finding Looks Like
+
+- P0: "Principles include 'We value simplicity, quality, and user delight.' These are not trade-offs -- they are universally desirable attributes. No team would advocate for complexity, poor quality, or user frustration."
+- P1: "Principle 'Convention over configuration' is in X-over-Y format but does not specify what conventions or what configuration options are sacrificed. Too abstract to resolve a real decision."
+- P2: "Principles are well-formed but do not cover the speed-vs-correctness dimension, which is a common tension for this product type."
+
+### Common Failure Modes
+
+- **Platitudes**: Principles everyone agrees with ("we value quality") that rule out nothing
+- **Missing sacrifice**: X-over-Y format but Y is not genuinely attractive ("quality over bugs")
+- **Too abstract**: Principles are directionally correct but too vague to resolve specific decisions
+- **Contradictory pairs**: Two principles that cannot both be followed ("move fast" and "never ship bugs")
+
+---
+
+## Pass 5: Anti-Vision
+
+### What to Check
+
+- Does the anti-vision name specific, tempting traps -- not vague disclaimers?
+- Are the anti-vision items things the team could plausibly drift into (not absurd strawmen)?
+- Does each item explain WHY it is tempting and HOW to recognize the drift?
+- Is the anti-vision specific to THIS product, not generic warnings?
+
+### Why This Matters
+
+The anti-vision is the vision's immune system. It names the specific failure modes that are most likely given the product's domain, team, and competitive landscape. Without it, teams drift toward common traps without recognizing the drift. A good anti-vision makes the team uncomfortable because it names things they might actually do -- not things no reasonable team would do.
+
+### How to Check
+
+1. For each anti-vision item, check specificity -- does it name a concrete behavior or outcome, not a vague category?
+2. Apply the "temptation test" -- is this something the team could plausibly drift into? If the answer is "obviously not," the anti-vision item is a strawman
+3. Check whether each item explains the mechanism: why is this trap tempting, and what are the early warning signs?
+4. Verify the anti-vision items connect to the product domain -- are they specific to THIS type of product?
+5. Check that anti-vision items complement guiding principles -- if a principle says "simplicity over power," the anti-vision should name a specific way the product might become complex
+
+### What a Finding Looks Like
+
+- P0: "Anti-vision section says 'We will not build a bad product.' This is not an anti-vision -- it is a tautology. Name specific traps: 'We will not become a feature-comparison checklist tool that matches competitors feature-for-feature while losing our core simplicity advantage.'"
+- P1: "Anti-vision names 'scope creep' as a trap but does not explain which specific scope expansion is most tempting for this product or how to recognize it early."
+- P2: "Anti-vision items are specific but do not connect to the guiding principles. Each principle's 'Y' (the sacrificed value) should have a corresponding anti-vision item that names the drift toward Y."
+
+### Common Failure Modes
+
+- **Vague disclaimers**: "We won't lose focus" -- too generic to be actionable
+- **Absurd strawmen**: Names failures no team would pursue ("we won't build an insecure product")
+- **Missing mechanism**: Names the trap but not why it is tempting or how to detect drift
+- **Generic warnings**: Anti-vision items apply to any product, not THIS product specifically
+
+---
+
+## Finding Report Template
+
+```markdown
+## Vision Review Report
+
+### Pass 1: Vision Clarity
+- **P1**: Vision statement "Build the best project management tool" is a category description, not a product vision. It cannot guide feature trade-offs. Recommendation: rewrite as a specific change statement.
+
+### Pass 2: Target Audience
+- No findings
+
+### Pass 3: Competitive Landscape
+- **P2**: Competitor "Acme" is described by weaknesses only. Add at least one acknowledged strength.
+
+### Pass 4: Guiding Principles
+- **P0**: Principles are platitudes ("quality", "simplicity") without X-over-Y trade-offs. Cannot resolve downstream decisions.
+
+### Pass 5: Anti-Vision
+- **P1**: Anti-vision says "avoid scope creep" without naming which specific scope expansion is tempting.
+
+### Summary
+- P0: 1 | P1: 2 | P2: 1 | P3: 0
+- Blocks downstream: Yes (P0 in guiding principles)
+```
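The Summary tallies in a report like this can be computed from the finding bullets rather than counted by hand. A sketch, assuming findings are bullets of the form `- **Pn**: ...` as in the template; the function name is illustrative, not part of this package:

```python
import re
from collections import Counter

def tally_severities(report_markdown):
    """Count P0-P3 findings from '- **Pn**: ...' bullets."""
    counts = Counter(re.findall(r"-\s+\*\*(P[0-3])\*\*:", report_markdown))
    return {level: counts.get(level, 0) for level in ("P0", "P1", "P2", "P3")}

report = """
- **P1**: Vision statement is a category description.
- **P2**: Competitor "Acme" is described by weaknesses only.
- **P0**: Principles are platitudes.
- **P1**: Anti-vision says "avoid scope creep" without specifics.
"""
print(tally_severities(report))
# {'P0': 1, 'P1': 2, 'P2': 1, 'P3': 0} -- matches the Summary line format
```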
+
+## Severity Examples for Vision Documents
+
+### P0 (Blocks downstream phases)
+
+- Vision statement is a category description that cannot guide any decision
+- Target audience is "everyone" -- PRD cannot write meaningful personas
+- No guiding principles exist -- all downstream trade-offs are unresolved
+- Anti-vision is absent entirely
+
+### P1 (Causes significant downstream quality issues)
+
+- Vision is specific but contains unfalsifiable claims
+- Target audience is demographic-only with no behavioral definition
+- Competitive analysis lists only competitor weaknesses
+- Principles exist but are platitudes without real trade-offs
+
+### P2 (Minor issues, fix during iteration)
+
+- Vision is slightly too long to memorize
+- One competitor is described generically rather than by name
+- One principle is well-formed but could be more specific
+- Anti-vision items are specific but miss one common trap for this product type
+
+### P3 (Observations for future improvement)
+
+- Competitive landscape could include an emerging competitor
+- Anti-vision could add early warning indicators for each trap
+- Principles could be ordered by frequency of application