convoke-agents 2.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +920 -0
- package/INSTALLATION.md +230 -0
- package/LICENSE +21 -0
- package/README.md +330 -0
- package/UPDATE-GUIDE.md +220 -0
- package/_bmad/bme/_vortex/README.md +150 -0
- package/_bmad/bme/_vortex/agents/contextualization-expert.md +100 -0
- package/_bmad/bme/_vortex/agents/discovery-empathy-expert.md +117 -0
- package/_bmad/bme/_vortex/agents/hypothesis-engineer.md +117 -0
- package/_bmad/bme/_vortex/agents/lean-experiments-specialist.md +118 -0
- package/_bmad/bme/_vortex/agents/learning-decision-expert.md +117 -0
- package/_bmad/bme/_vortex/agents/production-intelligence-specialist.md +117 -0
- package/_bmad/bme/_vortex/agents/research-convergence-specialist.md +117 -0
- package/_bmad/bme/_vortex/compass-routing-reference.md +312 -0
- package/_bmad/bme/_vortex/config.yaml +46 -0
- package/_bmad/bme/_vortex/contracts/hc1-empathy-artifacts.md +152 -0
- package/_bmad/bme/_vortex/contracts/hc2-problem-definition.md +125 -0
- package/_bmad/bme/_vortex/contracts/hc3-hypothesis-contract.md +112 -0
- package/_bmad/bme/_vortex/contracts/hc4-experiment-context.md +140 -0
- package/_bmad/bme/_vortex/contracts/hc5-signal-report.md +130 -0
- package/_bmad/bme/_vortex/examples/hc2-example-problem-definition.md +85 -0
- package/_bmad/bme/_vortex/examples/hc3-example-hypothesis-contract.md +103 -0
- package/_bmad/bme/_vortex/examples/hc5-example-signal-report.md +76 -0
- package/_bmad/bme/_vortex/guides/EMMA-USER-GUIDE.md +232 -0
- package/_bmad/bme/_vortex/guides/ISLA-USER-GUIDE.md +208 -0
- package/_bmad/bme/_vortex/guides/LIAM-USER-GUIDE.md +255 -0
- package/_bmad/bme/_vortex/guides/MAX-USER-GUIDE.md +213 -0
- package/_bmad/bme/_vortex/guides/MILA-USER-GUIDE.md +235 -0
- package/_bmad/bme/_vortex/guides/NOAH-USER-GUIDE.md +258 -0
- package/_bmad/bme/_vortex/guides/WADE-USER-GUIDE.md +245 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/empathy-map.template.md +143 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/steps/step-01-define-user.md +60 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/steps/step-02-says-thinks.md +67 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/steps/step-03-does-feels.md +79 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/steps/step-04-pain-points.md +87 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/steps/step-05-gains.md +103 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/steps/step-06-synthesize.md +104 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/validate.md +117 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/workflow.md +44 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/wireframe/steps/step-01-define-requirements.md +85 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/wireframe/steps/step-02-user-flows.md +59 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/wireframe/steps/step-03-information-architecture.md +68 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/wireframe/steps/step-04-wireframe-sketch.md +97 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/wireframe/steps/step-05-components.md +128 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/wireframe/steps/step-06-synthesize.md +83 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/wireframe/wireframe.template.md +287 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/wireframe/workflow.md +44 -0
- package/_bmad/bme/_vortex/workflows/assumption-mapping/steps/step-01-setup.md +66 -0
- package/_bmad/bme/_vortex/workflows/assumption-mapping/steps/step-02-context.md +93 -0
- package/_bmad/bme/_vortex/workflows/assumption-mapping/steps/step-03-risk-mapping.md +103 -0
- package/_bmad/bme/_vortex/workflows/assumption-mapping/steps/step-04-synthesize.md +101 -0
- package/_bmad/bme/_vortex/workflows/assumption-mapping/workflow.md +49 -0
- package/_bmad/bme/_vortex/workflows/behavior-analysis/steps/step-01-setup.md +81 -0
- package/_bmad/bme/_vortex/workflows/behavior-analysis/steps/step-02-context.md +67 -0
- package/_bmad/bme/_vortex/workflows/behavior-analysis/steps/step-03-classification.md +98 -0
- package/_bmad/bme/_vortex/workflows/behavior-analysis/steps/step-04-evidence.md +100 -0
- package/_bmad/bme/_vortex/workflows/behavior-analysis/steps/step-05-synthesize.md +174 -0
- package/_bmad/bme/_vortex/workflows/behavior-analysis/workflow.md +52 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/contextualize-scope.template.md +67 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/steps/step-01-list-opportunities.md +47 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/steps/step-02-define-criteria.md +36 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/steps/step-03-evaluate-opportunities.md +30 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/steps/step-04-define-boundaries.md +32 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/steps/step-05-validate-fit.md +28 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/steps/step-06-synthesize.md +36 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/validate.md +30 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/workflow.md +59 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/empathy-map.template.md +143 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/steps/step-01-define-user.md +60 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/steps/step-02-says-thinks.md +67 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/steps/step-03-does-feels.md +79 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/steps/step-04-pain-points.md +87 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/steps/step-05-gains.md +103 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/steps/step-06-synthesize.md +107 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/validate.md +117 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/workflow.md +45 -0
- package/_bmad/bme/_vortex/workflows/experiment-design/steps/step-01-setup.md +66 -0
- package/_bmad/bme/_vortex/workflows/experiment-design/steps/step-02-context.md +77 -0
- package/_bmad/bme/_vortex/workflows/experiment-design/steps/step-03-design.md +114 -0
- package/_bmad/bme/_vortex/workflows/experiment-design/steps/step-04-synthesize.md +128 -0
- package/_bmad/bme/_vortex/workflows/experiment-design/workflow.md +51 -0
- package/_bmad/bme/_vortex/workflows/hypothesis-engineering/steps/step-01-setup.md +66 -0
- package/_bmad/bme/_vortex/workflows/hypothesis-engineering/steps/step-02-context.md +80 -0
- package/_bmad/bme/_vortex/workflows/hypothesis-engineering/steps/step-03-brainwriting.md +79 -0
- package/_bmad/bme/_vortex/workflows/hypothesis-engineering/steps/step-04-assumption-mapping.md +102 -0
- package/_bmad/bme/_vortex/workflows/hypothesis-engineering/steps/step-05-synthesize.md +130 -0
- package/_bmad/bme/_vortex/workflows/hypothesis-engineering/workflow.md +52 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/lean-experiment.template.md +29 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/steps/step-01-hypothesis.md +58 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/steps/step-02-design.md +68 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/steps/step-03-metrics.md +73 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/steps/step-04-run.md +75 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/steps/step-05-analyze.md +84 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/steps/step-06-decide.md +111 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/validate.md +30 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/workflow.md +26 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/lean-persona.template.md +163 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/steps/step-01-define-job.md +72 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/steps/step-02-current-solution.md +83 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/steps/step-03-problem-contexts.md +90 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/steps/step-04-forces-anxieties.md +98 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/steps/step-05-success-criteria.md +103 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/steps/step-06-synthesize.md +129 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/validate.md +30 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/workflow.md +50 -0
- package/_bmad/bme/_vortex/workflows/learning-card/learning-card.template.md +179 -0
- package/_bmad/bme/_vortex/workflows/learning-card/steps/step-01-experiment-context.md +100 -0
- package/_bmad/bme/_vortex/workflows/learning-card/steps/step-02-raw-results.md +125 -0
- package/_bmad/bme/_vortex/workflows/learning-card/steps/step-03-analysis.md +125 -0
- package/_bmad/bme/_vortex/workflows/learning-card/steps/step-04-validated-learning.md +139 -0
- package/_bmad/bme/_vortex/workflows/learning-card/steps/step-05-implications.md +134 -0
- package/_bmad/bme/_vortex/workflows/learning-card/steps/step-06-synthesize.md +121 -0
- package/_bmad/bme/_vortex/workflows/learning-card/validate.md +134 -0
- package/_bmad/bme/_vortex/workflows/learning-card/workflow.md +51 -0
- package/_bmad/bme/_vortex/workflows/mvp/mvp.template.md +40 -0
- package/_bmad/bme/_vortex/workflows/mvp/steps/step-01-riskiest-assumption.md +17 -0
- package/_bmad/bme/_vortex/workflows/mvp/steps/step-02-success-criteria.md +13 -0
- package/_bmad/bme/_vortex/workflows/mvp/steps/step-03-smallest-test.md +13 -0
- package/_bmad/bme/_vortex/workflows/mvp/steps/step-04-scope-features.md +13 -0
- package/_bmad/bme/_vortex/workflows/mvp/steps/step-05-build-measure-learn.md +13 -0
- package/_bmad/bme/_vortex/workflows/mvp/steps/step-06-synthesize.md +28 -0
- package/_bmad/bme/_vortex/workflows/mvp/validate.md +30 -0
- package/_bmad/bme/_vortex/workflows/mvp/workflow.md +36 -0
- package/_bmad/bme/_vortex/workflows/pattern-mapping/steps/step-01-setup.md +102 -0
- package/_bmad/bme/_vortex/workflows/pattern-mapping/steps/step-02-context.md +81 -0
- package/_bmad/bme/_vortex/workflows/pattern-mapping/steps/step-03-pattern-identification.md +88 -0
- package/_bmad/bme/_vortex/workflows/pattern-mapping/steps/step-04-theme-clustering.md +100 -0
- package/_bmad/bme/_vortex/workflows/pattern-mapping/steps/step-05-synthesize.md +135 -0
- package/_bmad/bme/_vortex/workflows/pattern-mapping/workflow.md +58 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/pivot-patch-persevere.template.md +201 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/steps/step-01-evidence-review.md +125 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/steps/step-02-hypothesis-assessment.md +132 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/steps/step-03-option-analysis.md +167 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/steps/step-04-stakeholder-input.md +141 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/steps/step-05-decision.md +161 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/steps/step-06-action-plan.md +188 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/validate.md +159 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/workflow.md +51 -0
- package/_bmad/bme/_vortex/workflows/pivot-resynthesis/steps/step-01-setup.md +97 -0
- package/_bmad/bme/_vortex/workflows/pivot-resynthesis/steps/step-02-context.md +86 -0
- package/_bmad/bme/_vortex/workflows/pivot-resynthesis/steps/step-03-jtbd-reframing.md +88 -0
- package/_bmad/bme/_vortex/workflows/pivot-resynthesis/steps/step-04-pains-gains-revision.md +76 -0
- package/_bmad/bme/_vortex/workflows/pivot-resynthesis/steps/step-05-synthesize.md +158 -0
- package/_bmad/bme/_vortex/workflows/pivot-resynthesis/workflow.md +52 -0
- package/_bmad/bme/_vortex/workflows/product-vision/product-vision.template.md +147 -0
- package/_bmad/bme/_vortex/workflows/product-vision/steps/step-01-define-problem.md +89 -0
- package/_bmad/bme/_vortex/workflows/product-vision/steps/step-02-target-market.md +91 -0
- package/_bmad/bme/_vortex/workflows/product-vision/steps/step-03-unique-approach.md +87 -0
- package/_bmad/bme/_vortex/workflows/product-vision/steps/step-04-future-state.md +100 -0
- package/_bmad/bme/_vortex/workflows/product-vision/steps/step-05-principles.md +92 -0
- package/_bmad/bme/_vortex/workflows/product-vision/steps/step-06-synthesize.md +170 -0
- package/_bmad/bme/_vortex/workflows/product-vision/validate.md +30 -0
- package/_bmad/bme/_vortex/workflows/product-vision/workflow.md +55 -0
- package/_bmad/bme/_vortex/workflows/production-monitoring/steps/step-01-setup.md +84 -0
- package/_bmad/bme/_vortex/workflows/production-monitoring/steps/step-02-context.md +66 -0
- package/_bmad/bme/_vortex/workflows/production-monitoring/steps/step-03-monitoring.md +74 -0
- package/_bmad/bme/_vortex/workflows/production-monitoring/steps/step-04-prioritization.md +97 -0
- package/_bmad/bme/_vortex/workflows/production-monitoring/steps/step-05-synthesize.md +183 -0
- package/_bmad/bme/_vortex/workflows/production-monitoring/workflow.md +52 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/proof-of-concept.template.md +25 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/steps/step-01-risk.md +79 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/steps/step-02-scope.md +105 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/steps/step-03-build.md +92 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/steps/step-04-test.md +103 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/steps/step-05-evaluate.md +114 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/steps/step-06-document.md +125 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/validate.md +30 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/workflow.md +26 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/proof-of-value.template.md +29 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/steps/step-01-value-hypothesis.md +75 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/steps/step-02-validation-design.md +94 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/steps/step-03-willingness.md +96 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/steps/step-04-test.md +107 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/steps/step-05-analyze.md +116 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/steps/step-06-document.md +147 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/validate.md +30 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/workflow.md +26 -0
- package/_bmad/bme/_vortex/workflows/research-convergence/steps/step-01-setup.md +69 -0
- package/_bmad/bme/_vortex/workflows/research-convergence/steps/step-02-context.md +70 -0
- package/_bmad/bme/_vortex/workflows/research-convergence/steps/step-03-jtbd-framing.md +81 -0
- package/_bmad/bme/_vortex/workflows/research-convergence/steps/step-04-pains-gains.md +77 -0
- package/_bmad/bme/_vortex/workflows/research-convergence/steps/step-05-synthesize.md +147 -0
- package/_bmad/bme/_vortex/workflows/research-convergence/workflow.md +50 -0
- package/_bmad/bme/_vortex/workflows/signal-interpretation/steps/step-01-setup.md +68 -0
- package/_bmad/bme/_vortex/workflows/signal-interpretation/steps/step-02-context.md +67 -0
- package/_bmad/bme/_vortex/workflows/signal-interpretation/steps/step-03-signal-analysis.md +85 -0
- package/_bmad/bme/_vortex/workflows/signal-interpretation/steps/step-04-anomaly-detection.md +93 -0
- package/_bmad/bme/_vortex/workflows/signal-interpretation/steps/step-05-synthesize.md +163 -0
- package/_bmad/bme/_vortex/workflows/signal-interpretation/workflow.md +52 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/steps/step-01-discovery-scope.md +77 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/steps/step-02-research-methods.md +152 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/steps/step-03-research-plan.md +159 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/steps/step-04-execute.md +169 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/steps/step-05-organize-data.md +149 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/steps/step-06-synthesize.md +159 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/user-discovery.template.md +231 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/validate.md +153 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/workflow.md +45 -0
- package/_bmad/bme/_vortex/workflows/user-interview/steps/step-01-research-goals.md +100 -0
- package/_bmad/bme/_vortex/workflows/user-interview/steps/step-02-interview-script.md +123 -0
- package/_bmad/bme/_vortex/workflows/user-interview/steps/step-03-recruitment.md +144 -0
- package/_bmad/bme/_vortex/workflows/user-interview/steps/step-04-conduct.md +154 -0
- package/_bmad/bme/_vortex/workflows/user-interview/steps/step-05-findings.md +163 -0
- package/_bmad/bme/_vortex/workflows/user-interview/steps/step-06-synthesize.md +171 -0
- package/_bmad/bme/_vortex/workflows/user-interview/user-interview.template.md +250 -0
- package/_bmad/bme/_vortex/workflows/user-interview/validate.md +142 -0
- package/_bmad/bme/_vortex/workflows/user-interview/workflow.md +51 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/steps/step-01-current-state.md +56 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/steps/step-02-evidence-inventory.md +70 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/steps/step-03-gap-analysis.md +76 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/steps/step-04-stream-evaluation.md +57 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/steps/step-05-recommendation.md +65 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/steps/step-06-navigation-plan.md +72 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/validate.md +75 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/vortex-navigation.template.md +105 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/workflow.md +54 -0
- package/index.js +56 -0
- package/package.json +77 -0
- package/scripts/README.md +226 -0
- package/scripts/convoke-doctor.js +322 -0
- package/scripts/docs-audit.js +584 -0
- package/scripts/install-all-agents.js +9 -0
- package/scripts/install-vortex-agents.js +208 -0
- package/scripts/postinstall.js +104 -0
- package/scripts/update/convoke-migrate.js +169 -0
- package/scripts/update/convoke-update.js +272 -0
- package/scripts/update/convoke-version.js +134 -0
- package/scripts/update/lib/agent-registry.js +144 -0
- package/scripts/update/lib/backup-manager.js +243 -0
- package/scripts/update/lib/config-merger.js +242 -0
- package/scripts/update/lib/migration-runner.js +367 -0
- package/scripts/update/lib/refresh-installation.js +171 -0
- package/scripts/update/lib/utils.js +96 -0
- package/scripts/update/lib/validator.js +360 -0
- package/scripts/update/lib/version-detector.js +241 -0
- package/scripts/update/migrations/1.0.x-to-1.3.0.js +128 -0
- package/scripts/update/migrations/1.1.x-to-1.3.0.js +29 -0
- package/scripts/update/migrations/1.2.x-to-1.3.0.js +29 -0
- package/scripts/update/migrations/1.3.x-to-1.5.0.js +29 -0
- package/scripts/update/migrations/1.4.x-to-1.5.0.js +29 -0
- package/scripts/update/migrations/1.5.x-to-1.6.0.js +95 -0
- package/scripts/update/migrations/1.6.x-to-1.7.0.js +29 -0
- package/scripts/update/migrations/1.7.x-to-2.0.0.js +31 -0
- package/scripts/update/migrations/registry.js +194 -0
@@ -0,0 +1,103 @@
---
step: 3
workflow: assumption-mapping
title: Classification & Risk Mapping
---

# Step 3: Classification & Risk Mapping

You've surfaced every assumption hiding in your hypotheses. Now let's classify each one by how lethal it is and how little we actually know — and build a risk map that tells you exactly what to validate first.

## Why This Matters

Not all assumptions are created equal. Some are annoying if wrong — you adjust and move on. Others are lethal — the entire hypothesis collapses. The riskiest assumption gets tested first, not the easiest one. This step transforms your assumption inventory into an actionable risk map with a clear testing order. No more guessing about what to validate next.

## Your Task

### 1. Classify Each Assumption

For every assumption in your inventory from Step 2, assess two dimensions:

**Lethality** — If this assumption is wrong, what happens to the hypothesis?

| Level | Definition | Signal |
|-------|-----------|--------|
| **High** | Kills the hypothesis entirely. The whole idea collapses. | "If this is wrong, nothing else matters." |
| **Medium** | Requires a significant pivot. The direction changes but the core insight survives. | "If this is wrong, we'd need to rethink the approach." |
| **Low** | Minor adjustment needed. The core idea survives. | "If this is wrong, we tweak and continue." |

**Uncertainty** — How much evidence do we actually have?

| Level | Definition | Signal |
|-------|-----------|--------|
| **High** | No evidence. We're guessing. | "We believe this but have nothing to back it up." |
| **Medium** | Some evidence, but indirect or incomplete. | "We have signals, but they're not conclusive." |
| **Low** | Strong evidence from multiple sources. | "Multiple data points confirm this." |

**Challenge yourself:** If a classification feels comfortable, it's probably wrong. High-lethality assumptions often get downgraded to medium because the team doesn't want to face the risk. What if it's actually high?

### 2. Build the Assumption Risk Map

| # | Assumption | Hypothesis | Lethality | Uncertainty | Priority | Validation Status |
|---|-----------|-----------|-----------|-------------|----------|-------------------|
| 1 | *Statement* | H1 | High/Med/Low | High/Med/Low | *Derived* | `Unvalidated` |
| 2 | *Statement* | H1, H2 | High/Med/Low | High/Med/Low | *Derived* | `Unvalidated` |

**Priority derivation (lethality × uncertainty):**

| | High Uncertainty | Medium Uncertainty | Low Uncertainty |
|--|-----------------|-------------------|-----------------|
| **High Lethality** | **Test First** | **Test First** | Monitor |
| **Medium Lethality** | **Test Soon** | Test Soon | Monitor |
| **Low Lethality** | Test Soon | Monitor | Monitor |

- **Test First:** High lethality + High/Medium uncertainty = validate before any experiment. These assumptions could destroy your hypothesis and you don't have evidence either way.
- **Test Soon:** Medium risk = validate early in the experiment cycle. Important but not immediately lethal.
- **Monitor:** Low risk = track but don't delay for validation. The hypothesis survives even if these are wrong.
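
Since the priority derivation is a pure lookup over the matrix, it can be sketched mechanically. This is an illustrative sketch only, not part of the package's code; the constant and function names are hypothetical.

```javascript
// Hypothetical sketch of the lethality × uncertainty matrix as a lookup table.
// Rows are lethality levels, columns are uncertainty levels.
const PRIORITY_MATRIX = {
  High:   { High: 'Test First', Medium: 'Test First', Low: 'Monitor' },
  Medium: { High: 'Test Soon',  Medium: 'Test Soon',  Low: 'Monitor' },
  Low:    { High: 'Test Soon',  Medium: 'Monitor',    Low: 'Monitor' },
};

function derivePriority(lethality, uncertainty) {
  const row = PRIORITY_MATRIX[lethality];
  if (!row || !row[uncertainty]) {
    // Reject anything outside the High/Medium/Low vocabulary.
    throw new Error(`Unknown classification: ${lethality} × ${uncertainty}`);
  }
  return row[uncertainty];
}
```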

### 3. Produce the Recommended Testing Order

Based on the risk map, sequence your assumptions for validation:

| Priority | Assumption | Hypothesis | Lethality × Uncertainty | Suggested Method | Minimum Evidence |
|----------|-----------|-----------|------------------------|-----------------|-----------------|
| 1 | *Riskiest — test this first* | H1 | High × High | *How to test it* | *What would validate or invalidate* |
| 2 | *Next riskiest* | H2 | High × Medium | | |
| 3 | *And so on* | H1, H3 | Medium × High | | |

**Tiebreaker rules:**
1. If two assumptions have equal priority, test the one that affects the **most hypotheses** first — if it's wrong, it invalidates more.
2. If still tied, test the one with the **highest lethality** first — a lethal assumption is worse than an uncertain one.
3. If still tied, test the one that's **cheapest to validate** — get quick signal before investing in expensive validation.
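
The priority tiers plus the three tiebreakers define a total ordering, which can be sketched as a sort comparator. The field names here (`hypotheses`, `validationCost`) are assumed for illustration and are not the package's actual data model.

```javascript
// Hypothetical comparator implementing the testing-order rules above.
const TIER = { 'Test First': 0, 'Test Soon': 1, 'Monitor': 2 };
const LETHALITY_RANK = { High: 0, Medium: 1, Low: 2 };

// Each assumption: { priority, hypotheses: [...], lethality, validationCost }
function byTestingOrder(a, b) {
  // 1. Priority tier from the lethality × uncertainty matrix.
  if (TIER[a.priority] !== TIER[b.priority]) return TIER[a.priority] - TIER[b.priority];
  // 2. Tiebreaker: most hypotheses affected first.
  if (a.hypotheses.length !== b.hypotheses.length) return b.hypotheses.length - a.hypotheses.length;
  // 3. Tiebreaker: highest lethality first.
  if (LETHALITY_RANK[a.lethality] !== LETHALITY_RANK[b.lethality]) {
    return LETHALITY_RANK[a.lethality] - LETHALITY_RANK[b.lethality];
  }
  // 4. Tiebreaker: cheapest to validate first.
  return a.validationCost - b.validationCost;
}
```

Sorting the risk-map rows with this comparator yields the recommended testing order directly.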

### 4. Flag Concerns for Routing

If any assumptions feel too risky to test through experiments alone — too uncertain, too lethal, or requiring user research before you can even design an experiment — flag them:

| Concern | Assumption # | Impact | Recommended Action |
|---------|-------------|--------|-------------------|
| *Unvalidated assumption or knowledge gap* | 1 | *How it affects hypothesis quality* | *e.g., "Route to Isla for targeted user discovery"* |

These flags may trigger routing to Isla in the Compass step — sending specific assumptions back for validation before proceeding to Wade's experiments.

**Guidance:** Don't flag everything. Flag the assumptions where you genuinely don't know enough to design an experiment. If you can design an experiment to test it, it belongs in the testing order, not the flagged concerns.

---

## Your Turn

Classify every assumption by lethality × uncertainty, build the risk map, produce the testing order, and flag any concerns. Share your analysis and I'll help you challenge any classifications that feel too comfortable.

---

**[a]** Advanced Elicitation — Deep dive into assumption classification with guided questioning
**[p]** Party Mode — Bring in other Vortex agents to challenge your risk assessments
**[c]** Continue — Proceed to synthesis and routing

---

## Next Step

When your assumption risk map and testing order are complete, I'll load:

{project-root}/_bmad/bme/_vortex/workflows/assumption-mapping/steps/step-04-synthesize.md
@@ -0,0 +1,101 @@
---
step: 4
workflow: assumption-mapping
title: Synthesize & Route
---

# Step 4: Synthesize & Route

You've extracted every assumption, classified each by lethality and uncertainty, and built a risk map with a testing order. Now let's validate the analysis is complete and figure out what happens next.

## Why This Matters

A risk map is only useful if it's honest. Before you act on it, we need to verify that every assumption was classified fairly, the testing order makes strategic sense, and the flagged concerns are genuine — not just the uncomfortable ones you want to avoid. If you can't prove it wrong, it's not a hypothesis — and if your risk map doesn't challenge you, it's not a risk map.

## Your Task

### 1. Review Your Assumption Risk Map

Verify the risk classifications from Step 3:

- [ ] Every assumption from the inventory has a lethality and uncertainty score
- [ ] No assumptions were conveniently classified as low-risk to avoid confronting them
- [ ] Cross-hypothesis assumptions (affecting multiple hypotheses) are flagged as higher priority
- [ ] Priorities are correctly derived from the lethality × uncertainty matrix
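
The first and last checklist items are mechanical enough to sketch as an automated audit. This is a hypothetical helper, not part of the package; the row shape is assumed for illustration.

```javascript
// Hypothetical sketch: mechanical completeness checks over risk-map rows.
// Assumed row shape: { statement, lethality, uncertainty, priority }.
const LEVELS = ['High', 'Medium', 'Low'];

function auditRiskMap(rows) {
  const issues = [];
  rows.forEach((row, i) => {
    if (!LEVELS.includes(row.lethality)) issues.push(`row ${i + 1}: missing or invalid lethality`);
    if (!LEVELS.includes(row.uncertainty)) issues.push(`row ${i + 1}: missing or invalid uncertainty`);
    if (!row.priority) issues.push(`row ${i + 1}: priority not derived`);
  });
  return issues; // empty array means the map passes the mechanical checks
}
```

The judgment calls (comfortable classifications, cross-hypothesis weighting) still need a human review; this only catches gaps.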

### 2. Review Your Testing Order

Verify the testing sequence makes strategic sense:

- [ ] Testing order prioritizes by lethality × uncertainty, not by convenience or cost
- [ ] Tiebreakers were applied correctly (most hypotheses affected → highest lethality → cheapest to validate)
- [ ] Each assumption has a suggested method and minimum evidence threshold
- [ ] The order would survive scrutiny from a skeptic — is this really the right sequence?

### 3. Review Flagged Concerns

If you flagged concerns for routing to Isla:

- [ ] Flagged concerns are genuinely unresolvable through experiments alone
- [ ] Each flag has a specific research question, not a vague "we need more info"
- [ ] The recommended action is actionable — Isla would know exactly what to investigate

### 4. Final Challenge

Before we route, stress-test the whole analysis:

**Completeness Check:**
- [ ] Did we surface unstated assumptions, not just the obvious ones?
- [ ] Did we challenge comfortable classifications?
- [ ] Would a different team arrive at the same risk map given the same hypotheses?

**Honesty Check:**
- [ ] Is the riskiest assumption truly the most lethal, or the most visible?
- [ ] Are the "Monitor" assumptions genuinely low-risk, or are we avoiding them?
- [ ] Does the testing order reflect strategic priority, not emotional comfort?

---

## Your Turn

Review the assumption risk map, testing order, and flagged concerns. Confirm when you're satisfied that the analysis is honest and complete.

---

**[a]** Advanced Elicitation — Deep dive into risk map validation with guided questioning
**[p]** Party Mode — Bring in other Vortex agents to challenge your risk assessments
**[c]** Continue — Proceed to routing

---

## Vortex Compass

Based on what you just completed, here are your evidence-driven options:

| If you learned... | Consider next... | Agent | Why |
|---|---|---|---|
| High-risk assumptions need validation before any experiment | user-discovery | Isla 🔍 | Unvalidated lethal assumptions need discovery research first |
| Assumptions are acceptable — riskiest are testable | lean-experiment | Wade 🧪 | Assumptions acceptable, proceed to test (HC3) |
| Hypotheses need refinement based on what the risk map revealed | hypothesis-engineering | Liam 💡 | Refine hypotheses based on risk map |

> **Note:** These are evidence-based recommendations. You can navigate to any Vortex agent
> at any time based on your judgment.

**Or run Max's [VN] Vortex Navigation** for a full gap analysis across all streams.

### ⚠️ Insufficient Evidence for Routing

If the evidence gathered so far doesn't clearly point to a single next step:

| To route to... | You need... |
|----------------|-------------|
| Isla 🔍 | Specific high-risk assumption identified with clear research question |
| Wade 🧪 | Risk map complete with acceptable risk profile and testing order |
| Liam 💡 | Clear signal that hypothesis contracts need structural revision |

**Workflow-specific signals:**
- All assumptions scored low-risk → may not need this workflow; proceed to Wade
- Cannot classify assumptions due to vague hypotheses → revisit **Liam's hypothesis-engineering** for sharper contracts
- Multiple lethal assumptions with no clear priority → consider routing to **Isla** for targeted discovery

**Recommended:** Revisit earlier steps to strengthen your risk map, or run **Max's [VN] Vortex Navigation** for a full gap analysis.
@@ -0,0 +1,49 @@
---
workflow: assumption-mapping
type: step-file
description: Deep-dive assumption analysis across hypothesis contracts — classify by lethality and uncertainty, prioritize testing order, and surface hidden risks
author: Liam (hypothesis-engineer)
version: 1.6.0
---

# Assumption Mapping Workflow

This workflow guides you through a deep analysis of the assumptions embedded in your hypothesis contracts — surfacing hidden risks, classifying each assumption by lethality and uncertainty, and producing a prioritized testing order so you validate the right things first.

## What is Assumption Mapping?

Every hypothesis is a bet — and every bet hides assumptions. Assumption mapping is the discipline of dragging those assumptions into the open, classifying how lethal they are, and figuring out which ones to test first.

What if your riskiest assumption isn't the one you think it is? Most teams test the comfortable assumptions — the ones they can validate cheaply — and ignore the lethal ones until it's too late. This workflow forces you to confront the assumptions that could kill your hypotheses before you invest in experiments.

If you can't prove it wrong, it's not a hypothesis. And if you haven't mapped what could prove it wrong, you're not ready to test it.

## Workflow Structure

**Step-file architecture:**
- Just-in-time loading (each step loads only when needed)
- Sequential enforcement (must complete step N before step N+1)
- State tracking in frontmatter (progress preserved)
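As a rough illustration, the sequential-enforcement rule can be sketched in a few lines (the function and field names here, such as `completed_steps`, are hypothetical — the package's actual frontmatter state schema may differ):

```python
# Hypothetical sketch of step-file sequential enforcement.
# Names (may_load, completed_steps) are illustrative assumptions.

def may_load(requested: int, completed_steps: list[int]) -> bool:
    """A step may load only when every earlier step is complete."""
    return all(n in completed_steps for n in range(1, requested))

# Step 3 stays blocked until steps 1 and 2 are recorded in state.
blocked = may_load(3, [1])      # step 2 missing
allowed = may_load(3, [1, 2])   # prior steps complete
```

The same check, run before each "Load step" instruction, is what preserves progress across sessions.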

## Steps Overview

1. **Setup & Input Validation** - Validate your hypothesis contracts (HC3 artifact or equivalent input)
2. **Assumption Inventory & Extraction** - Extract all assumptions — stated and unstated — from every hypothesis contract
3. **Classification & Risk Mapping** - Classify by lethality × uncertainty, build the risk map, produce testing order
4. **Synthesize & Route** - Review the risk map, validate completeness, and route via Compass

## Output

**Working Document:** Enriched assumption risk map with prioritized testing order. This workflow deepens your assumption analysis — the output informs your next move (back to hypothesis refinement, forward to experiments, or to Isla for discovery).

**Template:** None (assumption risk map is produced inline during Steps 3-4)

**Consumer:** Wade (lean-experiment) consumes the testing order when assumptions are acceptable. Isla (user-discovery) investigates high-risk assumptions that need validation. Liam (hypothesis-engineering) refines hypotheses when the risk map reveals structural weaknesses.

---

## INITIALIZATION

Load config from {project-root}/_bmad/bme/_vortex/config.yaml

Load step: {project-root}/_bmad/bme/_vortex/workflows/assumption-mapping/steps/step-01-setup.md
@@ -0,0 +1,81 @@
---
step: 1
workflow: behavior-analysis
title: Setup & Input Validation
---

# Step 1: Setup & Input Validation

Before we analyze any production behavior, we need the experiment that established the baseline. Behavioral patterns reveal intent that surveys miss — but only when measured against what was predicted and validated. Without that baseline, observed behavior is just noise.

## Why This Matters

Production behavior viewed in isolation tells you what users are doing. Production behavior viewed against experiment baselines tells you what it means. A segment abandoning a feature could be alarming — or it could be exactly the variance the experiment predicted. The classification depends entirely on the experiment context: what was tested, what was confirmed, and what behavior was expected in production. Without those baselines, you cannot distinguish variance from regression from discovery.

## Your Task

### 1. What Experiment Context Do You Have?

Noah expects experiment context — ideally produced by Wade's experimentation workflow as an HC4-compliant artifact:
- **HC4 Experiment Context** (from Wade's `lean-experiment` workflow)
- **HC4 Experiment Context** (from Wade's `proof-of-concept`, `proof-of-value`, or `mvp` workflows)

You can also bring **any well-formed experiment summary** — Noah accepts input from outside the Vortex pattern. It doesn't have to be HC4-compliant, but having structured experiment results with explicit success criteria and confirmed/rejected hypotheses makes behavior analysis dramatically more precise.

### 2. Provide Your Input

Please provide the file path or describe the experiment context and the behavior you want to analyze. For example:
- `_bmad-output/vortex-artifacts/hc4-experiment-context-2026-02-25.md`
- Or: "I have experiment results and I'm seeing unexpected user behavior in production"

### 3. Input Validation

I'll check your artifact against the HC4 schema to assess readiness:

**HC4 Frontmatter Check:**
- `contract: HC4`
- `type: artifact`
- `source_agent` (who produced it)
- `source_workflow` (which workflow)
- `target_agents: [noah]`
- `input_artifacts` (upstream references)
- `created` (date)
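A completeness check over those keys could look roughly like this (a sketch only — it checks key presence, not value validity, and the helper name is invented):

```python
# Sketch of the HC4 frontmatter completeness check listed above.
# Required keys are taken from the checklist; the helper is hypothetical.
REQUIRED_HC4_KEYS = {
    "contract", "type", "source_agent", "source_workflow",
    "target_agents", "input_artifacts", "created",
}

def missing_hc4_keys(frontmatter: dict) -> set[str]:
    """Return which required HC4 frontmatter keys are absent."""
    return REQUIRED_HC4_KEYS - frontmatter.keys()

gaps = missing_hc4_keys({
    "contract": "HC4",
    "type": "artifact",
    "source_agent": "wade",
    "source_workflow": "lean-experiment",
    "target_agents": ["noah"],
})
```

Here `gaps` reports the artifact is missing `input_artifacts` and `created` — exactly the kind of non-conformance the next paragraph addresses.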

**HC4 Body Section Check:**
- Experiment Summary (Name, Description, Type, Duration, Graduation Status)
- Hypothesis Tested (Statement, Riskiest Assumption, Expected Outcome, Target Behavior Change)
- Experiment Method (Method Type, Sample Size, Planned Duration)
- Pre-Defined Success Criteria (Metric, Target Threshold, Actual Result, Met?)
- Additional Results (optional — Quantitative Metrics, Qualitative Results)
- Confirmed/Rejected Hypotheses (Status, Assumption Status, Core Learning)
- Strategic Context (Vortex Stream, Assumption Tested, Decision It Informs, Implications)
- Production Readiness (Metrics to Monitor, Expected Production Behavior, Signal Thresholds)

**If your input is non-conforming:** That's fine — we don't reject experiment context. I'll guide you to identify which elements are present and which gaps we need to work around during behavior analysis. The more complete the experiment context, the sharper the baseline comparison. But even partial context is better than none — we'll put what we're seeing in context with whatever you can provide.

### 4. Describe the Behavior You're Observing

While I validate your experiment context, describe the production behavior that prompted this analysis:

| Field | Your Observation |
|-------|-----------------|
| **What behavior are you seeing?** | Describe the specific user behavior or pattern you've noticed |
| **How does it differ from what you expected?** | What did you expect users to do vs. what they're actually doing? |
| **When did you first notice it?** | Approximate time frame |
| **Which users or segments?** | Who is exhibiting this behavior? |

This gives us the raw observation that we'll compare against experiment baselines in Step 2.

> For the full HC4 schema reference, see `{project-root}/_bmad/bme/_vortex/contracts/hc4-experiment-context.md`

---

## Your Turn

Please provide your experiment context and describe the behavior you're observing. I'll validate the experiment input and we'll proceed to establishing baselines for comparison.

## Next Step

When your experiment context is provided and validated, I'll load:

{project-root}/_bmad/bme/_vortex/workflows/behavior-analysis/steps/step-02-context.md
@@ -0,0 +1,67 @@
---
step: 2
workflow: behavior-analysis
title: Experiment Baselines & Behavior Observation
---

# Step 2: Experiment Baselines & Behavior Observation

Now that we have validated experiment context, let's extract the baselines that define what "expected behavior" looks like — and document the observed behavior we'll classify against those baselines.

## Why This Matters

Behavior only becomes meaningful when compared to what was expected. The experiment established baselines: success criteria were met (or not), specific metrics were validated, and production behavior was predicted. Those baselines are the ruler against which every observed behavior is measured. Without explicit baselines, classification is guesswork. With them, the signal indicates whether production is confirming the experiment, diverging from it, or revealing something the experiment never anticipated.

## Your Task

### 1. Extract Experiment Baselines

From the HC4 artifact (or equivalent), extract the validated baselines that behavior will be compared against:

| Field | Your Baseline |
|-------|-------------|
| **Validated Success Metrics** | Which metrics were confirmed in the experiment? What were the actual values? |
| **Expected Production Behavior** | What behavior did the experiment predict for production? (from HC4 Section 8) |
| **Signal Thresholds** | What thresholds were defined for acceptable vs. concerning behavior? (from HC4 Section 8) |
| **Target Behavior Change** | What specific user behavior change was the experiment designed to produce? (from HC4 Section 2) |
| **Confirmed Hypothesis Elements** | Which parts of the hypothesis were confirmed? These define what "expected" means. |

These baselines form the comparison frame for Step 3's classification. Every observed behavior will be measured against these specific, experiment-validated benchmarks.

### 2. Document the Behavior Observation

Now let's formalize the behavior observation from Step 1 into a structured description:

| Field | Your Observation |
|-------|-----------------|
| **Behavior Summary** | One-sentence factual description of the production behavior being analyzed |
| **Observation Period** | When the behavior was observed (date range) |
| **Affected Users/Segments** | Which users, segments, or features exhibit this behavior |
| **Detection Method** | How the behavior came to attention (monitoring, user report, manual analysis, routine review) |
| **Behavioral Metrics** | Specific metrics that capture this behavior (e.g., adoption rate, usage frequency, task completion) |
| **Comparison to Baseline** | Initial assessment — how does this behavior compare to the experiment baselines extracted above? |

### 3. Map the Vortex History

Production behavior doesn't exist in isolation within the Vortex journey. Let's connect to the broader context:

| Field | Your Input |
|-------|-----------|
| **Problem Definition** | Reference to the HC2 problem definition that started this Vortex journey (if available) |
| **Hypothesis Origin** | Reference to the HC3 hypothesis contract that this experiment tested (if available) |
| **Previous Signals** | References to any prior HC5 signal reports for this experiment or feature (if any) |
| **Related Experiments** | Other experiments that may be influencing the same production area |

Not all of these will be available — that's fine. Each connection adds depth to your behavior analysis. Even without formal Vortex artifacts, understanding the broader experiment history enriches the classification.

---

## Your Turn

Extract your experiment baselines, document the behavior observation, and map available Vortex history connections. The more precise the baselines, the more accurate the classification in the next step.

## Next Step

When your experiment baselines are extracted and behavior is documented, I'll load:

{project-root}/_bmad/bme/_vortex/workflows/behavior-analysis/steps/step-03-classification.md
@@ -0,0 +1,98 @@
---
step: 3
workflow: behavior-analysis
title: Behavior Pattern Classification
---

# Step 3: Behavior Pattern Classification

We have the experiment baselines and the observed behavior. Now we classify what the behavior means. This is the core of behavior analysis — converting raw observation into categorized intelligence that tells the decision-maker whether production is confirming, regressing, or revealing.

## Why This Matters

Classification converts raw observation into actionable intelligence. Without classification, a behavior pattern is just a data point. With classification, it becomes a signal with clear routing implications: expected variance means the experiment model is holding, regression means something validated is degrading, and novel behavior means users are telling you something the experiment didn't ask. Each classification carries different weight and routes to a different next step.

## Your Task

### 1. Behavior Pattern Analysis

Compare the observed behavior to your experiment baselines. Analyze across multiple dimensions:

| Dimension | Your Analysis |
|-----------|-------------|
| **Metric Comparison** | How do the behavioral metrics compare to experiment-validated thresholds? Within tolerance or outside? |
| **Segment Behavior** | Are all user segments behaving consistently, or are specific segments diverging? |
| **Usage Patterns** | Do usage patterns match what the experiment predicted? Frequency, duration, feature paths? |
| **Timing** | Is the behavior consistent over time, or does it vary by time of day, week, or stage of adoption? |
| **Edge Cases** | Are edge cases or boundary conditions producing unexpected behavior? |
| **Feature Interaction** | Are users interacting with the feature in ways the experiment didn't measure? |

### 2. Classify Each Behavior Pattern

For each distinct behavior pattern observed, classify it into one of three categories:

#### Category 1: Expected Variance

Behavior falls within experiment-predicted tolerance. The experiment model is holding.

| Field | Your Classification |
|-------|-------------------|
| **Behavior** | What specific behavior falls within expected range |
| **Baseline Reference** | Which experiment baseline confirms this is expected |
| **Tolerance Range** | What variance was predicted? Where does the observed behavior fall? |
| **Confidence** | How certain is this classification? |

#### Category 2: Regression

Behavior diverges negatively from validated experiment performance. Something that was working is degrading.

| Field | Your Classification |
|-------|-------------------|
| **Behavior** | What specific behavior is degrading from validated performance |
| **Baseline Reference** | Which validated metric is being underperformed |
| **Deviation** | How far has behavior moved from the validated baseline? Quantify. |
| **Trajectory** | Is the regression accelerating, decelerating, or stable? |
| **Possible Factors** | What observable factors might be contributing? (factual only, not speculative strategy) |

#### Category 3: Novel Behavior

Behavior not covered by the original experiment hypothesis. Users are doing something the experiment didn't predict.

| Field | Your Classification |
|-------|-------------------|
| **Behavior** | What behavior was observed that the experiment didn't anticipate |
| **Why It's Novel** | How does this fall outside the experiment's prediction model? |
| **Scope** | How widespread is this behavior? Which users, how frequently? |
| **Behavioral Signal** | What might this behavior indicate about user intent? (observational, not prescriptive) |
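For a single metric, the three-way split can be sketched as follows (a hypothetical illustration — the one-sided tolerance band and the `predicted` flag are simplifying assumptions; real classification weighs all the dimensions in the tables above):

```python
# Hypothetical sketch of the three categories for one metric.
# Tolerance semantics and the `predicted` flag are assumptions.

def classify(observed: float, baseline: float, tolerance: float,
             predicted: bool = True) -> str:
    """Classify an observed metric against its experiment baseline."""
    if not predicted:
        return "Novel Behavior"       # the experiment never measured this
    if observed >= baseline - tolerance:
        return "Expected Variance"    # within the predicted band
    return "Regression"               # negative divergence from baseline

within = classify(observed=0.42, baseline=0.45, tolerance=0.05)
degraded = classify(observed=0.30, baseline=0.45, tolerance=0.05)
unmeasured = classify(observed=0.10, baseline=0.0, tolerance=0.0, predicted=False)
```

Note the band is one-sided by design: only negative divergence from a validated baseline counts as regression, matching the Category 2 definition above.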

### 3. Classification Summary

Summarize all classified behavior patterns:

| # | Behavior Pattern | Classification | Key Evidence | Confidence |
|---|-----------------|---------------|-------------|-----------|
| 1 | *describe* | Expected Variance / Regression / Novel | *key metric* | High / Medium / Low |
| 2 | *(if applicable)* | | | |
| 3 | *(if applicable)* | | | |

**Remember:** Noah classifies and reports. Strategic decisions about what to do with these classifications belong downstream with Max. The classification tells you what the behavior is — the decision about what it means for the product direction is not ours to make.

---

## Your Turn

Analyze the behavior patterns against experiment baselines, classify each into the appropriate category, and complete the classification summary. Share your analysis and I'll help refine the classifications before we move to evidence gathering.

---

**[a]** Advanced Elicitation — Deep dive into behavior classification with guided questioning
**[p]** Party Mode — Bring in other Vortex agents for collaborative behavior pattern analysis
**[c]** Continue — Proceed to evidence gathering and data quality assessment

---

## Next Step

When your behavior patterns are classified, I'll load:

{project-root}/_bmad/bme/_vortex/workflows/behavior-analysis/steps/step-04-evidence.md
@@ -0,0 +1,100 @@
---
step: 4
workflow: behavior-analysis
title: Evidence Gathering & Data Quality
---

# Step 4: Evidence Gathering & Data Quality

We've classified the behavior patterns. Now we build the evidence base that supports each classification and assess whether the data underlying our analysis is trustworthy. Evidence strength determines routing confidence — the stronger the evidence, the more decisive the Compass recommendation.

## Why This Matters

Classification without evidence is opinion. Evidence converts classification into intelligence. For each behavior pattern classified in Step 3, we need quantified metrics, specific comparisons to baselines, and honest assessment of data quality. And before we package any of this as intelligence, we need to know how much we can trust the data. Anomaly detection surfaces what dashboards hide — but only when the evidence is solid enough to support the finding.

## Your Task

### 1. Evidence Gathering by Classification

For each behavior pattern classified in Step 3, gather supporting evidence:

**For Expected Variance classifications:**

| Field | Your Evidence |
|-------|-------------|
| **Metric Match** | Specific metrics showing behavior within experiment-predicted tolerance |
| **Baseline Alignment** | How closely does observed behavior track the validated baseline? |
| **Duration** | How long has the behavior been within expected range? |
| **Consistency** | Is the variance consistent across segments, or concentrated in specific groups? |

**For Regression classifications:**

| Field | Your Evidence |
|-------|-------------|
| **Deviation Metrics** | Specific metrics showing divergence from validated performance, quantified |
| **Regression Timeline** | When did the regression start? How has it progressed? |
| **Rate of Change** | How quickly is the regression occurring? (e.g., "3% week-over-week decline") |
| **Scope Impact** | Which users/segments/features are affected and to what degree? |
| **Contributing Factors** | Observable factors that correlate with the regression (factual, not speculative) |

**For Novel Behavior classifications:**

| Field | Your Evidence |
|-------|-------------|
| **Behavior Metrics** | Specific metrics capturing the novel behavior pattern |
| **Experiment Gap** | What specifically was not measured or predicted by the experiment? |
| **Scope & Frequency** | How many users, how often, in what contexts? |
| **Behavioral Context** | What are users doing before and after the novel behavior? |
| **Discovery Questions** | What specific questions would need answers to understand this behavior? |

### 2. HC10 Novel Behavior Routing Assessment

If you identified Novel Behavior in Step 3, assess whether it warrants routing to Isla for discovery research:

| Assessment | Your Decision |
|-----------|--------------|
| **Novel behavior detected?** | Yes / No |
| **HC10 routing recommended?** | Yes / No |
| **Rationale** | Why or why not — what makes this novel behavior significant enough (or not) for discovery research? |
| **Discovery Focus** | If yes: What specific questions should Isla investigate about this behavior? |

**Guidance on HC10 routing:**
- Route to Isla when novel behavior reveals user intent the experiment didn't anticipate
- Route to Isla when the behavior pattern is widespread enough to warrant investigation
- Do NOT route if the novel behavior is isolated, transient, or easily explained by external factors
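Those rules condense into a small decision sketch (hypothetical — the predicate names and the 5% "widespread" cutoff are assumptions for illustration, not package-defined values):

```python
# Sketch of the HC10 routing guidance above. Predicate names and the
# widespread threshold (5% of users) are illustrative assumptions.

def recommend_hc10(reveals_unanticipated_intent: bool,
                   affected_user_share: float,
                   transient_or_external: bool) -> bool:
    """Route novel behavior to discovery only when it is significant."""
    if transient_or_external:
        return False                              # isolated/explained: no routing
    widespread = affected_user_share >= 0.05      # assumed cutoff
    return reveals_unanticipated_intent and widespread

route = recommend_hc10(True, 0.12, False)     # significant: recommend routing
no_route = recommend_hc10(True, 0.12, True)   # transient: do not route
```

The point of the sketch is the precedence: an external explanation vetoes routing before scope or intent is even considered.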

### 3. Data Quality Assessment

Before we package this analysis as intelligence, let's assess the reliability of the data:

| Field | Your Assessment |
|-------|----------------|
| **Sample Size** | Volume of data underlying the behavior analysis — is it sufficient for the classifications made? |
| **Data Completeness** | Was data collection complete, or were there gaps? (e.g., tracking failures, partial rollouts, missing segments) |
| **Known Biases** | Any sampling or measurement biases that may affect classification (e.g., self-selection, survivorship bias, time-of-day effects) |
| **Confidence Level** | `High` / `Medium` / `Low` — overall confidence in the behavior classifications given data quality |

**Guidance on confidence:**
- **High:** Large sample, complete data, no known biases — classifications are reliable
- **Medium:** Adequate sample but some gaps or potential biases — classifications are directionally sound but should be interpreted with caveats
- **Low:** Small sample, significant gaps, or notable biases — classifications are preliminary and should not drive major decisions without additional data
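One possible mechanization of this rubric (a sketch under stated assumptions — the 100-observation cutoff and the way gaps and biases combine are invented for illustration, not defined by the workflow):

```python
# Hypothetical mapping of data-quality signals to the rubric above.
# The sample-size cutoff and combination rules are assumptions.

def confidence(sample_size: int, has_gaps: bool, has_biases: bool) -> str:
    """Map data-quality signals to the High/Medium/Low scale."""
    if sample_size < 100 or (has_gaps and has_biases):
        return "Low"        # small sample, or compounding problems
    if has_gaps or has_biases:
        return "Medium"     # directionally sound, with caveats
    return "High"           # large, complete, unbiased

level = confidence(sample_size=5000, has_gaps=False, has_biases=False)
```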

---

## Your Turn

Gather evidence for each classification, assess whether novel behavior warrants HC10 routing to Isla, and evaluate data quality. The strength of your evidence determines the confidence of the final behavioral signal report.

---

**[a]** Advanced Elicitation — Deep dive into evidence strengthening with guided questioning
**[p]** Party Mode — Bring in other Vortex agents for collaborative evidence assessment
**[c]** Continue — Proceed to synthesis and HC5 artifact generation

---

## Next Step

When your evidence gathering and data quality assessment are complete, I'll load:

{project-root}/_bmad/bme/_vortex/workflows/behavior-analysis/steps/step-05-synthesize.md