convoke-agents 2.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +920 -0
- package/INSTALLATION.md +230 -0
- package/LICENSE +21 -0
- package/README.md +330 -0
- package/UPDATE-GUIDE.md +220 -0
- package/_bmad/bme/_vortex/README.md +150 -0
- package/_bmad/bme/_vortex/agents/contextualization-expert.md +100 -0
- package/_bmad/bme/_vortex/agents/discovery-empathy-expert.md +117 -0
- package/_bmad/bme/_vortex/agents/hypothesis-engineer.md +117 -0
- package/_bmad/bme/_vortex/agents/lean-experiments-specialist.md +118 -0
- package/_bmad/bme/_vortex/agents/learning-decision-expert.md +117 -0
- package/_bmad/bme/_vortex/agents/production-intelligence-specialist.md +117 -0
- package/_bmad/bme/_vortex/agents/research-convergence-specialist.md +117 -0
- package/_bmad/bme/_vortex/compass-routing-reference.md +312 -0
- package/_bmad/bme/_vortex/config.yaml +46 -0
- package/_bmad/bme/_vortex/contracts/hc1-empathy-artifacts.md +152 -0
- package/_bmad/bme/_vortex/contracts/hc2-problem-definition.md +125 -0
- package/_bmad/bme/_vortex/contracts/hc3-hypothesis-contract.md +112 -0
- package/_bmad/bme/_vortex/contracts/hc4-experiment-context.md +140 -0
- package/_bmad/bme/_vortex/contracts/hc5-signal-report.md +130 -0
- package/_bmad/bme/_vortex/examples/hc2-example-problem-definition.md +85 -0
- package/_bmad/bme/_vortex/examples/hc3-example-hypothesis-contract.md +103 -0
- package/_bmad/bme/_vortex/examples/hc5-example-signal-report.md +76 -0
- package/_bmad/bme/_vortex/guides/EMMA-USER-GUIDE.md +232 -0
- package/_bmad/bme/_vortex/guides/ISLA-USER-GUIDE.md +208 -0
- package/_bmad/bme/_vortex/guides/LIAM-USER-GUIDE.md +255 -0
- package/_bmad/bme/_vortex/guides/MAX-USER-GUIDE.md +213 -0
- package/_bmad/bme/_vortex/guides/MILA-USER-GUIDE.md +235 -0
- package/_bmad/bme/_vortex/guides/NOAH-USER-GUIDE.md +258 -0
- package/_bmad/bme/_vortex/guides/WADE-USER-GUIDE.md +245 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/empathy-map.template.md +143 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/steps/step-01-define-user.md +60 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/steps/step-02-says-thinks.md +67 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/steps/step-03-does-feels.md +79 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/steps/step-04-pain-points.md +87 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/steps/step-05-gains.md +103 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/steps/step-06-synthesize.md +104 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/validate.md +117 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/empathy-map/workflow.md +44 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/wireframe/steps/step-01-define-requirements.md +85 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/wireframe/steps/step-02-user-flows.md +59 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/wireframe/steps/step-03-information-architecture.md +68 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/wireframe/steps/step-04-wireframe-sketch.md +97 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/wireframe/steps/step-05-components.md +128 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/wireframe/steps/step-06-synthesize.md +83 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/wireframe/wireframe.template.md +287 -0
- package/_bmad/bme/_vortex/workflows/_deprecated/wireframe/workflow.md +44 -0
- package/_bmad/bme/_vortex/workflows/assumption-mapping/steps/step-01-setup.md +66 -0
- package/_bmad/bme/_vortex/workflows/assumption-mapping/steps/step-02-context.md +93 -0
- package/_bmad/bme/_vortex/workflows/assumption-mapping/steps/step-03-risk-mapping.md +103 -0
- package/_bmad/bme/_vortex/workflows/assumption-mapping/steps/step-04-synthesize.md +101 -0
- package/_bmad/bme/_vortex/workflows/assumption-mapping/workflow.md +49 -0
- package/_bmad/bme/_vortex/workflows/behavior-analysis/steps/step-01-setup.md +81 -0
- package/_bmad/bme/_vortex/workflows/behavior-analysis/steps/step-02-context.md +67 -0
- package/_bmad/bme/_vortex/workflows/behavior-analysis/steps/step-03-classification.md +98 -0
- package/_bmad/bme/_vortex/workflows/behavior-analysis/steps/step-04-evidence.md +100 -0
- package/_bmad/bme/_vortex/workflows/behavior-analysis/steps/step-05-synthesize.md +174 -0
- package/_bmad/bme/_vortex/workflows/behavior-analysis/workflow.md +52 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/contextualize-scope.template.md +67 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/steps/step-01-list-opportunities.md +47 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/steps/step-02-define-criteria.md +36 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/steps/step-03-evaluate-opportunities.md +30 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/steps/step-04-define-boundaries.md +32 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/steps/step-05-validate-fit.md +28 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/steps/step-06-synthesize.md +36 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/validate.md +30 -0
- package/_bmad/bme/_vortex/workflows/contextualize-scope/workflow.md +59 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/empathy-map.template.md +143 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/steps/step-01-define-user.md +60 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/steps/step-02-says-thinks.md +67 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/steps/step-03-does-feels.md +79 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/steps/step-04-pain-points.md +87 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/steps/step-05-gains.md +103 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/steps/step-06-synthesize.md +107 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/validate.md +117 -0
- package/_bmad/bme/_vortex/workflows/empathy-map/workflow.md +45 -0
- package/_bmad/bme/_vortex/workflows/experiment-design/steps/step-01-setup.md +66 -0
- package/_bmad/bme/_vortex/workflows/experiment-design/steps/step-02-context.md +77 -0
- package/_bmad/bme/_vortex/workflows/experiment-design/steps/step-03-design.md +114 -0
- package/_bmad/bme/_vortex/workflows/experiment-design/steps/step-04-synthesize.md +128 -0
- package/_bmad/bme/_vortex/workflows/experiment-design/workflow.md +51 -0
- package/_bmad/bme/_vortex/workflows/hypothesis-engineering/steps/step-01-setup.md +66 -0
- package/_bmad/bme/_vortex/workflows/hypothesis-engineering/steps/step-02-context.md +80 -0
- package/_bmad/bme/_vortex/workflows/hypothesis-engineering/steps/step-03-brainwriting.md +79 -0
- package/_bmad/bme/_vortex/workflows/hypothesis-engineering/steps/step-04-assumption-mapping.md +102 -0
- package/_bmad/bme/_vortex/workflows/hypothesis-engineering/steps/step-05-synthesize.md +130 -0
- package/_bmad/bme/_vortex/workflows/hypothesis-engineering/workflow.md +52 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/lean-experiment.template.md +29 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/steps/step-01-hypothesis.md +58 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/steps/step-02-design.md +68 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/steps/step-03-metrics.md +73 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/steps/step-04-run.md +75 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/steps/step-05-analyze.md +84 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/steps/step-06-decide.md +111 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/validate.md +30 -0
- package/_bmad/bme/_vortex/workflows/lean-experiment/workflow.md +26 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/lean-persona.template.md +163 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/steps/step-01-define-job.md +72 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/steps/step-02-current-solution.md +83 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/steps/step-03-problem-contexts.md +90 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/steps/step-04-forces-anxieties.md +98 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/steps/step-05-success-criteria.md +103 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/steps/step-06-synthesize.md +129 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/validate.md +30 -0
- package/_bmad/bme/_vortex/workflows/lean-persona/workflow.md +50 -0
- package/_bmad/bme/_vortex/workflows/learning-card/learning-card.template.md +179 -0
- package/_bmad/bme/_vortex/workflows/learning-card/steps/step-01-experiment-context.md +100 -0
- package/_bmad/bme/_vortex/workflows/learning-card/steps/step-02-raw-results.md +125 -0
- package/_bmad/bme/_vortex/workflows/learning-card/steps/step-03-analysis.md +125 -0
- package/_bmad/bme/_vortex/workflows/learning-card/steps/step-04-validated-learning.md +139 -0
- package/_bmad/bme/_vortex/workflows/learning-card/steps/step-05-implications.md +134 -0
- package/_bmad/bme/_vortex/workflows/learning-card/steps/step-06-synthesize.md +121 -0
- package/_bmad/bme/_vortex/workflows/learning-card/validate.md +134 -0
- package/_bmad/bme/_vortex/workflows/learning-card/workflow.md +51 -0
- package/_bmad/bme/_vortex/workflows/mvp/mvp.template.md +40 -0
- package/_bmad/bme/_vortex/workflows/mvp/steps/step-01-riskiest-assumption.md +17 -0
- package/_bmad/bme/_vortex/workflows/mvp/steps/step-02-success-criteria.md +13 -0
- package/_bmad/bme/_vortex/workflows/mvp/steps/step-03-smallest-test.md +13 -0
- package/_bmad/bme/_vortex/workflows/mvp/steps/step-04-scope-features.md +13 -0
- package/_bmad/bme/_vortex/workflows/mvp/steps/step-05-build-measure-learn.md +13 -0
- package/_bmad/bme/_vortex/workflows/mvp/steps/step-06-synthesize.md +28 -0
- package/_bmad/bme/_vortex/workflows/mvp/validate.md +30 -0
- package/_bmad/bme/_vortex/workflows/mvp/workflow.md +36 -0
- package/_bmad/bme/_vortex/workflows/pattern-mapping/steps/step-01-setup.md +102 -0
- package/_bmad/bme/_vortex/workflows/pattern-mapping/steps/step-02-context.md +81 -0
- package/_bmad/bme/_vortex/workflows/pattern-mapping/steps/step-03-pattern-identification.md +88 -0
- package/_bmad/bme/_vortex/workflows/pattern-mapping/steps/step-04-theme-clustering.md +100 -0
- package/_bmad/bme/_vortex/workflows/pattern-mapping/steps/step-05-synthesize.md +135 -0
- package/_bmad/bme/_vortex/workflows/pattern-mapping/workflow.md +58 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/pivot-patch-persevere.template.md +201 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/steps/step-01-evidence-review.md +125 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/steps/step-02-hypothesis-assessment.md +132 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/steps/step-03-option-analysis.md +167 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/steps/step-04-stakeholder-input.md +141 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/steps/step-05-decision.md +161 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/steps/step-06-action-plan.md +188 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/validate.md +159 -0
- package/_bmad/bme/_vortex/workflows/pivot-patch-persevere/workflow.md +51 -0
- package/_bmad/bme/_vortex/workflows/pivot-resynthesis/steps/step-01-setup.md +97 -0
- package/_bmad/bme/_vortex/workflows/pivot-resynthesis/steps/step-02-context.md +86 -0
- package/_bmad/bme/_vortex/workflows/pivot-resynthesis/steps/step-03-jtbd-reframing.md +88 -0
- package/_bmad/bme/_vortex/workflows/pivot-resynthesis/steps/step-04-pains-gains-revision.md +76 -0
- package/_bmad/bme/_vortex/workflows/pivot-resynthesis/steps/step-05-synthesize.md +158 -0
- package/_bmad/bme/_vortex/workflows/pivot-resynthesis/workflow.md +52 -0
- package/_bmad/bme/_vortex/workflows/product-vision/product-vision.template.md +147 -0
- package/_bmad/bme/_vortex/workflows/product-vision/steps/step-01-define-problem.md +89 -0
- package/_bmad/bme/_vortex/workflows/product-vision/steps/step-02-target-market.md +91 -0
- package/_bmad/bme/_vortex/workflows/product-vision/steps/step-03-unique-approach.md +87 -0
- package/_bmad/bme/_vortex/workflows/product-vision/steps/step-04-future-state.md +100 -0
- package/_bmad/bme/_vortex/workflows/product-vision/steps/step-05-principles.md +92 -0
- package/_bmad/bme/_vortex/workflows/product-vision/steps/step-06-synthesize.md +170 -0
- package/_bmad/bme/_vortex/workflows/product-vision/validate.md +30 -0
- package/_bmad/bme/_vortex/workflows/product-vision/workflow.md +55 -0
- package/_bmad/bme/_vortex/workflows/production-monitoring/steps/step-01-setup.md +84 -0
- package/_bmad/bme/_vortex/workflows/production-monitoring/steps/step-02-context.md +66 -0
- package/_bmad/bme/_vortex/workflows/production-monitoring/steps/step-03-monitoring.md +74 -0
- package/_bmad/bme/_vortex/workflows/production-monitoring/steps/step-04-prioritization.md +97 -0
- package/_bmad/bme/_vortex/workflows/production-monitoring/steps/step-05-synthesize.md +183 -0
- package/_bmad/bme/_vortex/workflows/production-monitoring/workflow.md +52 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/proof-of-concept.template.md +25 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/steps/step-01-risk.md +79 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/steps/step-02-scope.md +105 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/steps/step-03-build.md +92 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/steps/step-04-test.md +103 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/steps/step-05-evaluate.md +114 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/steps/step-06-document.md +125 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/validate.md +30 -0
- package/_bmad/bme/_vortex/workflows/proof-of-concept/workflow.md +26 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/proof-of-value.template.md +29 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/steps/step-01-value-hypothesis.md +75 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/steps/step-02-validation-design.md +94 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/steps/step-03-willingness.md +96 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/steps/step-04-test.md +107 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/steps/step-05-analyze.md +116 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/steps/step-06-document.md +147 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/validate.md +30 -0
- package/_bmad/bme/_vortex/workflows/proof-of-value/workflow.md +26 -0
- package/_bmad/bme/_vortex/workflows/research-convergence/steps/step-01-setup.md +69 -0
- package/_bmad/bme/_vortex/workflows/research-convergence/steps/step-02-context.md +70 -0
- package/_bmad/bme/_vortex/workflows/research-convergence/steps/step-03-jtbd-framing.md +81 -0
- package/_bmad/bme/_vortex/workflows/research-convergence/steps/step-04-pains-gains.md +77 -0
- package/_bmad/bme/_vortex/workflows/research-convergence/steps/step-05-synthesize.md +147 -0
- package/_bmad/bme/_vortex/workflows/research-convergence/workflow.md +50 -0
- package/_bmad/bme/_vortex/workflows/signal-interpretation/steps/step-01-setup.md +68 -0
- package/_bmad/bme/_vortex/workflows/signal-interpretation/steps/step-02-context.md +67 -0
- package/_bmad/bme/_vortex/workflows/signal-interpretation/steps/step-03-signal-analysis.md +85 -0
- package/_bmad/bme/_vortex/workflows/signal-interpretation/steps/step-04-anomaly-detection.md +93 -0
- package/_bmad/bme/_vortex/workflows/signal-interpretation/steps/step-05-synthesize.md +163 -0
- package/_bmad/bme/_vortex/workflows/signal-interpretation/workflow.md +52 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/steps/step-01-discovery-scope.md +77 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/steps/step-02-research-methods.md +152 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/steps/step-03-research-plan.md +159 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/steps/step-04-execute.md +169 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/steps/step-05-organize-data.md +149 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/steps/step-06-synthesize.md +159 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/user-discovery.template.md +231 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/validate.md +153 -0
- package/_bmad/bme/_vortex/workflows/user-discovery/workflow.md +45 -0
- package/_bmad/bme/_vortex/workflows/user-interview/steps/step-01-research-goals.md +100 -0
- package/_bmad/bme/_vortex/workflows/user-interview/steps/step-02-interview-script.md +123 -0
- package/_bmad/bme/_vortex/workflows/user-interview/steps/step-03-recruitment.md +144 -0
- package/_bmad/bme/_vortex/workflows/user-interview/steps/step-04-conduct.md +154 -0
- package/_bmad/bme/_vortex/workflows/user-interview/steps/step-05-findings.md +163 -0
- package/_bmad/bme/_vortex/workflows/user-interview/steps/step-06-synthesize.md +171 -0
- package/_bmad/bme/_vortex/workflows/user-interview/user-interview.template.md +250 -0
- package/_bmad/bme/_vortex/workflows/user-interview/validate.md +142 -0
- package/_bmad/bme/_vortex/workflows/user-interview/workflow.md +51 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/steps/step-01-current-state.md +56 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/steps/step-02-evidence-inventory.md +70 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/steps/step-03-gap-analysis.md +76 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/steps/step-04-stream-evaluation.md +57 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/steps/step-05-recommendation.md +65 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/steps/step-06-navigation-plan.md +72 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/validate.md +75 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/vortex-navigation.template.md +105 -0
- package/_bmad/bme/_vortex/workflows/vortex-navigation/workflow.md +54 -0
- package/index.js +56 -0
- package/package.json +77 -0
- package/scripts/README.md +226 -0
- package/scripts/convoke-doctor.js +322 -0
- package/scripts/docs-audit.js +584 -0
- package/scripts/install-all-agents.js +9 -0
- package/scripts/install-vortex-agents.js +208 -0
- package/scripts/postinstall.js +104 -0
- package/scripts/update/convoke-migrate.js +169 -0
- package/scripts/update/convoke-update.js +272 -0
- package/scripts/update/convoke-version.js +134 -0
- package/scripts/update/lib/agent-registry.js +144 -0
- package/scripts/update/lib/backup-manager.js +243 -0
- package/scripts/update/lib/config-merger.js +242 -0
- package/scripts/update/lib/migration-runner.js +367 -0
- package/scripts/update/lib/refresh-installation.js +171 -0
- package/scripts/update/lib/utils.js +96 -0
- package/scripts/update/lib/validator.js +360 -0
- package/scripts/update/lib/version-detector.js +241 -0
- package/scripts/update/migrations/1.0.x-to-1.3.0.js +128 -0
- package/scripts/update/migrations/1.1.x-to-1.3.0.js +29 -0
- package/scripts/update/migrations/1.2.x-to-1.3.0.js +29 -0
- package/scripts/update/migrations/1.3.x-to-1.5.0.js +29 -0
- package/scripts/update/migrations/1.4.x-to-1.5.0.js +29 -0
- package/scripts/update/migrations/1.5.x-to-1.6.0.js +95 -0
- package/scripts/update/migrations/1.6.x-to-1.7.0.js +29 -0
- package/scripts/update/migrations/1.7.x-to-2.0.0.js +31 -0
- package/scripts/update/migrations/registry.js +194 -0
@@ -0,0 +1,183 @@
---
step: 5
workflow: production-monitoring
title: Synthesize & Route
---

# Step 5: Synthesize & Route

Time to bring the portfolio together. We've validated experiment contexts, assembled the portfolio, monitored signals against baselines, and prioritized by divergence severity. Now we produce the HC5 Portfolio Signal Report artifact and route to the next step in the Vortex.

## Why This Matters

Production data is the most honest user feedback — it can't lie. At portfolio scale, that honest feedback tells Max which experiments are confirming their hypotheses, which are degrading, and which are revealing unexpected behavior. The HC5 portfolio signal report gives Max everything needed to make portfolio-level decisions: prioritized signals across all monitored experiments, divergence assessments, anomaly flags, and data quality — all grounded in experiment baselines. No recommendations. No strategy. Just prioritized portfolio intelligence.

## Your Task

### 1. Review Your Portfolio Signal Report

Before we package everything, let's do a final quality pass on each section:

**Signal Description (Portfolio-Level):**

| Field | Check |
|-------|-------|
| **Signal Summary** | Does it capture the portfolio-level picture? Is it factual and concise? |
| **Signal Type** | Is the classification appropriate for portfolio monitoring? |
| **Severity** | Is the severity justified by the highest-priority signal in the portfolio? |
| **Detection Method** | Is it clear this is a portfolio monitoring assessment? |
| **Time Window** | Is the observation period consistent across experiments? |
| **Affected Scope** | Are all monitored experiments and their affected areas identified? |

**Context (Per-Experiment Lineage + Vortex History):**

| Field | Check |
|-------|-------|
| **Per-Experiment Lineage** | Can each signal be traced back to its originating experiment? Are baselines explicit? |
| **Vortex History** | Are available upstream references (HC2, HC3, previous HC5) documented per experiment? |

**Trend Analysis (Per-Experiment):**

| Field | Check |
|-------|-------|
| **Trend Direction** | Is the trajectory accurately classified for each experiment? |
| **Rate of Change** | Is divergence quantified with specific metrics, not vague descriptions? |
| **Baseline Comparison** | Is each experiment's baseline explicitly stated and the comparison clear? |
| **Confidence** | Does the confidence level reflect both data quality and assessment certainty per experiment? |

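The "Rate of Change" check asks that divergence be quantified with a specific metric. As a minimal sketch of what that could look like (the helper name and the signed-percentage form are illustrative assumptions, not part of this package):

```javascript
// Hypothetical helper: express divergence of an observed production metric
// from its validated experiment baseline as a signed percentage, so the
// Trend Analysis section can cite a number instead of a vague description.
function divergencePct(baseline, observed) {
  if (baseline === 0) throw new Error("baseline must be non-zero");
  return ((observed - baseline) / Math.abs(baseline)) * 100;
}

// Example: activation rate validated at 42% during the experiment, now 35%.
const divergence = divergencePct(42, 35);
console.log(`${divergence.toFixed(1)}% vs baseline`); // negative = degrading
```

A signed value keeps direction and magnitude together, which is what the per-experiment trend rows need.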
**Anomaly Detection (if anomalies flagged):**

| Field | Check |
|-------|-------|
| **Anomaly Description** | Is the unexpected behavior described factually, without speculative strategy? |
| **Discovery Needed** | Is the HC10 routing decision documented with rationale per experiment? |

**Data Quality (Per-Experiment and Portfolio-Level):**

| Field | Check |
|-------|-------|
| **Per-Experiment Quality** | Is data quality assessed for each experiment individually? |
| **Portfolio Confidence** | Does the overall confidence honestly reflect portfolio-level reliability? |

### 2. Generate the HC5 Artifact

I'll produce the HC5 Portfolio Signal Report artifact with this structure:

```yaml
---
contract: HC5
type: artifact
source_agent: noah
source_workflow: production-monitoring
target_agents: [max]
input_artifacts:
  - path: "_bmad-output/vortex-artifacts/{hc4-experiment-1}"
    contract: HC4
  - path: "_bmad-output/vortex-artifacts/{hc4-experiment-2}"
    contract: HC4
created: YYYY-MM-DD
---
```

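A sanity check on this header shape could be sketched as follows. This is an illustrative sketch, not code from the package: the function name and the parsed-object input are assumptions, and it checks only the contract fields shown above.

```javascript
// Illustrative check: given an HC5 header already parsed from the artifact
// frontmatter, collect any contract violations before the report is routed.
function validateHc5Header(header) {
  const errors = [];
  if (header.contract !== "HC5") errors.push("contract must be HC5");
  if (header.type !== "artifact") errors.push("type must be artifact");
  if (!header.source_agent) errors.push("source_agent is required");
  if (!Array.isArray(header.target_agents) || header.target_agents.length === 0)
    errors.push("target_agents must be a non-empty list");
  const inputs = header.input_artifacts || [];
  if (!inputs.every((artifact) => artifact.contract === "HC4"))
    errors.push("every input artifact must be an HC4 experiment context");
  return errors;
}

const header = {
  contract: "HC5",
  type: "artifact",
  source_agent: "noah",
  source_workflow: "production-monitoring",
  target_agents: ["max"],
  input_artifacts: [
    { path: "_bmad-output/vortex-artifacts/hc4-experiment-1.md", contract: "HC4" },
  ],
  created: "2025-01-01",
};
console.log(validateHc5Header(header)); // empty array when well-formed
```

Returning a list of errors (rather than a boolean) mirrors how the workflow's review step reports every gap at once.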
**HC5 Required Body Sections:**
1. **Signal Description** — Portfolio-level signal summary, Signal Type, Severity (based on highest-priority signal), Detection Method, Time Window, Affected Scope (all monitored experiments)
2. **Context** — Per-experiment Experiment Lineage (Originating Experiment, Original Hypothesis, Experiment Outcome, Expected Production Behavior, Actual vs Expected) + Vortex History per experiment
3. **Trend Analysis** — Per-experiment trend direction, duration, rate of change, baseline comparison, confidence
4. **Anomaly Detection** (when anomalies flagged in any experiment) — Per-experiment anomaly description, deviation, explanations, discovery needed, discovery focus
5. **Data Quality** — Per-experiment quality assessment + portfolio-level confidence

**Portfolio Signal Summary Addendum:**
In addition to the standard HC5 sections, include a Portfolio Signal Summary showing each monitored experiment with its priority level, signal status, divergence assessment, and trajectory. This gives Max the portfolio-level view that distinguishes this report from a single-signal or behavioral report.

| Priority | Experiment | Signal Status | Divergence | Trajectory | Anomaly? |
|----------|-----------|--------------|-----------|-----------|----------|
| P1 | *name* | *one-sentence* | *quantified* | *direction* | Yes / No |
| P2 | *name* | *one-sentence* | *quantified* | *direction* | Yes / No |
| P3 | *name* | *one-sentence* | *quantified* | *direction* | Yes / No |

**This artifact explicitly does NOT include:**
- Strategic recommendations (that is Max's domain)
- Pivot/patch/persevere decisions (that is Max's domain)
- Experiment design suggestions (that is Liam/Wade's domain)
- Resource allocation recommendations (that is Max's domain)
- Portfolio prioritization decisions (Noah prioritizes signals by divergence; Max decides what to do about them)

Noah produces intelligence — prioritized, evidence-based portfolio monitoring. Max produces decisions.

**Save to:** `{output_folder}/vortex-artifacts/hc5-portfolio-report-{date}.md`

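The `{output_folder}` and `{date}` placeholders are resolved from workflow config at save time. A minimal sketch of how that path could be assembled (the function name is a hypothetical illustration, not an API from this package):

```javascript
// Hypothetical sketch: build the HC5 save path from the configured output
// folder and a date stamped as YYYY-MM-DD (UTC).
function hc5OutputPath(outputFolder, date = new Date()) {
  const stamp = date.toISOString().slice(0, 10); // "YYYY-MM-DD"
  return `${outputFolder}/vortex-artifacts/hc5-portfolio-report-${stamp}.md`;
}

console.log(hc5OutputPath("_bmad-output", new Date("2025-06-01T00:00:00Z")));
// → _bmad-output/vortex-artifacts/hc5-portfolio-report-2025-06-01.md
```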
I'll create this file with all the sections above once you confirm the content is ready.

### 3. Validation Questions

Before we finalize, let's validate:

**Evidence Check:**
- [ ] Is every signal assessment grounded in observed production data compared to experiment baselines?
- [ ] Can we trace each signal back to its originating experiment through the Experiment Lineage section?
- [ ] Is divergence quantified with specific metrics for every experiment?

**Portfolio Completeness Check:**
- [ ] Are all monitored experiments included in the portfolio signal summary?
- [ ] Does the Signal Description capture the portfolio-level picture, not just individual experiments?
- [ ] Does each experiment have its own Context section with Experiment Lineage?
- [ ] Does each experiment have its own Trend Analysis with all 5 required fields?
- [ ] Does the Data Quality section include both per-experiment and portfolio-level assessment?

**Prioritization Check:**
- [ ] Is the P1/P2/P3 prioritization consistent with the prioritization framework (severity × scope × confidence)?
- [ ] Are the highest-priority signals clearly identified and quantified?
- [ ] Does the portfolio summary give Max a clear ranked view of which experiments need attention?

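The severity × scope × confidence framework named above can be sketched as a simple product score. The level weights and band thresholds here are illustrative assumptions for the sketch, not the package's definitive framework:

```javascript
// Assumed weights: each dimension rated low/medium/high, multiplied together.
const LEVELS = { low: 1, medium: 2, high: 3 };

// Map a signal's combined score to a P1/P2/P3 band (thresholds are assumptions).
function priorityBand(signal) {
  const score =
    LEVELS[signal.severity] * LEVELS[signal.scope] * LEVELS[signal.confidence];
  if (score >= 18) return "P1"; // severe, broad, and well-evidenced
  if (score >= 8) return "P2";
  return "P3";
}

// Example: rank two hypothetical monitored experiments.
const ranked = [
  { experiment: "checkout-v2", severity: "high", scope: "high", confidence: "medium" },
  { experiment: "onboarding-email", severity: "low", scope: "medium", confidence: "high" },
].map((signal) => ({ ...signal, band: priorityBand(signal) }));
```

Multiplying the three dimensions (rather than summing) makes a weak rating on any one of them pull the priority down sharply, which matches the intent of gating P1 on both severity and evidence.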
**Intelligence-Not-Strategy Check:**
- [ ] Does the report contain zero strategic recommendations?
- [ ] Does it avoid prescribing what to do about any signal or prioritization?
- [ ] Does it present findings in prioritized signal + context + trend format, leaving decisions to Max?
- [ ] Would Max have everything needed to make portfolio-level decisions from this report?

---

## Your Turn

Review the portfolio signal report sections. Confirm when you're ready for me to generate the final HC5 artifact.

---

**[a]** Advanced Elicitation — Deep dive into HC5 refinement with guided questioning
**[p]** Party Mode — Bring in other Vortex agents for collaborative artifact critique
**[c]** Continue — Generate the HC5 artifact and proceed to routing

---

## Vortex Compass

Based on what you just completed, here are your evidence-driven options:

| If you learned... | Consider next... | Agent | Why |
|---|---|---|---|
| Portfolio signal report complete with prioritized signals across experiments | learning-card | Max 🧭 | Portfolio signal report ready (HC5) |
| ⚡ Anomalies detected across one or more experiments | user-discovery | Isla 🔍 | Anomalies across experiments (HC10) |
| Specific signal within the portfolio warrants deeper focused analysis | signal-interpretation | Noah 📡 | Deep dive on specific signal |

> **Note:** These are evidence-based recommendations. You can navigate to any Vortex agent
> at any time based on your judgment.

**Or run Max's [VN] Vortex Navigation** for a full gap analysis across all streams.

### ⚠️ Insufficient Evidence for Routing

If the evidence gathered so far doesn't clearly point to a single next step:

| To route to... | You need... |
|----------------|-------------|
| Max 🧭 | Complete HC5 portfolio signal report with per-experiment assessments and sufficient data quality confidence |
| Isla 🔍 | Specific anomaly identified across one or more experiments with clear deviation from experiment expectations |
| Noah 📡 | Specific signal identified during monitoring that warrants focused signal-interpretation analysis |

**Workflow-specific signals:**
- Portfolio assembly incomplete → consider revisiting **step-02** for complete baseline mapping
- Divergence assessment unclear → consider revisiting **step-03** for sharper signal monitoring
- Prioritization criteria insufficient → consider revisiting **step-04** for clearer severity/scope assessment
- Insufficient experiment contexts provided → gather more HC4 artifacts before proceeding

**Recommended:** Revisit earlier steps to strengthen your portfolio monitoring, or run **Max's [VN] Vortex Navigation** for a full gap analysis.
@@ -0,0 +1,52 @@
---
workflow: production-monitoring
type: step-file
description: Monitor production signals across multiple active experiments simultaneously to prioritize divergence and produce portfolio-level intelligence reports
author: Noah (production-intelligence-specialist)
version: 1.6.0
---

# Production Monitoring Workflow

This workflow guides you through monitoring production signals across multiple active experiments simultaneously, prioritizing signals by divergence from validated baselines, and producing an HC5 portfolio signal report for Max.

## What is Production Monitoring?

Production monitoring is the practice of watching multiple graduated experiments in production at once — reading the portfolio of signals through their experiment baselines to identify which experiments need attention and which are performing as expected.

Signal + context + trend applies to every experiment in the portfolio. A single experiment's signal tells you about that experiment. Multiple experiments' signals tell you about the portfolio — which bets are paying off, which are degrading, and which are revealing behavior the experiments didn't anticipate. Anomaly detection surfaces what dashboards hide, but at portfolio scale, it also surfaces cross-experiment patterns that single-signal analysis misses.

This workflow assembles the experiment portfolio, monitors each experiment's production signals against its baselines, prioritizes by divergence severity, and packages the portfolio intelligence for the decision-maker. "Here's what we're seeing in context across the portfolio" is the deliverable. Not strategy. Not recommendations. Prioritized portfolio intelligence.
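
The prioritize-by-divergence step can be sketched in a few lines. This is a minimal illustration, not the package's actual mechanics: the experiment names, the single-metric `(baseline, observed)` shape, and the relative-deviation scoring rule are all assumptions made for the example.

```python
# Rough sketch: rank a portfolio of experiments by how far each one's
# observed metric has drifted from its validated baseline.
# All names and the scoring rule are illustrative assumptions.

def divergence(baseline: float, observed: float) -> float:
    """Relative deviation of an observed metric from its baseline."""
    if baseline == 0:
        return abs(observed)
    return abs(observed - baseline) / abs(baseline)

def prioritize(portfolio: dict[str, tuple[float, float]]) -> list[tuple[str, float]]:
    """Sort experiments by divergence score, most divergent first."""
    scored = [(name, divergence(b, o)) for name, (b, o) in portfolio.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Example: three graduated experiments, (baseline, observed) conversion rates
ranked = prioritize({
    "checkout-v2": (0.050, 0.031),   # degrading
    "onboarding-a": (0.120, 0.118),  # performing as expected
    "pricing-test": (0.080, 0.095),  # above baseline
})
print(ranked[0][0])  # "checkout-v2" is most in need of attention
```

The ranking is only the mechanical half; the HC5 report still pairs each score with context and trend before anything reaches the decision-maker.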

## Workflow Structure

**Step-file architecture:**
- Just-in-time loading (each step loads only when needed)
- Sequential enforcement (must complete step N before step N+1)
- State tracking in frontmatter (progress preserved)

## Steps Overview

1. **Setup & Multi-Experiment Input Validation** - Validate multiple experiment context inputs (HC4 artifacts or equivalent)
2. **Portfolio Assembly & Baseline Mapping** - Assemble the experiment portfolio and extract validated baselines per experiment
3. **Signal Monitoring & Divergence Assessment** - Monitor production signals for each experiment and assess divergence from baselines
4. **Signal Prioritization & Anomaly Flagging** - Prioritize signals by divergence severity and flag anomalies for routing
5. **Synthesize & Route** - Produce HC5 portfolio signal report artifact and route via Compass

## Output

**Artifact:** HC5 Portfolio Signal Report markdown file in `{output_folder}/vortex-artifacts/hc5-portfolio-report-{date}.md`

**Template:** None (HC5 artifact is generated inline during Step 5)

**Schema:** Conforms to HC5 contract (`_bmad/bme/_vortex/contracts/hc5-signal-report.md`)

**Consumer:** Max (learning-decision-expert) uses this to make portfolio-level decisions about which experiments need attention.

---

## INITIALIZATION

Load config from {project-root}/_bmad/bme/_vortex/config.yaml

Load step: {project-root}/_bmad/bme/_vortex/workflows/production-monitoring/steps/step-01-setup.md
@@ -0,0 +1,25 @@
---
title: "PoC: {poc-name}"
date: {date}
type: proof-of-concept
status: {status}
---

# Proof-of-Concept: {poc-name}

## Technical Risk

{technical-risk}

## PoC Results

{results}

## Feasibility Assessment

{feasibility}

---

**Created with:** Convoke v2.0.0
**Workflow:** proof-of-concept
@@ -0,0 +1,79 @@
---
step: 1
workflow: proof-of-concept
title: Define Technical Risk
---

# Step 1: Define Technical Risk

Before building anything, we need to identify exactly what could prevent this from working. Technical risk comes first because if the technology cannot support the idea, nothing else matters.

## Why This Matters

Teams waste months building products that hit a technical wall they could have identified in a day. Performance bottlenecks, unsupported integrations, algorithmic infeasibility, data constraints -- these are the silent killers of product ideas. Defining technical risk up front forces you to confront the hardest engineering questions before committing resources. This workflow answers "CAN we build this?" so you never waste time asking "SHOULD we build this?" for something that was technically impossible from the start.

## Your Task

### 1. What Are You Trying to Build?

Describe the technical capability you need to validate. Wade expects a clear technical question -- not a business hypothesis, not a user story, but a specific technical challenge.

**Good examples:**
- "Can we process 10,000 real-time events per second with sub-200ms latency using our current infrastructure?"
- "Can we integrate with the Stripe Connect API to handle split payments across three-party marketplace transactions?"
- "Can our recommendation algorithm achieve >80% precision on sparse user data (<10 interactions per user)?"

**Provide your technical question:**
- What exactly needs to work?
- What system, API, algorithm, or architecture is involved?
- What does "working" look like in measurable terms?
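
One way to make "working" measurable is to encode the threshold in a tiny harness from day one. A hedged sketch, assuming a latency-style question: `process_event` and the 200ms target are placeholders for your own system, not anything this package ships.

```python
# Sketch: turn "working" into a measurable check.
# The timed function and the 200ms target are illustrative placeholders.
import time

LATENCY_TARGET_MS = 200  # illustrative threshold from the technical question

def process_event(event: dict) -> dict:
    """Stand-in for the real operation under test."""
    return {"id": event["id"], "handled": True}

def measure_latency_ms(fn, arg) -> float:
    """Wall-clock duration of a single call, in milliseconds."""
    start = time.perf_counter()
    fn(arg)
    return (time.perf_counter() - start) * 1000

latency = measure_latency_ms(process_event, {"id": 1})
print(f"{latency:.3f} ms, within target: {latency < LATENCY_TARGET_MS}")
```

If you cannot write the equivalent of `LATENCY_TARGET_MS` for your question, it is not yet measurable.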

### 2. Identify Technical Risk Categories

For your technical question, assess each risk category:

| Risk Category | Your Assessment | Severity |
|---------------|----------------|----------|
| **Performance** | Can it meet speed/throughput requirements? What are the latency and volume targets? | High / Medium / Low / N/A |
| **Scalability** | Will it work at 10x or 100x current load? Where are the scaling bottlenecks? | High / Medium / Low / N/A |
| **Integration Complexity** | How many external systems, APIs, or services must connect? What are the coupling risks? | High / Medium / Low / N/A |
| **Third-Party Dependencies** | Are we relying on external services, SDKs, or APIs we do not control? What happens if they change or fail? | High / Medium / Low / N/A |
| **Data Handling** | Can we access, transform, and store the data we need? Are there volume, format, or privacy constraints? | High / Medium / Low / N/A |
| **Algorithmic Feasibility** | Does a known solution exist for the computational problem? Is it tractable at our scale? | High / Medium / Low / N/A |
| **Infrastructure** | Do we have (or can we provision) the compute, storage, and networking required? | High / Medium / Low / N/A |

### 3. Rank Your Top Technical Risks

From the assessment above, identify the 1-3 risks that could kill this effort:

| Priority | Risk | Why It Could Kill Us | Current Evidence |
|----------|------|---------------------|------------------|
| 1 | *The risk most likely to block you* | *What happens if this fails* | *What we know today (if anything)* |
| 2 | *(optional)* | | |
| 3 | *(optional)* | | |

**Guidance:** Focus on the risks with the highest severity AND the least evidence. A high-severity risk with strong evidence that it is solvable is not your biggest problem. A medium-severity risk with zero evidence is more dangerous than it looks.

### 4. Define Success and Failure Criteria

Before we design the PoC, establish what "feasible" and "not feasible" mean:

| Criteria | Threshold | How We Will Measure |
|----------|-----------|-------------------|
| **Pass** | *What result proves this is technically feasible?* | *Specific metric or observation* |
| **Fail** | *What result proves this is NOT feasible?* | *Specific metric or observation* |
| **Inconclusive** | *What result means we need more data?* | *What additional testing would be required* |

If you cannot define a pass/fail threshold, your technical question is too vague. Sharpen it before proceeding.
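
The pass/fail/inconclusive split can be written down as an explicit decision rule before any results exist, which makes later goalpost-moving visible. An illustrative sketch, assuming a p95-latency criterion; both thresholds are invented for the example.

```python
# Sketch: encode the feasibility criteria as a decision rule fixed
# before the PoC runs. The metric and thresholds are illustrative.

def assess(p95_latency_ms: float, pass_below: float = 200.0,
           fail_above: float = 500.0) -> str:
    """Classify a measured p95 latency against pre-agreed thresholds."""
    if p95_latency_ms < pass_below:
        return "pass"
    if p95_latency_ms > fail_above:
        return "fail"
    return "inconclusive"  # between thresholds: more data needed

print(assess(140.0))  # pass
print(assess(650.0))  # fail
print(assess(350.0))  # inconclusive
```

Committing the rule up front means Step 5's evaluation is a lookup, not a negotiation.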

---

## Your Turn

Describe the technical capability you need to validate, assess the risk categories, rank your top risks, and define pass/fail criteria. Share your analysis and I will help you sharpen the risk definition before we scope the PoC.

## Next Step

When your technical risks are defined and success criteria are established, I'll load:

{project-root}/_bmad/bme/_vortex/workflows/proof-of-concept/steps/step-02-scope.md
@@ -0,0 +1,105 @@
---
step: 2
workflow: proof-of-concept
title: Design PoC Scope
---

# Step 2: Design PoC Scope

Now that you know what could break, let's design the smallest possible test that answers the technical feasibility question. A proof-of-concept is not a prototype, not an MVP, not a demo -- it is the minimum code required to validate or invalidate a specific technical risk.

## Why This Matters

The most common PoC failure is scope creep. Teams start testing whether an API integration works and end up building half the product. Every hour spent on polish, error handling, or UI in a PoC is an hour wasted -- because if the core technical question answers "no," all that extra work is thrown away. Ruthless scoping keeps PoCs cheap, fast, and focused on the one question that matters.

## Your Task

### 1. Define the Core Technical Question

From Step 1, distill your risks into a single, testable technical question:

| Element | Your Answer |
|---------|-------------|
| **The Question** | What single technical question must this PoC answer? |
| **In Scope** | What is the minimum set of components, integrations, or code needed to answer the question? |
| **Out of Scope** | What are you explicitly NOT building? (Error handling, auth, UI, persistence, monitoring, etc.) |
| **Time Box** | How long should this PoC take? (Hours, not weeks. If it takes weeks, the scope is too large.) |

**Guidance:** If your "In Scope" list has more than 3-5 items, your PoC is too big. Strip it down to the absolute minimum needed to answer the question.

### 2. Design the Technical Approach

Map out the simplest path to answering the question:

| Component | What You Will Build | What You Will Fake/Stub |
|-----------|-------------------|----------------------|
| **Data** | *Minimum data needed* | *Hardcoded values, mock data, sample files* |
| **Integration** | *Actual API calls, library usage, or protocol tests* | *Mock services, stubbed responses, local simulators* |
| **Compute** | *The actual algorithm, query, or processing* | *Simplified versions, reduced datasets, single-threaded* |
| **Infrastructure** | *Minimum deployment (local, container, sandbox)* | *Production-grade setup, scaling, redundancy* |

**What you will NOT build:**
- [ ] Production error handling
- [ ] Authentication / authorization
- [ ] User interface (unless UI IS the technical question)
- [ ] Logging / monitoring
- [ ] Data migrations
- [ ] Automated tests (manual verification is fine for a PoC)
- [ ] Documentation beyond what this workflow produces
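
The "Fake/Stub" column often amounts to a hardcoded stand-in for an external dependency. A hedged sketch, reusing the split-payment example from Step 1: `StubPaymentAPI` and its response shape are invented for illustration, not a real service's API.

```python
# Sketch: stub the third-party dependency so the PoC isolates the one
# question under test. The payment service and its response shape are
# invented for illustration.

class StubPaymentAPI:
    """Hardcoded stand-in for an external split-payment service."""
    def create_split_payment(self, total_cents: int, parties: int) -> dict:
        share = total_cents // parties  # naive even split, PoC-grade
        return {"status": "ok", "shares": [share] * parties}

def settle(api, total_cents: int, parties: int) -> list[int]:
    """The logic actually under test, exercised against any API object."""
    resp = api.create_split_payment(total_cents, parties)
    assert resp["status"] == "ok"
    return resp["shares"]

print(settle(StubPaymentAPI(), 9000, 3))  # [3000, 3000, 3000]
```

Because `settle` takes the API as a parameter, the stub can later be swapped for the real client without touching the code under test.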

### 3. Define PoC Deliverables

What will exist when the PoC is done?

| Deliverable | Description |
|-------------|------------|
| **Working code** | *What will it do? What will it demonstrate?* |
| **Test data** | *What data will you use? Where does it come from?* |
| **Measurement method** | *How will you measure pass/fail against the criteria from Step 1?* |
| **Run instructions** | *How does someone else reproduce the result? (Even a single command is enough.)* |

### 4. Identify Dependencies and Blockers

Before building, confirm you have (or can get) everything needed:

| Dependency | Status | Blocker? |
|------------|--------|----------|
| **API keys / credentials** | Have / Need / Can self-provision | Yes / No |
| **Test environment** | Have / Need / Can create locally | Yes / No |
| **Test data** | Have / Need / Can generate | Yes / No |
| **Libraries / SDKs** | Have / Need / Can install | Yes / No |
| **Hardware / compute** | Have / Need / Can provision | Yes / No |

If any dependency is a blocker, resolve it before proceeding to Step 3. Do not start building a PoC that you know will stall on a missing credential or environment.

### 5. Scope Gut-Check

Final check before you build:

- [ ] Can this PoC be completed in the time box defined above?
- [ ] Does it test exactly one technical risk (the highest-priority one from Step 1)?
- [ ] Would a clear pass/fail result from this PoC change your decision about the project?
- [ ] If the PoC fails, will you know WHY it failed (not just that it failed)?
- [ ] Is there anything in scope that does not directly contribute to answering the technical question?

If you answered "yes" to the last question, remove the offending item from scope. If you answered "no" to any of the others, tighten the scope until you can answer "yes" to all four.

---

## Your Turn

Design your PoC scope: define the core question, map the technical approach, list deliverables, and confirm dependencies. Share your scope and I will help you cut anything that does not directly answer the technical question.

---

**[a]** Advanced Elicitation -- Deep dive into scope refinement with guided questioning
**[p]** Party Mode -- Bring in other Vortex agents to challenge your PoC scope
**[c]** Continue -- Proceed to building the proof-of-concept

---

## Next Step

When your PoC scope is defined and gut-checked, I'll load:

{project-root}/_bmad/bme/_vortex/workflows/proof-of-concept/steps/step-03-build.md
@@ -0,0 +1,92 @@
---
step: 3
workflow: proof-of-concept
title: Build Prototype
---

# Step 3: Build Prototype

Time to build. The scope is locked, the question is clear, and the pass/fail criteria are defined. Now write the minimum code needed to answer the technical feasibility question -- nothing more.

## Why This Matters

A proof-of-concept is a learning tool, not a product. The moment you start optimizing, refactoring, or adding "just one more feature," you have stopped validating feasibility and started building a product. The goal is a clear answer -- feasible or not feasible -- in the shortest time possible. Ugly code that answers the question is infinitely more valuable than elegant code that answers the wrong question.

## Your Task

### 1. Set Up the Build Environment

Before writing PoC code, confirm your environment matches the scope from Step 2:

| Requirement | Status | Notes |
|-------------|--------|-------|
| **Runtime / language** | Ready / Setting up | *e.g., Node 20, Python 3.12, Go 1.22* |
| **Dependencies installed** | Ready / Installing | *List key libraries or SDKs* |
| **API keys / credentials** | Configured / Pending | *Confirm access to external services* |
| **Test data available** | Ready / Generating | *Confirm data source and format* |
| **Test environment** | Ready / Provisioning | *Local, container, cloud sandbox* |

If anything is "Pending" or "Provisioning," resolve it before proceeding. Do not start coding around missing infrastructure -- that creates noise in your results.

### 2. Build to the Scope

Follow the scope defined in Step 2. For each in-scope component:

| Component | What to Build | Done? |
|-----------|--------------|-------|
| *From Step 2 scope* | *Specific implementation task* | [ ] |
| *From Step 2 scope* | *Specific implementation task* | [ ] |
| *From Step 2 scope* | *Specific implementation task* | [ ] |

**Build rules:**
- [ ] **No scope creep.** If you discover something interesting but out of scope, write it down for later -- do not build it now.
- [ ] **No optimization.** First make it work, then measure. Optimization is for production, not PoCs.
- [ ] **No polish.** Hardcoded values, console output, manual steps -- all acceptable. The PoC needs to produce a measurable result, not impress anyone.
- [ ] **Document surprises.** If something unexpected happens during implementation -- an API behaves differently than documented, a library has a limitation you did not expect, performance is wildly different from estimates -- write it down immediately. These surprises ARE the findings.
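
A PoC-grade script that follows these rules can be very small: hardcoded data, console output, no error handling, one measurable number at the end. An illustrative sketch only; the event-processing scope and every name in it are placeholders for your own Step 2 scope.

```python
# Sketch of a PoC-grade script: hardcoded input, console output,
# no error handling, no polish. It only has to produce a measurable,
# reproducible result. All names are placeholders.
import time

EVENTS = [{"id": i} for i in range(10_000)]  # hardcoded test data

def handle(event: dict) -> None:
    event["handled"] = True  # stand-in for the real processing step

start = time.perf_counter()
for e in EVENTS:
    handle(e)
elapsed = time.perf_counter() - start

# Raw numbers only; interpretation happens in Step 4 and Step 5.
print(f"events={len(EVENTS)} elapsed_s={elapsed:.4f} "
      f"throughput_eps={len(EVENTS) / elapsed:.0f}")
```

Note what is absent: argument parsing, logging, retries, tests. That absence is the point.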
46
|
+
|
|
47
|
+
### 3. Track Implementation Notes
|
|
48
|
+
|
|
49
|
+
As you build, capture what you learn. These notes feed directly into Step 4 (testing) and Step 6 (documentation):
|
|
50
|
+
|
|
51
|
+
| Observation | Category | Impact on Feasibility |
|
|
52
|
+
|-------------|----------|----------------------|
|
|
53
|
+
| *What you noticed while building* | Performance / Integration / Data / Algorithm / Infrastructure | Positive / Negative / Neutral |
|
|
54
|
+
|
|
55
|
+
**Categories to watch:**
|
|
56
|
+
- **Performance:** Was anything surprisingly fast or slow?
|
|
57
|
+
- **Integration:** Did the API/service work as documented? Any unexpected behaviors, rate limits, or authentication quirks?
|
|
58
|
+
- **Data:** Is the data in the format you expected? Any transformation issues?
|
|
59
|
+
- **Algorithm:** Does the approach work in principle? Any edge cases that change the picture?
|
|
60
|
+
- **Infrastructure:** Any resource constraints (memory, CPU, network) you did not anticipate?
|
|
61
|
+
|
|
62
|
+
### 4. Build Checkpoint
|
|
63
|
+
|
|
64
|
+
Before moving to testing, verify:
|
|
65
|
+
|
|
66
|
+
- [ ] The PoC runs end-to-end (even if manually triggered or partially hardcoded)
|
|
67
|
+
- [ ] It produces a measurable output that can be compared against the pass/fail criteria from Step 1
|
|
68
|
+
- [ ] You can reproduce the result (run it again, get a comparable outcome)
|
|
69
|
+
- [ ] You have not built anything outside the Step 2 scope
|
|
70
|
+
- [ ] Implementation notes capture every surprise, blocker, and unexpected behavior
|
|
71
|
+
|
|
72
|
+
**If the PoC does not run:** That is a valid finding. If you cannot get it working at all, document exactly where it breaks and why. A PoC that fails to run because of a fundamental technical limitation has answered the feasibility question -- the answer is "no" (or "not with this approach").
|
|
73
|
+
|
|
74
|
+
---
|
|
75
|
+
|
|
76
|
+
## Your Turn
|
|
77
|
+
|
|
78
|
+
Build the PoC according to your Step 2 scope. Track implementation notes as you go. When you have a running prototype that produces measurable output, share your implementation notes and we will move to structured testing.
|
|
79
|
+
|
|
80
|
+
---
|
|
81
|
+
|
|
82
|
+
**[a]** Advanced Elicitation -- Deep dive into implementation challenges with guided questioning
|
|
83
|
+
**[p]** Party Mode -- Bring in other Vortex agents to review your implementation approach
|
|
84
|
+
**[c]** Continue -- Proceed to testing technical assumptions
|
|
85
|
+
|
|
86
|
+
---
|
|
87
|
+
|
|
88
|
+
## Next Step
|
|
89
|
+
|
|
90
|
+
When your prototype is built and producing measurable output, I'll load:
|
|
91
|
+
|
|
92
|
+
{project-root}/_bmad/bme/_vortex/workflows/proof-of-concept/steps/step-04-test.md
|
|
@@ -0,0 +1,103 @@
---
step: 4
workflow: proof-of-concept
title: Test Technical Assumptions
---

# Step 4: Test Technical Assumptions

The prototype exists. Now we find out if it actually works -- not "works on my machine" but works against the specific pass/fail criteria defined in Step 1. This is where assumptions meet reality.

## Why This Matters

Building a PoC and declaring it "works" based on a single happy-path run is not validation -- it is confirmation bias. Technical feasibility means the system works under the conditions that matter: at the expected load, with realistic data, against actual API endpoints, within the required time constraints. This step forces you to test systematically rather than optimistically, so the feasibility assessment in Step 5 is grounded in evidence, not hope.

## Your Task

### 1. Revisit Your Pass/Fail Criteria

Pull forward the criteria from Step 1 and confirm they are still the right thresholds:

| Criteria | Threshold (from Step 1) | Still Valid? | Adjusted Threshold (if changed) |
|----------|------------------------|--------------|-------------------------------|
| **Pass** | *What proves feasibility* | Yes / No | *Updated threshold if needed* |
| **Fail** | *What disproves feasibility* | Yes / No | *Updated threshold if needed* |
| **Inconclusive** | *What means "need more data"* | Yes / No | *Updated threshold if needed* |

If you adjusted thresholds, document why. Moving the goalposts after seeing results is a red flag -- but refining criteria based on what you learned during the build (Step 3) is legitimate.

### 2. Design Test Scenarios

For each technical risk being validated, define specific test scenarios:

| Scenario | What It Tests | Input | Expected Output | Risk Category |
|----------|--------------|-------|----------------|---------------|
| **Happy path** | Does it work at all under ideal conditions? | *Simplest valid input* | *Expected result* | Baseline |
| **Realistic load** | Does it work at expected production conditions? | *Realistic data volume, concurrent users, or request rate* | *Within pass threshold* | Performance / Scalability |
| **Edge cases** | Does it handle boundary conditions? | *Empty data, maximum size, malformed input, timeout* | *Graceful handling or documented limitation* | Data / Algorithm |
| **Failure modes** | What happens when dependencies fail? | *API timeout, network error, invalid response* | *Known failure behavior* | Integration / Dependencies |
| **Stress test** | Where does it break? | *2x, 5x, 10x expected load* | *Identify breaking point* | Scalability |

**Guidance:** You do not need all five scenario types. Choose the ones that directly test your top risks from Step 1. If your primary risk is integration complexity, focus on happy path and failure modes. If your primary risk is performance, focus on realistic load and stress test.

### 3. Run Tests and Record Results

Execute each scenario and record raw results:

| Scenario | Actual Result | Pass/Fail/Inconclusive | Notes |
|----------|--------------|----------------------|-------|
| *Happy path* | *What actually happened* | *Against criteria* | *Anything unexpected* |
| *Realistic load* | *Measured metrics* | *Against criteria* | *Bottlenecks observed* |
| *Edge cases* | *Behavior observed* | *Against criteria* | *Limitations found* |
| *Failure modes* | *How it failed* | *Against criteria* | *Recovery behavior* |
| *Stress test* | *Breaking point found* | *Against criteria* | *Resource limits hit* |

**Record raw data, not interpretations.** If the response time was 340ms, write "340ms" -- do not write "acceptable." Interpretation happens in Step 5.

### 4. Capture Performance Data (If Applicable)

If performance is a relevant risk, record specific metrics:

| Metric | Target (from Step 1) | Measured | Delta | Acceptable? |
|--------|---------------------|----------|-------|-------------|
| **Latency (p50)** | *Target ms* | *Actual ms* | *+/- ms* | Yes / No |
| **Latency (p95)** | *Target ms* | *Actual ms* | *+/- ms* | Yes / No |
| **Throughput** | *Target ops/sec* | *Actual ops/sec* | *+/- ops/sec* | Yes / No |
| **Memory usage** | *Target MB* | *Actual MB* | *+/- MB* | Yes / No |
| **CPU utilization** | *Target %* | *Actual %* | *+/- %* | Yes / No |
| **Error rate** | *Target %* | *Actual %* | *+/- %* | Yes / No |
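
If you collected raw latency samples, the p50/p95 rows can be computed with a nearest-rank percentile rather than eyeballed from a single run. A minimal sketch; the sample values are illustrative.

```python
# Sketch: nearest-rank percentiles over raw latency samples, so the
# performance table is filled from data rather than one lucky run.
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of a list of measurements."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Illustrative raw samples in milliseconds (record these, not "acceptable")
latencies_ms = [120.0, 135.0, 140.0, 150.0, 160.0,
                170.0, 180.0, 210.0, 260.0, 480.0]
print(f"p50={percentile(latencies_ms, 50)}ms "
      f"p95={percentile(latencies_ms, 95)}ms")  # p50=160.0ms p95=480.0ms
```

The p95 here (480ms) tells a very different story than the p50 (160ms), which is exactly why the table asks for both.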

### 5. Document Unexpected Findings

During testing, you will discover things you did not expect. These are often more valuable than the planned test results:

| Finding | Expected? | Impact | Action Required |
|---------|-----------|--------|----------------|
| *What you discovered* | Yes / No | High / Medium / Low | *Does this change the feasibility assessment?* |

Pay special attention to:
- API rate limits or throttling not mentioned in documentation
- Data format inconsistencies between documentation and reality
- Performance cliffs (works fine at N, collapses at N+1)
- Hidden dependencies or configuration requirements
- Licensing or usage restrictions discovered during testing

---

## Your Turn

Design test scenarios for your top technical risks, run them against the prototype, and record raw results. Share your test results and unexpected findings -- I will help you assess what they mean for feasibility.

---

**[a]** Advanced Elicitation -- Deep dive into test design with guided questioning
**[p]** Party Mode -- Bring in other Vortex agents to challenge your test methodology
**[c]** Continue -- Proceed to feasibility evaluation

---

## Next Step

When your tests are complete and results are recorded, I'll load:

{project-root}/_bmad/bme/_vortex/workflows/proof-of-concept/steps/step-05-evaluate.md