@trohde/earos 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (135)
  1. package/README.md +156 -0
  2. package/assets/init/.agents/skills/earos-artifact-gen/SKILL.md +106 -0
  3. package/assets/init/.agents/skills/earos-artifact-gen/references/interview-guide.md +313 -0
  4. package/assets/init/.agents/skills/earos-artifact-gen/references/output-guide.md +367 -0
  5. package/assets/init/.agents/skills/earos-assess/SKILL.md +212 -0
  6. package/assets/init/.agents/skills/earos-assess/references/calibration-benchmarks.md +160 -0
  7. package/assets/init/.agents/skills/earos-assess/references/output-templates.md +311 -0
  8. package/assets/init/.agents/skills/earos-assess/references/scoring-protocol.md +281 -0
  9. package/assets/init/.agents/skills/earos-calibrate/SKILL.md +153 -0
  10. package/assets/init/.agents/skills/earos-calibrate/references/agreement-metrics.md +188 -0
  11. package/assets/init/.agents/skills/earos-calibrate/references/calibration-protocol.md +263 -0
  12. package/assets/init/.agents/skills/earos-create/SKILL.md +257 -0
  13. package/assets/init/.agents/skills/earos-create/references/criterion-writing-guide.md +268 -0
  14. package/assets/init/.agents/skills/earos-create/references/dependency-rules.md +193 -0
  15. package/assets/init/.agents/skills/earos-create/references/rubric-interview-guide.md +123 -0
  16. package/assets/init/.agents/skills/earos-create/references/validation-checklist.md +238 -0
  17. package/assets/init/.agents/skills/earos-profile-author/SKILL.md +251 -0
  18. package/assets/init/.agents/skills/earos-profile-author/references/criterion-writing-guide.md +280 -0
  19. package/assets/init/.agents/skills/earos-profile-author/references/design-methods.md +158 -0
  20. package/assets/init/.agents/skills/earos-profile-author/references/profile-checklist.md +173 -0
  21. package/assets/init/.agents/skills/earos-remediate/SKILL.md +118 -0
  22. package/assets/init/.agents/skills/earos-remediate/references/output-template.md +199 -0
  23. package/assets/init/.agents/skills/earos-remediate/references/remediation-patterns.md +330 -0
  24. package/assets/init/.agents/skills/earos-report/SKILL.md +85 -0
  25. package/assets/init/.agents/skills/earos-report/references/portfolio-template.md +181 -0
  26. package/assets/init/.agents/skills/earos-report/references/single-artifact-template.md +168 -0
  27. package/assets/init/.agents/skills/earos-review/SKILL.md +130 -0
  28. package/assets/init/.agents/skills/earos-review/references/challenge-patterns.md +163 -0
  29. package/assets/init/.agents/skills/earos-review/references/output-template.md +180 -0
  30. package/assets/init/.agents/skills/earos-template-fill/SKILL.md +177 -0
  31. package/assets/init/.agents/skills/earos-template-fill/references/evidence-writing-guide.md +186 -0
  32. package/assets/init/.agents/skills/earos-template-fill/references/section-rubric-mapping.md +200 -0
  33. package/assets/init/.agents/skills/earos-validate/SKILL.md +113 -0
  34. package/assets/init/.agents/skills/earos-validate/references/fix-patterns.md +281 -0
  35. package/assets/init/.agents/skills/earos-validate/references/validation-checks.md +287 -0
  36. package/assets/init/.claude/CLAUDE.md +4 -0
  37. package/assets/init/AGENTS.md +293 -0
  38. package/assets/init/CLAUDE.md +635 -0
  39. package/assets/init/README.md +507 -0
  40. package/assets/init/calibration/gold-set/.gitkeep +0 -0
  41. package/assets/init/calibration/results/.gitkeep +0 -0
  42. package/assets/init/core/core-meta-rubric.yaml +643 -0
  43. package/assets/init/docs/consistency-report.md +325 -0
  44. package/assets/init/docs/getting-started.md +194 -0
  45. package/assets/init/docs/profile-authoring-guide.md +51 -0
  46. package/assets/init/docs/terminology.md +126 -0
  47. package/assets/init/earos.manifest.yaml +104 -0
  48. package/assets/init/evaluations/.gitkeep +0 -0
  49. package/assets/init/examples/aws-event-driven-order-processing/artifact.yaml +2056 -0
  50. package/assets/init/examples/aws-event-driven-order-processing/evaluation.yaml +973 -0
  51. package/assets/init/examples/aws-event-driven-order-processing/report.md +244 -0
  52. package/assets/init/examples/example-solution-architecture.evaluation.yaml +136 -0
  53. package/assets/init/examples/multi-cloud-data-analytics/artifact.yaml +715 -0
  54. package/assets/init/overlays/data-governance.yaml +94 -0
  55. package/assets/init/overlays/regulatory.yaml +154 -0
  56. package/assets/init/overlays/security.yaml +92 -0
  57. package/assets/init/profiles/adr.yaml +225 -0
  58. package/assets/init/profiles/capability-map.yaml +223 -0
  59. package/assets/init/profiles/reference-architecture.yaml +426 -0
  60. package/assets/init/profiles/roadmap.yaml +205 -0
  61. package/assets/init/profiles/solution-architecture.yaml +227 -0
  62. package/assets/init/research/architecture-assessment-rubrics-research.docx +0 -0
  63. package/assets/init/research/architecture-assessment-rubrics-research.md +566 -0
  64. package/assets/init/research/reference-architecture-research.md +751 -0
  65. package/assets/init/standard/EAROS.md +1426 -0
  66. package/assets/init/standard/schemas/artifact.schema.json +1295 -0
  67. package/assets/init/standard/schemas/artifact.uischema.json +65 -0
  68. package/assets/init/standard/schemas/evaluation.schema.json +284 -0
  69. package/assets/init/standard/schemas/rubric.schema.json +383 -0
  70. package/assets/init/templates/evaluation-record.template.yaml +58 -0
  71. package/assets/init/templates/new-profile.template.yaml +65 -0
  72. package/bin.js +188 -0
  73. package/dist/assets/_basePickBy-BVu6YmSW.js +1 -0
  74. package/dist/assets/_baseUniq-CWRzQDz_.js +1 -0
  75. package/dist/assets/arc-CyDBhtDM.js +1 -0
  76. package/dist/assets/architectureDiagram-2XIMDMQ5-BH6O4dvN.js +36 -0
  77. package/dist/assets/blockDiagram-WCTKOSBZ-2xmwdjpg.js +132 -0
  78. package/dist/assets/c4Diagram-IC4MRINW-BNmPRFJF.js +10 -0
  79. package/dist/assets/channel-CiySTNoJ.js +1 -0
  80. package/dist/assets/chunk-4BX2VUAB-DGQTvirp.js +1 -0
  81. package/dist/assets/chunk-55IACEB6-DNMAQAC_.js +1 -0
  82. package/dist/assets/chunk-FMBD7UC4-BJbVTQ5o.js +15 -0
  83. package/dist/assets/chunk-JSJVCQXG-BCxUL74A.js +1 -0
  84. package/dist/assets/chunk-KX2RTZJC-H7wWZOfz.js +1 -0
  85. package/dist/assets/chunk-NQ4KR5QH-BK4RlTQF.js +220 -0
  86. package/dist/assets/chunk-QZHKN3VN-0chxDV5g.js +1 -0
  87. package/dist/assets/chunk-WL4C6EOR-DexfQ-AV.js +189 -0
  88. package/dist/assets/classDiagram-VBA2DB6C-D7luWJQn.js +1 -0
  89. package/dist/assets/classDiagram-v2-RAHNMMFH-D7luWJQn.js +1 -0
  90. package/dist/assets/clone-ylgRbd3D.js +1 -0
  91. package/dist/assets/cose-bilkent-S5V4N54A-DS2IOCfZ.js +1 -0
  92. package/dist/assets/cytoscape.esm-CyJtwmzi.js +331 -0
  93. package/dist/assets/dagre-KLK3FWXG-BbSoTTa3.js +4 -0
  94. package/dist/assets/defaultLocale-DX6XiGOO.js +1 -0
  95. package/dist/assets/diagram-E7M64L7V-C9TvYgv0.js +24 -0
  96. package/dist/assets/diagram-IFDJBPK2-DowUMWrg.js +43 -0
  97. package/dist/assets/diagram-P4PSJMXO-BL6nrnQF.js +24 -0
  98. package/dist/assets/erDiagram-INFDFZHY-rXPRl8VM.js +70 -0
  99. package/dist/assets/flowDiagram-PKNHOUZH-DBRM99-W.js +162 -0
  100. package/dist/assets/ganttDiagram-A5KZAMGK-INcWFsBT.js +292 -0
  101. package/dist/assets/gitGraphDiagram-K3NZZRJ6-DMwpfE91.js +65 -0
  102. package/dist/assets/graph-DLQn37b-.js +1 -0
  103. package/dist/assets/index-BFFITMT8.js +650 -0
  104. package/dist/assets/index-H7f6VTz1.css +1 -0
  105. package/dist/assets/infoDiagram-LFFYTUFH-B0f4TWRM.js +2 -0
  106. package/dist/assets/init-Gi6I4Gst.js +1 -0
  107. package/dist/assets/ishikawaDiagram-PHBUUO56-CsU6XimZ.js +70 -0
  108. package/dist/assets/journeyDiagram-4ABVD52K-CQ7ibNib.js +139 -0
  109. package/dist/assets/kanban-definition-K7BYSVSG-DzEN7THt.js +89 -0
  110. package/dist/assets/katex-B1X10hvy.js +261 -0
  111. package/dist/assets/layout-C0dvb42R.js +1 -0
  112. package/dist/assets/linear-j4a8mGj7.js +1 -0
  113. package/dist/assets/mindmap-definition-YRQLILUH-DP8iEuCf.js +68 -0
  114. package/dist/assets/ordinal-Cboi1Yqb.js +1 -0
  115. package/dist/assets/pieDiagram-SKSYHLDU-BpIAXgAm.js +30 -0
  116. package/dist/assets/quadrantDiagram-337W2JSQ-DrpXn5Eg.js +7 -0
  117. package/dist/assets/requirementDiagram-Z7DCOOCP-Bg7EwHlG.js +73 -0
  118. package/dist/assets/sankeyDiagram-WA2Y5GQK-BWagRs1F.js +10 -0
  119. package/dist/assets/sequenceDiagram-2WXFIKYE-q5jwhivG.js +145 -0
  120. package/dist/assets/stateDiagram-RAJIS63D-B_J9pE-2.js +1 -0
  121. package/dist/assets/stateDiagram-v2-FVOUBMTO-Q_1GcybB.js +1 -0
  122. package/dist/assets/timeline-definition-YZTLITO2-dv0jgQ0z.js +61 -0
  123. package/dist/assets/treemap-KZPCXAKY-Dt1dkIE7.js +162 -0
  124. package/dist/assets/vennDiagram-LZ73GAT5-BdO5RgRZ.js +34 -0
  125. package/dist/assets/xychartDiagram-JWTSCODW-CpDVe-8v.js +7 -0
  126. package/dist/index.html +23 -0
  127. package/export-docx.js +1583 -0
  128. package/init.js +353 -0
  129. package/manifest-cli.mjs +207 -0
  130. package/package.json +83 -0
  131. package/schemas/artifact.schema.json +1295 -0
  132. package/schemas/artifact.uischema.json +65 -0
  133. package/schemas/evaluation.schema.json +284 -0
  134. package/schemas/rubric.schema.json +383 -0
  135. package/serve.js +238 -0
--- /dev/null
+++ b/package/assets/init/profiles/roadmap.yaml
@@ -0,0 +1,205 @@
+ rubric_id: EAROS-ROAD-001
+ version: 2.0.0
+ kind: profile
+ title: Roadmap Profile
+ status: draft
+ artifact_type: roadmap
+ inherits:
+   - EAROS-CORE-002
+ design_method: lifecycle_centred
+
+ dimensions:
+   - id: RD1
+     name: Dependency realism
+     description: >
+       Roadmaps without explicit dependencies are wishful thinking. A list of initiatives
+       with target dates but no dependency analysis creates false confidence — when one item
+       slips, downstream items slip too, but the roadmap does not show the cascade. Dependency
+       realism separates an architectural roadmap from a marketing slide.
+     weight: 1.0
+     criteria:
+       - id: RD-DEP-01
+         question: Are dependencies between roadmap items identified and realistic?
+         description: >
+           Dependencies between roadmap items determine the critical path and constrain
+           delivery flexibility. Without explicit dependency analysis, teams cannot identify
+           risks to the overall timeline, stakeholders cannot understand which delays matter
+           most, and the roadmap cannot be used for genuine portfolio prioritization.
+           Dependencies should cover both technical dependencies and team/resource constraints.
+         metric_type: ordinal
+         scale: [0, 1, 2, 3, 4, "N/A"]
+         gate:
+           enabled: true
+           severity: major
+           failure_effect: Cannot pass if score < 2
+         required_evidence:
+           - dependency map or graph linking roadmap items
+           - critical path identification
+         scoring_guide:
+           "0": No dependencies shown — all items appear independent with only dates
+           "1": Dependencies listed but unrealistic or contradictory — dates don't reflect the chain
+           "2": Key dependencies identified — main technical dependencies mapped, critical path implied
+           "3": Dependencies are realistic and mostly complete — critical path explicit, team dependencies included
+           "4": Dependencies are realistic, complete, and risk-assessed — contingency plans for critical path items documented
+         anti_patterns:
+           - Date list presented without any dependency analysis
+           - All items shown as independent in parallel
+           - Dependencies mentioned in narrative but not shown in the roadmap structure
+         examples:
+           good:
+             - >
+               "API Gateway Upgrade (Q2) → depends on: Platform team capacity confirmed (Q1),
+               Security review completion (Q1 scheduled). Blocks: Microservices migration (Q3).
+               Critical path flagged. Risk: if API Gateway upgrade slips >4 weeks, Microservices
+               migration moves to Q4 — identified as highest schedule risk."
+           bad:
+             - >
+               "Q1: Complete infrastructure upgrade. Q2: Migrate to microservices. Q3: Launch
+               new products. [All items listed independently with no dependency links]"
+         decision_tree: >
+           IF no dependencies shown and all items appear independent THEN score 0.
+           IF some dependencies mentioned in narrative but not mapped THEN score 1.
+           IF key dependencies identified but analysis incomplete THEN score 2.
+           IF dependencies realistic and mostly complete with critical path identified THEN score 3.
+           IF all dependencies mapped with risk assessment and contingency for critical path THEN score 4.
+         remediation_hints:
+           - Draw a dependency graph or add predecessor/successor columns to the roadmap table
+           - Mark the critical path explicitly and identify the highest-risk dependency
+           - Add team capacity and shared platform dependencies, not just technical ones
+
+   - id: RD2
+     name: Transition-state clarity
+     description: >
+       The distance between current state and target state is rarely navigable in a single
+       step. Transition states define the viable intermediate architectures that allow the
+       organization to continue operating safely during transformation while the target
+       state is being built incrementally.
+     weight: 1.0
+     criteria:
+       - id: RD-TRN-01
+         question: Are intermediate states between current and target architecture defined?
+         description: >
+           Without defined transition architectures, organizations face a binary choice:
+           remain on the current architecture or jump to the target in a big-bang migration.
+           In practice neither is acceptable — the organization must keep running during
+           transformation. Transition states define the intermediate architectures that are
+           stable enough to run production workloads while transformation continues. Each
+           state must be a valid, deployable architecture — not just a phase label.
+         metric_type: ordinal
+         scale: [0, 1, 2, 3, 4, "N/A"]
+         gate: false
+         required_evidence:
+           - transition architecture diagrams or descriptions
+           - interim state definitions with entry and exit criteria
+         scoring_guide:
+           "0": No transition states — jump directly from current to target with no intermediate
+           "1": "Vague phase labels only (e.g. 'Phase 1: Migrate') with no architecture content"
+           "2": Some transition states defined — intermediate architectures sketched but incomplete
+           "3": Transition states are clearly defined and sequenced — each state has architecture content
+           "4": Transition states are detailed with entry and exit criteria, risk assessment, and rollback plan
+         anti_patterns:
+           - Jump from current to target with no intermediate architecture
+           - Phase labels without any architecture content
+           - No entry or exit criteria for any transition state
+         examples:
+           good:
+             - >
+               "State 0 (Current): Monolith on-premises, deployed to VMware.
+               State 1 (Q3 2026 — Strangler Fig): New features deployed as cloud microservices;
+               existing monolith unchanged. Entry: monolith API spec complete. Exit: all new
+               features deployed to cloud for 90 days with no critical incidents.
+               State 2 (Q1 2027 — Hybrid): Monolith data synchronised to cloud datastores.
+               State 3 (Target — Q4 2027): Cloud-native microservices, monolith decommissioned."
+           bad:
+             - >
+               "Phase 1: Assessment. Phase 2: Migration. Phase 3: Optimization.
+               [No architecture content, no entry or exit criteria, no intermediate state design]"
+         decision_tree: >
+           IF no current or target state and no transition view THEN score 0.
+           IF vague phase labels with no architecture content THEN score 1.
+           IF some transition states defined but inconsistent or incomplete THEN score 2.
+           IF transition states clearly defined and sequenced with architecture content THEN score 3.
+           IF all transition states have entry/exit criteria, architecture views, and risk assessment THEN score 4.
+         remediation_hints:
+           - Define each transition state as a named, deployable architecture (not just a phase name)
+           - Add entry criteria (what must be true before entering) and exit criteria (what must be true before moving on)
+           - Include a diagram for each transition state
+
+   - id: RD3
+     name: Ownership and measurability
+     description: >
+       Roadmaps without owners and funding linkage are aspirations, not commitments. A roadmap
+       item with no named owner and no budget allocation has no accountability and no realistic
+       chance of delivery. Ownership and measurability convert a wish list into an accountable
+       architecture plan.
+     weight: 1.0
+     criteria:
+       - id: RD-OWN-01
+         question: Do roadmap items have owners, funding linkage, and measurable milestones?
+         description: >
+           Architecture roadmaps frequently fail not because the technical direction is wrong,
+           but because no one owns the delivery. Explicitly naming owners, linking to funding
+           mechanisms, and defining measurable milestones with success criteria converts a
+           direction statement into an accountable governance artifact that can be tracked
+           quarter-over-quarter.
+         metric_type: ordinal
+         scale: [0, 1, 2, 3, 4, "N/A"]
+         gate: false
+         required_evidence:
+           - owner assignments (named individual or accountable team)
+           - funding references (project code, approved budget)
+           - milestone definitions with measurable outcomes
+         scoring_guide:
+           "0": No ownership or milestones — roadmap items have dates only
+           "1": Owners vaguely implied — team referenced without individual accountability
+           "2": Some owners and milestones — partial coverage across roadmap items
+           "3": Most items have owners and milestones — clear accountability for the majority
+           "4": All items owned with named individuals, measurable milestones, funding references, and success criteria
+         anti_patterns:
+           - Roadmap not connected to target state (items floating without strategic anchor)
+           - Generic team references ('Platform team will own') without named accountability
+           - Milestones defined by activity ('complete design') rather than outcome ('system live with 99.9% SLO')
+         examples:
+           good:
+             - >
+               "API Gateway Upgrade: Owner: Jane Smith (Platform Lead). Budget: Project P2026-114
+               (£150K approved). Milestone 1: Technical design complete (2026-03-31). Milestone 2:
+               Non-prod environment live (2026-05-15). Milestone 3: Production cut-over (2026-06-30).
+               Success metric: zero critical P1 incidents for 30 days post-migration."
+           bad:
+             - >
+               "The Platform team will own the infrastructure work. [No individual owner, no
+               funding reference, no measurable milestones or success criteria]"
+         decision_tree: >
+           IF no ownership or milestones anywhere in the roadmap THEN score 0.
+           IF owners implied by team names without individual accountability THEN score 1.
+           IF some items have owners or milestones but significant gaps THEN score 2.
+           IF most items have named owners and measurable milestones THEN score 3.
+           IF all items have named owners, funding references, measurable milestones, and success criteria THEN score 4.
+         remediation_hints:
+           - Name a single accountable owner (individual, not team) for each roadmap item
+           - Link each item to an approved funding source or project code
+           - Replace activity-based milestones with outcome-based ones
+
+ scoring:
+   scale: 0-4 ordinal plus N/A
+   method: gates_first_then_weighted_average
+   thresholds:
+     pass: No critical gate failure, overall >= 3.2, and no dimension < 2.0
+     conditional_pass: No critical gate failure and overall 2.4-3.19 or one weak dimension
+     rework_required: Overall < 2.4 or repeated weak dimensions
+     reject: Critical gate failure or mandatory control breach
+     not_reviewable: Evidence insufficient for core gate criteria
+   na_policy: Exclude N/A criteria from denominator; evaluator must justify N/A
+   confidence_policy: Confidence reported separately, must not modify score
+
+ outputs:
+   require_evidence_refs: true
+   require_confidence: true
+   require_actions: true
+   require_evidence_class: true
+   require_evidence_anchors: true
+   formats:
+     - yaml
+     - json
+     - markdown-report
--- /dev/null
+++ b/package/assets/init/profiles/solution-architecture.yaml
@@ -0,0 +1,227 @@
+ rubric_id: EAROS-SOL-001
+ version: 2.0.0
+ kind: profile
+ title: Solution Architecture Profile
+ status: approved
+ artifact_type: solution_architecture
+ inherits:
+   - EAROS-CORE-002
+ design_method: decision_centred
+ purpose:
+   - design_review
+   - architecture_board_review
+   - delivery_readiness_review
+ stakeholders:
+   - solution_architect
+   - product_owner
+   - engineering_lead
+   - security
+   - operations
+   - data
+ viewpoints:
+   - context
+   - logical
+   - integration
+   - deployment
+   - security
+   - operating_model
+
+ dimensions:
+   - id: SD1
+     name: Solution optioning and rationale
+     description: >
+       Does the artifact explain the decision to adopt the proposed solution, including the
+       alternatives that were rejected and the criteria used to choose between them? Architecture
+       decisions without rationale cannot be challenged, improved, or defended.
+     weight: 1.0
+     criteria:
+       - id: SOL-01
+         question: Does the artifact explain the chosen option and the rejected alternatives at a decision-useful level?
+         description: >
+           A solution design that simply presents the chosen option — however technically
+           sophisticated — is an assertion, not an argument. Reviewers, delivery teams, and
+           governance bodies need to understand the decision-making process to provide meaningful
+           oversight. This criterion checks whether the artifact documents a genuine decision,
+           with alternatives considered and selection criteria made explicit.
+         metric_type: ordinal
+         scale: [0, 1, 2, 3, 4, "N/A"]
+         gate:
+           enabled: true
+           severity: major
+           failure_effect: Cannot pass above conditional_pass
+         required_evidence:
+           - option comparison
+           - selection rationale
+           - decision criteria
+         scoring_guide:
+           "0": No alternatives or rationale — solution presented without explanation
+           "1": Choice asserted only — 'we chose X' with no comparison or criteria
+           "2": Alternatives mentioned but weakly compared — lacks criteria or is superficial
+           "3": Clear rationale and meaningful comparison — criteria applied, alternatives evaluated
+           "4": Decision is explicit, evidence-backed, criteria-weighted, and tied to business and quality concerns
+         anti_patterns:
+           - Single option dressed as a comparison with no real alternatives
+           - Cost and risk ignored in option evaluation
+           - Decision made before analysis documented
+         examples:
+           good:
+             - >
+               "Options considered: (A) Buy — vendor package, (B) Build — custom API, (C) Extend —
+               middleware layer. Criteria (weighted): cost (30%), delivery speed (25%), strategic
+               fit (25%), vendor risk (20%). Decision: Option B (Build). Rationale: Only option
+               meeting EU data residency requirements (eliminates A). Option C rejected: integration
+               complexity exceeds 3-year ROI threshold."
+           bad:
+             - >
+               "After reviewing the options, the team decided to build a custom solution.
+               [No alternatives documented, no criteria stated]"
+         decision_tree: >
+           IF no alternatives section exists THEN score 0.
+           IF one alternative mentioned without comparison or criteria THEN score 1.
+           IF multiple alternatives listed but comparison is superficial or criteria unstated THEN score 2.
+           IF meaningful comparison with explicit selection criteria THEN score 3.
+           IF decision is explicit, evidence-backed, criteria-weighted, and tied to business and quality concerns THEN score 4.
+         remediation_hints:
+           - Add an option comparison table with named criteria and scoring
+           - Explicitly state why rejected options were eliminated
+           - Connect the final decision to business drivers and quality attribute requirements
+
+   - id: SD2
+     name: Quality attribute treatment
+     description: >
+       Quality attributes translate business expectations into architectural constraints and
+       mechanisms. Without an explicit mapping from non-functional requirements to design
+       decisions, delivery teams will implement functional requirements correctly but fail
+       on performance, security, resilience, or compliance.
+     weight: 1.0
+     criteria:
+       - id: SOL-02
+         question: Are key quality attributes and non-functional requirements translated into architectural mechanisms or constraints?
+         description: >
+           NFRs that are documented but not used in design are decoration. Each material
+           non-functional requirement — availability target, throughput, latency, security
+           control, data retention — must have a corresponding architectural mechanism or
+           constraint that can be inspected in the design and tested in delivery. Without
+           traceability from NFR to design response, governance cannot verify compliance.
+         metric_type: ordinal
+         scale: [0, 1, 2, 3, 4, "N/A"]
+         gate:
+           enabled: true
+           severity: critical
+           failure_effect: Reject when material NFRs are absent or untraceable
+         required_evidence:
+           - NFR list or quality attribute list
+           - quality attribute scenarios or measurable targets
+           - architectural mechanisms or design responses
+         scoring_guide:
+           "0": No material NFR treatment — quality attributes entirely absent
+           "1": NFRs listed but not used — bullet list with no design response
+           "2": Partial architectural response — some NFRs have mechanisms, material gaps remain
+           "3": Most material NFRs handled in the design — mechanisms named and traceable
+           "4": Quality attribute response is explicit, traceable, measurable, and credible for all material attributes
+         anti_patterns:
+           - Performance, security, or availability mentioned only in prose with no numbers or mechanisms
+           - No resilience mechanism despite high availability claim
+           - NFR section at start of document never referenced again
+         examples:
+           good:
+             - >
+               "NFR: Availability 99.9%. Mechanism: Active-passive failover with automated health
+               checks, 30s switchover. NFR: Data residency EU. Mechanism: All data stores deployed
+               in eu-west-1, cross-region replication disabled, enforced by policy. NFR: Auth
+               response < 100ms. Mechanism: JWT validation at edge with 5-minute cache."
+           bad:
+             - >
+               "The system will be highly available, performant, and secure.
+               [No targets, no mechanisms, no traceability]"
+         decision_tree: >
+           IF no NFR section or quality attribute list exists THEN score 0.
+           IF NFRs listed as bullet points with no design response THEN score 1.
+           IF some NFRs have design responses but material NFRs have no response THEN score 2.
+           IF most material NFRs have named architectural mechanisms THEN score 3.
+           IF all material NFRs are traceable to specific design decisions with measurable targets THEN score 4.
+         remediation_hints:
+           - Add a scenario-to-design-mechanism mapping table
+           - Replace vague adjectives with measurable targets (e.g. '99.9% availability', 'P99 < 200ms')
+           - Ensure each mechanism is identifiable in a diagram or design section
+
+   - id: SD3
+     name: Delivery and operability readiness
+     description: >
+       A solution design that describes what to build but not how to deliver it, operate it,
+       or migrate to it leaves a significant gap between architecture and execution. Implementation
+       dependencies, operational ownership, and migration implications must be addressed
+       to make the artifact actionable for delivery and governance.
+     weight: 1.0
+     criteria:
+       - id: SOL-03
+         question: Does the solution architecture describe implementation dependencies, operational ownership, and migration implications?
+         description: >
+           Many architectures are technically sound but operationally risky because they do not
+           address who runs the system, how it is released, what happens when it fails, or how
+           the organization transitions from the current state. These concerns must be answered
+           in the design — not deferred to delivery — so that governance can assess delivery risk
+           alongside architectural soundness.
+         metric_type: ordinal
+         scale: [0, 1, 2, 3, 4, "N/A"]
+         gate: false
+         required_evidence:
+           - dependency list (team, platform, or technical)
+           - operating model or operational ownership
+           - migration or rollout approach
+         scoring_guide:
+           "0": No delivery or operational treatment — design is build-time only
+           "1": High-level only — 'the platform team will handle operations'
+           "2": Partial readiness view — some operational or delivery concerns addressed
+           "3": Mostly clear implementation and operability path — deployment model, ownership, migration approach present
+           "4": Ready for execution planning — delivery plan, operating model, observability design, migration steps, and rollback approach all present
+         anti_patterns:
+           - Build-time architecture view only with no run-time or operational consideration
+           - No named operational owner for the solution
+           - Migration deferred to 'future project'
+         examples:
+           good:
+             - >
+               "Deployment: Blue-green via GitHub Actions, manual approval gate on production.
+               Owner: Platform Engineering (Jane Smith, delivery lead). Migration: Strangler fig
+               pattern; legacy Payments API deprecated Q4 2026, dual-run period Q3–Q4. Rollback:
+               Feature flag disables new service within 30 seconds. Observability: Prometheus +
+               Grafana, PagerDuty alerts for P99 > 500ms."
+           bad:
+             - >
+               "The delivery team will handle deployment and operations.
+               [No owner named, no migration approach, no operational design]"
+         decision_tree: >
+           IF no delivery or operational concerns addressed THEN score 0.
+           IF only high-level team references without specifics THEN score 1.
+           IF some operational or delivery concerns addressed but material gaps THEN score 2.
+           IF deployment model, operational ownership, and migration approach are clear THEN score 3.
+           IF delivery plan, operating model, observability design, migration steps, and rollback all present THEN score 4.
+         remediation_hints:
+           - Add a named operational owner and team responsibility model
+           - Add migration dependencies and a phased rollout plan
+           - Define observability requirements (metrics, alerts, dashboards)
+
+ scoring:
+   scale: 0-4 ordinal plus N/A
+   method: gates_first_then_weighted_average
+   thresholds:
+     pass: No critical gate failure, overall >= 3.2, and no dimension < 2.0
+     conditional_pass: No critical gate failure and overall 2.4-3.19 or one weak dimension
+     rework_required: Overall < 2.4 or repeated weak dimensions
+     reject: Critical gate failure or mandatory control breach
+     not_reviewable: Evidence insufficient for core gate criteria
+     profile_specific_escalation: Escalate to human review when SOL-02 score < 3 for customer-facing or high-risk solutions
+   na_policy: Exclude N/A criteria from denominator; evaluator must justify N/A
+   confidence_policy: Confidence reported separately, must not modify score
+
+ outputs:
+   require_evidence_refs: true
+   require_confidence: true
+   require_actions: true
+   require_evidence_class: true
+   require_evidence_anchors: true
+   formats:
+     - yaml
+     - json
+     - markdown-report
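
Both profiles declare `method: gates_first_then_weighted_average`, an `na_policy` that excludes N/A criteria from the denominator, and the same verdict thresholds. The sketch below shows one way an evaluator could implement that ordering. It is a minimal illustration under stated assumptions: the names `CriterionResult`, `dimension_averages`, and `verdict` are invented for this example and are not part of the EAROS schemas, and the handling of the ambiguous "one weak dimension" clause in `conditional_pass` is one possible reading.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CriterionResult:
    dimension: str                        # e.g. "RD1" or "SD2"
    score: Optional[int]                  # 0-4, or None for a justified "N/A"
    weight: float = 1.0
    gate_severity: Optional[str] = None   # "critical", "major", or None
    gate_failed: bool = False

def dimension_averages(results):
    """Weighted average per dimension; N/A scores drop out of the denominator (na_policy)."""
    sums, weights = {}, {}
    for r in results:
        if r.score is None:               # excluded entirely, not counted as zero
            continue
        sums[r.dimension] = sums.get(r.dimension, 0.0) + r.score * r.weight
        weights[r.dimension] = weights.get(r.dimension, 0.0) + r.weight
    return {d: sums[d] / weights[d] for d in sums}

def verdict(results):
    # Gates first: a critical gate failure rejects before any averaging happens.
    if any(r.gate_failed and r.gate_severity == "critical" for r in results):
        return "reject"
    dims = dimension_averages(results)
    if not dims:                          # nothing scorable
        return "not_reviewable"
    overall = sum(dims.values()) / len(dims)
    weak = [d for d, avg in dims.items() if avg < 2.0]
    if overall >= 3.2 and not weak:
        return "pass"
    if overall >= 2.4 or len(weak) == 1:
        return "conditional_pass"
    return "rework_required"
```

Note that confidence is deliberately absent from the computation: per `confidence_policy`, it is reported alongside the verdict and must not modify the score.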