@trohde/earos 1.0.0
- package/README.md +156 -0
- package/assets/init/.agents/skills/earos-artifact-gen/SKILL.md +106 -0
- package/assets/init/.agents/skills/earos-artifact-gen/references/interview-guide.md +313 -0
- package/assets/init/.agents/skills/earos-artifact-gen/references/output-guide.md +367 -0
- package/assets/init/.agents/skills/earos-assess/SKILL.md +212 -0
- package/assets/init/.agents/skills/earos-assess/references/calibration-benchmarks.md +160 -0
- package/assets/init/.agents/skills/earos-assess/references/output-templates.md +311 -0
- package/assets/init/.agents/skills/earos-assess/references/scoring-protocol.md +281 -0
- package/assets/init/.agents/skills/earos-calibrate/SKILL.md +153 -0
- package/assets/init/.agents/skills/earos-calibrate/references/agreement-metrics.md +188 -0
- package/assets/init/.agents/skills/earos-calibrate/references/calibration-protocol.md +263 -0
- package/assets/init/.agents/skills/earos-create/SKILL.md +257 -0
- package/assets/init/.agents/skills/earos-create/references/criterion-writing-guide.md +268 -0
- package/assets/init/.agents/skills/earos-create/references/dependency-rules.md +193 -0
- package/assets/init/.agents/skills/earos-create/references/rubric-interview-guide.md +123 -0
- package/assets/init/.agents/skills/earos-create/references/validation-checklist.md +238 -0
- package/assets/init/.agents/skills/earos-profile-author/SKILL.md +251 -0
- package/assets/init/.agents/skills/earos-profile-author/references/criterion-writing-guide.md +280 -0
- package/assets/init/.agents/skills/earos-profile-author/references/design-methods.md +158 -0
- package/assets/init/.agents/skills/earos-profile-author/references/profile-checklist.md +173 -0
- package/assets/init/.agents/skills/earos-remediate/SKILL.md +118 -0
- package/assets/init/.agents/skills/earos-remediate/references/output-template.md +199 -0
- package/assets/init/.agents/skills/earos-remediate/references/remediation-patterns.md +330 -0
- package/assets/init/.agents/skills/earos-report/SKILL.md +85 -0
- package/assets/init/.agents/skills/earos-report/references/portfolio-template.md +181 -0
- package/assets/init/.agents/skills/earos-report/references/single-artifact-template.md +168 -0
- package/assets/init/.agents/skills/earos-review/SKILL.md +130 -0
- package/assets/init/.agents/skills/earos-review/references/challenge-patterns.md +163 -0
- package/assets/init/.agents/skills/earos-review/references/output-template.md +180 -0
- package/assets/init/.agents/skills/earos-template-fill/SKILL.md +177 -0
- package/assets/init/.agents/skills/earos-template-fill/references/evidence-writing-guide.md +186 -0
- package/assets/init/.agents/skills/earos-template-fill/references/section-rubric-mapping.md +200 -0
- package/assets/init/.agents/skills/earos-validate/SKILL.md +113 -0
- package/assets/init/.agents/skills/earos-validate/references/fix-patterns.md +281 -0
- package/assets/init/.agents/skills/earos-validate/references/validation-checks.md +287 -0
- package/assets/init/.claude/CLAUDE.md +4 -0
- package/assets/init/AGENTS.md +293 -0
- package/assets/init/CLAUDE.md +635 -0
- package/assets/init/README.md +507 -0
- package/assets/init/calibration/gold-set/.gitkeep +0 -0
- package/assets/init/calibration/results/.gitkeep +0 -0
- package/assets/init/core/core-meta-rubric.yaml +643 -0
- package/assets/init/docs/consistency-report.md +325 -0
- package/assets/init/docs/getting-started.md +194 -0
- package/assets/init/docs/profile-authoring-guide.md +51 -0
- package/assets/init/docs/terminology.md +126 -0
- package/assets/init/earos.manifest.yaml +104 -0
- package/assets/init/evaluations/.gitkeep +0 -0
- package/assets/init/examples/aws-event-driven-order-processing/artifact.yaml +2056 -0
- package/assets/init/examples/aws-event-driven-order-processing/evaluation.yaml +973 -0
- package/assets/init/examples/aws-event-driven-order-processing/report.md +244 -0
- package/assets/init/examples/example-solution-architecture.evaluation.yaml +136 -0
- package/assets/init/examples/multi-cloud-data-analytics/artifact.yaml +715 -0
- package/assets/init/overlays/data-governance.yaml +94 -0
- package/assets/init/overlays/regulatory.yaml +154 -0
- package/assets/init/overlays/security.yaml +92 -0
- package/assets/init/profiles/adr.yaml +225 -0
- package/assets/init/profiles/capability-map.yaml +223 -0
- package/assets/init/profiles/reference-architecture.yaml +426 -0
- package/assets/init/profiles/roadmap.yaml +205 -0
- package/assets/init/profiles/solution-architecture.yaml +227 -0
- package/assets/init/research/architecture-assessment-rubrics-research.docx +0 -0
- package/assets/init/research/architecture-assessment-rubrics-research.md +566 -0
- package/assets/init/research/reference-architecture-research.md +751 -0
- package/assets/init/standard/EAROS.md +1426 -0
- package/assets/init/standard/schemas/artifact.schema.json +1295 -0
- package/assets/init/standard/schemas/artifact.uischema.json +65 -0
- package/assets/init/standard/schemas/evaluation.schema.json +284 -0
- package/assets/init/standard/schemas/rubric.schema.json +383 -0
- package/assets/init/templates/evaluation-record.template.yaml +58 -0
- package/assets/init/templates/new-profile.template.yaml +65 -0
- package/bin.js +188 -0
- package/dist/assets/_basePickBy-BVu6YmSW.js +1 -0
- package/dist/assets/_baseUniq-CWRzQDz_.js +1 -0
- package/dist/assets/arc-CyDBhtDM.js +1 -0
- package/dist/assets/architectureDiagram-2XIMDMQ5-BH6O4dvN.js +36 -0
- package/dist/assets/blockDiagram-WCTKOSBZ-2xmwdjpg.js +132 -0
- package/dist/assets/c4Diagram-IC4MRINW-BNmPRFJF.js +10 -0
- package/dist/assets/channel-CiySTNoJ.js +1 -0
- package/dist/assets/chunk-4BX2VUAB-DGQTvirp.js +1 -0
- package/dist/assets/chunk-55IACEB6-DNMAQAC_.js +1 -0
- package/dist/assets/chunk-FMBD7UC4-BJbVTQ5o.js +15 -0
- package/dist/assets/chunk-JSJVCQXG-BCxUL74A.js +1 -0
- package/dist/assets/chunk-KX2RTZJC-H7wWZOfz.js +1 -0
- package/dist/assets/chunk-NQ4KR5QH-BK4RlTQF.js +220 -0
- package/dist/assets/chunk-QZHKN3VN-0chxDV5g.js +1 -0
- package/dist/assets/chunk-WL4C6EOR-DexfQ-AV.js +189 -0
- package/dist/assets/classDiagram-VBA2DB6C-D7luWJQn.js +1 -0
- package/dist/assets/classDiagram-v2-RAHNMMFH-D7luWJQn.js +1 -0
- package/dist/assets/clone-ylgRbd3D.js +1 -0
- package/dist/assets/cose-bilkent-S5V4N54A-DS2IOCfZ.js +1 -0
- package/dist/assets/cytoscape.esm-CyJtwmzi.js +331 -0
- package/dist/assets/dagre-KLK3FWXG-BbSoTTa3.js +4 -0
- package/dist/assets/defaultLocale-DX6XiGOO.js +1 -0
- package/dist/assets/diagram-E7M64L7V-C9TvYgv0.js +24 -0
- package/dist/assets/diagram-IFDJBPK2-DowUMWrg.js +43 -0
- package/dist/assets/diagram-P4PSJMXO-BL6nrnQF.js +24 -0
- package/dist/assets/erDiagram-INFDFZHY-rXPRl8VM.js +70 -0
- package/dist/assets/flowDiagram-PKNHOUZH-DBRM99-W.js +162 -0
- package/dist/assets/ganttDiagram-A5KZAMGK-INcWFsBT.js +292 -0
- package/dist/assets/gitGraphDiagram-K3NZZRJ6-DMwpfE91.js +65 -0
- package/dist/assets/graph-DLQn37b-.js +1 -0
- package/dist/assets/index-BFFITMT8.js +650 -0
- package/dist/assets/index-H7f6VTz1.css +1 -0
- package/dist/assets/infoDiagram-LFFYTUFH-B0f4TWRM.js +2 -0
- package/dist/assets/init-Gi6I4Gst.js +1 -0
- package/dist/assets/ishikawaDiagram-PHBUUO56-CsU6XimZ.js +70 -0
- package/dist/assets/journeyDiagram-4ABVD52K-CQ7ibNib.js +139 -0
- package/dist/assets/kanban-definition-K7BYSVSG-DzEN7THt.js +89 -0
- package/dist/assets/katex-B1X10hvy.js +261 -0
- package/dist/assets/layout-C0dvb42R.js +1 -0
- package/dist/assets/linear-j4a8mGj7.js +1 -0
- package/dist/assets/mindmap-definition-YRQLILUH-DP8iEuCf.js +68 -0
- package/dist/assets/ordinal-Cboi1Yqb.js +1 -0
- package/dist/assets/pieDiagram-SKSYHLDU-BpIAXgAm.js +30 -0
- package/dist/assets/quadrantDiagram-337W2JSQ-DrpXn5Eg.js +7 -0
- package/dist/assets/requirementDiagram-Z7DCOOCP-Bg7EwHlG.js +73 -0
- package/dist/assets/sankeyDiagram-WA2Y5GQK-BWagRs1F.js +10 -0
- package/dist/assets/sequenceDiagram-2WXFIKYE-q5jwhivG.js +145 -0
- package/dist/assets/stateDiagram-RAJIS63D-B_J9pE-2.js +1 -0
- package/dist/assets/stateDiagram-v2-FVOUBMTO-Q_1GcybB.js +1 -0
- package/dist/assets/timeline-definition-YZTLITO2-dv0jgQ0z.js +61 -0
- package/dist/assets/treemap-KZPCXAKY-Dt1dkIE7.js +162 -0
- package/dist/assets/vennDiagram-LZ73GAT5-BdO5RgRZ.js +34 -0
- package/dist/assets/xychartDiagram-JWTSCODW-CpDVe-8v.js +7 -0
- package/dist/index.html +23 -0
- package/export-docx.js +1583 -0
- package/init.js +353 -0
- package/manifest-cli.mjs +207 -0
- package/package.json +83 -0
- package/schemas/artifact.schema.json +1295 -0
- package/schemas/artifact.uischema.json +65 -0
- package/schemas/evaluation.schema.json +284 -0
- package/schemas/rubric.schema.json +383 -0
- package/serve.js +238 -0
@@ -0,0 +1,325 @@
# EaROS Consistency Report

**Date:** 2026-03-19
**Performed by:** Claude Sonnet 4.6 (automated + manual review)
**Scope:** Full project — YAML rubrics, JSON schemas, Markdown docs, templates
**Validation tool:** `tools/validate.py` (jsonschema Draft 2020-12)

---
## Executive Summary

| Category | Before | After |
|----------|--------|-------|
| Schema errors | 10 | 0 |
| Duplicate criterion IDs | 0 | 0 |
| Duplicate dimension IDs | 0 | 0 |
| Cross-reference errors | 3 | 0 |
| Documentation inaccuracies | 5 | 0 |
| Quality warnings (missing optional fields) | 97 | 89 |

All schema errors and documentation inaccuracies have been fixed. Quality warnings represent missing CLAUDE.md-recommended fields (`description`, `decision_tree`, `examples.good/bad`) in older criteria — these require content authoring, not structural fixes, and are tracked as technical debt below.

---
## Check 1 — Schema Validation

**Tool:** Python `jsonschema` v4.x, Draft 2020-12 validator against `standard/schemas/rubric.schema.json`
**Required top-level fields per schema:** `rubric_id`, `version`, `kind`, `title`, `artifact_type`, `dimensions`, `scoring`, `outputs`

### Violations Found and Fixed

| File | Violation | Fix Applied |
|------|-----------|-------------|
| `profiles/roadmap.yaml` | Missing required `outputs` section | Added `outputs` block with all v2 fields |
| `overlays/regulatory.yaml` | Missing required `scoring` section | Added `scoring` with `append_to_base_rubric` method |
| `overlays/regulatory.yaml` | Missing required `outputs` section | Added `outputs` block with all v2 fields |

### Post-fix Status

All 9 rubric files now pass schema validation with 0 errors.
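The required-field portion of this check can be sketched as follows. This is a simplified stand-in operating on an already-parsed rubric dict, not the actual `tools/validate.py` (which runs the full Draft 2020-12 validation via the jsonschema library); the example rubric values are placeholders, not real content.

```python
# Simplified stand-in for the Check 1 required-field scan.
REQUIRED_TOP_LEVEL = [
    "rubric_id", "version", "kind", "title",
    "artifact_type", "dimensions", "scoring", "outputs",
]

def missing_required_fields(rubric: dict) -> list[str]:
    """Return schema-required top-level fields absent from the rubric."""
    return [field for field in REQUIRED_TOP_LEVEL if field not in rubric]

# A rubric lacking `outputs` — the shape of the roadmap.yaml violation
incomplete = {
    "rubric_id": "EXAMPLE-001",   # placeholder values, not real content
    "version": "2.0.0",
    "kind": "profile",
    "title": "Example",
    "artifact_type": "example",
    "dimensions": [],
    "scoring": {},
}
print(missing_required_fields(incomplete))  # ['outputs']
```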
---
## Check 2 — Criterion ID Uniqueness

**Method:** Programmatic scan of all `criteria[*].id` fields across all 9 rubric files.
**Result: PASS — no duplicates.**

35 unique criterion IDs found:

| ID | File |
|----|------|
| ACT-01 | core/core-meta-rubric.yaml |
| ADR-01, ADR-02, ADR-03 | profiles/adr.yaml |
| CAP-01, CAP-02, CAP-03 | profiles/capability-map.yaml |
| CMP-01, CON-01, CVP-01, MNT-01, RAT-01, SCP-01, STK-01, STK-02, TRC-01 | core/core-meta-rubric.yaml |
| DAT-01 | overlays/data-governance.yaml |
| RA-DEC-01, RA-DEC-02, RA-IMP-01, RA-IMP-02, RA-OPS-01, RA-QA-01, RA-REU-01, RA-VIEW-01, RA-VIEW-02 | profiles/reference-architecture.yaml |
| RD-DEP-01, RD-OWN-01, RD-TRN-01 | profiles/roadmap.yaml |
| REG-EV-01, REG-ID-01 | overlays/regulatory.yaml |
| SEC-01 | overlays/security.yaml |
| SOL-01, SOL-02, SOL-03 | profiles/solution-architecture.yaml |
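The duplicate-ID scan behind this check (and Check 3) reduces to a counting pass. A minimal sketch, assuming criteria are nested under their dimensions (the actual schema layout may differ; the real script first parses the YAML files listed above):

```python
from collections import Counter

def duplicate_criterion_ids(rubrics: list[dict]) -> list[str]:
    """Return criterion IDs appearing more than once across all rubrics."""
    ids = [
        criterion["id"]
        for rubric in rubrics
        for dimension in rubric.get("dimensions", [])
        for criterion in dimension.get("criteria", [])
    ]
    return sorted(i for i, count in Counter(ids).items() if count > 1)

# Illustrative inputs only; IDs borrowed from the table above
core = {"dimensions": [{"id": "D1", "criteria": [{"id": "SCP-01"}, {"id": "CMP-01"}]}]}
clash = {"dimensions": [{"id": "X1", "criteria": [{"id": "SCP-01"}]}]}
print(duplicate_criterion_ids([core, clash]))  # ['SCP-01']
```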
---
## Check 3 — Dimension ID Uniqueness

**Method:** Programmatic scan of all `dimensions[*].id` fields across all 9 rubric files.
**Result: PASS — no duplicates.**

31 unique dimension IDs found across: `D1–D9` (core), `RA-D1–RA-D6` (reference architecture), `SD1–SD3` (solution architecture), `AD1–AD3` (ADR), `CP1–CP3` (capability map), `RD1–RD3` (roadmap), `SEC1` (security), `DAT1` (data governance), `REG1–REG2` (regulatory).

---
## Check 4 — Cross-Reference Integrity

### Violations Found and Fixed

| File | Violation | Fix Applied |
|------|-----------|-------------|
| `profiles/solution-architecture.yaml` | `inherits: [EAROS-CORE-001@1.0.0]` — referenced rubric_id does not exist in the v2 repository | Changed to `inherits: [EAROS-CORE-002]` |
| `profiles/adr.yaml` | Same — inherits non-existent `EAROS-CORE-001@1.0.0` | Changed to `inherits: [EAROS-CORE-002]` |
| `profiles/capability-map.yaml` | Same — inherits non-existent `EAROS-CORE-001@1.0.0` | Changed to `inherits: [EAROS-CORE-002]` |
| `overlays/regulatory.yaml` | Had `inherits: [EAROS-CORE-002]` — overlays must NOT have an `inherits` field; they append, not inherit | Removed `inherits` field entirely |

### Post-fix Status

All profiles inherit `EAROS-CORE-002`. No overlays have `inherits` fields. All referenced rubric IDs resolve to existing files.
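The two rules above (every `inherits` target must resolve; overlays must not declare `inherits`) can be sketched as a single resolution pass. The `kind` vocabulary (`"overlay"`) and the flat dict shape are assumptions for illustration:

```python
def cross_reference_errors(rubrics: list[dict]) -> list[str]:
    """Flag unresolved `inherits` targets and overlays that declare `inherits`."""
    known_ids = {r["rubric_id"] for r in rubrics}
    errors = []
    for r in rubrics:
        if r.get("kind") == "overlay" and "inherits" in r:
            errors.append(f"{r['rubric_id']}: overlays must not declare `inherits`")
        for parent in r.get("inherits", []):
            if parent not in known_ids:
                errors.append(f"{r['rubric_id']}: `inherits` target {parent} not found")
    return errors

# Illustrative data reproducing the pre-fix violation shape
core = {"rubric_id": "EAROS-CORE-002", "kind": "core"}
profile = {"rubric_id": "EXAMPLE-PROFILE", "kind": "profile",
           "inherits": ["EAROS-CORE-001@1.0.0"]}
print(cross_reference_errors([core, profile]))
```
After the fix (`inherits: [EAROS-CORE-002]`), the same pass returns an empty list.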
---
## Check 5 — Scoring Consistency

### Scale definition

All 9 files use `scale: 0-4 ordinal plus N/A` ✓

### Scoring guide completeness (keys "0"–"4" for ordinal criteria)

All ordinal criteria have all five keys in `scoring_guide`. ✓
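A sketch of this completeness check; the `metric_type` value `"ordinal"` is assumed from the terminology above rather than taken from the schema:

```python
def incomplete_scoring_guides(criteria: list[dict]) -> list[str]:
    """Return IDs of ordinal criteria whose scoring_guide lacks any key "0".."4"."""
    required_keys = {"0", "1", "2", "3", "4"}
    return [
        c["id"]
        for c in criteria
        if c.get("metric_type") == "ordinal"
        and not required_keys <= set(c.get("scoring_guide", {}))
    ]

# Illustrative criteria, not real rubric content
criteria = [
    {"id": "OK-01", "metric_type": "ordinal",
     "scoring_guide": {str(k): "level text" for k in range(5)}},
    {"id": "BAD-01", "metric_type": "ordinal",
     "scoring_guide": {"0": "low", "4": "high"}},  # missing keys 1-3
]
print(incomplete_scoring_guides(criteria))  # ['BAD-01']
```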
### Status threshold consistency

| File | Method | Thresholds |
|------|--------|------------|
| `core-meta-rubric.yaml` | `gates_first_then_weighted_average` | Full set incl. floor check ✓ |
| `reference-architecture.yaml` | `gates_first_then_weighted_average` | Full set ✓ |
| `roadmap.yaml` | `gates_first_then_weighted_average` | **Fixed** — was missing floor check `no dimension < 2.0` |
| `solution-architecture.yaml` | `merge_with_inherited_and_apply_core_thresholds` | Profile-specific escalation only (by design) |
| `adr.yaml` | `merge_with_inherited_and_apply_core_thresholds` | Profile-specific escalation only (by design) |
| `capability-map.yaml` | `merge_with_inherited_and_apply_core_thresholds` | Profile-specific escalation only (by design) |
| `security.yaml` | `append_to_base_rubric` | Overlay-specific ✓ |
| `data-governance.yaml` | `append_to_base_rubric` | Overlay-specific ✓ |
| `regulatory.yaml` | `append_to_base_rubric` | **Fixed** — was missing `scoring` section |

**Note on `merge_with_inherited_and_apply_core_thresholds`:** This method name is non-standard: it is not among the documented method values. The schema, however, does not constrain `method` to an enum, so the value passes validation, and it is at least used consistently across the three older profiles (SOL, ADR, CAP). Future authoring guidance should address whether these profiles should adopt `gates_first_then_weighted_average`.

---
## Check 6 — Gate Type Consistency

**Valid gate severity values per schema:** `advisory`, `major`, `critical`
**Valid disabled form:** `gate: false`

All gate configurations across all 35 criteria use valid severity values. No invalid gate types found. ✓
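The check itself is a small membership test. This sketch assumes `gate` is stored either as `false` or as a bare severity string, which may differ from the actual YAML shape:

```python
VALID_SEVERITIES = {"advisory", "major", "critical"}

def invalid_gates(criteria: list[dict]) -> list[str]:
    """Return IDs whose gate is neither disabled (false) nor a valid severity."""
    bad = []
    for criterion in criteria:
        gate = criterion.get("gate", False)
        if gate is not False and gate not in VALID_SEVERITIES:
            bad.append(criterion["id"])
    return bad

# Illustrative criteria; "blocking" is a deliberately invalid severity
criteria = [
    {"id": "SCP-01", "gate": "critical"},
    {"id": "RAT-01", "gate": False},
    {"id": "X-01", "gate": "blocking"},
]
print(invalid_gates(criteria))  # ['X-01']
```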
Gate inventory:

| Criterion | File | Severity |
|-----------|------|----------|
| STK-01, TRC-01 | core | major |
| SCP-01, CMP-01 | core | critical |
| RA-VIEW-01, RA-DEC-01, RA-OPS-01, RA-QA-01 | reference-architecture | major |
| SOL-01 | solution-architecture | major |
| SOL-02 | solution-architecture | critical |
| ADR-01 | adr | major |
| CAP-01 | capability-map | major |
| RD-DEP-01 | roadmap | major |
| SEC-01 | security | critical |
| REG-ID-01, REG-EV-01 | regulatory | critical |
| All others | — | `gate: false` |

---
## Check 7 — File Naming Consistency

**Convention (CLAUDE.md §8):** `<artifact-type>.v<major>.yaml` where `<major>` must match the major version number inside the file.

### Violations Found

| File | Filename Major | Internal Version | Status |
|------|---------------|-----------------|--------|
| `core-meta-rubric.yaml` | 2 | 2.0.0 | ✓ |
| `reference-architecture.yaml` | 2 | 2.0.0 | ✓ |
| `roadmap.yaml` | 2 | 2.0.0 | ✓ |
| `regulatory.yaml` | 2 | 2.0.0 | ✓ |
| `solution-architecture.yaml` | 2 | **1.0.0 → fixed to 2.0.0** | ✓ (after fix) |
| `adr.yaml` | 2 | **1.0.0 → fixed to 2.0.0** | ✓ (after fix) |
| `capability-map.yaml` | 2 | **1.0.0 → fixed to 2.0.0** | ✓ (after fix) |
| `security.yaml` | 2 | 1.0.0 | ⚠ mismatch |
| `data-governance.yaml` | 2 | 1.0.0 | ⚠ mismatch |

**Remaining mismatch — `security.yaml` and `data-governance.yaml`:** These overlays have `version: 1.0.0` internally but carry `v2` in their filenames. The `v2` appears to denote the EaROS framework era rather than the file's own major version (a deliberate ambiguity in the original design). These have NOT been changed because: (1) bumping to 2.0.0 would imply a breaking change that did not occur, and (2) the `v2` naming may be intentional to signal EaROS v2 compatibility. This should be resolved by a governance decision, documented below under Judgment Calls.

All filenames use kebab-case ✓. No spaces in filenames ✓.
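The filename/version alignment rule reduces to comparing the `v<major>` suffix against the major component of the internal `version` field. A sketch:

```python
import re

def filename_version_mismatch(filename: str, internal_version: str) -> bool:
    """True if a `<name>.v<major>.yaml` filename disagrees with the internal major."""
    match = re.search(r"\.v(\d+)\.ya?ml$", filename)
    if not match:
        return False  # unconventional name; a separate check flags those
    return match.group(1) != internal_version.split(".")[0]

print(filename_version_mismatch("security.v2.yaml", "1.0.0"))  # True
print(filename_version_mismatch("roadmap.v2.yaml", "2.0.0"))   # False
```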
---
## Check 8 — Missing Required Fields

### Schema-required criterion fields (`id`, `question`, `metric_type`, `scale`, `required_evidence`, `scoring_guide`)

**Result: PASS — all 35 criteria have all 6 schema-required fields.** ✓

### CLAUDE.md-required criterion fields (`description`, `gate`, `anti_patterns`, `examples.good`, `examples.bad`, `decision_tree`, `remediation_hints`)

This is a quality standard beyond the JSON schema. 89 warnings remain after fixes. The `reference-architecture.yaml` profile is the most complete; older profiles (SOL, ADR, CAP, Roadmap) and the two simple overlays (SEC, DAT) are missing these fields on most criteria.

**Breakdown by file:**

| File | Criteria | Missing `description` | Missing `decision_tree` | Missing `examples` |
|------|----------|-----------------------|------------------------|-------------------|
| `core-meta-rubric.yaml` | 10 | 9 | 8 | 7 |
| `reference-architecture.yaml` | 9 | 0 | 5 | 3 |
| `solution-architecture.yaml` | 3 | 3 | 3 | 3 |
| `adr.yaml` | 3 | 3 | 3 | 3 |
| `capability-map.yaml` | 3 | 3 | 3 | 3 |
| `roadmap.yaml` | 3 | 3 | 3 | 3 |
| `security.yaml` | 1 | 1 | 1 | 1 |
| `data-governance.yaml` | 1 | 1 | 1 | 1 |
| `regulatory.yaml` | 2 | 2 | 2 | 2 |

**Action required:** The missing fields in each criterion need domain-expert authoring. They cannot be safely auto-filled — they require knowledge of the review context. This is tracked as a backlog item below.

---
## Check 9 — README/Docs Consistency

### Violations Found and Fixed

| File | Issue | Fix Applied |
|------|-------|-------------|
| `README.md` | Score labels used `Exemplary`, `Adequate`, `Insufficient` instead of the canonical `Strong`, `Good`, `Weak` from CLAUDE.md §2.1 and the core rubric | Fixed to match canonical labels |
| `README.md` | DAG step names `challenge_pass_check` and `calibration_anchor` do not match YAML (`challenge_pass`, `calibration`) | Fixed |
| `docs/getting-started.md` | Referenced `level_descriptors` (field does not exist) — correct field name is `scoring_guide` | Fixed |
| `docs/getting-started.md` | Referenced `evidence_requirement` (field does not exist) — correct field name is `required_evidence` | Fixed |
| `docs/getting-started.md` | Gate description stated "any gate criterion scores 0 → Reject" — incorrect; only `critical` gates reject; `major` gates cap at Conditional Pass | Fixed to accurately describe gate severity model |
| `CHANGELOG.md` | DAG step names `challenge_pass_check` and `calibration_anchor` | Fixed |
| `CHANGELOG.md` | Reference architecture profile listed as "11 criteria" — actual count is 9 | Fixed |
| `profiles/reference-architecture.yaml` | Internal comment stated "11 criteria … 21 criteria total" | Fixed to 9 / 19 |
| `CLAUDE.md` | Section 10 stated "11 criteria … 21 criteria total" | Fixed to 9 / 19 |

### File reference integrity (README → actual files)

All files referenced in README.md were verified to exist:
- `core/core-meta-rubric.yaml` ✓
- All 5 profiles ✓
- All 3 overlays ✓
- Both JSON schemas ✓
- Both template files ✓
- `templates/reference-architecture/Reference_Architecture_Template_v2.docx` ✓
- Both scoring sheets ✓
- `examples/example-solution-architecture.evaluation.yaml` ✓
- `calibration/gold-set/`, `calibration/results/` ✓
- All 3 research files ✓
- All 3 presentation files ✓
- `docs/getting-started.md`, `docs/profile-authoring-guide.md` ✓

---
## Check 10 — Duplicate Files

**Method:** Compared files by purpose and content type.

`tools/scoring-sheets/EAROS_Scoring_Sheet.xlsx` (v1) and `tools/scoring-sheets/EAROS_Scoring_Sheet_v2.xlsx` (v2) coexist intentionally — the v1 sheet is the original tooling from EaROS v1.0, kept for reference. The README correctly lists both. Not a duplicate.

No other duplicate files found. ✓

---

## Check 11 — CHANGELOG Completeness

### Violations Found and Fixed

| Issue | Fix Applied |
|-------|-------------|
| DAG step names `challenge_pass_check` / `calibration_anchor` | Fixed to `challenge_pass` / `calibration` |
| Reference architecture profile listed with "11 criteria" | Fixed to 9 |

### Coverage gaps (not fixed — judgment calls)

- The CHANGELOG v1.0.0 entry references the v1 versions of `solution-architecture.yaml` and `adr.yaml`, but the v1.x files no longer exist in the repository. The v2.0.0 changelog does not mention that these profiles were ported/revised for v2. **Recommendation:** Add a changelog entry noting the v2 updates to `solution-architecture.yaml`, `adr.yaml`, and `capability-map.yaml` — or annotate that they were renamed from the v1 originals.

---
## Check 12 — CLAUDE.md Accuracy

The CLAUDE.md accurately describes the project structure, scoring model, gate types, evaluation modes, and conventions, with the single fix applied above (criterion count 11→9). All file paths in the Quick Reference table were verified to exist. ✓

---
## Judgment Calls (Not Auto-Fixed)

These items require deliberate decisions by the project owner:

### JC-1: Overlays `security.yaml` and `data-governance.yaml` — version mismatch

Both have `version: 1.0.0` internally but carry `v2` in their filenames. Two interpretations:
- **Interpretation A:** The `v2` in the filename means EaROS-v2-era compatibility, not the file's own major version. Under this interpretation, `version: 1.0.0` is correct.
- **Interpretation B:** Per CLAUDE.md §8 convention, `v<major>` in the filename must match the internal major. Under this interpretation, the internal version should be `2.0.0`.

**Recommendation:** Adopt Interpretation B for consistency. Bump both overlays to `version: 2.0.0` and document in CHANGELOG. This is a non-breaking change (no scoring model change).

### JC-2: `merge_with_inherited_and_apply_core_thresholds` scoring method

Three profiles (SOL, ADR, CAP) use this non-standard method name. It is not defined in the schema's `method` property (which accepts any string, so no schema violation), but it differs from the primary method `gates_first_then_weighted_average` used by newer files.

**Recommendation:** Decide whether these profiles should use `gates_first_then_weighted_average` (requiring full threshold definitions) or whether `merge_with_inherited_and_apply_core_thresholds` should be formally documented as a valid method name in the schema.

### JC-3: Missing recommended criterion fields across 27 criteria

89 quality warnings remain for missing `description`, `decision_tree`, and `examples.good/bad` fields. The reference architecture profile is the model of completeness; all other files fall short of the CLAUDE.md §8 standard.

**Recommendation:** Create a sprint/backlog item to enrich each criterion with:
1. A `description` field (what does this criterion assess and why does it matter?)
2. A `decision_tree` with IF/THEN logic for AI agent disambiguation
3. `examples.good` and `examples.bad` for concrete illustrations

Priority order: core criteria first (most frequently used), then SOL/ADR/CAP profiles, then overlays.

### JC-4: CHANGELOG v1.0.0 entry references non-existent file names

The v1.0.0 section mentions the v1 versions of `solution-architecture.yaml` and `adr.yaml`, which are no longer in the repository. Either the files were renamed without a changelog entry, or the v1 files were replaced wholesale.

**Recommendation:** Add a note to CHANGELOG v2.0.0 explaining that `solution-architecture.yaml` and `adr.yaml` are updated versions of the v1.0 originals, now inheriting EAROS-CORE-002.

---
## Files Modified by This Consistency Check

| File | Change |
|------|--------|
| `profiles/solution-architecture.yaml` | Fixed `inherits` (CORE-001→CORE-002), added `design_method: decision_centred`, bumped `version` to 2.0.0, added `require_evidence_class` and `require_evidence_anchors` to outputs |
| `profiles/adr.yaml` | Fixed `inherits`, added `design_method: decision_centred`, bumped `version` to 2.0.0, added v2 output fields |
| `profiles/capability-map.yaml` | Fixed `inherits`, added `design_method: viewpoint_centred`, bumped `version` to 2.0.0, added v2 output fields |
| `profiles/roadmap.yaml` | Added missing `outputs` section, fixed `thresholds` to include floor check and `not_reviewable` status |
| `overlays/regulatory.yaml` | Removed erroneous `inherits` field, added `scoring` section, added `outputs` section, fixed `rubric_id` from `EAROS-OVL-REG-001` to `EAROS-OVR-REG-001`, added `anti_patterns` and `remediation_hints` to both criteria |
| `README.md` | Fixed score labels (Exemplary→Strong, Adequate→Good, Insufficient→Weak), fixed DAG step names |
| `docs/getting-started.md` | Fixed `level_descriptors`→`scoring_guide`, `evidence_requirement`→`required_evidence`, rewrote gate description to accurately reflect severity model |
| `CHANGELOG.md` | Fixed DAG step names, fixed reference architecture criterion count (11→9) |
| `profiles/reference-architecture.yaml` | Fixed internal comment (11 criteria→9, 21 total→19) |
| `CLAUDE.md` | Fixed Section 10 criterion count (11→9, 21→19) |
| `tools/validate.py` | New file — validation script for ongoing use |

---
## Validation Script

`tools/validate.py` can be run at any time to re-check all rubric files:

```bash
py -3 tools/validate.py
```

It checks: schema compliance, cross-reference integrity, duplicate IDs, gate severities, scoring guide completeness, filename/version alignment, and quality field coverage. Add new rubric files to the `RUBRIC_FILES` list at the top of the script.
@@ -0,0 +1,194 @@
# Getting Started with EaROS

This guide walks you through your first architecture artifact assessment using EaROS. By the end, you will have scored an artifact, produced a structured evaluation record, and will know how to interpret the results.

---

## Before You Start

**What you need:**
- The artifact you want to assess (document, YAML, or structured text)
- About 30–60 minutes for a first assessment (subsequent assessments take 15–20 minutes once you know the rubric)

**What EaROS provides:**
- A rubric specifying exactly what to look for and how to score it
- A scoring sheet to record your evidence and scores
- Clear pass/fail thresholds

---
## Step 1: Identify the Artifact Type

EaROS has profiles for the most common enterprise architecture artifact types:

| Artifact Type | Profile to Use |
|--------------|----------------|
| Solution architecture document | `profiles/solution-architecture.yaml` |
| Reference architecture | `profiles/reference-architecture.yaml` |
| Architecture Decision Record (ADR) | `profiles/adr.yaml` |
| Capability map | `profiles/capability-map.yaml` |
| Architecture roadmap | `profiles/roadmap.yaml` |
| Other / unknown | Core only: `core/core-meta-rubric.yaml` |

If your artifact does not match any profile, apply only the core rubric. The core dimensions are universal.

---
## Step 2: Select Your Rubric Set
|
|
38
|
+
|
|
39
|
+
Every assessment starts with the core and adds a profile, plus any applicable overlays.
|
|
40
|
+
|
|
41
|
+
**Minimum (core only):**
|
|
42
|
+
```
|
|
43
|
+
core/core-meta-rubric.yaml
|
|
44
|
+
```
|
|
45
|
+
|
|
46
|
+
**Typical (core + profile):**
|
|
47
|
+
```
|
|
48
|
+
core/core-meta-rubric.yaml
|
|
49
|
+
profiles/solution-architecture.yaml
|
|
50
|
+
```
|
|
51
|
+
|
|
52
|
+
**Full (core + profile + overlay):**
|
|
53
|
+
```
|
|
54
|
+
core/core-meta-rubric.yaml
|
|
55
|
+
profiles/solution-architecture.yaml
|
|
56
|
+
overlays/security.yaml ← if the design touches auth, secrets, or data handling
|
|
57
|
+
overlays/data-governance.yaml ← if the design involves data flows or storage
|
|
58
|
+
overlays/regulatory.yaml ← if the design is subject to compliance requirements
|
|
59
|
+
```
|
|
60
|
+
|
|
61
|
+
Apply overlays selectively. Not every artifact needs every overlay.
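
The selection logic above can be sketched as a small helper. The function and its flags are hypothetical (EaROS does not ship such a helper); only the file paths match the repository layout.

```python
# Hypothetical helper: assemble a rubric set from artifact characteristics.
# Only the file paths come from the EaROS layout; the flags are illustrative.
def select_rubric_set(profile=None, touches_security=False,
                      has_data_flows=False, regulated=False):
    """Return the list of rubric files to apply, core first."""
    rubrics = ["core/core-meta-rubric.yaml"]    # every assessment starts here
    if profile:
        rubrics.append(profile)                 # e.g. "profiles/solution-architecture.yaml"
    if touches_security:
        rubrics.append("overlays/security.yaml")         # auth, secrets, data handling
    if has_data_flows:
        rubrics.append("overlays/data-governance.yaml")  # data flows or storage
    if regulated:
        rubrics.append("overlays/regulatory.yaml")       # compliance requirements
    return rubrics
```

For a solution architecture that handles authentication, `select_rubric_set("profiles/solution-architecture.yaml", touches_security=True)` would yield the core, the profile, and the security overlay.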

---

## Step 3: Open the Scoring Sheet

Open the appropriate Excel scoring sheet from `tools/scoring-sheets/`:

- **`EAROS_Scoring_Sheet_v2.xlsx`** — use for most artifact types
- **`EAROS_RefArch_Scoring_Sheet.xlsx`** — use specifically for reference architectures

The scoring sheet has:
- One tab per rubric section (core dimensions + profile dimensions)
- Dropdown menus for scores (0, 1, 2, 3, 4, N/A)
- Evidence fields for recording your cited text or reference
- An automatic aggregation tab that calculates the weighted score and indicates the pass threshold

---

## Step 4: Read the Rubric, Then Read the Artifact

Open the relevant YAML rubric files. For each criterion, familiarise yourself with:
- The `description` — what the criterion is asking
- The `scoring_guide` — what each score level (0–4) means for this specific criterion
- The `required_evidence` — what kind of evidence counts
- The `gate` field — if enabled with severity `critical`, a failure triggers a Reject; severity `major` caps the status at Conditional Pass

**Then read the artifact end-to-end** before scoring. Do not score as you read on the first pass. Form an overall impression first, then return to score criterion by criterion.
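
Parsed into Python, a single criterion carrying the fields above might look like the sketch below. Only the field names (`description`, `scoring_guide`, `required_evidence`, `gate`) come from the rubric format; the criterion ID, the descriptor texts, and the `threshold` key are invented for illustration.

```python
# Illustrative shape of one parsed rubric criterion. Field names follow the
# EaROS rubric format; the ID, descriptor texts, and threshold are invented.
criterion = {
    "id": "EX-01",
    "description": "Scope and boundary clarity",
    "scoring_guide": {   # one descriptor per score level (only two shown here)
        3: "Scope is defined with boundaries; some exclusions are implicit.",
        4: "Scope is precisely defined with explicit boundaries and exclusions.",
    },
    "required_evidence": "Scope statement or context diagram",
    "gate": {"severity": "major", "threshold": 2},  # or False when not gated
}

# Severity drives the gate behaviour: critical -> Reject on failure,
# major -> caps the status at Conditional Pass.
gated = bool(criterion["gate"])
```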

---

## Step 5: Score Each Criterion

For each criterion:

1. **Find the evidence.** Locate the section, statement, diagram, or table in the artifact that addresses this criterion. If you cannot find any evidence, that is a 0 (Absent) or N/A, not a 1.

2. **Match the level descriptor.** Compare what you found to the descriptor for each score level. The score is the *highest level where the artifact fully satisfies the descriptor*. If it partially meets level 3 but not fully, score 2.

3. **Record the evidence.** In the scoring sheet evidence field, write a brief reference: section number, page, or a short direct quote. This is mandatory — unsubstantiated scores are not valid under EaROS.

4. **Flag uncertainty.** If you genuinely cannot assess the criterion (for example, the artifact references external documents you do not have access to), mark it N/A and note the reason.

### Example Scoring Decision

**Criterion:** Scope and boundary clarity
**Descriptor for score 3:** "Scope is defined with boundaries; assumptions are stated but some exclusions are implicit rather than explicit."
**Descriptor for score 4:** "Scope is precisely defined with explicit boundaries, exclusions, assumptions, and constraints; no ambiguity about what is in or out."

You read the artifact and find a scope statement that defines what is in scope but does not list explicit exclusions. → **Score: 3** → Record: "Section 1.2: scope statement defines in-scope components but exclusions are not listed."
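
The matching rule in step 2 (take the highest level whose descriptor is fully satisfied) can be sketched as follows. In practice the per-level judgement is yours, so the `satisfied_levels` input is a stand-in for that judgement:

```python
def match_level(satisfied_levels):
    """Score a criterion by the 'highest fully satisfied descriptor' rule.

    satisfied_levels maps each score level (0-4) to whether the artifact
    fully satisfies that level's descriptor.
    """
    for level in (4, 3, 2, 1):       # walk down from the top
        if satisfied_levels.get(level):
            return level
    return 0                         # no descriptor satisfied: 0 (Absent)
```

For the worked example, levels up to 3 are satisfied and level 4 is not (no explicit exclusions), so the function returns 3; an artifact that only partially meets level 3 satisfies levels up to 2 and scores 2.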

---

## Step 6: Check the Gates

Before calculating the aggregate, check every criterion that has a `gate` object (rather than `gate: false`) in the rubric files. Gate behaviour depends on severity:

- **`critical`** — Any score below the threshold triggers an immediate **Reject**, regardless of the aggregate score.
- **`major`** — A weak score (typically < 2) caps the status at **Conditional Pass**; the artifact cannot achieve a Pass.
- **`advisory`** — Triggers a recommendation but does not cap the status.

Gate criteria represent non-negotiable minimums for their respective concerns. A critical gate failure means the artifact has a fundamental deficiency that makes it unsuitable for its purpose.
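
The severity rules above can be summarised in a short sketch. The tuple shape and return values are illustrative; in practice the gate check happens on the scoring sheet:

```python
def check_gates(gate_results):
    """Reduce gated-criterion results to an overall gate outcome.

    gate_results: iterable of (severity, score, threshold) tuples.
    Returns "reject", "cap_conditional", or "ok".
    """
    outcome = "ok"
    for severity, score, threshold in gate_results:
        if score >= threshold:
            continue                        # gate satisfied
        if severity == "critical":
            return "reject"                 # immediate Reject, regardless of aggregate
        if severity == "major":
            outcome = "cap_conditional"     # Conditional Pass at best
        # "advisory": recommendation only, status unaffected
    return outcome
```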

---

## Step 7: Determine the Status

The scoring sheet calculates the weighted dimension average automatically. Read the status from the aggregation tab:

| Weighted Average | Status |
|------------------|--------|
| ≥ 3.2 | **Pass** |
| 2.4 – 3.19 | **Conditional Pass** |
| < 2.4 | **Rework Required** |
| Any critical gate failure | **Reject** |

**Conditional Pass** means the artifact is acceptable for use but has identified remediation items that must be addressed before the next formal review. Document each item with the criterion ID, the score, and the specific improvement needed.
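
The status table reduces to a small function. This sketch mirrors the thresholds above; the boolean gate flags are a stand-in for the Step 6 gate checks:

```python
def determine_status(weighted_avg, critical_gate_failed=False,
                     major_gate_failed=False):
    """Map the weighted average and gate results to an overall status."""
    if critical_gate_failed:
        return "Reject"                 # gates override the aggregate score
    if weighted_avg < 2.4:
        return "Rework Required"
    if weighted_avg >= 3.2 and not major_gate_failed:
        return "Pass"
    return "Conditional Pass"           # 2.4-3.19, or capped by a major gate
```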

---

## Step 8: Write the Evaluation Record

Use `templates/evaluation-record.template.yaml` to produce a structured evaluation record. See `examples/example-solution-architecture.evaluation.yaml` for a completed example.

The evaluation record captures:
- Artifact metadata (name, version, type, author)
- Assessor identity and date
- Rubric set used (core + profile + overlays, with versions)
- Per-criterion scores and evidence references
- Dimension averages and weights
- Gate check results
- Overall status
- Remediation items (for Conditional Pass) or rejection reason (for Reject)

Store completed evaluation records with the artifact or in your architecture governance system.
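
As a rough sketch, a completed record might deserialise to something like the structure below. The authoritative field names live in `templates/evaluation-record.template.yaml`; every key and value here is an invented placeholder.

```python
# Invented placeholder record mirroring the bullet list above; the real
# schema is defined by templates/evaluation-record.template.yaml.
record = {
    "artifact": {"name": "Example SAD", "version": "1.3",
                 "type": "solution-architecture", "author": "A. Author"},
    "assessor": "A. Assessor",
    "date": "2025-01-15",
    "rubric_set": ["core/core-meta-rubric.yaml",
                   "profiles/solution-architecture.yaml"],
    "scores": [{"criterion": "EX-01", "score": 3,
                "evidence": "Section 1.2: exclusions not listed"}],
    "gate_results": [],
    "status": "Conditional Pass",
    "remediation": ["EX-01: list explicit scope exclusions"],
}

# Unsubstantiated scores are invalid: every score needs an evidence reference.
assert all(s["evidence"] for s in record["scores"])
```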

---

## Interpreting Results

### Pass
The artifact meets the standard. It may still have minor improvement opportunities noted in the assessment — these are recommended, not required.

### Conditional Pass
The artifact is usable but has specific gaps that must be addressed. List each remediation item with the criterion ID and a concrete improvement. Schedule a re-review after remediation.

### Rework Required
The artifact has pervasive or significant gaps. Return it to the author with the full evaluation record. A new assessment is required after rework — do not re-score the same version.

### Reject
The artifact has failed one or more gate criteria, indicating a fundamental deficiency. Reject means the artifact should not be used or progressed until the gate issue is fully resolved. A gate failure is not about quality level — it is about something that makes the artifact unsuitable for its purpose.

---

## Calibrating Your Assessments

If you are introducing EaROS to a team or beginning to use it for formal governance, calibrate before going live:

1. Select 3–5 artifacts of the same type with a range of quality levels
2. Have two or more assessors score them independently using the same rubric set
3. Compare scores criterion by criterion
4. For any criterion where scores differ by more than 1 point, re-read the level descriptors together and reach consensus
5. Document the agreed interpretation in a calibration note stored in `calibration/gold-set/`

Target inter-rater reliability: Cohen's κ > 0.70 for well-defined criteria.
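
Unweighted Cohen's κ can be computed directly from the two assessors' score lists; a minimal sketch:

```python
from collections import Counter

def cohens_kappa(scores_a, scores_b):
    """Unweighted Cohen's kappa for two assessors scoring the same criteria."""
    n = len(scores_a)
    observed = sum(a == b for a, b in zip(scores_a, scores_b)) / n
    # Chance agreement from each assessor's marginal score distribution.
    freq_a, freq_b = Counter(scores_a), Counter(scores_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

For example, scores `[3, 3, 2, 4, 1]` against `[3, 2, 2, 4, 1]` (agreement on 4 of 5 criteria) give κ ≈ 0.74, just above the 0.70 target.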

---

## Next Steps

- **Create a profile** for an artifact type not yet covered → [`docs/profile-authoring-guide.md`](profile-authoring-guide.md)
- **Set up AI-agent assessment** → [`README.md`](../README.md#ai-agent-assessment) and [`standard/EAROS.md`](../standard/EAROS.md)
- **Review the research behind EaROS** → [`research/`](../research/)
- **Run a team calibration session** → [`calibration/`](../calibration/)

# EaROS v2.0 Profile Authoring Guide

## Overview

Profiles extend the Core Meta-Rubric for specific artifact types. This guide explains how to create, validate, and publish a new profile.

## Before You Start

1. Confirm the artifact type recurs often enough to justify standardisation
2. Verify that the Core Meta-Rubric alone is insufficient
3. Choose a design method (A–E) from EaROS v2.0 Section 15
4. Gather 3–5 representative artifacts for calibration

## Profile Design Methods

| Method | Best For |
|--------|----------|
| A: Decision-Centred | ADRs, investment reviews, exception requests |
| B: Viewpoint-Centred | Capability maps, reference architectures |
| C: Lifecycle-Centred | Transition designs, roadmaps, handover docs |
| D: Risk-Centred | Security, regulatory, resilience architecture |
| E: Pattern-Library | Recurring reference patterns, platform services |

## Profile Rules

- Inherit the core scale (0–4) and status model unless an exception is approved
- Add no more than 12 profile-specific criteria (5–12 is typical)
- Map every criterion to a dimension
- Define evidence anchors, gate types, and applicability rules
- Include examples, anti-patterns, and remediation hints
- Include at least one calibration artifact before approval

## v2.0 Additions

- Include `examples.good` and `examples.bad` for AI disambiguation
- Add a `decision_tree` for ambiguous criteria
- Specify the `design_method` in metadata
- Set `reliability_targets` for agent evaluation
- Document the `agent_scale` if collapsing to 0–3 for agent mode
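
Sketched as a parsed structure, the additions above might sit in a profile like this. Only the field names come from the list above; the nesting and every value are guesses for illustration.

```python
# Illustrative placement of the v2.0 fields in a parsed profile.
# Field names follow the list above; nesting and values are invented.
profile_v2 = {
    "metadata": {"design_method": "A"},            # Decision-Centred
    "reliability_targets": {"cohens_kappa": 0.70}, # target for agent evaluation
    "agent_scale": [0, 1, 2, 3],                   # collapsed scale for agent mode
    "criteria": [{
        "id": "EX-01",
        "examples": {"good": "Decision lists rejected options with reasons.",
                     "bad": "Decision states the choice, no alternatives."},
        "decision_tree": ["Is more than one option documented?",
                          "Are trade-offs stated for each option?"],
    }],
}
```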

## Workflow

1. Create a proposal (see EaROS Section 16)
2. Classify it as a profile or an overlay
3. Choose a design method (A–E)
4. Draft using `templates/new-profile.template.yaml`
5. Build a calibration pack (1 strong, 1 weak, 1 ambiguous, 1 incomplete artifact)
6. Run calibration with 2+ reviewers
7. Revise until agreement stabilises
8. Approve, publish, and assign an owner
9. Monitor in production