@trohde/earos 1.0.1 → 1.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (62)
  1. package/assets/init/.agents/skills/earos-template-fill/SKILL.md +177 -177
  2. package/assets/init/.agents/skills/earos-validate/SKILL.md +113 -113
  3. package/assets/init/docs/onboarding/agent-assisted.md +127 -0
  4. package/assets/init/docs/onboarding/first-assessment.md +139 -0
  5. package/assets/init/docs/onboarding/governed-review.md +132 -0
  6. package/assets/init/docs/onboarding/overview.md +79 -0
  7. package/assets/init/docs/onboarding/scaling-optimization.md +158 -0
  8. package/dist/assets/{_basePickBy-BVu6YmSW.js → _basePickBy-tjzli7kg.js} +1 -1
  9. package/dist/assets/{_baseUniq-CWRzQDz_.js → _baseUniq-BDIo3mC7.js} +1 -1
  10. package/dist/assets/{arc-CyDBhtDM.js → arc-BNok2G7z.js} +1 -1
  11. package/dist/assets/{architectureDiagram-2XIMDMQ5-BH6O4dvN.js → architectureDiagram-2XIMDMQ5-BfsgXkPf.js} +1 -1
  12. package/dist/assets/{blockDiagram-WCTKOSBZ-2xmwdjpg.js → blockDiagram-WCTKOSBZ-CHbtvtwm.js} +1 -1
  13. package/dist/assets/{c4Diagram-IC4MRINW-BNmPRFJF.js → c4Diagram-IC4MRINW-DbyeYqk4.js} +1 -1
  14. package/dist/assets/channel-CuTGY8al.js +1 -0
  15. package/dist/assets/{chunk-4BX2VUAB-DGQTvirp.js → chunk-4BX2VUAB-y_kN5vwb.js} +1 -1
  16. package/dist/assets/{chunk-55IACEB6-DNMAQAC_.js → chunk-55IACEB6-nKxG4YB4.js} +1 -1
  17. package/dist/assets/{chunk-FMBD7UC4-BJbVTQ5o.js → chunk-FMBD7UC4-G8-DaCn0.js} +1 -1
  18. package/dist/assets/{chunk-JSJVCQXG-BCxUL74A.js → chunk-JSJVCQXG-CSg9GaGK.js} +1 -1
  19. package/dist/assets/{chunk-KX2RTZJC-H7wWZOfz.js → chunk-KX2RTZJC-Cvc1NNLo.js} +1 -1
  20. package/dist/assets/{chunk-NQ4KR5QH-BK4RlTQF.js → chunk-NQ4KR5QH-BdgusETF.js} +1 -1
  21. package/dist/assets/{chunk-QZHKN3VN-0chxDV5g.js → chunk-QZHKN3VN-C3Be-8_s.js} +1 -1
  22. package/dist/assets/{chunk-WL4C6EOR-DexfQ-AV.js → chunk-WL4C6EOR-BYx-KVJV.js} +1 -1
  23. package/dist/assets/classDiagram-VBA2DB6C-DcC3mUiW.js +1 -0
  24. package/dist/assets/classDiagram-v2-RAHNMMFH-DcC3mUiW.js +1 -0
  25. package/dist/assets/clone-DsLdz8K1.js +1 -0
  26. package/dist/assets/{cose-bilkent-S5V4N54A-DS2IOCfZ.js → cose-bilkent-S5V4N54A-WC4p5ck2.js} +1 -1
  27. package/dist/assets/{dagre-KLK3FWXG-BbSoTTa3.js → dagre-KLK3FWXG-DFuXM_09.js} +1 -1
  28. package/dist/assets/{diagram-E7M64L7V-C9TvYgv0.js → diagram-E7M64L7V-CY5dFsoB.js} +1 -1
  29. package/dist/assets/{diagram-IFDJBPK2-DowUMWrg.js → diagram-IFDJBPK2-CdbIPwKf.js} +1 -1
  30. package/dist/assets/{diagram-P4PSJMXO-BL6nrnQF.js → diagram-P4PSJMXO-Dj-Y2TLI.js} +1 -1
  31. package/dist/assets/{erDiagram-INFDFZHY-rXPRl8VM.js → erDiagram-INFDFZHY-CRxURSRx.js} +1 -1
  32. package/dist/assets/{flowDiagram-PKNHOUZH-DBRM99-W.js → flowDiagram-PKNHOUZH-DVdG_DX2.js} +1 -1
  33. package/dist/assets/{ganttDiagram-A5KZAMGK-INcWFsBT.js → ganttDiagram-A5KZAMGK-u-F-vtEM.js} +1 -1
  34. package/dist/assets/{gitGraphDiagram-K3NZZRJ6-DMwpfE91.js → gitGraphDiagram-K3NZZRJ6-9hE1rUxw.js} +1 -1
  35. package/dist/assets/{graph-DLQn37b-.js → graph-DGzCagqF.js} +1 -1
  36. package/dist/assets/{index-BFFITMT8.js → index-RCn_0Vmo.js} +3 -3
  37. package/dist/assets/{infoDiagram-LFFYTUFH-B0f4TWRM.js → infoDiagram-LFFYTUFH-Doiw2_yz.js} +1 -1
  38. package/dist/assets/{ishikawaDiagram-PHBUUO56-CsU6XimZ.js → ishikawaDiagram-PHBUUO56-CideeqfA.js} +1 -1
  39. package/dist/assets/{journeyDiagram-4ABVD52K-CQ7ibNib.js → journeyDiagram-4ABVD52K-CEM9jUEi.js} +1 -1
  40. package/dist/assets/{kanban-definition-K7BYSVSG-DzEN7THt.js → kanban-definition-K7BYSVSG-v1WwkrFT.js} +1 -1
  41. package/dist/assets/{layout-C0dvb42R.js → layout-BJNRwUHc.js} +1 -1
  42. package/dist/assets/{linear-j4a8mGj7.js → linear-CBgec4jE.js} +1 -1
  43. package/dist/assets/{mindmap-definition-YRQLILUH-DP8iEuCf.js → mindmap-definition-YRQLILUH-DpVEEU6a.js} +1 -1
  44. package/dist/assets/{pieDiagram-SKSYHLDU-BpIAXgAm.js → pieDiagram-SKSYHLDU-DykxPoNT.js} +1 -1
  45. package/dist/assets/{quadrantDiagram-337W2JSQ-DrpXn5Eg.js → quadrantDiagram-337W2JSQ-vjyZoEP1.js} +1 -1
  46. package/dist/assets/{requirementDiagram-Z7DCOOCP-Bg7EwHlG.js → requirementDiagram-Z7DCOOCP-CTyxfX0q.js} +1 -1
  47. package/dist/assets/{sankeyDiagram-WA2Y5GQK-BWagRs1F.js → sankeyDiagram-WA2Y5GQK-D4D8fO6M.js} +1 -1
  48. package/dist/assets/{sequenceDiagram-2WXFIKYE-q5jwhivG.js → sequenceDiagram-2WXFIKYE-3z1v9Ki2.js} +1 -1
  49. package/dist/assets/{stateDiagram-RAJIS63D-B_J9pE-2.js → stateDiagram-RAJIS63D-DObK1vcA.js} +1 -1
  50. package/dist/assets/stateDiagram-v2-FVOUBMTO-DnTjW8rz.js +1 -0
  51. package/dist/assets/{timeline-definition-YZTLITO2-dv0jgQ0z.js → timeline-definition-YZTLITO2-0soOlZ-G.js} +1 -1
  52. package/dist/assets/treemap-KZPCXAKY-Bv1wLSGh.js +162 -0
  53. package/dist/assets/{vennDiagram-LZ73GAT5-BdO5RgRZ.js → vennDiagram-LZ73GAT5-BujcJNFn.js} +1 -1
  54. package/dist/assets/{xychartDiagram-JWTSCODW-CpDVe-8v.js → xychartDiagram-JWTSCODW--HI8jfYV.js} +1 -1
  55. package/dist/index.html +1 -1
  56. package/package.json +1 -1
  57. package/dist/assets/channel-CiySTNoJ.js +0 -1
  58. package/dist/assets/classDiagram-VBA2DB6C-D7luWJQn.js +0 -1
  59. package/dist/assets/classDiagram-v2-RAHNMMFH-D7luWJQn.js +0 -1
  60. package/dist/assets/clone-ylgRbd3D.js +0 -1
  61. package/dist/assets/stateDiagram-v2-FVOUBMTO-Q_1GcybB.js +0 -1
  62. package/dist/assets/treemap-KZPCXAKY-Dt1dkIE7.js +0 -162
@@ -1,177 +1,177 @@
- ---
- name: earos-template-fill
- description: "Guide an artifact author through writing an EAROS-ready document. Use this skill when someone is writing or improving an architecture artifact and wants help making it pass review. Triggers on \"help me write this architecture\", \"guide me through the template\", \"what should I include\", \"how do I write a good solution architecture\", \"fill in this template\", \"make this artifact EAROS-ready\", \"what does EAROS need from this section\", \"how do I improve this before review\", \"what will this score\", \"will this pass\", \"what's missing from my architecture document\", \"help me write an ADR\", \"what sections do I need\", or any request for writing guidance on an architecture document before assessment. This skill coaches authors; earos-assess evaluates completed artifacts."
- ---
-
- # EAROS Template Fill Skill
-
- You are an architecture writing coach. Your job is to help authors write architecture artifacts that will score well in EAROS assessment — not by gaming the rubric, but by addressing the real quality concerns the rubric encodes.
-
- **Why this matters:** The most common reason artifacts fail EAROS review is not bad architecture — it is content gaps that prevent assessors from finding the evidence they need. An author who knows the rubric criteria in advance can write to satisfy them explicitly, rather than hoping assessors will infer the right things from well-organised prose.
-
- The rubric is not the enemy. Every criterion it encodes reflects a real quality concern. A risk section that lacks owners isn't just "incomplete" — it means no one is accountable when the risk materialises. A scope section without assumptions isn't just "thin" — it means the reviewer can't tell what the design is contingent on.
-
- ---
-
- ## Step 0 — Identify Artifact Type and Load Rubric
-
- Read these before giving any guidance:
- 1. `core/core-meta-rubric.yaml` — the universal criteria every artifact must address
- 2. The matching profile:
-    - Solution architecture → `profiles/solution-architecture.yaml`
-    - Reference architecture → `profiles/reference-architecture.yaml`
-    - ADR → `profiles/adr.yaml`
-    - Capability map → `profiles/capability-map.yaml`
-    - Roadmap → `profiles/roadmap.yaml`
-
- If the artifact type is unclear, ask: "What type of architecture document are you writing?"
-
- If the user has a draft, read it — you need to know what's already there before advising what's missing.
-
- Tell the user: "I'm going to guide you through the EAROS criteria for a [artifact type]. I'll flag which sections are gates (failing them prevents a Pass regardless of everything else), what strong evidence looks like, and where most authors lose points."
-
- ---
-
- ## Step 1 — Completeness Pre-Check
-
- If the user has a draft, run a rapid scan and present this table:
-
- | Section | Present? | Notes |
- |---------|----------|-------|
- | Title and version | | |
- | Named owner/author | | |
- | Purpose and scope | | |
- | Stakeholder list | | |
- | Architecture content (diagrams, views) | | |
- | Risks/assumptions/constraints | | |
- | Compliance/standards references | | |
- | Actions and decisions | | |
- | Change history | | |
-
- Identify critical gaps, especially those that map to gate criteria.
-
- > **For which sections map to which EAROS criteria and gate types**, see `references/section-rubric-mapping.md`.
-
- ---
-
- ## Step 2 — Section-by-Section Guidance
-
- Walk the user through each criterion in the loaded rubric. Format for each criterion:
-
- **[Criterion ID] — [criterion question]**
-
- > **Why this matters:** [1–2 sentences on the real quality concern this criterion encodes — explain the consequence of getting it wrong, not just what to include]
-
- > **⚠️ GATE** (if `gate.enabled: true`):
- > - `major`: "Scoring below 2 here prevents a Pass status."
- > - `critical`: "Being absent or failing here triggers an automatic Reject regardless of all other scores."
-
- > **What you need:** [from `required_evidence` in the rubric]
-
- > **Strong evidence looks like:** [from `examples.good` in the rubric]
-
- > **Common mistakes:** [from `anti_patterns`]
-
- > **Prompt:** "Does your draft include [specific thing]? Paste the relevant section and I'll check it, or tell me if it's missing and I'll help you draft it."
-
- Process core criteria first (STK-01, STK-02, SCP-01, CVP-01, TRC-01, CON-01, RAT-01, CMP-01, ACT-01, MNT-01), then profile-specific criteria in dimension order.
-
- > **For section-to-criterion mappings and score 2 vs. 3 boundaries**, read `references/section-rubric-mapping.md`. For writing patterns with good/bad examples, read `references/evidence-writing-guide.md`.
-
- ---
-
- ## Step 3 — Section Drafting Help
-
- When the user provides content or asks for help drafting:
-
- 1. Identify which EAROS criteria the content addresses
- 2. Estimate what score it would get against the rubric level descriptors
- 3. Suggest specific improvements using `remediation_hints` and `scoring_guide` from the rubric
- 4. For gate criteria, be explicit: "This section maps to [criterion ID], which is a [major/critical] gate. Here is exactly what's needed to clear it."
-
- Be concrete, not vague:
- - ❌ "Add more detail about risks"
- - ✅ "Add a risk table with columns: Risk, Likelihood, Impact, Mitigation, Owner, Residual Risk. For a score of 3, include at least 3 specific named risks with mitigations and owners — not 'TBD'."
-
- > **For detailed writing patterns with good/bad examples for each section type**, read `references/evidence-writing-guide.md`.
-
- ---
-
- ## Step 4 — Pre-Submission Checklist
-
- Before the user submits, run through this checklist:
-
- ```
- EAROS Pre-Submission Checklist
- ================================
- Core criteria:
- [ ] STK-01: Named stakeholders with specific concerns stated
- [ ] SCP-01: Explicit scope, out-of-scope list, assumptions, constraints <- GATE
- [ ] CVP-01: Views chosen for stated stakeholder concerns
- [ ] TRC-01: Architecture decisions traceable to business drivers
- [ ] CON-01: Consistent terminology across all sections and diagrams
- [ ] RAT-01: Risk table with mitigations and owners <- GATE
- [ ] CMP-01: Named controls mapped to design elements <- GATE
- [ ] ACT-01: Decision statement and named actions with owners
- [ ] MNT-01: Named owner, version, last-updated date
-
- Profile criteria:
- [Add profile-specific criteria from the loaded profile, flagging gates]
-
- Gate summary:
- [ ] No critical gate criteria are empty or failed
- [ ] No major gate criteria are likely below score 2
-
- Evidence readiness:
- [ ] Every significant claim is stated explicitly (not implied)
- [ ] All components have consistent names across all diagrams
- [ ] All diagrams have legends or annotations
- ```
-
- For any unchecked items, offer to help draft the missing content.
-
- ---
-
- ## Step 5 — Score Estimate
-
- After reviewing the draft, provide an estimated score:
-
- ```
- Estimated EAROS Score
- ======================
- Criterion | Est. Score | Confidence | Gap
- STK-01 | 3 | Medium | Add concern-to-view mapping
- SCP-01 | 2 | High | No assumptions listed -- GATE AT RISK
- ...
-
- Overall estimate: ~[X.X]
- Likely status: [Pass | Conditional Pass | Rework Required]
-
- Top 3 improvements before submission:
- 1. [most impactful, specific action]
- 2. [second]
- 3. [third]
- ```
-
- ---
-
- ## Non-Negotiable Rules
-
- 1. **Never compromise rigor for politeness.** If a gate criterion is empty, say so directly: "This is a critical gate — submitting without it will result in an automatic Reject."
- 2. **Reference actual rubric criteria.** Every suggestion must be anchored to a criterion ID and level descriptor.
- 3. **Distinguish gate from non-gate.** Clearly communicate which gaps are fatal vs. which reduce the score.
- 4. **Show examples, not descriptions.** Always show what strong evidence looks like (from `examples.good`) rather than just describing what to include.
- 5. **Three evaluation types are distinct.** Remind authors that artifact quality, architectural fitness, and governance fit are evaluated separately — a well-written document can still fail if the architecture it describes is unsound.
-
- ---
-
- ## When to Read Which Reference File
-
- | When | Read |
- |------|------|
- | Mapping document sections to criteria | `references/section-rubric-mapping.md` |
- | Explaining gate criteria and their thresholds | `references/section-rubric-mapping.md` |
- | Providing writing examples (good and bad) | `references/evidence-writing-guide.md` |
- | Helping draft a specific section | `references/evidence-writing-guide.md` |
- | Explaining score 2 vs. 3 differences | `references/section-rubric-mapping.md` |
- | Author asks "what does strong evidence look like?" | `references/evidence-writing-guide.md` |
+ ---
+ name: earos-template-fill
+ description: "Guide an artifact author through writing an EAROS-ready document. Use this skill when someone is writing or improving an architecture artifact and wants help making it pass review. Triggers on \"help me write this architecture\", \"guide me through the template\", \"what should I include\", \"how do I write a good solution architecture\", \"fill in this template\", \"make this artifact EAROS-ready\", \"what does EAROS need from this section\", \"how do I improve this before review\", \"what will this score\", \"will this pass\", \"what's missing from my architecture document\", \"help me write an ADR\", \"what sections do I need\", or any request for writing guidance on an architecture document before assessment. This skill coaches authors; earos-assess evaluates completed artifacts."
+ ---
+
+ # EAROS Template Fill Skill
+
+ You are an architecture writing coach. Your job is to help authors write architecture artifacts that will score well in EAROS assessment — not by gaming the rubric, but by addressing the real quality concerns the rubric encodes.
+
+ **Why this matters:** The most common reason artifacts fail EAROS review is not bad architecture — it is content gaps that prevent assessors from finding the evidence they need. An author who knows the rubric criteria in advance can write to satisfy them explicitly, rather than hoping assessors will infer the right things from well-organised prose.
+
+ The rubric is not the enemy. Every criterion it encodes reflects a real quality concern. A risk section that lacks owners isn't just "incomplete" — it means no one is accountable when the risk materialises. A scope section without assumptions isn't just "thin" — it means the reviewer can't tell what the design is contingent on.
+
+ ---
+
+ ## Step 0 — Identify Artifact Type and Load Rubric
+
+ Read these before giving any guidance:
+ 1. `core/core-meta-rubric.yaml` — the universal criteria every artifact must address
+ 2. The matching profile:
+    - Solution architecture → `profiles/solution-architecture.yaml`
+    - Reference architecture → `profiles/reference-architecture.yaml`
+    - ADR → `profiles/adr.yaml`
+    - Capability map → `profiles/capability-map.yaml`
+    - Roadmap → `profiles/roadmap.yaml`
+
+ If the artifact type is unclear, ask: "What type of architecture document are you writing?"
+
+ If the user has a draft, read it — you need to know what's already there before advising what's missing.
+
+ Tell the user: "I'm going to guide you through the EAROS criteria for a [artifact type]. I'll flag which sections are gates (failing them prevents a Pass regardless of everything else), what strong evidence looks like, and where most authors lose points."
+
+ ---
+
+ ## Step 1 — Completeness Pre-Check
+
+ If the user has a draft, run a rapid scan and present this table:
+
+ | Section | Present? | Notes |
+ |---------|----------|-------|
+ | Title and version | | |
+ | Named owner/author | | |
+ | Purpose and scope | | |
+ | Stakeholder list | | |
+ | Architecture content (diagrams, views) | | |
+ | Risks/assumptions/constraints | | |
+ | Compliance/standards references | | |
+ | Actions and decisions | | |
+ | Change history | | |
+
+ Identify critical gaps, especially those that map to gate criteria.
+
+ > **For which sections map to which EAROS criteria and gate types**, see `references/section-rubric-mapping.md`.
+
+ ---
+
+ ## Step 2 — Section-by-Section Guidance
+
+ Walk the user through each criterion in the loaded rubric. Format for each criterion:
+
+ **[Criterion ID] — [criterion question]**
+
+ > **Why this matters:** [1–2 sentences on the real quality concern this criterion encodes — explain the consequence of getting it wrong, not just what to include]
+
+ > **⚠️ GATE** (if `gate.enabled: true`):
+ > - `major`: "Scoring below 2 here prevents a Pass status."
+ > - `critical`: "Being absent or failing here triggers an automatic Reject regardless of all other scores."
+
+ > **What you need:** [from `required_evidence` in the rubric]
+
+ > **Strong evidence looks like:** [from `examples.good` in the rubric]
+
+ > **Common mistakes:** [from `anti_patterns`]
+
+ > **Prompt:** "Does your draft include [specific thing]? Paste the relevant section and I'll check it, or tell me if it's missing and I'll help you draft it."
+
+ Process core criteria first (STK-01, STK-02, SCP-01, CVP-01, TRC-01, CON-01, RAT-01, CMP-01, ACT-01, MNT-01), then profile-specific criteria in dimension order.
+
+ > **For section-to-criterion mappings and score 2 vs. 3 boundaries**, read `references/section-rubric-mapping.md`. For writing patterns with good/bad examples, read `references/evidence-writing-guide.md`.
+
+ ---
+
+ ## Step 3 — Section Drafting Help
+
+ When the user provides content or asks for help drafting:
+
+ 1. Identify which EAROS criteria the content addresses
+ 2. Estimate what score it would get against the rubric level descriptors
+ 3. Suggest specific improvements using `remediation_hints` and `scoring_guide` from the rubric
+ 4. For gate criteria, be explicit: "This section maps to [criterion ID], which is a [major/critical] gate. Here is exactly what's needed to clear it."
+
+ Be concrete, not vague:
+ - ❌ "Add more detail about risks"
+ - ✅ "Add a risk table with columns: Risk, Likelihood, Impact, Mitigation, Owner, Residual Risk. For a score of 3, include at least 3 specific named risks with mitigations and owners — not 'TBD'."
+
+ > **For detailed writing patterns with good/bad examples for each section type**, read `references/evidence-writing-guide.md`.
+
+ ---
+
+ ## Step 4 — Pre-Submission Checklist
+
+ Before the user submits, run through this checklist:
+
+ ```
+ EAROS Pre-Submission Checklist
+ ================================
+ Core criteria:
+ [ ] STK-01: Named stakeholders with specific concerns stated
+ [ ] SCP-01: Explicit scope, out-of-scope list, assumptions, constraints <- GATE
+ [ ] CVP-01: Views chosen for stated stakeholder concerns
+ [ ] TRC-01: Architecture decisions traceable to business drivers
+ [ ] CON-01: Consistent terminology across all sections and diagrams
+ [ ] RAT-01: Risk table with mitigations and owners <- GATE
+ [ ] CMP-01: Named controls mapped to design elements <- GATE
+ [ ] ACT-01: Decision statement and named actions with owners
+ [ ] MNT-01: Named owner, version, last-updated date
+
+ Profile criteria:
+ [Add profile-specific criteria from the loaded profile, flagging gates]
+
+ Gate summary:
+ [ ] No critical gate criteria are empty or failed
+ [ ] No major gate criteria are likely below score 2
+
+ Evidence readiness:
+ [ ] Every significant claim is stated explicitly (not implied)
+ [ ] All components have consistent names across all diagrams
+ [ ] All diagrams have legends or annotations
+ ```
+
+ For any unchecked items, offer to help draft the missing content.
+
+ ---
+
+ ## Step 5 — Score Estimate
+
+ After reviewing the draft, provide an estimated score:
+
+ ```
+ Estimated EAROS Score
+ ======================
+ Criterion | Est. Score | Confidence | Gap
+ STK-01 | 3 | Medium | Add concern-to-view mapping
+ SCP-01 | 2 | High | No assumptions listed -- GATE AT RISK
+ ...
+
+ Overall estimate: ~[X.X]
+ Likely status: [Pass | Conditional Pass | Rework Required]
+
+ Top 3 improvements before submission:
+ 1. [most impactful, specific action]
+ 2. [second]
+ 3. [third]
+ ```
+
+ ---
+
+ ## Non-Negotiable Rules
+
+ 1. **Never compromise rigor for politeness.** If a gate criterion is empty, say so directly: "This is a critical gate — submitting without it will result in an automatic Reject."
+ 2. **Reference actual rubric criteria.** Every suggestion must be anchored to a criterion ID and level descriptor.
+ 3. **Distinguish gate from non-gate.** Clearly communicate which gaps are fatal vs. which reduce the score.
+ 4. **Show examples, not descriptions.** Always show what strong evidence looks like (from `examples.good`) rather than just describing what to include.
+ 5. **Three evaluation types are distinct.** Remind authors that artifact quality, architectural fitness, and governance fit are evaluated separately — a well-written document can still fail if the architecture it describes is unsound.
+
+ ---
+
+ ## When to Read Which Reference File
+
+ | When | Read |
+ |------|------|
+ | Mapping document sections to criteria | `references/section-rubric-mapping.md` |
+ | Explaining gate criteria and their thresholds | `references/section-rubric-mapping.md` |
+ | Providing writing examples (good and bad) | `references/evidence-writing-guide.md` |
+ | Helping draft a specific section | `references/evidence-writing-guide.md` |
+ | Explaining score 2 vs. 3 differences | `references/section-rubric-mapping.md` |
+ | Author asks "what does strong evidence look like?" | `references/evidence-writing-guide.md` |
@@ -1,113 +1,113 @@
- ---
- name: earos-validate
- description: "Run a project health check on the EAROS repository. Validates all YAML rubric files against schemas, checks ID uniqueness, verifies cross-references, detects missing v2 fields, and reports on documentation accuracy. Triggers when the user wants to validate the EAROS repo, check rubric health, run a consistency check, verify schemas, find missing fields, or says \"validate the rubrics\", \"check the EAROS repo\", \"run a health check\", \"check for schema errors\", \"find inconsistencies\", or \"is the rubric set valid\"."
- ---
-
- # EAROS Validate — Repository Health Check
-
- You run a systematic health check on the EAROS repository. This catches errors that accumulate silently during development: ID conflicts across files, missing required fields added in v2, documentation claims that no longer match the YAML, gate configurations that contradict the status logic.
-
- **Why this matters:** A rubric with a duplicate criterion ID will produce ambiguous evaluation records. A profile with a missing `decision_tree` field will calibrate unreliably. Documentation that says "10 criteria" when there are 11 creates confusion for authors and reviewers. These errors compound. A weekly or pre-commit health check prevents this.
-
- ## What to Load First
-
- Read before running any checks:
- 1. `earos.manifest.yaml` — the authoritative registry of all rubric files; Check 8 validates it
- 2. `standard/schemas/rubric.schema.json` — schema for all rubric/profile/overlay YAML files
- 3. `standard/schemas/evaluation.schema.json` — schema for evaluation record files
-
- Then read all YAML files in: `core/`, `profiles/`, `overlays/`, `examples/`
-
- ## The Eight Checks
-
- Run all eight. Do not stop at the first error.
-
- **Check 1 — Schema conformance**
- For each rubric YAML, verify required top-level fields, `scoring` and `outputs` sub-fields, and kind-specific requirements (profiles must have `inherits`, overlays must not, overlays must use `append_to_base_rubric`).
-
- **Check 2 — Criterion v2 field completeness**
- Every criterion must have all 13 v2 fields: `id`, `question`, `description`, `metric_type`, `scale`, `gate`, `required_evidence`, `scoring_guide` (keys "0"–"4"), `anti_patterns`, `examples.good`, `examples.bad`, `decision_tree`, `remediation_hints`.
-
- **Check 3 — ID uniqueness**
- Collect all rubric IDs, dimension IDs, and criterion IDs. No duplicates allowed across any files. Criterion ID conflicts across profiles cause ambiguity in evaluation records.
-
- **Check 4 — Cross-reference validation**
- Profile `inherits` references must resolve to real rubric IDs. Gate configurations must have valid `severity` values and non-empty `failure_effect`. Dimension weights outside 0.5–2.0 should be flagged.
-
- **Check 5 — Evaluation record schema check**
- For each evaluation record in `examples/`: required fields, valid `status` values, valid `judgment_type` and `confidence` values per criterion. Status must match the gate failures and overall score.
-
- **Check 6 — Documentation accuracy**
- Check CLAUDE.md claims ("9 dimensions", "10 criteria", profile lists) against actual YAML content. Check README.md profile and overlay lists against actual files.
-
- **Check 7 — YAML style conventions**
- Two-space indentation, quoted numeric keys in `scoring_guide`, kebab-case filenames, no version number in filename (version is tracked inside the file only).
-
- **Check 8 — Manifest-filesystem consistency**
- Read `earos.manifest.yaml`. For each entry in `core`, `profiles`, and `overlays`:
- - Verify the file exists on disk at the listed path
- - Verify `rubric_id` in the manifest matches `rubric_id` in the file
- - Verify `title` and `artifact_type` are consistent
-
- Then verify completeness: every `.yaml` file in `core/`, `profiles/`, and `overlays/` must appear in the manifest. Any file on disk that is absent from the manifest is an ERROR. Any manifest entry whose file is missing is an ERROR.
-
- > Read `references/validation-checks.md` for the complete check procedures with exact field paths and error message formats. Read it before running any checks — it contains the precision needed to produce actionable error messages.
-
- ## Severity Classification
-
- | Severity | Meaning |
- |----------|---------|
- | **ERROR** | Missing required field, schema violation, duplicate ID, gate-status contradiction |
- | **WARNING** | Style issue, extreme dimension weight, advisory-level inconsistency |
-
- Errors must be fixed before the repository can be used in production. Warnings should be reviewed.
-
- ## Output Format
-
- ```markdown
- # EAROS Repository Validation Report
- Date: [today]
- Files checked: [N rubric files] + [N evaluation records]
-
- ## Summary
- | Check | Errors | Warnings |
- |-------|--------|---------|
- | Schema conformance | [N] | [N] |
- | Criterion v2 completeness | [N] | [N] |
- | ID uniqueness | [N] | [N] |
- | Cross-references | [N] | [N] |
- | Evaluation records | [N] | [N] |
- | Documentation accuracy | [N] | [N] |
- | YAML style | [N] | [N] |
- | Manifest consistency | [N] | [N] |
- | TOTAL | [N] | [N] |
-
- Overall health: [Clean / Warnings only / Errors found]
-
- ## Errors (must fix)
- [FILE] [DESCRIPTION] — each with exact field path and criterion ID where applicable
-
- ## Warnings (should review)
- [FILE] [DESCRIPTION]
-
- ## Recommended Actions
- [Numbered list, prioritised by severity]
- ```
-
- > For common issues and how to fix them, read `references/fix-patterns.md`.
-
- ## Non-Negotiable Rules
-
- 1. **Report, don't auto-fix.** Flag problems; do not silently correct them. The user reviews and approves all changes.
- 2. **Be precise.** "profiles/reference-architecture.yaml CRITERION RA-VIEW-01 MISSING: decision_tree" is useful. "Some criteria have missing fields" is not.
- 3. **Count accurately.** Verify documentation claims against actual YAML — do not rely on memory or prior knowledge.
- 4. **Errors vs. warnings.** Missing required fields are errors. Style deviations are warnings. Never downgrade an error to a warning.
-
- ## When to Read References
-
- | When | Read |
- |------|------|
- | Before running any checks | `references/validation-checks.md` |
- | Check field paths for scoring and outputs | `references/validation-checks.md` |
- | After finding errors — how to fix | `references/fix-patterns.md` |
- | User asks how to fix a specific error | `references/fix-patterns.md` |
---
name: earos-validate
description: "Run a project health check on the EAROS repository. Validates all YAML rubric files against schemas, checks ID uniqueness, verifies cross-references, detects missing v2 fields, and reports on documentation accuracy. Triggers when the user wants to validate the EAROS repo, check rubric health, run a consistency check, verify schemas, find missing fields, or says \"validate the rubrics\", \"check the EAROS repo\", \"run a health check\", \"check for schema errors\", \"find inconsistencies\", or \"is the rubric set valid\"."
---

# EAROS Validate — Repository Health Check

You run a systematic health check on the EAROS repository. This catches errors that accumulate silently during development: ID conflicts across files, missing required fields added in v2, documentation claims that no longer match the YAML, and gate configurations that contradict the status logic.

**Why this matters:** A rubric with a duplicate criterion ID will produce ambiguous evaluation records. A profile with a missing `decision_tree` field will calibrate unreliably. Documentation that says "10 criteria" when there are 11 creates confusion for authors and reviewers. These errors compound. A weekly or pre-commit health check prevents this.

## What to Load First

Read before running any checks:
1. `earos.manifest.yaml` — the authoritative registry of all rubric files; Check 8 validates it
2. `standard/schemas/rubric.schema.json` — schema for all rubric/profile/overlay YAML files
3. `standard/schemas/evaluation.schema.json` — schema for evaluation record files

Then read all YAML files in: `core/`, `profiles/`, `overlays/`, `examples/`

## The Eight Checks

Run all eight. Do not stop at the first error.

**Check 1 — Schema conformance**
For each rubric YAML, verify required top-level fields, `scoring` and `outputs` sub-fields, and kind-specific requirements (profiles must have `inherits`; overlays must not, and must use `append_to_base_rubric`).
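The kind-specific rules can be sketched as a small predicate over an already-parsed file. This is an illustrative sketch only: the `kind` field name is an assumption, and `standard/schemas/rubric.schema.json` remains the authority.

```python
def check_kind_rules(doc: dict, path: str) -> list:
    """Kind-specific structural rules from Check 1.

    Assumes `doc` is the already-parsed YAML; the `kind` field name is
    an assumption, and the JSON Schema is the real authority.
    """
    errors = []
    kind = doc.get("kind")
    if kind == "profile" and "inherits" not in doc:
        errors.append(f"{path} PROFILE MISSING: inherits")
    if kind == "overlay":
        if "inherits" in doc:
            errors.append(f"{path} OVERLAY MUST NOT HAVE: inherits")
        if "append_to_base_rubric" not in doc:
            errors.append(f"{path} OVERLAY MISSING: append_to_base_rubric")
    # Required sub-structures common to every kind
    for field in ("scoring", "outputs"):
        if field not in doc:
            errors.append(f"{path} MISSING: {field}")
    return errors
```

An empty return means the file passed this check; each message already carries the file path so it can be pasted into the report verbatim.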

**Check 2 — Criterion v2 field completeness**
Every criterion must have all 13 v2 fields: `id`, `question`, `description`, `metric_type`, `scale`, `gate`, `required_evidence`, `scoring_guide` (keys "0"–"4"), `anti_patterns`, `examples.good`, `examples.bad`, `decision_tree`, `remediation_hints`.
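The completeness rule above is mechanical, so a per-criterion helper keeps the reporting precise. A minimal sketch, assuming each criterion is already parsed into a dict; the field list is taken directly from this check.

```python
# The 13 v2 fields listed above; examples.good / examples.bad are nested.
V2_TOP_FIELDS = [
    "id", "question", "description", "metric_type", "scale", "gate",
    "required_evidence", "scoring_guide", "anti_patterns", "examples",
    "decision_tree", "remediation_hints",
]

def missing_v2_fields(criterion: dict) -> list:
    """Return the v2 fields a criterion is missing (empty list = complete)."""
    missing = [f for f in V2_TOP_FIELDS if f not in criterion]
    examples = criterion.get("examples") or {}
    missing += [f"examples.{sub}" for sub in ("good", "bad") if sub not in examples]
    # scoring_guide must use the quoted string keys "0" through "4"
    guide = criterion.get("scoring_guide") or {}
    if "scoring_guide" not in missing and set(guide) != {"0", "1", "2", "3", "4"}:
        missing.append('scoring_guide keys "0"-"4"')
    return missing
```

Each entry in the returned list is an exact field path, which is what the error format in the Non-Negotiable Rules demands.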

**Check 3 — ID uniqueness**
Collect all rubric IDs, dimension IDs, and criterion IDs. No duplicates allowed across any files. Criterion ID conflicts across profiles cause ambiguity in evaluation records.
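The collect-then-compare step can be sketched as one pass over an ID-per-file map. Illustrative only; it assumes the IDs have already been extracted from the parsed YAML.

```python
def duplicate_ids(ids_by_file: dict) -> list:
    """Map {file path: [IDs defined there]} to duplicate reports.

    Rubric, dimension, and criterion IDs all go into one pool, since
    uniqueness is required across all files.
    """
    owners = {}
    for path, ids in ids_by_file.items():
        for id_ in ids:
            owners.setdefault(id_, []).append(path)
    return [
        f"DUPLICATE ID {id_}: defined in {', '.join(sorted(files))}"
        for id_, files in sorted(owners.items())
        if len(files) > 1
    ]
```

Because the owner map records every defining file, an ID defined twice within a single file is caught as well.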

**Check 4 — Cross-reference validation**
Profile `inherits` references must resolve to real rubric IDs. Gate configurations must have valid `severity` values and a non-empty `failure_effect`. Dimension weights outside 0.5–2.0 should be flagged.
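A sketch of the resolution and range checks, with the gate checks omitted for brevity. Note the severity split: an unresolved `inherits` is an error, while an extreme weight is only a warning, since the text says it "should be flagged".

```python
def check_cross_refs(inherits_by_profile: dict, known_rubric_ids: set,
                     weights_by_dimension: dict):
    """Cross-reference pass from Check 4 over pre-extracted values."""
    errors, warnings = [], []
    for profile, parent in sorted(inherits_by_profile.items()):
        if parent not in known_rubric_ids:
            errors.append(f"{profile} inherits UNRESOLVED: {parent}")
    for dim, weight in sorted(weights_by_dimension.items()):
        if not 0.5 <= weight <= 2.0:
            warnings.append(f"{dim} weight OUT OF RANGE 0.5-2.0: {weight}")
    return errors, warnings
```

Returning the two lists separately keeps the errors-versus-warnings split intact all the way to the summary table.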

**Check 5 — Evaluation record schema check**
For each evaluation record in `examples/`: check required fields, valid `status` values, and valid `judgment_type` and `confidence` values per criterion. Status must match the gate failures and overall score.
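The value checks reduce to comparing enum-like fields against allowed sets. A deliberately generic sketch: the actual allowed values live in `standard/schemas/evaluation.schema.json`, not here, so the caller supplies them.

```python
def invalid_enum_values(record: dict, allowed: dict) -> list:
    """Check enum-like fields of an evaluation record.

    `allowed` maps a field name to its set of permitted values, taken
    from evaluation.schema.json by the caller; none are hard-coded here.
    """
    problems = []
    for field, values in allowed.items():
        if field in record and record[field] not in values:
            problems.append(f"{field} INVALID: {record[field]!r}")
    return problems
```

The status-versus-gate consistency rule is not sketched here because its exact logic is defined by the schema and reference docs, not by this skill file.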

**Check 6 — Documentation accuracy**
Check CLAUDE.md claims ("9 dimensions", "10 criteria", profile lists) against actual YAML content. Check README.md profile and overlay lists against actual files.
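Numeric claims like "9 dimensions" can be cross-checked mechanically. A sketch under stated assumptions: the regex is an illustrative approximation of an "N noun" claim, and the real counts must be measured from the YAML, never recalled from memory.

```python
import re

def count_claim_mismatches(doc_text: str, actual_counts: dict) -> list:
    """Find claims like '10 criteria' that disagree with measured counts.

    `actual_counts` maps a noun ('dimensions', 'criteria') to the number
    actually counted in the YAML files.
    """
    mismatches = []
    for noun, actual in actual_counts.items():
        for m in re.finditer(rf"(\d+)\s+{noun}", doc_text):
            claimed = int(m.group(1))
            if claimed != actual:
                mismatches.append(
                    f"DOC CLAIM MISMATCH: says {claimed} {noun}, YAML has {actual}"
                )
    return mismatches
```

Profile and overlay list comparisons are plain set differences against the filenames on disk, in the same spirit as Check 8.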

**Check 7 — YAML style conventions**
Two-space indentation, quoted numeric keys in `scoring_guide`, kebab-case filenames, no version number in filename (version is tracked inside the file only).
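The filename conventions are easy to automate; the regexes below are illustrative, not normative. Indentation and quoted-key checks need the raw file text, so this helper only covers the filename rules.

```python
import re

def filename_style_issues(filename: str) -> list:
    """Kebab-case and no-version-suffix rules from Check 7 (WARNINGs)."""
    issues = []
    stem = filename.rsplit(".", 1)[0]
    # kebab-case: lowercase words of letters/digits joined by single hyphens
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", stem):
        issues.append(f"{filename}: not kebab-case")
    # version strings like 1.2 or v1.2 belong inside the file, not the name
    if re.search(r"v?\d+\.\d+", stem):
        issues.append(f"{filename}: version number belongs inside the file")
    return issues
```

Per the severity table, anything this helper reports is a warning, never an error.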

**Check 8 — Manifest-filesystem consistency**
Read `earos.manifest.yaml`. For each entry in `core`, `profiles`, and `overlays`:
- Verify the file exists on disk at the listed path
- Verify `rubric_id` in the manifest matches `rubric_id` in the file
- Verify `title` and `artifact_type` are consistent

Then verify completeness: every `.yaml` file in `core/`, `profiles/`, and `overlays/` must appear in the manifest. Any file on disk that is absent from the manifest is an ERROR. Any manifest entry whose file is missing is an ERROR.
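The completeness rule is a symmetric set difference over paths. A minimal sketch, with filesystem access abstracted away into two pre-collected sets of path strings:

```python
def manifest_vs_disk(manifest_paths: set, disk_paths: set) -> list:
    """Both directions of Check 8's completeness rule; each hit is an ERROR."""
    errors = []
    for path in sorted(disk_paths - manifest_paths):
        errors.append(f"{path}: ON DISK BUT NOT IN MANIFEST")
    for path in sorted(manifest_paths - disk_paths):
        errors.append(f"{path}: IN MANIFEST BUT MISSING ON DISK")
    return errors
```

The per-entry `rubric_id`, `title`, and `artifact_type` comparisons then only need to run for paths that survive this check.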

> Read `references/validation-checks.md` for the complete check procedures with exact field paths and error message formats. Read it before running any checks — it contains the precision needed to produce actionable error messages.

## Severity Classification

| Severity | Meaning |
|----------|---------|
| **ERROR** | Missing required field, schema violation, duplicate ID, gate-status contradiction |
| **WARNING** | Style issue, extreme dimension weight, advisory-level inconsistency |

Errors must be fixed before the repository can be used in production. Warnings should be reviewed.

## Output Format

```markdown
# EAROS Repository Validation Report
Date: [today]
Files checked: [N rubric files] + [N evaluation records]

## Summary
| Check | Errors | Warnings |
|-------|--------|----------|
| Schema conformance | [N] | [N] |
| Criterion v2 completeness | [N] | [N] |
| ID uniqueness | [N] | [N] |
| Cross-references | [N] | [N] |
| Evaluation records | [N] | [N] |
| Documentation accuracy | [N] | [N] |
| YAML style | [N] | [N] |
| Manifest consistency | [N] | [N] |
| TOTAL | [N] | [N] |

Overall health: [Clean / Warnings only / Errors found]

## Errors (must fix)
[FILE] [DESCRIPTION] — each with exact field path and criterion ID where applicable

## Warnings (should review)
[FILE] [DESCRIPTION]

## Recommended Actions
[Numbered list, prioritised by severity]
```

> For common issues and how to fix them, read `references/fix-patterns.md`.

## Non-Negotiable Rules

1. **Report, don't auto-fix.** Flag problems; do not silently correct them. The user reviews and approves all changes.
2. **Be precise.** "profiles/reference-architecture.yaml CRITERION RA-VIEW-01 MISSING: decision_tree" is useful. "Some criteria have missing fields" is not.
3. **Count accurately.** Verify documentation claims against actual YAML — do not rely on memory or prior knowledge.
4. **Errors vs. warnings.** Missing required fields are errors. Style deviations are warnings. Never downgrade an error to a warning.

## When to Read References

| When | Read |
|------|------|
| Before running any checks | `references/validation-checks.md` |
| Check field paths for scoring and outputs | `references/validation-checks.md` |
| After finding errors — how to fix | `references/fix-patterns.md` |
| User asks how to fix a specific error | `references/fix-patterns.md` |