@trohde/earos 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (135)
  1. package/README.md +156 -0
  2. package/assets/init/.agents/skills/earos-artifact-gen/SKILL.md +106 -0
  3. package/assets/init/.agents/skills/earos-artifact-gen/references/interview-guide.md +313 -0
  4. package/assets/init/.agents/skills/earos-artifact-gen/references/output-guide.md +367 -0
  5. package/assets/init/.agents/skills/earos-assess/SKILL.md +212 -0
  6. package/assets/init/.agents/skills/earos-assess/references/calibration-benchmarks.md +160 -0
  7. package/assets/init/.agents/skills/earos-assess/references/output-templates.md +311 -0
  8. package/assets/init/.agents/skills/earos-assess/references/scoring-protocol.md +281 -0
  9. package/assets/init/.agents/skills/earos-calibrate/SKILL.md +153 -0
  10. package/assets/init/.agents/skills/earos-calibrate/references/agreement-metrics.md +188 -0
  11. package/assets/init/.agents/skills/earos-calibrate/references/calibration-protocol.md +263 -0
  12. package/assets/init/.agents/skills/earos-create/SKILL.md +257 -0
  13. package/assets/init/.agents/skills/earos-create/references/criterion-writing-guide.md +268 -0
  14. package/assets/init/.agents/skills/earos-create/references/dependency-rules.md +193 -0
  15. package/assets/init/.agents/skills/earos-create/references/rubric-interview-guide.md +123 -0
  16. package/assets/init/.agents/skills/earos-create/references/validation-checklist.md +238 -0
  17. package/assets/init/.agents/skills/earos-profile-author/SKILL.md +251 -0
  18. package/assets/init/.agents/skills/earos-profile-author/references/criterion-writing-guide.md +280 -0
  19. package/assets/init/.agents/skills/earos-profile-author/references/design-methods.md +158 -0
  20. package/assets/init/.agents/skills/earos-profile-author/references/profile-checklist.md +173 -0
  21. package/assets/init/.agents/skills/earos-remediate/SKILL.md +118 -0
  22. package/assets/init/.agents/skills/earos-remediate/references/output-template.md +199 -0
  23. package/assets/init/.agents/skills/earos-remediate/references/remediation-patterns.md +330 -0
  24. package/assets/init/.agents/skills/earos-report/SKILL.md +85 -0
  25. package/assets/init/.agents/skills/earos-report/references/portfolio-template.md +181 -0
  26. package/assets/init/.agents/skills/earos-report/references/single-artifact-template.md +168 -0
  27. package/assets/init/.agents/skills/earos-review/SKILL.md +130 -0
  28. package/assets/init/.agents/skills/earos-review/references/challenge-patterns.md +163 -0
  29. package/assets/init/.agents/skills/earos-review/references/output-template.md +180 -0
  30. package/assets/init/.agents/skills/earos-template-fill/SKILL.md +177 -0
  31. package/assets/init/.agents/skills/earos-template-fill/references/evidence-writing-guide.md +186 -0
  32. package/assets/init/.agents/skills/earos-template-fill/references/section-rubric-mapping.md +200 -0
  33. package/assets/init/.agents/skills/earos-validate/SKILL.md +113 -0
  34. package/assets/init/.agents/skills/earos-validate/references/fix-patterns.md +281 -0
  35. package/assets/init/.agents/skills/earos-validate/references/validation-checks.md +287 -0
  36. package/assets/init/.claude/CLAUDE.md +4 -0
  37. package/assets/init/AGENTS.md +293 -0
  38. package/assets/init/CLAUDE.md +635 -0
  39. package/assets/init/README.md +507 -0
  40. package/assets/init/calibration/gold-set/.gitkeep +0 -0
  41. package/assets/init/calibration/results/.gitkeep +0 -0
  42. package/assets/init/core/core-meta-rubric.yaml +643 -0
  43. package/assets/init/docs/consistency-report.md +325 -0
  44. package/assets/init/docs/getting-started.md +194 -0
  45. package/assets/init/docs/profile-authoring-guide.md +51 -0
  46. package/assets/init/docs/terminology.md +126 -0
  47. package/assets/init/earos.manifest.yaml +104 -0
  48. package/assets/init/evaluations/.gitkeep +0 -0
  49. package/assets/init/examples/aws-event-driven-order-processing/artifact.yaml +2056 -0
  50. package/assets/init/examples/aws-event-driven-order-processing/evaluation.yaml +973 -0
  51. package/assets/init/examples/aws-event-driven-order-processing/report.md +244 -0
  52. package/assets/init/examples/example-solution-architecture.evaluation.yaml +136 -0
  53. package/assets/init/examples/multi-cloud-data-analytics/artifact.yaml +715 -0
  54. package/assets/init/overlays/data-governance.yaml +94 -0
  55. package/assets/init/overlays/regulatory.yaml +154 -0
  56. package/assets/init/overlays/security.yaml +92 -0
  57. package/assets/init/profiles/adr.yaml +225 -0
  58. package/assets/init/profiles/capability-map.yaml +223 -0
  59. package/assets/init/profiles/reference-architecture.yaml +426 -0
  60. package/assets/init/profiles/roadmap.yaml +205 -0
  61. package/assets/init/profiles/solution-architecture.yaml +227 -0
  62. package/assets/init/research/architecture-assessment-rubrics-research.docx +0 -0
  63. package/assets/init/research/architecture-assessment-rubrics-research.md +566 -0
  64. package/assets/init/research/reference-architecture-research.md +751 -0
  65. package/assets/init/standard/EAROS.md +1426 -0
  66. package/assets/init/standard/schemas/artifact.schema.json +1295 -0
  67. package/assets/init/standard/schemas/artifact.uischema.json +65 -0
  68. package/assets/init/standard/schemas/evaluation.schema.json +284 -0
  69. package/assets/init/standard/schemas/rubric.schema.json +383 -0
  70. package/assets/init/templates/evaluation-record.template.yaml +58 -0
  71. package/assets/init/templates/new-profile.template.yaml +65 -0
  72. package/bin.js +188 -0
  73. package/dist/assets/_basePickBy-BVu6YmSW.js +1 -0
  74. package/dist/assets/_baseUniq-CWRzQDz_.js +1 -0
  75. package/dist/assets/arc-CyDBhtDM.js +1 -0
  76. package/dist/assets/architectureDiagram-2XIMDMQ5-BH6O4dvN.js +36 -0
  77. package/dist/assets/blockDiagram-WCTKOSBZ-2xmwdjpg.js +132 -0
  78. package/dist/assets/c4Diagram-IC4MRINW-BNmPRFJF.js +10 -0
  79. package/dist/assets/channel-CiySTNoJ.js +1 -0
  80. package/dist/assets/chunk-4BX2VUAB-DGQTvirp.js +1 -0
  81. package/dist/assets/chunk-55IACEB6-DNMAQAC_.js +1 -0
  82. package/dist/assets/chunk-FMBD7UC4-BJbVTQ5o.js +15 -0
  83. package/dist/assets/chunk-JSJVCQXG-BCxUL74A.js +1 -0
  84. package/dist/assets/chunk-KX2RTZJC-H7wWZOfz.js +1 -0
  85. package/dist/assets/chunk-NQ4KR5QH-BK4RlTQF.js +220 -0
  86. package/dist/assets/chunk-QZHKN3VN-0chxDV5g.js +1 -0
  87. package/dist/assets/chunk-WL4C6EOR-DexfQ-AV.js +189 -0
  88. package/dist/assets/classDiagram-VBA2DB6C-D7luWJQn.js +1 -0
  89. package/dist/assets/classDiagram-v2-RAHNMMFH-D7luWJQn.js +1 -0
  90. package/dist/assets/clone-ylgRbd3D.js +1 -0
  91. package/dist/assets/cose-bilkent-S5V4N54A-DS2IOCfZ.js +1 -0
  92. package/dist/assets/cytoscape.esm-CyJtwmzi.js +331 -0
  93. package/dist/assets/dagre-KLK3FWXG-BbSoTTa3.js +4 -0
  94. package/dist/assets/defaultLocale-DX6XiGOO.js +1 -0
  95. package/dist/assets/diagram-E7M64L7V-C9TvYgv0.js +24 -0
  96. package/dist/assets/diagram-IFDJBPK2-DowUMWrg.js +43 -0
  97. package/dist/assets/diagram-P4PSJMXO-BL6nrnQF.js +24 -0
  98. package/dist/assets/erDiagram-INFDFZHY-rXPRl8VM.js +70 -0
  99. package/dist/assets/flowDiagram-PKNHOUZH-DBRM99-W.js +162 -0
  100. package/dist/assets/ganttDiagram-A5KZAMGK-INcWFsBT.js +292 -0
  101. package/dist/assets/gitGraphDiagram-K3NZZRJ6-DMwpfE91.js +65 -0
  102. package/dist/assets/graph-DLQn37b-.js +1 -0
  103. package/dist/assets/index-BFFITMT8.js +650 -0
  104. package/dist/assets/index-H7f6VTz1.css +1 -0
  105. package/dist/assets/infoDiagram-LFFYTUFH-B0f4TWRM.js +2 -0
  106. package/dist/assets/init-Gi6I4Gst.js +1 -0
  107. package/dist/assets/ishikawaDiagram-PHBUUO56-CsU6XimZ.js +70 -0
  108. package/dist/assets/journeyDiagram-4ABVD52K-CQ7ibNib.js +139 -0
  109. package/dist/assets/kanban-definition-K7BYSVSG-DzEN7THt.js +89 -0
  110. package/dist/assets/katex-B1X10hvy.js +261 -0
  111. package/dist/assets/layout-C0dvb42R.js +1 -0
  112. package/dist/assets/linear-j4a8mGj7.js +1 -0
  113. package/dist/assets/mindmap-definition-YRQLILUH-DP8iEuCf.js +68 -0
  114. package/dist/assets/ordinal-Cboi1Yqb.js +1 -0
  115. package/dist/assets/pieDiagram-SKSYHLDU-BpIAXgAm.js +30 -0
  116. package/dist/assets/quadrantDiagram-337W2JSQ-DrpXn5Eg.js +7 -0
  117. package/dist/assets/requirementDiagram-Z7DCOOCP-Bg7EwHlG.js +73 -0
  118. package/dist/assets/sankeyDiagram-WA2Y5GQK-BWagRs1F.js +10 -0
  119. package/dist/assets/sequenceDiagram-2WXFIKYE-q5jwhivG.js +145 -0
  120. package/dist/assets/stateDiagram-RAJIS63D-B_J9pE-2.js +1 -0
  121. package/dist/assets/stateDiagram-v2-FVOUBMTO-Q_1GcybB.js +1 -0
  122. package/dist/assets/timeline-definition-YZTLITO2-dv0jgQ0z.js +61 -0
  123. package/dist/assets/treemap-KZPCXAKY-Dt1dkIE7.js +162 -0
  124. package/dist/assets/vennDiagram-LZ73GAT5-BdO5RgRZ.js +34 -0
  125. package/dist/assets/xychartDiagram-JWTSCODW-CpDVe-8v.js +7 -0
  126. package/dist/index.html +23 -0
  127. package/export-docx.js +1583 -0
  128. package/init.js +353 -0
  129. package/manifest-cli.mjs +207 -0
  130. package/package.json +83 -0
  131. package/schemas/artifact.schema.json +1295 -0
  132. package/schemas/artifact.uischema.json +65 -0
  133. package/schemas/evaluation.schema.json +284 -0
  134. package/schemas/rubric.schema.json +383 -0
  135. package/serve.js +238 -0
package/README.md ADDED
@@ -0,0 +1,156 @@
# @trohde/earos

**CLI and web editor for the EaROS architecture assessment framework**

[EaROS](https://github.com/ThomasRohde/EAROS) (Enterprise Architecture Rubric Operational Standard) is a structured framework for evaluating architecture artifacts consistently — by humans, AI agents, or both. This package gives you the CLI to scaffold and manage an EaROS workspace and a browser-based editor to create rubrics, run assessments, and author artifacts.

---

## Quick Start

```bash
npm install -g @trohde/earos
earos init my-architecture --icons
cd my-architecture && earos
```

That's it. Your workspace opens in the browser, ready to assess.

Pass `--icons` to download the official architecture icon packages from **AWS**, **Azure**, and **GCP** into `./icons` during workspace initialization. The initializer creates stable Mermaid-friendly aliases under `./icons/aws/`, `./icons/azure/`, and `./icons/gcp/`.

In an existing EaROS workspace, run `earos init . --icons` to fetch or refresh the icon sets without re-scaffolding the workspace.

Override download URLs with environment variables if needed:
- `EAROS_AWS_ICON_PACKAGE_URL` / `EAROS_AWS_ICON_PAGE_URL`
- `EAROS_AZURE_ICON_PACKAGE_URL` / `EAROS_AZURE_ICON_PAGE_URL`
- `EAROS_GCP_ICON_PACKAGE_URL` / `EAROS_GCP_ICON_PAGE_URL`

---

## What `earos init` Creates

```
my-architecture/
├── core/                  Core meta-rubric (universal foundation, 9 dimensions)
├── profiles/              Artifact-specific profiles (5 bundled: solution-architecture,
│                          reference-architecture, adr, capability-map, roadmap)
├── overlays/              Cross-cutting overlays (3 bundled: security, data-governance,
│                          regulatory)
├── standard/schemas/      JSON schemas for rubrics, evaluations, and artifacts
├── templates/             Blank scaffolds for new profiles and evaluations
├── evaluations/           Your evaluation records go here
├── calibration/           Calibration artifacts and results
├── .agents/skills/        10 EaROS skills for any AI coding agent
├── earos.manifest.yaml    Single source of truth — inventory of all rubric files
└── AGENTS.md              Project guide for AI agents (agent-agnostic)
```

The workspace is **agent-agnostic** — the `.agents/skills/` directory works with any AI coding agent that reads skill files (Claude Code, Cursor, Copilot Workspace, and others).

---

## CLI Commands

| Command | Description |
|---------|-------------|
| `earos` | Start the web editor (Express server, opens browser) |
| `earos init [dir] [--icons]` | Scaffold a complete EaROS workspace in `dir`; with `--icons`, also download the AWS, Azure, and GCP architecture icon packages into `icons/` (with stable aliases in `icons/aws/`, `icons/azure/`, and `icons/gcp/`) |
| `earos validate <file>` | Validate a rubric or evaluation YAML against the EaROS schemas (exit 0/1) |
| `earos manifest` | Regenerate `earos.manifest.yaml` by scanning the filesystem |
| `earos manifest add <file>` | Add a single file to the manifest |
| `earos manifest check` | Verify the manifest matches the filesystem (exits non-zero on drift) |
| `earos --help` | Show help |

### Validate exit codes

| Code | Meaning |
|------|---------|
| `0` | File is valid |
| `1` | Validation errors found (printed to stderr) |

---

## The Editor

Running `earos` opens a browser-based editor with a **3×2 home screen**:

| Audience | Cards |
|----------|-------|
| **Governance Teams** | Create Rubric, Edit Rubric |
| **Reviewers** | New Assessment (guided wizard), Continue Assessment |
| **Architects** | Create Artifact, Edit Artifact |

Key editor features:

- **Manifest-driven sidebar** — browse and load any rubric, profile, or overlay from your workspace
- **Live YAML preview** — the right panel updates in real time as you edit the form
- **Schema validation** — the status bar shows validation errors against the EaROS JSON schemas as you type
- **Kind selector** — switches the form between `core_rubric`, `profile`, `overlay`, `evaluation`, and `artifact`, reshaping validation and field layout automatically
- **Import / Export** — drag-and-drop YAML import; export as `<rubric_id>.yaml`
- **Context-aware help** — inline guidance tied to the EaROS standard

---

## AI Agent Skills

The initialized workspace includes **10 bundled skills** in `.agents/skills/` that any AI coding agent can use:

| Skill | Purpose |
|-------|---------|
| `earos-assess` | Run a full 8-step evaluation on any architecture artifact |
| `earos-review` | Audit an existing evaluation for over-scoring and unsupported claims |
| `earos-template-fill` | Guide authors through writing an assessment-ready document |
| `earos-artifact-gen` | Interview-driven artifact creation (produces schema-compliant YAML) |
| `earos-create` | Create new rubrics (profiles, overlays, or core) through a guided interview |
| `earos-profile-author` | Technical YAML authoring guide for the v2 field structure |
| `earos-calibrate` | Run calibration exercises and compute inter-rater reliability |
| `earos-report` | Generate executive reports from evaluation records |
| `earos-validate` | Health check — validates all YAML files against the schemas |
| `earos-remediate` | Generate prioritized improvement plans from evaluation results |

See the [EaROS repository](https://github.com/ThomasRohde/EAROS) for full skill documentation.

---

## The EaROS Framework

EaROS uses a **three-layer model**:

```
┌─────────────────────────────────────────────────────┐
│ OVERLAYS — cross-cutting concerns                   │
│   security · data-governance · regulatory           │
├─────────────────────────────────────────────────────┤
│ PROFILES — artifact-specific extensions             │
│   solution-architecture · reference-architecture    │
│   adr · capability-map · roadmap                    │
├─────────────────────────────────────────────────────┤
│ CORE — universal foundation (always applied)        │
│   9 dimensions · 10 criteria                        │
└─────────────────────────────────────────────────────┘
```

**Scoring** uses a 0–4 ordinal scale (0 = Absent → 4 = Strong) with explicit gate types that prevent weak scores from being hidden by weighted averages:

| Gate | Effect |
|------|--------|
| `advisory` | Triggers a recommendation |
| `major` | Caps the pass status |
| `critical` | Blocks pass entirely; forces Reject |

**Status outcomes:** Pass · Conditional Pass · Rework Required · Reject · Not Reviewable
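The gate table can be read as a small decision rule. A minimal Python sketch of the idea, with a simplified trip condition (scoring below 2) that is an assumption for illustration, not the normative EaROS thresholds:

```python
# Illustrative only: simplified gate logic, not the normative EaROS algorithm.
# Scores are on the 0-4 ordinal scale; a gate is assumed to trip when its
# criterion scores below 2.

def overall_status(criteria):
    """criteria: list of dicts with 'score' and an optional 'gate'
    ('advisory', 'major', or 'critical')."""
    status = "Pass"
    for c in criteria:
        tripped = c.get("gate") and c["score"] < 2
        if not tripped:
            continue
        if c["gate"] == "critical":
            return "Reject"              # critical gate blocks pass entirely
        if c["gate"] == "major":
            status = "Conditional Pass"  # major gate caps the pass status
        # advisory gates only trigger a recommendation, not a status change
    return status

print(overall_status([{"score": 4}, {"score": 1, "gate": "major"}]))
# → Conditional Pass
```

The point of the structure is visible in the sketch: a single tripped `critical` gate short-circuits to Reject no matter how strong the weighted average is.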
For the complete standard — scoring model, gate logic, status thresholds, DAG evaluation flow, and calibration protocol — see the [EaROS repository](https://github.com/ThomasRohde/EAROS).

---

## Links

- **Repository:** [github.com/ThomasRohde/EAROS](https://github.com/ThomasRohde/EAROS)
- **Issues:** [github.com/ThomasRohde/EAROS/issues](https://github.com/ThomasRohde/EAROS/issues)

---

## License

[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) — Thomas Rohde
package/assets/init/.agents/skills/earos-artifact-gen/SKILL.md ADDED
@@ -0,0 +1,106 @@
---
name: earos-artifact-gen
description: "Create architecture documents through guided interview. Triggers on \"create an architecture document\", \"generate a reference architecture\", \"help me write a solution architecture\", \"document my architecture\", \"new architecture document\", or any request to create/write/generate architecture artifacts."
---

# SKILL: earos-artifact-gen

Interview an architect and generate a structured architecture artifact document that conforms to `standard/schemas/artifact.schema.json` and satisfies the evidence requirements of the relevant EAROS rubric.

## References

- Interview question templates: `references/interview-guide.md`
- Output generation guide: `references/output-guide.md`
- Artifact schema: `standard/schemas/artifact.schema.json`
- Rubric files: discovered at runtime via `earos.manifest.yaml`

---

## Workflow

### Step 1 — Determine artifact type and load rubric

Ask the user: "What type of architecture artifact are you creating?"

Common types: `solution_architecture`, `reference_architecture`, `adr`, `capability_map`, `roadmap`.

Once confirmed:
1. Read `earos.manifest.yaml` to find the matching profile.
2. Load `core/core-meta-rubric.yaml` and the matching profile YAML.
3. Load `standard/schemas/artifact.schema.json` for the artifact structure.
4. Ask: "Are any cross-cutting concerns in scope? (security, data governance, regulatory compliance)" — load the applicable overlays if yes.

Announce what you loaded: "I'll use the [profile name] profile plus [overlays if any]. This covers [N] criteria."
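The profile lookup in Step 1 can be sketched as a simple search over the parsed manifest. Note that the field names below (`profiles`, `artifact_type`, `path`) are invented for illustration; use the actual structure of `earos.manifest.yaml` in your workspace:

```python
# Hypothetical sketch: the manifest has already been parsed into a dict.
# The keys 'profiles', 'artifact_type', and 'path' are NOT the real
# earos.manifest.yaml schema -- they are placeholders for this example.

def find_profile(manifest, artifact_type):
    for entry in manifest.get("profiles", []):
        if entry.get("artifact_type") == artifact_type:
            return entry["path"]
    raise LookupError(f"no profile for artifact type {artifact_type!r}")

manifest = {
    "profiles": [
        {"artifact_type": "adr", "path": "profiles/adr.yaml"},
        {"artifact_type": "solution_architecture",
         "path": "profiles/solution-architecture.yaml"},
    ]
}
print(find_profile(manifest, "solution_architecture"))
# → profiles/solution-architecture.yaml
```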

### Step 2 — Conduct the structured interview

Read `references/interview-guide.md` before starting.

Interview the architect section by section. Do not ask all questions at once — work through one section, confirm the responses, then move to the next. The sections follow the artifact schema structure:

1. **Overview** — title, purpose, version, owner, status, audience
2. **Business context** — business drivers, goals, stakeholders and their concerns
3. **Scope and boundaries** — in-scope, out-of-scope, deferred, assumptions
4. **Architecture views** — context view, functional decomposition, deployment topology, data flows
5. **Key decisions and rationale** — major design choices, alternatives considered, tradeoffs
6. **Quality attributes** — measurable targets, scenarios, architectural responses
7. **Risks and mitigations** — open risks, mitigated risks, assumptions to validate
8. **Operations and implementation** — deployment phasing, interface contracts, operational concerns
9. **Compliance and governance** — applicable standards, controls implemented, evidence
10. **Implementation guidance** — next steps, dependencies, open decisions

**Interview technique:**
- Ask questions in natural language, not in rubric criterion language. "What external systems does this connect to?" not "Describe your STK-01 stakeholders."
- If an answer is thin, prompt for more: "Can you be more specific about [X]?" or "What would someone need to know to implement that?"
- If the architect doesn't know something, record it as an assumption or open question rather than skipping it.
- After each section, summarize what you captured and ask: "Does that accurately capture it? Anything to add?"

Use `references/interview-guide.md` for question templates and guidance on what constitutes a useful answer.

### Step 3 — Map answers to rubric criteria

After completing the interview, map each collected answer to the rubric criteria it satisfies.

For each criterion in the core + profile:
- Identify which section(s) of the interview contain the evidence
- Note whether the evidence is thin (would score 1–2) or adequate (would score 3–4)
- Flag gaps: criteria where no interview answer provides relevant evidence

Present the gap summary: "Based on your answers, I have enough material for [N] of [M] criteria. I need more information about: [list]. Can we go back to these?"

Fill gaps before proceeding to output generation.
+ Fill gaps before proceeding to output generation.
72
+
73
+ ### Step 4 — Generate the artifact YAML
74
+
75
+ Read `references/output-guide.md` before this step.
76
+
77
+ Transform the interview answers into a structured YAML document conforming to `standard/schemas/artifact.schema.json`.
78
+
79
+ Key principles:
80
+ - Every section that maps to a rubric criterion must contain enough detail to evidence a score of 3 (the "clearly addressed" level).
81
+ - Use structured sub-elements (tables, lists, scenarios) rather than freeform prose where the schema supports it. These are easier for EAROS evaluation.
82
+ - Preserve the architect's language for names and terminology — do not substitute generic labels.
83
+ - For every design decision, include: the driver, the alternatives considered (at least one), the rejection rationale, and the selected option.
84
+ - Quality attribute targets must be measurable: "99.9% availability" not "high availability".
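The measurability rule can be checked mechanically as a first pass. A rough Python heuristic, where the regular expression and unit list are assumptions for illustration (not part of EAROS):

```python
import re

# Rough heuristic: a measurable target should contain at least one number,
# optionally followed by a unit (%, ms, TPS, ...). Purely aspirational
# phrasing ("fast", "high availability") contains neither.
MEASURABLE = re.compile(r"\d+(\.\d+)?\s*(%|ms|s|TPS|RPS|GB|h|x)?", re.IGNORECASE)

def looks_measurable(target: str) -> bool:
    return bool(MEASURABLE.search(target))

print(looks_measurable("p99 < 200ms under 500 TPS load"))  # True
print(looks_measurable("high availability"))               # False
```

A check like this only flags candidates for human review; a number alone does not make a target meaningful.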

### Step 5 — Validate and offer assessment

After generating the artifact YAML:

1. Check it against `standard/schemas/artifact.schema.json` — verify all required fields are present.
2. Cross-check each rubric criterion: does the artifact contain the `required_evidence` items listed in the rubric?
3. Flag any remaining gaps: "Criterion [id] requires [evidence], which is not present."
4. Ask: "Would you like me to run `earos-assess` on this artifact to get a preliminary score before you finalize it?"

If gaps remain and the architect cannot fill them, mark them as `[TBD: <what is needed>]` in the YAML so they are visible during review.

---

## Operating Principles

- **Schema-first.** The artifact YAML must conform to `artifact.schema.json`. Do not generate free-form documents — the schema is the contract between artifact authors and EAROS evaluators.
- **Rubric-aware.** Every field in the artifact schema maps to a rubric criterion's `required_evidence`. Filling the schema correctly means satisfying the evidence requirements.
- **Interview before output.** Never generate a template with placeholder text and ask the architect to fill it in. Gather the information through conversation, then generate the artifact. Templates produce generic output; interviews produce specific, defensible artifacts.
- **Decisions need rationale.** The most common reason artifacts score 1–2 is missing decision rationale. Probe every design choice: Why this? What else was considered? What was rejected, and why?
- **Measurable targets only.** Push back on aspirational quality attributes. "Fast" is not scoreable. "p99 < 200ms under 500 TPS load" is.
- **Gap transparency.** Unknown information is better represented as `[TBD: <description>]` than as an omission. Items marked TBD are visible; silent omissions create surprises in EAROS review.
@@ -0,0 +1,313 @@
1
+ # Interview Guide — EAROS Artifact Generation
2
+
3
+ This reference contains question templates per section, guidance on adapting to artifact type and platform, examples of good vs. bad interview answers, probing techniques, and N/A handling. Read this before Step 2 (interview) in SKILL.md.
4
+
5
+ ---
6
+
7
+ ## Core Interviewing Principles
8
+
9
+ ### Ask about the world, not the document
10
+
11
+ The architect knows their system. They do not know EAROS. Every question must be phrased in terms of the system, not the rubric.
12
+
13
+ | Rubric language (never use) | Natural language (use this) |
14
+ |----------------------------|----------------------------|
15
+ | "Describe your STK-01 stakeholders" | "Who are the main groups that will use or be affected by this system?" |
16
+ | "What are your TRC-01 business drivers?" | "What problem is this system solving — what would go wrong if it didn't exist?" |
17
+ | "Provide RA-VIEW-01 architectural views" | "Can you describe what the system looks like — what are the main pieces and how do they connect?" |
18
+ | "Document your CMP-01 compliance controls" | "Are there any regulatory or policy requirements this system has to meet?" |
19
+
20
+ ### Probe for specifics
21
+
22
+ Vague answers produce vague artifacts that score 1. Use these probing patterns:
23
+
24
+ | Vague answer | Probing question |
25
+ |-------------|-----------------|
26
+ | "We need high availability" | "What does that mean in practice — how long can the system be unavailable before it becomes a business problem? Do you have a number?" |
27
+ | "It integrates with the main systems" | "Which specific systems — what are their names? What does each integration do?" |
28
+ | "We considered a few options" | "What were the options? What made you choose this one over the others?" |
29
+ | "Security is important" | "What specific security requirements does this need to meet? Any compliance frameworks like ISO 27001 or PCI DSS? Who owns security sign-off?" |
30
+ | "It's a microservices architecture" | "How many services? Who owns each one? What do the service boundaries map to?" |
31
+ | "We'll monitor it with standard tooling" | "What tooling specifically? What are the key metrics you'll alert on? What does an on-call engineer do when they get paged?" |
32
+
33
+ ### When to push back
34
+
35
+ Push back when:
36
+ - A quality attribute target is not measurable ("fast", "reliable", "secure")
37
+ - A decision is stated without rationale ("we chose Kafka")
38
+ - A stakeholder is listed without concerns ("IT Operations")
39
+ - A compliance requirement is mentioned without a control mechanism ("we follow GDPR")
40
+
41
+ Don't accept aspirational answers. The EAROS scoring guide requires evidence. If the artifact says "high performance", an evaluator will score it 1. If it says "p99 < 100ms at 1,000 TPS, validated by load test on date", it scores 3 or 4.
42
+
43
+ ---
44
+
45
+ ## Block-by-Block Question Templates
46
+
47
+ ### Block 1 — Context and Purpose
48
+ *Rubric mapping: STK-01 (stakeholder identification), SCP-01 (scope and boundaries)*
49
+
50
+ **Opening:**
51
+ > "Before we get into the architecture, let's make sure we capture the context. I'll ask a few questions about who this is for and what it's supposed to do."
52
+
53
+ **Questions:**
54
+ 1. "What is this system? Give me the one-sentence version — what does it do and why does it exist?"
55
+ 2. "Who are the main groups that will use or be affected by this system? Think broadly — not just developers, but business users, operations teams, compliance reviewers, customers."
56
+ 3. "For each of those groups, what is their primary concern about this system? What would they most want to know when reviewing this document?"
57
+ 4. "Where does this system start and stop? What is in scope for this architecture, and what are you explicitly not covering? Are there elements you're deferring to a later phase?"
58
+ 5. "Who authored this document and who is responsible for keeping it up to date?"
59
+
60
+ **What good answers look like:**
61
+ - Stakeholder: "The payments team (will build it), Finance (approves transactions), Fraud (monitors for anomalies) — each with a one-sentence concern"
62
+ - Scope: Clear in/out-of-scope list, not "everything related to payments"
63
+
64
+ **What to probe:**
65
+ - If stakeholders are all technical roles: "Are there business stakeholders — people who will use the system or be accountable for it?"
66
+ - If scope is stated in terms of goals rather than boundaries: "Can you describe the technical boundary — what external systems does this interact with? Where does your system hand off to another?"
67
+
68
+ ---
69
+
70
+ ### Block 2 — Business Drivers and Constraints
71
+ *Rubric mapping: TRC-01 (traceability), RAT-01 (rationale and decisions)*
72
+
73
+ **Opening:**
74
+ > "Let's talk about why this architecture looks the way it does — what were the key requirements and constraints driving the design?"
75
+
76
+ **Questions:**
77
+ 1. "What business problem is this architecture solving? What would happen if it didn't exist or if you built it a different way?"
78
+ 2. "Are there specific business drivers — events, regulations, strategic goals, or pain points — that led to this project?"
79
+ 3. "What are the hard constraints? Things that are non-negotiable — budget, timeline, technology stack, integration with existing systems, regulatory requirements?"
80
+ 4. "Who are the key business stakeholders for this project, and what does success look like from their perspective?"
81
+ 5. "Are there known risks or assumptions that could affect the design?"
82
+
83
+ **What good answers look like:**
84
+ - "The regulation requires us to store transaction records for 7 years — that drove the data retention architecture in Section 4"
85
+ - "We're constrained to Azure because of our enterprise agreement — that ruled out AWS-native services"
86
+
87
+ **What to probe:**
88
+ - If drivers are vague ("digital transformation"): "What specifically needs to change? What are users or the business unable to do today?"
89
+ - If constraints are vague ("limited budget"): "What does 'limited' mean — are we talking about annual cloud spend target, headcount, or license costs?"
90
+
91
+ ---
92
+
93
+ ### Block 3 — Architecture Views and Components
94
+ *Rubric mapping: RA-VIEW-01 (views), CVP-01 (component value proposition) — or equivalent per profile*
95
+
96
+ **Opening:**
97
+ > "Now let's get into the architecture itself. I'll ask you to describe it at a few different levels — the big picture, the main components, and how data moves through the system."
98
+
99
+ **Questions:**
100
+ 1. "At the highest level — what does this system look like? If you drew a box labeled '[system name]' in the middle of a whiteboard, what external actors and systems would you draw connecting to it? What does each connection represent?"
101
+ 2. "Inside that box, what are the main components or services? What is each one responsible for?"
102
+ 3. "How do the components talk to each other? Are there any important data flows or sequences worth describing — like a key transaction or use case?"
103
+ 4. "Where does this run? Describe the deployment — cloud, on-prem, hybrid? What infrastructure or platform components are involved?"
104
+ 5. "Where does data live? What are the key data stores, and what does each one hold?"
105

**What good answers look like:**
- "The API gateway receives all inbound requests, routes them to three microservices, and calls the legacy mainframe via an adapter for settlement"
- "We have three data stores: Postgres for transactional data, Redis for session state, and S3 for document storage"

**What to probe:**
- If the description is still high-level: "Can you name the main components? Even a list of service names helps"
- If deployment is vague ("it runs on AWS"): "Which AWS services specifically? Are you on ECS, EKS, Lambda? Is there a multi-region setup?"
- If data flows are missing: "Walk me through a typical request — what happens from the moment a user hits the API to when they get a response?"

**Platform-specific probing:**

*AWS:* "EC2 or managed services? ECS/EKS/Lambda? RDS or DynamoDB? Are you using VPC peering or Transit Gateway for network isolation?"

*Azure:* "AKS or App Service? Cosmos DB or Azure SQL? Are you using Azure AD for identity, and how does that integrate with your existing directory?"

*On-prem / hybrid:* "What hypervisor or container platform? How does this connect to your cloud footprint, if any? Who manages the physical infrastructure?"

---

### Block 4 — Key Decisions and Rationale
*Rubric mapping: RAT-01 (decision rationale), RA-DEC-01 (key architectural decisions)*

**Opening:**
> "Architecture is shaped by choices. Let's capture the major decisions that define why this design looks the way it does."

**Questions:**
1. "What were the 3–5 most important design decisions you made? The ones that, if you'd chosen differently, would have produced a fundamentally different architecture?"
2. For each decision: "What were the main options you considered?"
3. For each decision: "What led you to choose this option over the others? Was it a technical constraint, a business requirement, a risk, or a preference?"
4. "Are there any decisions that are still open — where you haven't settled on an approach yet?"
5. "Were there any decisions that were controversial or where people on the team disagreed? How were they resolved?"

**What good answers look like:**
- "We chose event streaming over REST callbacks for inter-service communication. We considered webhooks (simpler but unreliable) and polling (too much load). We went with Kafka because it's already in our platform and gives us replay capability for the audit requirement"
- "The choice of PostgreSQL over MongoDB was driven by the need for strong consistency on financial records — we couldn't accept eventual consistency"

**What to probe:**
- If a decision is stated without alternatives: "What else did you consider before choosing [X]?"
- If a decision lacks a business driver: "What requirement or constraint made [X] the right choice? What would have been different if you'd chosen [Y]?"
- If the team "always uses" something: "Is there a specific reason that technology was the right fit here, or was it familiarity? Were there any tradeoffs you accepted?"

---

### Block 5 — Quality Attributes
*Rubric mapping: RA-QA-01 (quality attributes and NFRs)*

**Opening:**
> "Let's talk about the non-functional requirements — the '-ilities'. These are often what makes or breaks an architecture."

**Questions:**
1. "What are the performance requirements? Think about response time, throughput, and concurrent users."
2. "What are the availability and reliability requirements? What's the acceptable downtime? What happens during a failure?"
3. "What are the scalability requirements? What's the current scale, and what does growth look like? Is there a seasonal peak?"
4. "What are the security requirements? Authentication, authorization, encryption at rest and in transit?"
5. "What observability do you need? What metrics, logs, and traces are required?"

**What good answers look like:**
- "API response time < 200ms at the 99th percentile under 500 concurrent users"
- "We need 99.9% monthly uptime — that's a budget of about 43 minutes of downtime per month"
- "Data at rest encrypted with AES-256, all connections TLS 1.2 or higher, OAuth 2.0 for external API auth"

**What to probe:**
- If targets are not measurable: "Can you put a number on that? What's the threshold where performance becomes a problem?"
- If availability is stated without a failure scenario: "What happens when a component fails? Is there failover? What's the recovery time objective?"
- If observability is vague: "What would an on-call engineer look at first when something goes wrong?"

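An availability percentage translates directly into a downtime budget, and it's worth doing that arithmetic with the architect during the interview. A minimal sketch in plain Python (illustrative only, not part of the skill) that makes the conversion explicit:

```python
def downtime_budget(sla_percent: float) -> dict:
    """Convert an availability SLA into allowed downtime per month and per year."""
    unavailable = 1.0 - sla_percent / 100.0
    return {
        # assume a 30-day month and a 365-day year for the budget
        "minutes_per_month": unavailable * 30 * 24 * 60,
        "hours_per_year": unavailable * 365 * 24,
    }

for sla in (99.0, 99.9, 99.99):
    b = downtime_budget(sla)
    print(f"{sla}%: ~{b['minutes_per_month']:.0f} min/month, ~{b['hours_per_year']:.1f} h/year")
```

At 99.9%, the budget is roughly 43 minutes per month, or about 8.8 hours per year — a useful number to read back to the architect when checking whether the target is realistic.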
---

### Block 6 — Operations
*Rubric mapping: RA-OPS-01 (operational concerns), MNT-01 (maintainability)*

**Opening:**
> "Good architectures are built to run, not just to deploy. Let's cover how this system will operate in production."

**Questions:**
1. "How will this be deployed? Is there a CI/CD pipeline? Who owns the deployment process?"
2. "How will it be monitored? What are the key health indicators?"
3. "What does a degraded state look like, and what happens to traffic when a component is down?"
4. "What is the disaster recovery plan? If the whole system went down right now, what would you do, and how long would it take to restore?"
5. "Are there any ongoing operational costs or concerns — data retention, storage growth, licensing?"

**What good answers look like:**
- "We use ArgoCD for continuous deployment from main. RTO is 4 hours based on snapshot recovery. We alert on p95 > 500ms and error rate > 0.1%"

**What to probe:**
- If DR is vague: "Walk me through what you'd actually do — who gets called, what gets restored first, what's the recovery time in the worst case?"
- If monitoring is vague: "What's the dashboard that the on-call person opens first? What's the first alert they'd see?"

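Thresholds like "p95 > 500ms and error rate > 0.1%" are concrete enough to check against raw numbers. A hedged sketch in plain Python using only the standard library — the function name, arguments, and default thresholds are illustrative, not taken from any particular monitoring stack:

```python
import statistics

def should_alert(latencies_ms: list[float], error_count: int, request_count: int,
                 p95_threshold_ms: float = 500.0,
                 error_rate_threshold: float = 0.001) -> bool:
    """Fire if p95 latency or the error rate breaches the quoted thresholds."""
    # statistics.quantiles with n=20 returns 19 cut points; index 18 is the 95th percentile
    p95 = statistics.quantiles(latencies_ms, n=20)[18]
    error_rate = error_count / request_count
    return p95 > p95_threshold_ms or error_rate > error_rate_threshold

# 95 fast requests plus 5 slow ones push p95 well past 500 ms
sample = [120.0] * 95 + [900.0] * 5
print(should_alert(sample, error_count=0, request_count=100))  # → True
```

The point for the interview is not the code but the shape of the answer: a good alerting story names a percentile, a threshold, and a rate, all of which can be evaluated mechanically.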
---

### Block 7 — Compliance and Governance
*Rubric mapping: CMP-01 (compliance and governance fit)*

**Opening:**
> "Let's make sure we capture the regulatory and policy context. This is important for governance review."

**Questions:**
1. "Does this system process personal data, financial data, or any regulated data types?"
2. "What compliance frameworks or regulations apply — GDPR, PCI DSS, ISO 27001, internal policy?"
3. "For each applicable framework, what specific requirements does it impose on this architecture? How does the design address those requirements?"
4. "Are there any compliance gaps — controls that are required but not yet implemented?"
5. "Who is responsible for compliance sign-off? Has legal or compliance reviewed this design?"

**What good answers look like:**
- "We're in scope for PCI DSS SAQ D because we handle cardholder data. The key requirements addressed are: network segmentation (shown in Section 4.2), encryption in transit and at rest (Section 5.1), and access logging (Section 6.3). The pen test requirement is deferred to Q3"

**What to probe:**
- If "it's not in scope": "Are you certain? Does it process any customer identifiers, financial transactions, or health data?"
- If compliance is asserted without evidence: "How specifically does the architecture address that requirement? Which component or control implements it?"

---

### Block 8 — Implementation Guidance
*Rubric mapping: RA-IMP-01 (implementation guidance), ACT-01 (actionability)*

**Opening:**
> "Finally, let's make sure the document gives implementers what they need to act."

**Questions:**
1. "What are the key next steps? What needs to happen for this architecture to be implemented?"
2. "Are there any decisions still outstanding that need to be made before implementation can start?"
3. "What are the main dependencies on other teams, systems, or external approvals?"
4. "Are there any known risks that could affect the implementation timeline or approach?"
5. "Who should read this document? What do you want them to do with it?"

---

## Artifact-Type Specific Guidance

### Solution Architecture

Focus on: implementation specifics, integration points, phasing, team ownership.

Additional questions:
- "What is the delivery timeline? Is this being built in phases?"
- "Which team or vendor is building each component?"
- "What are the integration contracts with existing systems — APIs, events, data formats?"

Probe deeper on: deployment topology, team boundaries, handover points.

### Reference Architecture

Focus on: reusability, pattern documentation, variation points.

Additional questions:
- "Who are the intended consumers of this reference architecture? What systems or teams would use it as a blueprint?"
- "What are the 'slots' where users can substitute their own choices? What's fixed vs. flexible?"
- "What are the known patterns or anti-patterns for systems in this category?"

Probe deeper on: applicability conditions, variation mechanisms, governance for deviation.

### Architecture Decision Record (ADR)

Focus on: one decision only. Interview is shorter — 5–8 questions.

Structure: Context → Decision → Alternatives → Rationale → Consequences.

Questions:
1. "What decision are you documenting?"
2. "What context led to this decision — what constraints or requirements made it necessary?"
3. "What options did you consider? Why were the alternatives rejected?"
4. "What are the consequences — positive and negative — of this decision?"
5. "Who made this decision, and is it final or revisable?"

### Capability Map

Focus on: capability definitions, business ownership, maturity levels.

Additional questions:
- "How are capabilities defined — what makes something a distinct capability vs. a sub-capability?"
- "For each capability, who owns it and what business process does it support?"
- "Is there a current maturity assessment for each capability?"

Probe deeper on: gaps between current and target maturity, investment priorities.

### Roadmap

Focus on: timeline, sequencing rationale, dependencies.

Additional questions:
- "What is the planning horizon? 1 year, 3 years?"
- "What drives the sequencing — business priority, technical dependency, funding?"
- "What are the key milestones and what does each one deliver?"

Probe deeper on: risks to the timeline, dependencies on external decisions or teams.

---

## When to Skip a Section

If a block clearly doesn't apply, state it explicitly and move on:

- "You mentioned this is a capability map with no deployment concerns — I'll skip the operations block."
- "For an ADR, compliance details belong in the decision context rather than a separate block. Let me ask about that as part of the rationale."

Never skip silently. Every skipped block should be noted as either N/A or deferred, so an evaluator knows the author considered it.

---

## Handling "I Don't Know" Answers

If the architect can't answer a question:

1. Record it as an open question or assumption: `[TBD: who owns DR for this system?]`
2. Note the impact: "This will likely score low on the operational concerns criterion — it's worth finding out before the review"
3. Offer to proceed and return: "We can come back to this — let's keep going and circle back"

Do not generate placeholder content that sounds complete. A visible TBD is better than a fabricated answer that scores 0 when the evaluator checks.
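One benefit of the `[TBD: ...]` convention above is that open questions can be harvested mechanically before a review. A minimal sketch in plain Python — the marker format is the one shown above; the function name and sample draft are illustrative:

```python
import re

# matches the [TBD: ...] markers used for open questions and assumptions
TBD_PATTERN = re.compile(r"\[TBD:\s*(.+?)\]")

def open_questions(document: str) -> list[str]:
    """Collect the text of every [TBD: ...] marker so open items can be triaged."""
    return TBD_PATTERN.findall(document)

draft = """
Disaster recovery is handled by nightly snapshots. [TBD: who owns DR for this system?]
Data retention follows the platform policy. [TBD: confirm 7-year retention with compliance]
"""
for q in open_questions(draft):
    print("-", q)
```

Running the extraction as a final interview step gives the author a ready-made list of items to resolve before the evaluator scores the document.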