@trohde/earos 1.0.0
- package/README.md +156 -0
- package/assets/init/.agents/skills/earos-artifact-gen/SKILL.md +106 -0
- package/assets/init/.agents/skills/earos-artifact-gen/references/interview-guide.md +313 -0
- package/assets/init/.agents/skills/earos-artifact-gen/references/output-guide.md +367 -0
- package/assets/init/.agents/skills/earos-assess/SKILL.md +212 -0
- package/assets/init/.agents/skills/earos-assess/references/calibration-benchmarks.md +160 -0
- package/assets/init/.agents/skills/earos-assess/references/output-templates.md +311 -0
- package/assets/init/.agents/skills/earos-assess/references/scoring-protocol.md +281 -0
- package/assets/init/.agents/skills/earos-calibrate/SKILL.md +153 -0
- package/assets/init/.agents/skills/earos-calibrate/references/agreement-metrics.md +188 -0
- package/assets/init/.agents/skills/earos-calibrate/references/calibration-protocol.md +263 -0
- package/assets/init/.agents/skills/earos-create/SKILL.md +257 -0
- package/assets/init/.agents/skills/earos-create/references/criterion-writing-guide.md +268 -0
- package/assets/init/.agents/skills/earos-create/references/dependency-rules.md +193 -0
- package/assets/init/.agents/skills/earos-create/references/rubric-interview-guide.md +123 -0
- package/assets/init/.agents/skills/earos-create/references/validation-checklist.md +238 -0
- package/assets/init/.agents/skills/earos-profile-author/SKILL.md +251 -0
- package/assets/init/.agents/skills/earos-profile-author/references/criterion-writing-guide.md +280 -0
- package/assets/init/.agents/skills/earos-profile-author/references/design-methods.md +158 -0
- package/assets/init/.agents/skills/earos-profile-author/references/profile-checklist.md +173 -0
- package/assets/init/.agents/skills/earos-remediate/SKILL.md +118 -0
- package/assets/init/.agents/skills/earos-remediate/references/output-template.md +199 -0
- package/assets/init/.agents/skills/earos-remediate/references/remediation-patterns.md +330 -0
- package/assets/init/.agents/skills/earos-report/SKILL.md +85 -0
- package/assets/init/.agents/skills/earos-report/references/portfolio-template.md +181 -0
- package/assets/init/.agents/skills/earos-report/references/single-artifact-template.md +168 -0
- package/assets/init/.agents/skills/earos-review/SKILL.md +130 -0
- package/assets/init/.agents/skills/earos-review/references/challenge-patterns.md +163 -0
- package/assets/init/.agents/skills/earos-review/references/output-template.md +180 -0
- package/assets/init/.agents/skills/earos-template-fill/SKILL.md +177 -0
- package/assets/init/.agents/skills/earos-template-fill/references/evidence-writing-guide.md +186 -0
- package/assets/init/.agents/skills/earos-template-fill/references/section-rubric-mapping.md +200 -0
- package/assets/init/.agents/skills/earos-validate/SKILL.md +113 -0
- package/assets/init/.agents/skills/earos-validate/references/fix-patterns.md +281 -0
- package/assets/init/.agents/skills/earos-validate/references/validation-checks.md +287 -0
- package/assets/init/.claude/CLAUDE.md +4 -0
- package/assets/init/AGENTS.md +293 -0
- package/assets/init/CLAUDE.md +635 -0
- package/assets/init/README.md +507 -0
- package/assets/init/calibration/gold-set/.gitkeep +0 -0
- package/assets/init/calibration/results/.gitkeep +0 -0
- package/assets/init/core/core-meta-rubric.yaml +643 -0
- package/assets/init/docs/consistency-report.md +325 -0
- package/assets/init/docs/getting-started.md +194 -0
- package/assets/init/docs/profile-authoring-guide.md +51 -0
- package/assets/init/docs/terminology.md +126 -0
- package/assets/init/earos.manifest.yaml +104 -0
- package/assets/init/evaluations/.gitkeep +0 -0
- package/assets/init/examples/aws-event-driven-order-processing/artifact.yaml +2056 -0
- package/assets/init/examples/aws-event-driven-order-processing/evaluation.yaml +973 -0
- package/assets/init/examples/aws-event-driven-order-processing/report.md +244 -0
- package/assets/init/examples/example-solution-architecture.evaluation.yaml +136 -0
- package/assets/init/examples/multi-cloud-data-analytics/artifact.yaml +715 -0
- package/assets/init/overlays/data-governance.yaml +94 -0
- package/assets/init/overlays/regulatory.yaml +154 -0
- package/assets/init/overlays/security.yaml +92 -0
- package/assets/init/profiles/adr.yaml +225 -0
- package/assets/init/profiles/capability-map.yaml +223 -0
- package/assets/init/profiles/reference-architecture.yaml +426 -0
- package/assets/init/profiles/roadmap.yaml +205 -0
- package/assets/init/profiles/solution-architecture.yaml +227 -0
- package/assets/init/research/architecture-assessment-rubrics-research.docx +0 -0
- package/assets/init/research/architecture-assessment-rubrics-research.md +566 -0
- package/assets/init/research/reference-architecture-research.md +751 -0
- package/assets/init/standard/EAROS.md +1426 -0
- package/assets/init/standard/schemas/artifact.schema.json +1295 -0
- package/assets/init/standard/schemas/artifact.uischema.json +65 -0
- package/assets/init/standard/schemas/evaluation.schema.json +284 -0
- package/assets/init/standard/schemas/rubric.schema.json +383 -0
- package/assets/init/templates/evaluation-record.template.yaml +58 -0
- package/assets/init/templates/new-profile.template.yaml +65 -0
- package/bin.js +188 -0
- package/dist/assets/_basePickBy-BVu6YmSW.js +1 -0
- package/dist/assets/_baseUniq-CWRzQDz_.js +1 -0
- package/dist/assets/arc-CyDBhtDM.js +1 -0
- package/dist/assets/architectureDiagram-2XIMDMQ5-BH6O4dvN.js +36 -0
- package/dist/assets/blockDiagram-WCTKOSBZ-2xmwdjpg.js +132 -0
- package/dist/assets/c4Diagram-IC4MRINW-BNmPRFJF.js +10 -0
- package/dist/assets/channel-CiySTNoJ.js +1 -0
- package/dist/assets/chunk-4BX2VUAB-DGQTvirp.js +1 -0
- package/dist/assets/chunk-55IACEB6-DNMAQAC_.js +1 -0
- package/dist/assets/chunk-FMBD7UC4-BJbVTQ5o.js +15 -0
- package/dist/assets/chunk-JSJVCQXG-BCxUL74A.js +1 -0
- package/dist/assets/chunk-KX2RTZJC-H7wWZOfz.js +1 -0
- package/dist/assets/chunk-NQ4KR5QH-BK4RlTQF.js +220 -0
- package/dist/assets/chunk-QZHKN3VN-0chxDV5g.js +1 -0
- package/dist/assets/chunk-WL4C6EOR-DexfQ-AV.js +189 -0
- package/dist/assets/classDiagram-VBA2DB6C-D7luWJQn.js +1 -0
- package/dist/assets/classDiagram-v2-RAHNMMFH-D7luWJQn.js +1 -0
- package/dist/assets/clone-ylgRbd3D.js +1 -0
- package/dist/assets/cose-bilkent-S5V4N54A-DS2IOCfZ.js +1 -0
- package/dist/assets/cytoscape.esm-CyJtwmzi.js +331 -0
- package/dist/assets/dagre-KLK3FWXG-BbSoTTa3.js +4 -0
- package/dist/assets/defaultLocale-DX6XiGOO.js +1 -0
- package/dist/assets/diagram-E7M64L7V-C9TvYgv0.js +24 -0
- package/dist/assets/diagram-IFDJBPK2-DowUMWrg.js +43 -0
- package/dist/assets/diagram-P4PSJMXO-BL6nrnQF.js +24 -0
- package/dist/assets/erDiagram-INFDFZHY-rXPRl8VM.js +70 -0
- package/dist/assets/flowDiagram-PKNHOUZH-DBRM99-W.js +162 -0
- package/dist/assets/ganttDiagram-A5KZAMGK-INcWFsBT.js +292 -0
- package/dist/assets/gitGraphDiagram-K3NZZRJ6-DMwpfE91.js +65 -0
- package/dist/assets/graph-DLQn37b-.js +1 -0
- package/dist/assets/index-BFFITMT8.js +650 -0
- package/dist/assets/index-H7f6VTz1.css +1 -0
- package/dist/assets/infoDiagram-LFFYTUFH-B0f4TWRM.js +2 -0
- package/dist/assets/init-Gi6I4Gst.js +1 -0
- package/dist/assets/ishikawaDiagram-PHBUUO56-CsU6XimZ.js +70 -0
- package/dist/assets/journeyDiagram-4ABVD52K-CQ7ibNib.js +139 -0
- package/dist/assets/kanban-definition-K7BYSVSG-DzEN7THt.js +89 -0
- package/dist/assets/katex-B1X10hvy.js +261 -0
- package/dist/assets/layout-C0dvb42R.js +1 -0
- package/dist/assets/linear-j4a8mGj7.js +1 -0
- package/dist/assets/mindmap-definition-YRQLILUH-DP8iEuCf.js +68 -0
- package/dist/assets/ordinal-Cboi1Yqb.js +1 -0
- package/dist/assets/pieDiagram-SKSYHLDU-BpIAXgAm.js +30 -0
- package/dist/assets/quadrantDiagram-337W2JSQ-DrpXn5Eg.js +7 -0
- package/dist/assets/requirementDiagram-Z7DCOOCP-Bg7EwHlG.js +73 -0
- package/dist/assets/sankeyDiagram-WA2Y5GQK-BWagRs1F.js +10 -0
- package/dist/assets/sequenceDiagram-2WXFIKYE-q5jwhivG.js +145 -0
- package/dist/assets/stateDiagram-RAJIS63D-B_J9pE-2.js +1 -0
- package/dist/assets/stateDiagram-v2-FVOUBMTO-Q_1GcybB.js +1 -0
- package/dist/assets/timeline-definition-YZTLITO2-dv0jgQ0z.js +61 -0
- package/dist/assets/treemap-KZPCXAKY-Dt1dkIE7.js +162 -0
- package/dist/assets/vennDiagram-LZ73GAT5-BdO5RgRZ.js +34 -0
- package/dist/assets/xychartDiagram-JWTSCODW-CpDVe-8v.js +7 -0
- package/dist/index.html +23 -0
- package/export-docx.js +1583 -0
- package/init.js +353 -0
- package/manifest-cli.mjs +207 -0
- package/package.json +83 -0
- package/schemas/artifact.schema.json +1295 -0
- package/schemas/artifact.uischema.json +65 -0
- package/schemas/evaluation.schema.json +284 -0
- package/schemas/rubric.schema.json +383 -0
- package/serve.js +238 -0
package/assets/init/.agents/skills/earos-remediate/SKILL.md

@@ -0,0 +1,118 @@

---
name: earos-remediate
description: "Generate a prioritized improvement plan from an EAROS evaluation. Triggers on \"how do I fix this\", \"improve this artifact\", \"remediation plan\", \"how to pass EAROS\", \"fix the assessment\", \"improvement plan\", \"what's wrong with my architecture\", \"how to get a better score\", or any request to improve an artifact based on evaluation results."
---

# SKILL: earos-remediate

Generate a prioritized, actionable improvement plan from an EAROS evaluation record.

## References

- Patterns and before/after examples: `references/remediation-patterns.md`
- Output template: `references/output-template.md`
- Rubric files: discovered at runtime via `earos.manifest.yaml`
- Evaluation schema: `standard/schemas/evaluation.schema.json`

---

## Workflow

### Step 1 — Load the evaluation record

Ask the user to provide the evaluation record (file path or pasted content).

- Read the file if a path is given.
- Identify: `rubric_id`, `profile_id`, `overlay_ids`, `status`, all criterion scores, gate results, dimension scores, and the narrative fields (`evidence`, `rationale`, `actions`).
- If the record is missing required fields, note which ones and proceed with what is available.

### Step 2 — Load the rubric files

Read `earos.manifest.yaml` to locate the rubric(s) referenced by the evaluation record.

For each referenced rubric (core + profile + overlays):
- Load the YAML file.
- Extract for every criterion: `id`, `question`, `gate`, `scoring_guide`, `examples.good`, `examples.bad`, `anti_patterns`, `remediation_hints`, `decision_tree`.

This is the authoritative source for what "good" looks like. Do not rely on training data for rubric content.

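The Step 2 extraction can be sketched as follows — a minimal sketch that assumes the rubric YAML has already been parsed into a Python dict (e.g. with `yaml.safe_load`); the `dimensions` → `criteria` nesting used here is an illustrative assumption, not the actual rubric schema.

```python
# Minimal sketch: keep only the remediation-relevant fields per criterion.
# Assumes the rubric YAML is already parsed into a dict; the
# dimensions -> criteria nesting is an illustrative assumption.

FIELDS = ("id", "question", "gate", "scoring_guide",
          "examples", "anti_patterns", "remediation_hints", "decision_tree")

def extract_criteria(rubric: dict) -> list[dict]:
    """Return one slim dict per criterion; absent fields come back as None."""
    slim = []
    for dimension in rubric.get("dimensions", []):
        for criterion in dimension.get("criteria", []):
            slim.append({field: criterion.get(field) for field in FIELDS})
    return slim

rubric = {"dimensions": [{"criteria": [{"id": "TRC-01", "gate": "major"}]}]}
print(extract_criteria(rubric)[0]["id"])  # TRC-01
```

Missing fields surface as `None`, which mirrors the "note which ones and proceed" rule from Step 1.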
### Step 3 — Classify and prioritize issues

Read `references/remediation-patterns.md` before this step.

Triage every criterion that scored below 3 (or triggered a gate failure):

**Tier 1 — Blockers** (fix these first; the evaluation cannot pass until they are resolved)
- Any criterion with a `critical` gate that failed (status = Reject)
- Any criterion with a `major` gate that scored < 2 (caps status at Conditional Pass)

**Tier 2 — High Impact** (most likely to change the overall status)
- Criteria scored 0 or 1 (Absent / Weak)
- Criteria in high-weight dimensions (weight ≥ 1.1)
- The criterion closest to tipping a dimension above the 2.0 floor threshold

**Tier 3 — Incremental** (polish; turn Conditional Pass into Pass)
- Criteria scored 2 (Partial) — especially those where the gap to 3 is narrow
- Dimension averages between 2.4 and 3.2 (one bump in a criterion could cross the Pass threshold)

**Tier 4 — N/A review** (challenge whether N/A was correctly applied)
- Any criterion marked N/A — check that the justification is sound

Within each tier, order by: gate severity descending → weight descending → score ascending.

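The within-tier ordering above can be expressed as a single sort key — a minimal sketch; the severity ranking and the criterion field names are illustrative assumptions:

```python
# Within-tier ordering: gate severity descending -> weight descending
# -> score ascending. Severity ranks and field names are assumptions.

SEVERITY_RANK = {"critical": 2, "major": 1, None: 0}

def triage_order(criteria: list[dict]) -> list[dict]:
    """Sort one tier's criteria into presentation order for the plan."""
    return sorted(
        criteria,
        key=lambda c: (-SEVERITY_RANK[c.get("gate")], -c["weight"], c["score"]),
    )

tier2 = [
    {"id": "STK-01", "gate": None, "weight": 1.0, "score": 1},
    {"id": "TRC-01", "gate": "major", "weight": 1.2, "score": 0},
    {"id": "SCP-01", "gate": None, "weight": 1.2, "score": 1},
]
print([c["id"] for c in triage_order(tier2)])  # ['TRC-01', 'SCP-01', 'STK-01']
```

Negating the first two key components gives the descending order while keeping score ascending, so one `sorted` call covers all three rules.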
### Step 4 — Generate remediation items

For each prioritized criterion, produce a structured remediation entry using the rubric's own content:

```
Criterion: <id> — <question>
Current score: <score>/4    Gate: <severity or none>
What score 3 looks like: <scoring_guide["3"]>
What score 4 looks like: <scoring_guide["4"]>
Good example: <examples.good[0]>
Anti-patterns to avoid: <anti_patterns>
Decision path: <decision_tree excerpt>
Specific fixes:
- <remediation_hints item 1>
- <remediation_hints item 2>
- [context-specific fix derived from the evaluation's rationale]
Effort estimate: <Low | Medium | High> (see references/remediation-patterns.md)
```

Derive effort from the gap and criterion complexity:

- Score 3→4: Low (usually evidence and cross-referencing)
- Score 2→3: Medium (content gaps to fill)
- Score 0–1→3: High (structural rework required)
- Gate failure on critical: High (non-negotiable)

### Step 5 — Compute projected impact

After fixing Tier 1 and Tier 2 items, estimate what the new status would be:

- Recalculate dimension averages assuming target scores (be conservative: assume each score reaches the minimum needed, not the maximum possible).
- Check gate conditions.
- State the projected status: Pass / Conditional Pass / Rework Required.

If the current status is Reject due to a critical gate, state clearly: fixing the gate failure is the prerequisite for any other improvement to matter.

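The projection in Step 5 can be sketched with the thresholds given in `references/output-template.md` (overall ≥ 3.2 for Pass, ≥ 2.4 for Conditional Pass, no dimension below the 2.0 floor, no critical gate failures). This is a minimal sketch under those stated rules; the function and parameter names are illustrative assumptions:

```python
# Status projection sketch. Thresholds per the output-template reference:
# pass: overall >= 3.2, all dimensions >= 2.0, no critical gate failure
# conditional_pass: overall >= 2.4, no critical gate failure

def projected_status(overall: float, dimension_avgs: list[float],
                     critical_gate_failed: bool) -> str:
    if critical_gate_failed:
        return "reject"                      # gates trump averages
    if overall >= 3.2 and min(dimension_avgs) >= 2.0:
        return "pass"
    if overall >= 2.4:
        return "conditional_pass"
    return "rework_required"

print(projected_status(3.3, [3.5, 3.1, 2.8], False))  # pass
print(projected_status(3.3, [3.5, 1.9, 2.8], False))  # conditional_pass
```

Note how the second call shows the dimension floor at work: a strong overall score still drops to Conditional Pass when one dimension averages below 2.0.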
### Step 6 — Output the remediation plan

Read `references/output-template.md` before producing output.

Structure:
1. Summary (current status, projected status after fixes, number of issues by tier)
2. Tier 1 — Blockers (gate failures)
3. Tier 2 — High Impact items
4. Tier 3 — Incremental improvements
5. Tier 4 — N/A review (if any)
6. Quick wins (Tier 3 items that are Low effort — do these first)
7. Next step: offer to run `earos-assess` on the revised artifact

---

## Operating Principles

- **Rubric-anchored.** Every fix recommendation must cite the rubric's own `scoring_guide`, `examples.good`, or `remediation_hints`. Do not invent criteria or guidance not in the rubric.
- **Triage ruthlessly.** A plan with 15 equal-priority items is useless. The output must be prioritized so the architect knows exactly where to start.
- **Feasibility-aware.** Flag whether a Tier 1 item requires organizational change (e.g., a governance decision) or only a document change.
- **Evidence-anchored.** Where the evaluation record includes evidence anchors, quote them back to the architect so they know exactly what text triggered the low score.
- **Status-aware.** If the artifact is already a Conditional Pass and the architect's goal is Pass, focus on Tier 3. If it is Reject, Tier 1 is the only thing that matters.

package/assets/init/.agents/skills/earos-remediate/references/output-template.md

@@ -0,0 +1,199 @@

# Output Template — EAROS Remediation Plan

This file defines the structure and formatting for the remediation plan output. Read this before Step 6 (output).

---

## Prioritization Logic

Before writing the plan, compute the following and display it in the Status Summary:

1. **Current weighted overall score** (from the evaluation record `overall_score`)
2. **Current status** (from the evaluation record `status`)
3. **Score needed for the next status tier:**
   - To reach `conditional_pass` from `rework_required`: overall ≥ 2.4 and no critical gate failures
   - To reach `pass` from `conditional_pass`: overall ≥ 3.2, no dimension < 2.0, and no critical gate failures
   - To reach `conditional_pass` from `reject`: first fix the critical gate failure, then meet the `conditional_pass` threshold
4. **Gap:** how many average points are needed across criteria — identify the specific criteria where improvement is most achievable given the rubric's level descriptors

Show this calculation explicitly in the status summary so the author understands exactly what they are working toward.

### How to identify status-tipping criteria

To find score-2 criteria that tip the status:

1. Take the current overall score.
2. For each score-2 criterion, compute what the overall score would be if that criterion became a 3:
   - Delta per criterion = (score improvement × criterion weight) / total weight
3. If the resulting score crosses a threshold (2.4 or 3.2), flag the criterion as "status-tipping".

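The status-tipping check above can be sketched directly from the delta formula — a minimal sketch in which the criterion field names are illustrative assumptions:

```python
# Status-tipping sketch: delta = (score improvement * weight) / total weight.
# A criterion "tips" when current overall + delta crosses 2.4 or 3.2.

THRESHOLDS = (2.4, 3.2)

def status_tipping(overall: float, criteria: list[dict]) -> list[str]:
    """Return IDs of score-2 criteria whose lift to 3 crosses a threshold."""
    total_weight = sum(c["weight"] for c in criteria)
    tipping = []
    for c in criteria:
        if c["score"] != 2:
            continue
        delta = (3 - 2) * c["weight"] / total_weight
        if any(overall < t <= overall + delta for t in THRESHOLDS):
            tipping.append(c["id"])
    return tipping

criteria = [
    {"id": "TRC-01", "score": 2, "weight": 1.2},
    {"id": "STK-01", "score": 2, "weight": 0.4},
    {"id": "SCP-01", "score": 3, "weight": 1.0},
]
print(status_tipping(2.2, criteria))  # ['TRC-01']
```

In the example, lifting TRC-01 adds 1.2 / 2.6 ≈ 0.46 points, carrying the overall score from 2.2 past the 2.4 threshold, while the lighter STK-01 (≈ 0.15 points) does not.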
This makes the remediation plan strategic, not just a list of things to fix.

---

## Full Remediation Plan Template

```markdown
# EAROS Remediation Plan

**Artifact:** [title from evaluation record]
**Artifact Type:** [type]
**Evaluation Record:** [evaluation_id]
**Evaluation Date:** [date]
**Current Status:** [status — use traffic light: 🔴 Reject / 🟠 Rework Required / 🟡 Conditional Pass]
**Current Score:** [overall_score] / 4.0

---

## What Needs to Change

[1–2 sentence summary of what is holding this artifact back, in plain language for the author.
Not criterion IDs — synthesize. E.g.: "The artifact scores well on scope and stakeholder
identification, but lacks decision rationale and compliance traceability, which are the
primary blockers for a Conditional Pass."]

**Score needed for [target status]:** [X.X] / 4.0 (current: [Y.Y])
**Gap:** [+Z.Z points] — achievable by [brief description of where the points come from]

---

## Priority 1 — Fix Gate Failures First

> ⚠️ These block passing regardless of your overall score. Fix these before anything else.

[If no gate failures:]
No gate failures identified. Proceed to Priority 2.

[For each gate failure:]

### [criterion_id]: [criterion question]
**Gate severity:** CRITICAL / MAJOR
**Effect:** [what this gate failure causes — e.g., "Status = Reject regardless of overall score"]
**Current score:** [0–1] — [evaluator's rationale from the evaluation record]
**Evidence gap:** [what the evaluator found missing]

**What a passing score (≥ 2) looks like:**
> [Quote or paraphrase the scoring_guide "2" level descriptor from the rubric YAML]

**Good examples (from the rubric):**
- [examples.good item 1 from the rubric YAML]
- [examples.good item 2]

**What to avoid:**
- [anti_patterns item 1 from the rubric YAML]

**Action:** [Specific, verb-first instruction. Reference the artifact section. Specify format.]

**Effort:** [quick fix / moderate / significant rework]

---

## Priority 2 — Lift Low Scores (0–1)

> These are the biggest drag on your overall score. Address in order of dimension weight.

[For each score-0 or score-1 criterion, ordered by dimension weight descending:]

### [criterion_id]: [criterion question]
**Dimension:** [dimension name] (weight: [X.X])
**Current score:** [0 or 1] — [evaluator's rationale]
**Evidence gap:** [missing information from evaluation record]

**What score 3 looks like:**
> [Quote the scoring_guide "3" level descriptor from the rubric YAML]

**Good examples (from the rubric):**
- [examples.good from rubric YAML]

**What to avoid:**
- [anti_patterns from rubric YAML]

**Action:** [Specific instruction]

**Effort:** [quick fix / moderate / significant rework]

---

## Priority 3 — Status-Tipping Improvements

> Improving these score-2 criteria would push your overall score across the [target status] threshold.

[For each status-tipping score-2 criterion:]

### [criterion_id]: [criterion question]
**Current score:** 2
**Impact:** Improving to 3 adds [+Z.Z] to the overall score → new overall = [X.X] ([status change if applicable])
**Evidence gap:** [what is present vs. what is missing]

**What score 3 looks like:**
> [scoring_guide "3" descriptor from rubric YAML]

**Good examples (from the rubric):**
- [examples.good from rubric YAML]

**Action:** [Specific instruction]

**Effort:** [quick fix / moderate / significant rework]

---

## Priority 4 — Incremental Improvements

> These score-2 criteria improve the overall score but do not change the status tier on their own.
> Address after Priority 1–3 if you are targeting a higher score within the same status tier.

[Collapsed summary — offer to expand if the user wants detail:]

| Criterion | Current | Target | Estimated Effort |
|-----------|---------|--------|------------------|
| [ID] | 2 | 3 | [effort] |
| [ID] | 2 | 3 | [effort] |

[Say: "Ask me to expand any of these and I'll provide the same detail as the sections above."]

---

## Effort Summary

| Priority | Actions | Estimated Effort |
|----------|---------|------------------|
| P1 — Gate failures | [N] | [effort range] |
| P2 — Low scores (0–1) | [N] | [effort range] |
| P3 — Status-tipping (score 2) | [N] | [effort range] |
| P4 — Incremental (score 2) | [N] | [effort range] |
| **Total to reach [target status]** | **[P1+P2+P3]** | **[range]** |

---

## Next Step

Once these changes are made, re-run `earos-assess` on the updated artifact to verify the score improvement.

For help writing specific sections, use the `earos-template-fill` skill.
For help creating a new artifact from scratch, use the `earos-artifact-gen` skill.

*Remediation plan generated from evaluation record [evaluation_id] using EAROS [rubric_ids].*
```

---

## Formatting Rules

1. **Gate failures always appear first** — even if they seem minor. Status is determined by gates before averages.
2. **Show the math** — display the score-delta calculation for status-tipping criteria. Authors need to see that fixing criterion X adds 0.3 to their overall score.
3. **Verb-first actions** — every action starts with a verb: Add, Create, Replace, Extend, Document, Remove, Restructure. Not "You should consider adding".
4. **Reference sections** — every action says where in the artifact to make the change. If the section doesn't exist yet, say "Add a new section titled X after Section Y".
5. **Source from the rubric, not from general knowledge** — the `scoring_guide`, `examples.good`, `anti_patterns`, and `remediation_hints` fields in the rubric YAML are the authoritative source for what good looks like. Quote them.
6. **Collapse Priority 4** — offer to expand rather than dumping all score-2 criteria at once. The most common mistake is overwhelming the author with 10 actions when they should focus on 3.
7. **Traffic lights for status** — use 🔴 Reject, 🟠 Rework Required, 🟡 Conditional Pass, 🟢 Pass in the header.

---

## Tone Guidelines

Write for the **artifact author**, not the evaluator. The evaluation record speaks the evaluator's language (criterion IDs, evidence classes, scores). The remediation plan speaks the author's language (sections, content, formats).

- Evaluate → Remediate: "TRC-01 scored 1 due to absent driver-component mapping" → "Add a table in Section 3 that maps each business driver to the components that realize it"
- Frame actions as achievable, not as indictments: "Add X", not "You failed to include X"
- Be direct about gate failures — they genuinely block passing, and the author needs to understand that urgency

The plan should feel like advice from a senior peer who has read the evaluation, read the rubric, and knows exactly what needs to change. Not a list of scores, and not generic architectural advice.

|
@@ -0,0 +1,330 @@
|
|
|
1
|
+
# Remediation Patterns — EAROS Remediation Skill
|
|
2
|
+
|
|
3
|
+
This reference contains common improvement patterns per dimension, effort estimation guidance, and before/after examples. Read this before Step 3 (triage) and when generating actions for a specific dimension.
|
|
4
|
+
|
|
5
|
+
---
|
|
6
|
+
|
|
7
|
+
## How to Use This File
|
|
8
|
+
|
|
9
|
+
For each priority criterion, find the matching dimension below. The patterns describe the most common root causes and fixes. They are templates — always tailor the action to reference the specific artifact sections and content identified in the evaluation record.
|
|
10
|
+
|
|
11
|
+
---
|
|
12
|
+
|
|
13
|
+
## Effort Estimation Guidelines
|
|
14
|
+
|
|
15
|
+
Use these when assigning effort to each action:
|
|
16
|
+
|
|
17
|
+
| Label | Meaning | Typical work |
|
|
18
|
+
|-------|---------|-------------|
|
|
19
|
+
| `quick fix` | < 1 hour | Add a missing table, label an existing diagram, add a sentence of justification, fill a missing metadata field |
|
|
20
|
+
| `moderate` | Half day (3–5 hours) | Add a new section, create a stakeholder/traceability matrix, document quality attribute scenarios, write a decision rationale |
|
|
21
|
+
| `significant rework` | 1+ days | Redesign a component boundary, produce a missing architectural view, restructure the scope definition, develop a full DR plan |
|
|
22
|
+
|
|
23
|
+
**Key rule:** If the rubric's `scoring_guide` says "score 2 requires X and Y but you're missing Y", and Y is a table or a paragraph, that is typically a `quick fix` or `moderate`. If Y is a diagram or a new section that doesn't exist yet, that is `moderate` to `significant rework`.
|
|
24
|
+
|
|
25
|
+
---
|
|
26
|
+
|
|
27
|
+
## Dimension Patterns
|
|
28
|
+
|
|
29
|
+
### D1 — Stakeholder Identification and Fit (STK-01)
|
|
30
|
+
|
|
31
|
+
**Common root causes of low scores:**
|
|
32
|
+
- Stakeholders listed by role title only, with no concerns or information needs documented
|
|
33
|
+
- Missing key stakeholder groups (operations, security, compliance) that the architecture impacts
|
|
34
|
+
- Stakeholder table exists but is disconnected from the architecture decisions
|
|
35
|
+
|
|
36
|
+
**Pattern — Missing stakeholder table:**
|
|
37
|
+
```
|
|
38
|
+
Add a stakeholder table with columns: Role | Primary Concerns | Key Questions |
|
|
39
|
+
How This Architecture Addresses Them. Include at minimum: sponsoring business unit,
|
|
40
|
+
delivery/engineering team, operations team, and security/compliance representatives.
|
|
41
|
+
```
|
|
42
|
+
|
|
43
|
+
**Pattern — Stakeholders listed without concerns:**
|
|
44
|
+
```
|
|
45
|
+
Extend the existing stakeholder list to include a "concerns" column for each
|
|
46
|
+
stakeholder. For each concern, add a cross-reference to the section of the document
|
|
47
|
+
that addresses it. This transforms a nominal list into a decision-relevant index.
|
|
48
|
+
```
|
|
49
|
+
|
|
50
|
+
**Before (score 1):** "Stakeholders: Architecture Board, Development Team, Operations"
|
|
51
|
+
|
|
52
|
+
**After (score 3):** Stakeholder table with role, primary concern, secondary concern, and a reference to the section that addresses each concern.
|
|
53
|
+
|
|
54
|
+
**Effort:** `quick fix` if stakeholders are identified but concerns are missing; `moderate` if stakeholders themselves are incomplete.
|
|
55
|
+
|
|
56
|
+
---
|
|
57
|
+
|
|
58
|
+
### D2 — Scope and Boundaries (SCP-01)
|
|
59
|
+
|
|
60
|
+
**Common root causes of low scores:**
|
|
61
|
+
- Scope described in terms of business goals only, with no explicit technical boundary
|
|
62
|
+
- Missing in-scope / out-of-scope list
|
|
63
|
+
- System context diagram absent or not linked to the scope statement
|
|
64
|
+
|
|
65
|
+
**Pattern — Absent scope boundary:**
|
|
66
|
+
```
|
|
67
|
+
Add a "Scope and Boundaries" section before Section 1 (or in the executive summary).
|
|
68
|
+
Structure it as:
|
|
69
|
+
IN SCOPE: [list of systems, functions, data domains included]
|
|
70
|
+
OUT OF SCOPE: [explicit exclusions with a one-line rationale for each]
|
|
71
|
+
DEFERRED: [items acknowledged but explicitly not addressed in this version]
|
|
72
|
+
This prevents scope creep during review and gives the architecture a clear review target.
|
|
73
|
+
```
|
|
74
|
+
|
|
75
|
+
**Pattern — Scope stated but no system context diagram:**
|
|
76
|
+
```
|
|
77
|
+
Add a C4 Level 1 context diagram (or equivalent) showing the system under design as
|
|
78
|
+
a box, with all external actors and systems that interact with it. Label each
|
|
79
|
+
relationship. This makes the scope statement visual and reviewable.
|
|
80
|
+
```
|
|
81
|
+
|
|
82
|
+
**Before (score 1):** "This architecture covers the payments platform."
|
|
83
|
+
|
|
84
|
+
**After (score 3):** Explicit in-scope/out-of-scope list plus a context diagram showing the payments platform boundary and all integrating systems.
|
|
85
|
+
|
|
86
|
+
**Effort:** `quick fix` for in/out-of-scope list; `moderate` for adding context diagram.
|
|
87
|
+
|
|
88
|
+
---
|
|
89
|
+
|
|
90
|
+
### D3 — Traceability (TRC-01)

**Common root causes of low scores:**
- Architecture decisions not linked to the business drivers that motivated them
- Quality attribute requirements present but no mapping to how the architecture satisfies them
- Multiple layers (capability → component → deployment) without cross-references

**Pattern — Missing driver-to-component traceability:**
```
Add a traceability table in [Section X] with columns:
Business Driver / Requirement | Architectural Decision | Realizing Component(s) | Evidence
For each row, link the driver (from Section 1) to the decision that responds to it
(either inline or in an ADR reference) and the component(s) that implement it.
```

**Pattern — Quality attributes defined but not satisfied:**
```
For each quality attribute scenario in [Section Y], add a "Satisfied by" row that
names the specific architectural mechanism (e.g., "Availability 99.9%: satisfied
by active-active deployment in regions A and B, described in Section 5.3").
```

**Before (score 1):** Quality attribute targets listed in a table with no linkage to architecture decisions.

**After (score 3):** Each quality attribute scenario has a mechanism reference, and the architecture section explicitly states how it satisfies it.

**Effort:** `moderate` (building a traceability matrix from scratch); `quick fix` if cross-references just need to be added to existing content.

---

### D4 — Internal Consistency (CON-01)

**Common root causes of low scores:**
- Component names differ between diagrams and prose sections
- Data flows in the sequence diagram contradict the component diagram interfaces
- Scope statement includes elements not shown in any diagram

**Pattern — Name inconsistency:**
```
Create a glossary or component index (can be a table) that lists every named
component exactly once with its canonical name. Replace all alternate names or
abbreviations in diagrams and prose with the canonical name. Treat this as the
document's namespace.
```
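
The canonical-name audit above is mechanical enough to automate. A minimal sketch, assuming the artifact is plain text/markdown and that a maintained alias index exists — all component names and aliases below are hypothetical examples, not part of any real glossary:

```python
import re

# Hypothetical canonical-name index: canonical name -> known aliases.
CANONICAL = {
    "Payment Gateway": ["payment-gw", "PG service", "PayGateway"],
    "Order Service": ["orders svc", "OrderSvc"],
}

def find_alias_violations(text: str) -> list[tuple[str, str]]:
    """Return (alias, canonical) pairs for every non-canonical name found."""
    violations = []
    for canonical, aliases in CANONICAL.items():
        for alias in aliases:
            if re.search(re.escape(alias), text, flags=re.IGNORECASE):
                violations.append((alias, canonical))
    return violations

doc = "The payment-gw forwards events to the OrderSvc for fulfilment."
for alias, canonical in find_alias_violations(doc):
    print(f"replace '{alias}' with canonical '{canonical}'")
```

Running a check like this over every prose section and diagram source file turns the "document's namespace" idea into a repeatable lint step rather than a one-off cleanup.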

**Pattern — Diagram-prose contradiction:**
```
Audit Section [X] against Diagram [Y]: for each component in the diagram, verify
that the prose description uses the same name and describes the same interfaces.
Note and resolve every discrepancy found. Common issue: the diagram was updated
after the prose was written.
```

**Effort:** `quick fix` for renaming; `moderate` if diagrams need to be redrawn.

---

### D5 — Rationale and Decision Quality (RAT-01)

**Common root causes of low scores:**
- Architecture decisions stated without alternatives considered
- Single option presented as though no choice existed
- Tradeoffs acknowledged but not quantified or substantiated

**Pattern — Missing alternatives:**
```
For each major architectural decision in [Section X], add an "Alternatives Considered"
subsection with:
Option A: [name] — [one-line description] — Rejected because [reason]
Option B: [name] — [one-line description] — Rejected because [reason]
Selected: [name] — [rationale tied to the driving constraints]
This documents that a real choice was made, not just a default selection.
```

**Pattern — Tradeoffs stated but not evidenced:**
```
Replace vague tradeoff statements ("this approach offers better scalability") with
quantified or observable claims ("this approach scales horizontally without schema
migration, which satisfies the 10× traffic growth requirement from Business Driver 3 —
validated in load test from [date/link]").
```

**Before (score 1):** "We chose Kafka for messaging."

**After (score 3):** Decision section with the driving requirement, two alternatives with rejection rationale, and the selected option linked to a specific non-functional requirement.

**Effort:** `moderate` per decision. If 3+ decisions need this treatment: `significant rework`.

---

### D6 — Compliance and Governance Fit (CMP-01)

**Common root causes of low scores:**
- No reference to mandatory standards, policies, or control frameworks
- Compliance section present but not linked to specific architecture decisions
- Security or data controls described generically without mapping to requirements

**Pattern — Missing compliance references:**
```
Add a "Standards and Compliance" section listing:
- Applicable standards: [e.g., ISO 27001, PCI DSS, internal policy X]
- For each standard: which sections of this architecture address it
- Controls implemented: [list with mechanism and location in the design]
- Controls deferred: [list with owner and target date]
```

**Pattern — Compliance section exists but not connected:**
```
For each compliance requirement in [Section X], add a "Satisfied by" reference
pointing to the specific component or mechanism in the architecture that implements
it. A compliance section that lists requirements without showing how the architecture
satisfies them scores 1, not 2.
```

**Effort:** `moderate` if compliance requirements are known but not documented; `significant rework` if the architecture has unaddressed control gaps.

---

### D7 — Maintainability and Evolution (MNT-01)

**Common root causes of low scores:**
- No version history or changelog
- Owner not named
- No documented process for updating the artifact

**Pattern — Missing metadata:**
```
Add a document control table at the top of the artifact:
Title | Version | Status | Owner | Last Updated | Review Date
Change Log:
[version] | [date] | [author] | [change summary]
This is a quick fix that immediately improves MNT-01.
```
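
Because the document control table is a fixed set of fields, its presence can be checked automatically. A sketch under the assumption that the header is written as `Key: value` lines (the field names come from the template above; the parsing convention and example values are assumptions):

```python
# Fields the document control table above calls for.
REQUIRED = ["Title", "Version", "Status", "Owner", "Last Updated", "Review Date"]

def missing_metadata(header: str) -> list[str]:
    """Return required document-control fields absent from a 'Key: value' header."""
    present = {line.split(":", 1)[0].strip()
               for line in header.splitlines() if ":" in line}
    return [field for field in REQUIRED if field not in present]

header = "Title: Payments Platform Architecture\nVersion: 1.2\nOwner: Jane Doe"
print(missing_metadata(header))  # the fields this artifact still needs
```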

**Effort:** `quick fix` — this is almost always a metadata gap, not a content gap.

---

### D8 — Actionability (ACT-01)

**Common root causes of low scores:**
- No next steps or implementation guidance
- Recommendations listed without owners or timelines
- Decision-ready content buried in appendices without a clear summary

**Pattern — Missing next steps:**
```
Add a "Next Steps and Implementation Guidance" section at the end of the document:
- Immediate actions (within the current sprint / next 2 weeks)
- Decisions outstanding (with owner and deadline)
- Assumptions that need validation before implementation begins
- Dependencies on external teams or systems
```

**Effort:** `quick fix` to `moderate`, depending on whether implementation guidance already exists in fragments.

---

### D9 — Clarity and Communication (CLR-01)

**Common root causes of low scores:**
- Dense jargon without a glossary
- Diagrams present but not explained in prose
- No executive summary for non-technical stakeholders

**Pattern — Missing executive summary:**
```
Add a 1-page executive summary at the beginning that covers:
- What problem this architecture solves (1–2 sentences)
- The key design decisions and their rationale (3–5 bullets)
- What this means for delivery (timeline, risks, dependencies)
This makes the document accessible to stakeholders who will not read the full detail.
```

**Pattern — Diagrams without prose explanation:**
```
For each diagram in the document, add a numbered walkthrough immediately below it
(e.g., "① The API gateway receives the request → ② Routes to the appropriate
microservice → ③ ..."). This forces the author to validate the diagram and
dramatically improves reviewer comprehension.
```

**Effort:** `quick fix` for glossary and diagram walkthroughs; `moderate` for executive summary.

---

## Cross-Cutting Patterns

### Adding a traceability matrix (general)

A traceability matrix is the single highest-value addition to most architecture artifacts. The minimum viable version:

| Source | Criterion Addressed | Document Section | Evidence Type |
|--------|-------------------|-----------------|---------------|
| Business Driver 1 | TRC-01, RAT-01 | Section 2.1 | Observed |
| NFR: 99.9% availability | ACT-01, RA-QA-01 | Section 4.3 | Observed |
| Compliance: PCI DSS Req 6 | CMP-01 | Section 6.2 | Observed |
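
The value of the matrix is that gaps become machine-checkable. A minimal sketch of the coverage check it enables — the rows mirror the example table, but the third row is deliberately given no section to show what a gap looks like (all identifiers are illustrative):

```python
# Each row links a source (driver, NFR, compliance requirement) to the
# document section that realizes it. A row with no section is a gap.
MATRIX = [
    {"source": "Business Driver 1", "criteria": ["TRC-01", "RAT-01"], "section": "Section 2.1"},
    {"source": "NFR: 99.9% availability", "criteria": ["ACT-01", "RA-QA-01"], "section": "Section 4.3"},
    {"source": "Compliance: PCI DSS Req 6", "criteria": ["CMP-01"], "section": None},  # untraced
]

def untraced_sources(matrix: list[dict]) -> list[str]:
    """Sources with no realizing document section are traceability gaps."""
    return [row["source"] for row in matrix if not row["section"]]

print(untraced_sources(MATRIX))
```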

### Adding quality attribute scenarios (QAS)

Quality attribute scenarios should follow this format (from ISO 25010 / ATAM):

```
Scenario ID: QAS-001
Quality Attribute: Availability
Stimulus: Hardware failure in primary region
Response: Automatic failover to secondary region
Measure: Recovery time < 5 minutes, zero data loss
Architectural Mechanism: Active-active multi-region deployment (Section 5.2)
Validation: Load test results from [date]
```
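
One way to keep scenarios complete is to capture them as structured records. A sketch, with field names following the template above; the completeness rule (every field must be non-empty) is an assumption, and the example leaves `validation` blank to show how a gap surfaces:

```python
from dataclasses import dataclass, fields

@dataclass
class QualityAttributeScenario:
    scenario_id: str
    quality_attribute: str
    stimulus: str
    response: str
    measure: str
    mechanism: str   # the architectural mechanism that satisfies the scenario
    validation: str  # evidence, e.g. a load-test reference

    def missing_fields(self) -> list[str]:
        """A QAS with empty fields (no measure, no mechanism) is not reviewable."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

qas = QualityAttributeScenario(
    scenario_id="QAS-001",
    quality_attribute="Availability",
    stimulus="Hardware failure in primary region",
    response="Automatic failover to secondary region",
    measure="Recovery time < 5 minutes, zero data loss",
    mechanism="Active-active multi-region deployment (Section 5.2)",
    validation="",  # gap: no evidence attached yet
)
print(qas.missing_fields())
```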

### Adding an ADR reference

When the evaluator flags missing decision rationale, the fastest fix is to reference or attach an ADR. If ADRs don't exist, create a simplified inline version:

```
Decision: [Name]
Date: [YYYY-MM-DD]
Status: Accepted
Context: [Why a decision was needed]
Options: [Option A] vs [Option B] vs [Selected]
Decision: [Selected option]
Rationale: [Why — linked to a specific driver or constraint]
Consequences: [Trade-offs accepted]
```
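
If several decisions need this treatment, rendering each inline ADR from the same structured fields keeps them uniform. A sketch; the field ordering follows the template above, and all example content (broker names, driver numbers) is hypothetical:

```python
# Render a simplified inline ADR from structured fields so every decision
# carries the same sections in the same order.
def render_adr(decision: dict) -> str:
    order = ["Decision", "Date", "Status", "Context", "Options",
             "Rationale", "Consequences"]
    return "\n".join(f"{key}: {decision[key]}" for key in order)

adr = {
    "Decision": "Message broker selection",
    "Date": "2024-01-15",
    "Status": "Accepted",
    "Context": "Order events must reach three consumers with replay support",
    "Options": "RabbitMQ vs Amazon SQS vs Kafka (selected)",
    "Rationale": "Replay support plus the throughput growth requirement",
    "Consequences": "Operational overhead of running a Kafka cluster",
}
print(render_adr(adr))
```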

---

## Effort by Status Transition

Use this to set expectations with the author before they start:

| Current Status | Target Status | Typical Work |
|---------------|--------------|-------------|
| Rework Required | Conditional Pass | Fix gate failures + lift 2–3 criteria from 1→2. Usually 1–2 days. |
| Rework Required | Pass | Fix all gates + significant improvements across multiple criteria. Usually 3–5 days. |
| Conditional Pass | Pass | Targeted improvements on 2–3 criteria at score 2. Usually half a day to 1 day. |
| Reject (critical gate) | Conditional Pass | Must fix the critical gate failure. Can be a quick fix or significant rework depending on the criterion. |

The single most impactful action is almost always fixing a gate failure or lifting a score-0 criterion to score 2. Do not invest effort improving a score-3 criterion to score 4 while score-0 criteria remain unaddressed.