bmad-plus 0.4.4 → 0.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (133)
  1. package/CHANGELOG.md +31 -0
  2. package/README.md +3 -3
  3. package/package.json +1 -1
  4. package/readme-international/README.de.md +2 -2
  5. package/readme-international/README.es.md +2 -2
  6. package/readme-international/README.fr.md +2 -2
  7. package/src/bmad-plus/module.yaml +43 -12
  8. package/src/bmad-plus/packs/pack-shield/README.md +110 -0
  9. package/src/bmad-plus/packs/pack-shield/categories/accessibility-esg/csrd-agent.md +262 -0
  10. package/src/bmad-plus/packs/pack-shield/categories/accessibility-esg/section508-agent.md +179 -0
  11. package/src/bmad-plus/packs/pack-shield/categories/accessibility-esg/wcag-agent.md +201 -0
  12. package/src/bmad-plus/packs/pack-shield/categories/ai-governance/eu-ai-act-agent.md +97 -0
  13. package/src/bmad-plus/packs/pack-shield/categories/ai-governance/iso42001-agent.md +251 -0
  14. package/src/bmad-plus/packs/pack-shield/categories/ai-governance/nist-ai-rmf-agent.md +133 -0
  15. package/src/bmad-plus/packs/pack-shield/categories/cybersecurity/cis-controls-agent.md +221 -0
  16. package/src/bmad-plus/packs/pack-shield/categories/cybersecurity/ism-agent.md +150 -0
  17. package/src/bmad-plus/packs/pack-shield/categories/cybersecurity/iso27001-agent.md +167 -0
  18. package/src/bmad-plus/packs/pack-shield/categories/cybersecurity/nis2-agent.md +83 -0
  19. package/src/bmad-plus/packs/pack-shield/categories/cybersecurity/nist-800-53-agent.md +250 -0
  20. package/src/bmad-plus/packs/pack-shield/categories/cybersecurity/nist-csf-agent.md +218 -0
  21. package/src/bmad-plus/packs/pack-shield/categories/data-privacy/ccpa-agent.md +94 -0
  22. package/src/bmad-plus/packs/pack-shield/categories/data-privacy/dpdpa-agent.md +136 -0
  23. package/src/bmad-plus/packs/pack-shield/categories/data-privacy/gdpr-agent.md +296 -0
  24. package/src/bmad-plus/packs/pack-shield/categories/data-privacy/iso27701-agent.md +134 -0
  25. package/src/bmad-plus/packs/pack-shield/categories/data-privacy/lgpd-agent.md +129 -0
  26. package/src/bmad-plus/packs/pack-shield/categories/defense-export/cmmc-agent.md +127 -0
  27. package/src/bmad-plus/packs/pack-shield/categories/defense-export/ear-agent.md +272 -0
  28. package/src/bmad-plus/packs/pack-shield/categories/defense-export/itar-agent.md +202 -0
  29. package/src/bmad-plus/packs/pack-shield/categories/defense-export/tsa-agent.md +367 -0
  30. package/src/bmad-plus/packs/pack-shield/categories/industry-compliance/dora-agent.md +510 -0
  31. package/src/bmad-plus/packs/pack-shield/categories/industry-compliance/fedramp-agent.md +247 -0
  32. package/src/bmad-plus/packs/pack-shield/categories/industry-compliance/hipaa-agent.md +173 -0
  33. package/src/bmad-plus/packs/pack-shield/categories/industry-compliance/pci-dss-agent.md +239 -0
  34. package/src/bmad-plus/packs/pack-shield/categories/industry-compliance/soc2-agent.md +266 -0
  35. package/src/bmad-plus/packs/pack-shield/categories/industry-compliance/swift-csp-agent.md +164 -0
  36. package/src/bmad-plus/packs/pack-shield/categories/workflows/ai-act-classifier.md +131 -0
  37. package/src/bmad-plus/packs/pack-shield/categories/workflows/ai-act-fria.md +155 -0
  38. package/src/bmad-plus/packs/pack-shield/categories/workflows/ai-act-incidents.md +187 -0
  39. package/src/bmad-plus/packs/pack-shield/categories/workflows/ai-act-roles.md +113 -0
  40. package/src/bmad-plus/packs/pack-shield/categories/workflows/breach-sentinel.md +197 -0
  41. package/src/bmad-plus/packs/pack-shield/categories/workflows/cookie-policy-gen.md +180 -0
  42. package/src/bmad-plus/packs/pack-shield/categories/workflows/dpia-sentinel.md +235 -0
  43. package/src/bmad-plus/packs/pack-shield/categories/workflows/legitimate-interest.md +159 -0
  44. package/src/bmad-plus/packs/pack-shield/categories/workflows/privacy-advisor.md +133 -0
  45. package/src/bmad-plus/packs/pack-shield/categories/workflows/privacy-notice-gen.md +160 -0
  46. package/src/bmad-plus/packs/pack-shield/categories/workflows/privacy-policy-gen.md +135 -0
  47. package/src/bmad-plus/packs/pack-shield/references/ccpa/ccpa-gdpr-comparison.md +117 -0
  48. package/src/bmad-plus/packs/pack-shield/references/ccpa/consumer-rights-workflows.md +177 -0
  49. package/src/bmad-plus/packs/pack-shield/references/cis-controls/framework-mappings.md +162 -0
  50. package/src/bmad-plus/packs/pack-shield/references/cis-controls/implementation-guidance.md +235 -0
  51. package/src/bmad-plus/packs/pack-shield/references/cis-controls/safeguards-detail.md +252 -0
  52. package/src/bmad-plus/packs/pack-shield/references/cmmc/cmmc-assessment.md +170 -0
  53. package/src/bmad-plus/packs/pack-shield/references/cmmc/cmmc-levels.md +113 -0
  54. package/src/bmad-plus/packs/pack-shield/references/cmmc/cmmc-practices.md +211 -0
  55. package/src/bmad-plus/packs/pack-shield/references/csrd/compliance-program.md +281 -0
  56. package/src/bmad-plus/packs/pack-shield/references/csrd/double-materiality.md +253 -0
  57. package/src/bmad-plus/packs/pack-shield/references/csrd/esrs-standards.md +401 -0
  58. package/src/bmad-plus/packs/pack-shield/references/dora/article-reference.md +441 -0
  59. package/src/bmad-plus/packs/pack-shield/references/dora/incident-classification.md +297 -0
  60. package/src/bmad-plus/packs/pack-shield/references/dora/rts-its-guide.md +306 -0
  61. package/src/bmad-plus/packs/pack-shield/references/dora/third-party-risk.md +349 -0
  62. package/src/bmad-plus/packs/pack-shield/references/dpdpa/gdpr-comparison.md +173 -0
  63. package/src/bmad-plus/packs/pack-shield/references/dpdpa/rights-and-obligations.md +426 -0
  64. package/src/bmad-plus/packs/pack-shield/references/dpdpa/rules-2025.md +599 -0
  65. package/src/bmad-plus/packs/pack-shield/references/dpdpa/sections-reference.md +319 -0
  66. package/src/bmad-plus/packs/pack-shield/references/ear/ccl-eccn-guide.md +250 -0
  67. package/src/bmad-plus/packs/pack-shield/references/ear/compliance-program.md +280 -0
  68. package/src/bmad-plus/packs/pack-shield/references/ear/license-exceptions.md +207 -0
  69. package/src/bmad-plus/packs/pack-shield/references/eu-ai-act/gpai-governance.md +267 -0
  70. package/src/bmad-plus/packs/pack-shield/references/eu-ai-act/obligations-high-risk.md +287 -0
  71. package/src/bmad-plus/packs/pack-shield/references/eu-ai-act/risk-classification.md +182 -0
  72. package/src/bmad-plus/packs/pack-shield/references/fedramp/appendices-guide.md +209 -0
  73. package/src/bmad-plus/packs/pack-shield/references/fedramp/control-families.md +281 -0
  74. package/src/bmad-plus/packs/pack-shield/references/fedramp/poam-guide.md +93 -0
  75. package/src/bmad-plus/packs/pack-shield/references/fedramp/readiness-checklist.md +134 -0
  76. package/src/bmad-plus/packs/pack-shield/references/fedramp/sap-sar-guide.md +86 -0
  77. package/src/bmad-plus/packs/pack-shield/references/fedramp/ssp-guide.md +129 -0
  78. package/src/bmad-plus/packs/pack-shield/references/gdpr-compliance/documents.md +192 -0
  79. package/src/bmad-plus/packs/pack-shield/references/gdpr-compliance/dpa-template.md +121 -0
  80. package/src/bmad-plus/packs/pack-shield/references/gdpr-compliance/privacy-notice.md +87 -0
  81. package/src/bmad-plus/packs/pack-shield/references/hipaa-compliance/breach-notification.md +293 -0
  82. package/src/bmad-plus/packs/pack-shield/references/hipaa-compliance/privacy-rule.md +276 -0
  83. package/src/bmad-plus/packs/pack-shield/references/hipaa-compliance/security-rule.md +299 -0
  84. package/src/bmad-plus/packs/pack-shield/references/hipaa-compliance/templates.md +568 -0
  85. package/src/bmad-plus/packs/pack-shield/references/ism/control-applicability.md +181 -0
  86. package/src/bmad-plus/packs/pack-shield/references/ism/guidelines-overview.md +183 -0
  87. package/src/bmad-plus/packs/pack-shield/references/iso27001/annex-a-2013.md +203 -0
  88. package/src/bmad-plus/packs/pack-shield/references/iso27001/annex-a-2022.md +132 -0
  89. package/src/bmad-plus/packs/pack-shield/references/iso27001/control-mapping.md +153 -0
  90. package/src/bmad-plus/packs/pack-shield/references/iso27701/annex-a-controls.md +195 -0
  91. package/src/bmad-plus/packs/pack-shield/references/iso27701/regulatory-mapping.md +229 -0
  92. package/src/bmad-plus/packs/pack-shield/references/iso27701/transition-guide.md +219 -0
  93. package/src/bmad-plus/packs/pack-shield/references/iso42001/iso42001-ai-risk-assessment.md +258 -0
  94. package/src/bmad-plus/packs/pack-shield/references/iso42001/iso42001-clauses-requirements.md +279 -0
  95. package/src/bmad-plus/packs/pack-shield/references/iso42001/iso42001-controls-annex-a.md +155 -0
  96. package/src/bmad-plus/packs/pack-shield/references/itar/compliance-program.md +174 -0
  97. package/src/bmad-plus/packs/pack-shield/references/itar/licensing-guide.md +146 -0
  98. package/src/bmad-plus/packs/pack-shield/references/itar/usml-categories.md +93 -0
  99. package/src/bmad-plus/packs/pack-shield/references/lgpd/anpd-enforcement.md +147 -0
  100. package/src/bmad-plus/packs/pack-shield/references/lgpd/compliance-program.md +272 -0
  101. package/src/bmad-plus/packs/pack-shield/references/lgpd/lgpd-articles.md +271 -0
  102. package/src/bmad-plus/packs/pack-shield/references/nis2/article-21-measures.md +153 -0
  103. package/src/bmad-plus/packs/pack-shield/references/nis2/iso27001-nis2-mapping.md +68 -0
  104. package/src/bmad-plus/packs/pack-shield/references/nist-800-53/assessment-rmf.md +349 -0
  105. package/src/bmad-plus/packs/pack-shield/references/nist-800-53/baselines-tailoring.md +277 -0
  106. package/src/bmad-plus/packs/pack-shield/references/nist-800-53/control-families.md +450 -0
  107. package/src/bmad-plus/packs/pack-shield/references/nist-ai-rmf/rmf-core.md +361 -0
  108. package/src/bmad-plus/packs/pack-shield/references/nist-ai-rmf/rmf-profiles.md +192 -0
  109. package/src/bmad-plus/packs/pack-shield/references/nist-csf/csf-10-to-20-mapping.md +143 -0
  110. package/src/bmad-plus/packs/pack-shield/references/nist-csf/csf-20-functions-categories.md +278 -0
  111. package/src/bmad-plus/packs/pack-shield/references/nist-csf/csf-implementation-tiers.md +135 -0
  112. package/src/bmad-plus/packs/pack-shield/references/pci-compliance/pci-dss-requirements.md +366 -0
  113. package/src/bmad-plus/packs/pack-shield/references/pci-compliance/pci-dss-saq-guide.md +217 -0
  114. package/src/bmad-plus/packs/pack-shield/references/pci-compliance/pci-dss-v4-changes.md +190 -0
  115. package/src/bmad-plus/packs/pack-shield/references/section-508/wcag-mapping.md +160 -0
  116. package/src/bmad-plus/packs/pack-shield/references/soc2/controls.md +241 -0
  117. package/src/bmad-plus/packs/pack-shield/references/soc2/evidence.md +236 -0
  118. package/src/bmad-plus/packs/pack-shield/references/soc2/policies.md +254 -0
  119. package/src/bmad-plus/packs/pack-shield/references/soc2/vendor.md +276 -0
  120. package/src/bmad-plus/packs/pack-shield/references/swift-csp/swift-assessment.md +202 -0
  121. package/src/bmad-plus/packs/pack-shield/references/swift-csp/swift-controls.md +545 -0
  122. package/src/bmad-plus/packs/pack-shield/references/tsa-compliance/tsa-crmp-requirements.md +359 -0
  123. package/src/bmad-plus/packs/pack-shield/references/tsa-compliance/tsa-directives-overview.md +187 -0
  124. package/src/bmad-plus/packs/pack-shield/references/tsa-compliance/tsa-incident-reporting.md +187 -0
  125. package/src/bmad-plus/packs/pack-shield/references/wcag/criteria-detail.md +510 -0
  126. package/src/bmad-plus/packs/pack-shield/shared/audit-report-template.md +103 -0
  127. package/src/bmad-plus/packs/pack-shield/shared/cross-framework-mapper.md +103 -0
  128. package/src/bmad-plus/packs/pack-shield/shared/gap-analysis-template.md +83 -0
  129. package/src/bmad-plus/packs/pack-shield/shield-orchestrator.md +229 -0
  130. package/src/bmad-plus/packs/pack-shield/upstream-sync.yaml +68 -0
  131. package/tools/cli/commands/install.js +21 -8
  132. package/tools/cli/commands/update.js +4 -2
  133. package/tools/cli/i18n.js +50 -10
@@ -0,0 +1,267 @@
+ # EU AI Act — GPAI, Governance, and Cross-Framework Reference
+
+ ## GPAI Model Obligations
+
+ ### Art. 3(63) — What Qualifies as a GPAI Model
+
+ A model qualifies as GPAI when **all** of the following apply:
+ - Trained with **large amounts of data** using **self-supervision at scale**
+ - Displays **significant generality** — competently performs a wide range of distinct tasks
+ - Can be used for multiple **different purposes** regardless of how placed on market
+ - **Excludes:** Models in pre-release R&D phase not yet placed on market
+
+ **Key implication:** Many foundation models (large language models, multimodal models) are GPAI models. A system built on such a model may be a "GPAI system" (Art. 3(64)).
+
+ ---
+
+ ### Art. 53 — Universal GPAI Provider Obligations (applies from 2 August 2025)
+
+ ALL GPAI model providers (regardless of size or systemic risk status) must:
+
+ **1. Technical Documentation (Annex XI content):**
+ - General description: intended purpose, types of tasks, interaction modes
+ - Development process: data sources, data processing and filtering, training techniques, compute used
+ - Testing and evaluation: benchmarks, evaluations, safety testing results
+ - Known limitations and foreseeable risks
+ - Responsible disclosure practices; contact information for reporting issues
+ - Keep up-to-date; provide to AI Office and national authorities on request
+
+ **2. Downstream Provider Information (Annex XII content):**
+ Document and make available to downstream AI system providers who integrate the GPAI model:
+ - Capabilities and limitations enabling compliance with their own obligations
+ - Integration instructions
+ - Security measures
+ - Evaluation results
+ - **Note:** Intellectual property protections are permitted — Annex XII information may be provided under reasonable commercial terms
+
+ **3. Copyright Compliance Policy (EU Directive 2019/790 Text and Data Mining):**
+ - Implement a policy to comply with EU copyright and related rights law
+ - Specifically: mechanisms to respect text and data mining opt-outs reserved by rights-holders under Art. 4(3) of Directive 2019/790
+ - Must be technically enforceable and publicly described
+
+ **4. Public Training Content Summary:**
+ - Draw up and make publicly available a sufficiently detailed summary of training content
+ - Must follow Commission-provided template
+ - Purpose: enable downstream providers and the public to understand what data was used
+
+ **Open-source / free-license exception:**
+ GPAI model providers releasing under open-source/free licenses need only comply with obligations **3 and 4** (copyright policy and training summary) — **UNLESS** the model is later designated as having systemic risk, in which case full Art. 53 + Art. 55 obligations apply.
+
+ ---
+
+ ### Art. 51 — Systemic Risk Classification
+
+ **Automatic presumption of systemic risk when:**
+ Training compute exceeds **10²⁵ FLOPs** (floating point operations, cumulative across all training runs)
+
+ **Commission designation (independent of FLOPs threshold):**
+ Commission may also designate a model as having systemic risk based on:
+ - High-impact capabilities identified through model evaluation
+ - Broad reach across the EU
+ - Negative effects on public health, safety, public security, fundamental rights
+ - Other criteria in Annex XIII (including parameter count, domain specificity, user base)
+
+ **Provider notification obligation:**
+ Providers that know or should reasonably know their model meets or is approaching the threshold must:
+ - Notify the **AI Office** within **2 weeks** of reaching the threshold
+ - Notification triggers systemic risk assessment dialogue
+
+ **Dynamic threshold:**
+ Commission may amend the 10²⁵ FLOPs threshold via delegated acts as AI capabilities evolve.
+
+ **Rebuttable presumption:**
+ Providers may present evidence demonstrating systemic risk does not apply despite meeting the FLOPs threshold.
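
The cumulative-compute presumption lends itself to a simple check. A minimal sketch (the helper names are ours, not part of this pack): compute is summed across all training runs before comparing against the 10²⁵ FLOPs threshold.

```python
# Illustrative sketch of the Art. 51 cumulative-compute presumption.
SYSTEMIC_RISK_FLOPS = 1e25  # threshold; amendable by Commission delegated acts

def cumulative_training_flops(runs):
    """Sum compute across ALL training runs (pre-training, fine-tuning, etc.)."""
    return sum(runs)

def systemic_risk_presumed(runs):
    """True when cumulative training compute exceeds 10^25 FLOPs."""
    return cumulative_training_flops(runs) > SYSTEMIC_RISK_FLOPS

# Three runs of 6e24 FLOPs each exceed the threshold together,
# even though no single run does — triggering the 2-week AI Office notification.
assert not systemic_risk_presumed([6e24])
assert systemic_risk_presumed([6e24, 6e24, 6e24])
```

Note the presumption is rebuttable: exceeding the threshold starts a dialogue, it does not by itself settle the classification.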
+
+ ---
+
+ ### Art. 55 — Systemic Risk GPAI Provider Additional Obligations (applies from 2 August 2025)
+
+ In addition to Art. 53 universal obligations, providers of systemic-risk GPAI models must:
+
+ **1. Model Evaluation:**
+ - Conduct model evaluation according to standardised protocols and state-of-the-art tools
+ - Includes adversarial testing (red-teaming) to identify and mitigate systemic risks
+ - May be conducted in cooperation with the AI Office
+
+ **2. Risk Assessment and Mitigation:**
+ - Assess and mitigate possible systemic risks arising from development, market placement, or deployment
+ - Systemic risks include: actual or foreseeable negative effects on public health, safety, public security, or fundamental rights at Union level, including serious societal disturbances
+
+ **3. Serious Incident Reporting:**
+ - Track, document, and promptly report any serious incidents and possible corrective actions
+ - Report to: **AI Office** and relevant **national competent authorities**
+ - Report without undue delay upon becoming aware
+
+ **4. Cybersecurity:**
+ - Ensure adequate cybersecurity protection for the model, its physical infrastructure, and the supply chain
+ - Includes protection against model theft, unauthorized access, adversarial attacks on model weights
+
+ **Compliance pathways for GPAI obligations:**
+ 1. **Codes of Practice** — voluntary, developed with AI Office involvement; interim compliance pathway
+ 2. **Harmonised standards** — when published under Commission mandate; compliance gives presumption of conformity
+ 3. **Alternative adequate means** — provider demonstrates compliance through other methods acceptable to Commission
+
+ ---
+
+ ## Governance Structure
+
+ ### AI Office (Arts. 64–68)
+
+ - Established within the **European Commission** (not an independent agency)
+ - Primary responsibility for: GPAI model oversight, monitoring, compliance assessment
+ - Conducts GPAI model evaluations — both compliance assessments and systemic risk investigations
+ - May request access to model weights, source code, training procedures, technical documentation
+ - Receives complaints from downstream providers against upstream GPAI providers
+ - Provides secretariat support for the AI Board
+ - Key contact point for: systemic risk notifications, Codes of Practice development, serious incident reports from GPAI providers
+ - Issues non-binding guidance; makes recommendations to the Commission
+
+ ### European Artificial Intelligence Board — AI Board (Art. 65)
+
+ **Composition:**
+ - One representative per EU Member State (3-year renewable mandate)
+ - European Data Protection Supervisor: permanent observer
+ - AI Office representative: attends, no voting rights
+ - Other national/Union authorities: invited case-by-case
+
+ **Governance:**
+ - Elects Chair from Member State representatives
+ - Adopts rules of procedure by two-thirds majority
+ - Two standing sub-groups: (i) market surveillance authorities; (ii) notifying authorities
+
+ **Core tasks:**
+ - Advise and assist Commission and Member States on consistent AI Act implementation
+ - Issue opinions, recommendations, and guidance on matters affecting multiple Member States
+ - Coordinate and facilitate information exchange between national authorities
+ - Facilitate development of harmonised technical standards
+ - Provide opinions on common specifications
+
+ ### National Competent Authorities
+
+ Each Member State designates one or more national competent authorities with powers covering:
+ - **Market surveillance** (Art. 74): access to documentation, testing datasets, source code
+ - **Conformity assessment oversight**: notified body designation and monitoring
+ - **Enforcement and investigation**: power to require corrective action, withdrawal, recalls, fines
+ - **Annual reporting** to Commission on prohibited practices and enforcement actions
+
+ **Sector-specific authorities:**
+ - Financial supervisors for AI in banking and insurance
+ - Data protection authorities for law enforcement AI applications
+ - European Data Protection Supervisor for EU institutions' AI systems (Art. 77)
+
+ ### Scientific Panel (Arts. 68–70)
+
+ - Independent expert body for GPAI matters
+ - Can alert AI Office about high-impact GPAI capabilities that may qualify for systemic risk designation
+ - Provides technical opinions to AI Office and Commission
+ - Issues views on model evaluation methodologies and capabilities assessment
+
+ ### AI Regulatory Sandboxes (Art. 57)
+
+ - At least one operational per Member State by 2 August 2026
+ - Enable controlled testing of innovative AI systems under regulatory supervision before market placement
+ - Participants: reduced administrative burden; authorities: supervisory insight
+ - Participation does not provide automatic compliance certification
+
+ ---
+
+ ## Post-Market Monitoring and Incident Reporting
+
+ ### Art. 72 — Post-Market Monitoring System
+
+ **Applies to:** All high-risk AI system providers.
+
+ **Requirements:**
+ - Establish and document a post-market monitoring system proportionate to the nature of the AI technology and the risk
+ - Continuously collect, document, and analyze relevant performance data from deployed systems throughout the operational lifetime
+ - Monitor for risks to health, safety, and fundamental rights
+ - Inform risk management system updates (Art. 9)
+ - Must form part of technical documentation (Art. 11)
+
+ **Commission template:** Commission must publish a standardised template for post-market monitoring plans by 2 February 2026.
+
+ **Integration:** Existing post-market monitoring systems under sectoral legislation (medical devices, machinery, etc.) may be adapted and integrated to avoid duplication.
+
+ ### Art. 73 — Serious Incident Reporting
+
+ **What constitutes a serious incident:**
+ Any incident or malfunctioning of a high-risk AI system that directly or indirectly leads or could lead to:
+ - Death of a person or serious damage to health of a person
+ - Serious and irreversible disruption to management/operation of critical infrastructure
+ - Infringement of obligations under Union law protecting fundamental rights
+ - Serious damage to property or the environment
+
+ **Who reports:**
+ - **Providers:** Report to market surveillance authority of Member State where incident occurred; within timeframe proportionate to severity (immediately for death-related incidents; within 15 days for other serious incidents; within 2 days for widespread safety threats)
+ - **Deployers:** Immediately notify provider, importer/distributor, AND market surveillance authority
+ - **GPAI systemic risk providers:** Report to AI Office and relevant national authorities
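
The provider deadlines above reduce to a severity lookup. A minimal sketch (the category keys are our own shorthand, not terms defined in the Act):

```python
from datetime import date, timedelta

# Provider reporting deadlines under Art. 73, as summarised above.
# Keys are illustrative labels, not Act terminology.
REPORTING_DEADLINE_DAYS = {
    "death": 0,              # report immediately
    "widespread_threat": 2,  # widespread safety threats
    "other_serious": 15,     # all other serious incidents
}

def report_due_by(awareness_date: date, category: str) -> date:
    """Latest reporting date, counted from when the provider becomes aware."""
    return awareness_date + timedelta(days=REPORTING_DEADLINE_DAYS[category])

aware = date(2026, 3, 2)
assert report_due_by(aware, "other_serious") == date(2026, 3, 17)
assert report_due_by(aware, "widespread_threat") == date(2026, 3, 4)
```

The clock runs from awareness, so incident-detection tooling that timestamps first awareness is what makes these deadlines auditable.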
+
+ ---
+
+ ## Cross-Framework Mapping
+
+ ### ISO 42001:2023 (AI Management System)
+
+ ISO 42001 is the primary AI management system standard complementing AI Act compliance.
+
+ | AI Act Article | ISO 42001 Element |
+ |---------------|-------------------|
+ | Art. 9 (Risk Management) | Clause 6.1, Annex A.6 (AI risk assessment), Annex B.3 |
+ | Art. 10 (Data Governance) | Clause 8.4, Annex A.8 (AI system operation), A.7 (AI data management) |
+ | Art. 13 (Transparency) | Annex A.4.1 (intended purpose documentation), A.4.2 |
+ | Art. 14 (Human Oversight) | Annex A.9.3, A.10.1 (human factors in AI) |
+ | Art. 15 (Accuracy/Robustness) | Annex A.9.4 (AI system performance), B.5 |
+ | Art. 17 (QMS) | Clause 4–10 PDCA management system structure |
+ | Art. 72 (Post-Market Monitoring) | Clause 9.1 (performance evaluation), A.10.3 |
+ | Art. 73 (Incident Reporting) | Annex A.10.5 (adverse impacts), Clause 10.2 |
+
+ **Practical guidance:** ISO 42001 certification can support demonstration of Art. 17 QMS compliance. ISO 42001's Statement of Applicability (SoA) process may be adapted for mapping conformity with AI Act requirements.
+
+ ### NIST AI RMF (AI Risk Management Framework 1.0)
+
+ | AI Act Obligation | NIST AI RMF Function/Category |
+ |------------------|-------------------------------|
+ | Art. 17 QMS | GOVERN 1 (Policies, processes, accountability) |
+ | Art. 6 classification | MAP 1 (Context establishment), MAP 2 (Risk identification) |
+ | Art. 9 risk management | MAP 3, MEASURE 1–2 |
+ | Art. 10 data governance | MEASURE 2.5, 2.6 (Data and bias evaluation) |
+ | Art. 15 accuracy/robustness | MEASURE 2.1–2.4 (Testing, evaluation, red-teaming) |
+ | Art. 14 human oversight | MANAGE 2 (Human-AI interaction), GOVERN 4 |
+ | Art. 72 post-market monitoring | MANAGE 3, MANAGE 4 (Post-deployment operations) |
+ | Art. 73 incident reporting | MANAGE 4.1–4.2 (Incident response) |
+
+ ### GDPR Alignment
+
+ The AI Act operates concurrently with GDPR — both apply when AI systems process personal data.
+
+ | Intersection | AI Act | GDPR |
+ |-------------|--------|------|
+ | Data governance | Art. 10 | Art. 5 (data quality principles), Art. 25 (data protection by design) |
+ | DPIA requirement | Art. 26 (deployer obligation reference) | Art. 35 |
+ | Special category data | Art. 10(5) (bias detection) | Art. 9 |
+ | Profiling definition | Art. 3(52) (matches precisely) | Art. 4(4) |
+ | Transparency to users | Art. 50 | Art. 13/14 (information notices) |
+ | Fundamental rights impact | Art. 27 (FRIA for public authorities) | Art. 35 DPIA |
+ | RBI system authorisation | Art. 5(1)(h) | Art. 9 + law enforcement exemptions |
+ | Supervisory authorities | Art. 77 (DPAs have access rights) | Supervisory authority under Chapter VI |
+ | Enforcement | Art. 99 (AI Act fines) | Art. 83 (GDPR fines up to 4% global turnover) |
+
+ **Key distinction:** GDPR governs personal data processing; the AI Act governs AI system development and deployment. Many AI systems trigger obligations under both — operators need dual compliance programmes.
+
+ ---
+
+ ## Penalties Reference — Art. 99
+
+ | Violation Type | Maximum Fine |
+ |----------------|--------------|
+ | **Prohibited AI practices** (Art. 5 violations) | €35,000,000 or **7%** of total worldwide annual turnover (whichever is higher) |
+ | **Provider/deployer/notified body violations** (Arts. 16, 22, 23, 24, 26, 31, 33, 34, 50) | €15,000,000 or **3%** of total worldwide annual turnover (whichever is higher) |
+ | **Incorrect/misleading information** to notified bodies or competent authorities | €7,500,000 or **1%** of total worldwide annual turnover (whichever is higher) |
+
+ **SME / startup reduction:** For SMEs and startups, the lower of the fixed amount or percentage applies (Art. 99(6)).
+
+ **Proportionality factors:** Nature, gravity, and duration; prior infringements; company size and market share; intentionality; cooperation level; mitigation measures taken; actual damage caused.
+
+ **Member State reporting:** Annual reporting obligation to Commission on fines issued.
+
+ **GPAI enforcement:** AI Office enforces GPAI obligations. For systemic risk GPAI models, Commission itself may conduct investigations and impose fines directly.
@@ -0,0 +1,287 @@
+ # EU AI Act — High-Risk AI System Obligations Reference
+
+ All obligations below apply to **providers** unless stated otherwise. Annex III systems: applies from **2 August 2026**. Annex I safety component systems: applies from **2 August 2027**.
+
+ ---
+
+ ## Art. 9 — Risk Management System
+
+ **Purpose:** Identify and mitigate risks to health, safety, and fundamental rights throughout the AI system lifecycle.
+
+ **Requirements:**
+ - Continuous, iterative process maintained from development through decommissioning
+ - Must cover **all phases**: development, testing, pre-deployment assessment, post-market monitoring
+ - **5-step process:**
+   1. Identify all known and reasonably foreseeable risks (normal use, reasonably foreseeable misuse)
+   2. Estimate and evaluate risks under all intended and reasonably foreseeable uses
+   3. Evaluate risks emerging from post-market monitoring data
+   4. Adopt appropriate and targeted risk mitigation measures (Art. 9(4))
+   5. Assess residual risk acceptability; document reasoning
+ - **Risk mitigation priority hierarchy (Art. 9(4)):**
+   1. Design-based risk elimination first
+   2. Adequate mitigation measures if elimination not possible
+   3. Information provision to deployers and users
+ - **Testing (Art. 9(7)):** Before market placement; against pre-defined metrics and probabilistic thresholds; representative test data
+ - **Under-18 impact:** Special attention required for systems likely to adversely impact minors
+
+ **Documentation required:** Risk management system must be documented and form part of technical documentation (Art. 11, Annex IV).
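
The Art. 9(4) hierarchy is an ordered fallback, not a menu. A minimal sketch of that ordering (function and argument names are illustrative):

```python
# Illustrative sketch of the Art. 9(4) mitigation priority: design-based
# elimination first, then mitigation measures, then information provision.
def select_mitigation(can_eliminate_by_design: bool, can_mitigate: bool) -> str:
    """Return the highest-priority measure available for a given risk."""
    if can_eliminate_by_design:
        return "eliminate risk through design"
    if can_mitigate:
        return "apply adequate mitigation measures"
    return "provide information to deployers and users"

assert select_mitigation(True, True) == "eliminate risk through design"
assert select_mitigation(False, False) == "provide information to deployers and users"
```

The point of encoding the order is that documentation of residual risk (step 5 above) must record *why* higher-priority options were not feasible.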
+
+ ---
+
+ ## Art. 10 — Data and Data Governance
+
+ **Purpose:** Ensure training, validation, and testing datasets are appropriate for the intended purpose.
+
+ **Dataset requirements (Art. 10(2)):**
+ - Relevant to intended purpose
+ - Sufficiently representative of the persons/contexts the system will encounter
+ - Free of errors (where appropriate)
+ - Complete for the intended purpose
+ - Statistically appropriate for target geographic, contextual, and behavioral population
+
+ **Data governance practices (Art. 10(3)):**
+ - Design choices documentation
+ - Data collection method documentation
+ - Data origin examination and documentation
+ - Annotation, labeling, cleaning, enrichment, and aggregation procedures
+ - Bias examination and identification
+ - Gap identification in relevant properties
+
+ **Special category data (Art. 10(5)) — bias detection exception:**
+ Processing of special-category personal data (GDPR Art. 9) permitted ONLY when ALL conditions met:
+ 1. No adequate alternative data source exists
+ 2. Technical and organizational protections are in place (anonymization, pseudonymization where possible)
+ 3. Access controls ensure minimum necessary access
+ 4. No third-party data transfer
+ 5. Data deleted upon completion of bias detection
+ 6. Documented justification maintained
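
The all-conditions structure of Art. 10(5) maps naturally onto a checklist gate. A minimal sketch (the field names are illustrative, mirroring the six conditions above):

```python
from dataclasses import dataclass

@dataclass
class BiasDetectionSafeguards:
    # Illustrative fields mirroring the six Art. 10(5) conditions.
    no_adequate_alternative: bool
    technical_org_protections: bool
    minimum_necessary_access: bool
    no_third_party_transfer: bool
    deletion_after_completion: bool
    documented_justification: bool

def special_category_processing_permitted(s: BiasDetectionSafeguards) -> bool:
    """Permitted ONLY when every condition holds; any single failure blocks processing."""
    return all(vars(s).values())

ok = BiasDetectionSafeguards(True, True, True, True, True, True)
assert special_category_processing_permitted(ok)
# A single unmet condition (e.g. data shared with a third party) is disqualifying.
assert not special_category_processing_permitted(
    BiasDetectionSafeguards(True, True, True, False, True, True)
)
```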
58
+
59
---

## Art. 11 — Technical Documentation

**Who:** Providers (before market placement); documentation kept up to date throughout the lifecycle.

**Content specified in Annex IV:**
- General description: intended purpose, provider identity, version, interactions with other systems
- Detailed description of elements and development process: training methodologies, design choices, algorithms, data used
- Information about training data (including general description, origin, annotation procedures, design choices)
- Assessment of the measures required to interpret outputs and enable human oversight
- Detailed description of the system design (architecture, source code or training code access if applicable)
- Validation and testing: procedures, applied metrics, performance benchmarks, testing datasets
- Risk management system documentation (Art. 9)
- Changes made to the system over its lifecycle
- List of harmonised standards applied (or description of alternative solutions for conformity)
- EU Declaration of Conformity

**Retention:** 10 years after market placement or putting into service.
---

## Art. 12 — Record-Keeping / Automatic Logging

**Who:** Providers must build in automatic logging capability; deployers must retain logs.

**Provider obligations:**
- High-risk systems must be capable of automatically generating event logs throughout operation
- Logging capability enables post-deployment reconstruction of circumstances surrounding risks
- For biometric identification: logs must enable identification of persons involved and circumstances

**Deployer obligations (Art. 26(6)):**
- Retain automatically generated logs for **minimum 6 months** from use, or as required by applicable Union/national law (whichever longer)
- Public sector deployers: stricter retention may apply under public records obligations
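The split of duties above (providers build the logging capability, deployers keep the output) can be sketched as follows; the record fields are illustrative, and the 183-day figure is an assumed conversion of the 6-month minimum:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AIEventLog:
    """Illustrative automatically generated event record (Art. 12)."""
    timestamp: datetime
    system_version: str
    input_ref: str       # reference to the input data, not the data itself
    output_summary: str
    operator_id: str     # supports reconstructing the circumstances of use

def must_retain(log: AIEventLog, now: datetime,
                national_minimum: timedelta = timedelta(days=0)) -> bool:
    """Art. 26(6) sketch: keep logs at least 6 months (assumed ~183 days)
    from use, or longer where Union/national law requires it."""
    retention = max(timedelta(days=183), national_minimum)
    return now - log.timestamp < retention

log = AIEventLog(datetime(2026, 1, 1, tzinfo=timezone.utc), "1.2.0",
                 "blob://inputs/42", "score=0.87", "operator-7")
print(must_retain(log, datetime(2026, 5, 1, tzinfo=timezone.utc)))   # True
print(must_retain(log, datetime(2026, 12, 1, tzinfo=timezone.utc)))  # False
```

Passing a longer `national_minimum` models the "whichever longer" rule for stricter Union or national retention requirements.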
---

## Art. 13 — Transparency and Provision of Information to Deployers

**Purpose:** Enable deployers to understand and correctly use the AI system.

**System design requirement:** High-risk AI systems must be designed with sufficient transparency for deployers to interpret outputs and use them appropriately.

**Instructions for use must include:**
- Provider identity, address, and registration data
- System characteristics, capabilities, and intended purpose
- Performance metrics: level of accuracy, robustness, cybersecurity
- Known or foreseeable circumstances that may lead to risks
- System performance for specific persons/groups
- Specifications for input data (data types, formats, dimensions)
- Information on changes made relative to prior versions
- Human oversight measures (Art. 14) and technical measures to enable oversight
- Computational resources required; expected system lifetime
- Description of logging mechanisms (Art. 12)
- Any predetermined modifications or updates

**Format:** Concise, complete, correct, clear, and relevant; provided in digital and, where appropriate, physical format; accessible to deployers in an appropriate language.
---

## Art. 14 — Human Oversight

**Purpose:** Enable effective human monitoring during operation to identify and correct AI errors.

**System design obligations (providers, Art. 14(3)):**
High-risk AI systems must be designed to allow natural persons to:
- Understand system capabilities and limitations
- Recognize **automation bias** (over-reliance on AI outputs)
- Correctly interpret AI outputs (including interpretability features)
- Decide not to use, or disregard, override, or reverse AI outputs
- Intervene through a **stop button** or similar interrupt mechanism

**Deployer implementation obligations (Art. 14(4)):**
- Assign human oversight to natural persons with necessary competence, authority, and resources
- Train oversight persons on system capabilities/limitations and automation bias risk

**Biometric identification specific (Art. 14(5)):**
- Minimum **two separate persons** must independently verify and confirm identification before action is taken

**Proportionality:** Oversight measures are built in by providers proportionate to risks and level of autonomy; deployers may need to supplement with organizational measures.
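The Art. 14(5) two-person rule is mechanically enforceable. A sketch of such a gate (the class and its API are hypothetical, not from any cited standard):

```python
class BiometricMatchGate:
    """Blocks action on a biometric identification until two separate
    natural persons have independently verified it (Art. 14(5) sketch)."""

    def __init__(self, required: int = 2):
        self.required = required
        self.verifiers: set = set()

    def verify(self, person_id: str) -> None:
        # A set ignores duplicate confirmations by the same person,
        # so "two separate persons" cannot be satisfied by one reviewer.
        self.verifiers.add(person_id)

    def may_act(self) -> bool:
        return len(self.verifiers) >= self.required

gate = BiometricMatchGate()
gate.verify("officer-A")
gate.verify("officer-A")   # the same person confirming twice does not count
print(gate.may_act())      # False
gate.verify("officer-B")
print(gate.may_act())      # True
```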
---

## Art. 15 — Accuracy, Robustness, and Cybersecurity

**Accuracy:**
- System must achieve appropriate accuracy levels for intended purpose throughout its lifecycle
- Declared accuracy levels and metrics must be in instructions for use (Art. 13)
- Training, validation, and testing data must be statistically appropriate to achieve declared levels

**Robustness:**
- Resilience against errors, faults, or inconsistencies during operation and foreseeable misuse
- Consistent performance throughout operational lifetime
- Where appropriate: technical redundancy solutions, backup plans, fail-safe mechanisms
- For continuous learning systems: eliminate/reduce risks of feedback loops amplifying biased outputs

**Cybersecurity:**
Resilience against attempts to alter use, outputs, or performance, including:
- **Data poisoning attacks** (contaminating training data)
- **Model poisoning attacks** (manipulating model weights)
- **Adversarial examples** (crafted inputs causing misclassification)
- **Confidentiality attacks** (extracting training data or model internals)
- **Model flaws exploitation** (taking advantage of design weaknesses)

**Technical solutions appropriate to context:** Air-gapping sensitive components, adversarial testing, input validation, output monitoring.
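The robustness points above amount to defensive wrapping of the inference call. A sketch combining input validation, a redundant fallback path, and output monitoring, where `flaky_model` and `conservative_model` are stand-ins for a real primary model and its backup:

```python
def robust_predict(x: list, primary, fallback, lo: float = 0.0, hi: float = 1.0):
    """Art. 15-style hardening sketch: validate inputs against the declared
    specification, fall back to a redundant path on failure, monitor outputs."""
    # Input validation: reject inputs outside the declared domain
    if not all(isinstance(v, (int, float)) and lo <= v <= hi for v in x):
        raise ValueError("input outside declared specification")
    try:
        y = primary(x)
    except Exception:
        return fallback(x)           # technical redundancy / backup plan
    if not 0.0 <= y <= 1.0:          # output monitoring: out-of-range score is a fault
        return fallback(x)
    return y

def flaky_model(x):                  # stand-in for a failing production model
    raise RuntimeError("inference backend down")

def conservative_model(x):           # stand-in for a simple redundant estimator
    return sum(x) / len(x)

print(robust_predict([0.25, 0.75], flaky_model, conservative_model))  # 0.5 via fallback
```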
---

## Art. 16 — Provider Obligations (Complete 12-Item Checklist)

Before placing on market or putting into service, providers must:

1. ☐ Ensure system complies with Section 2 requirements (Arts. 9–15)
2. ☐ Display name, trademark, and contact address on system, packaging, or documentation
3. ☐ Establish and implement quality management system (Art. 17)
4. ☐ Keep technical documentation (Art. 11, Annex IV)
5. ☐ Retain automatically generated logs (Art. 12) where under provider's control
6. ☐ Complete required conformity assessment (Art. 43) before market placement
7. ☐ Draw up EU Declaration of Conformity (Art. 47)
8. ☐ Affix CE marking (Art. 48)
9. ☐ Register in EU AI database (Art. 49) before market placement
10. ☐ Take immediate corrective action for non-conforming systems; notify authorities and deployers (Art. 20)
11. ☐ Demonstrate conformity upon request of national competent authority
12. ☐ Ensure accessible design where required (Directives 2016/2102, 2019/882)
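Tracked as data, the checklist above becomes a release gate. A minimal sketch (the item keys are shorthand invented for this illustration, not official labels):

```python
# Shorthand keys for the 12 Art. 16 checklist items above, in order.
ART16_ITEMS = [
    "section2_compliance", "provider_identity", "qms", "technical_documentation",
    "log_retention", "conformity_assessment", "eu_declaration", "ce_marking",
    "database_registration", "corrective_action_process", "authority_cooperation",
    "accessibility",
]

def market_placement_blockers(status: dict) -> list:
    """Return the Art. 16 items not yet satisfied; placement requires an empty list."""
    return [item for item in ART16_ITEMS if not status.get(item, False)]

status = {item: True for item in ART16_ITEMS}
status["ce_marking"] = False
print(market_placement_blockers(status))  # ['ce_marking']
```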
---

## Art. 17 — Quality Management System (13 Required Components)

Documented policies and procedures covering:

1. **Regulatory compliance strategy** including conformity assessment procedures and modifications management
2. **Design techniques** — procedures for designing, controlling, and verifying the AI system
3. **Development quality control** procedures
4. **Testing and validation** — before, during, and after development
5. **Technical standards application** — harmonised standards and common specifications
6. **Data management** — acquisition, labeling, storage, filtering, retention, and cleaning procedures
7. **Risk management** per Art. 9
8. **Post-market monitoring** per Art. 72
9. **Serious incident reporting** per Art. 73 — communication with national authorities
10. **Authority communication** procedures for corrective actions and recalls (Art. 20)
11. **Record-keeping and documentation** — retention periods and access procedures
12. **Resource management** including supply-chain security measures
13. **Accountability framework** — clear assignment of responsibility for all of the above components

**Proportionality:** The QMS must be proportionate to the size of the provider's organization. Existing sectoral QMS may be adapted or integrated.
---

## Art. 26 — Deployer Obligations

1. **Instructions compliance:** Use the system in accordance with the provider's instructions for use (Art. 13)
2. **Staff assignment:** Assign human oversight to persons with the necessary competence, training, authority, and resources
3. **Input data control:** Where the deployer controls input data — ensure it is relevant and sufficiently representative
4. **Continuous monitoring:** Monitor operation; if a risk is identified — notify the provider/importer/distributor AND the market surveillance authority; suspend use where appropriate
5. **Serious incidents:** Immediately notify the provider, then the importer/distributor and the market surveillance authority
6. **Log retention:** Retain automatically generated logs for **at least 6 months**
7. **Worker notification:** In employment/worker-management contexts — inform workers and their representatives before deployment
8. **Public authority registration:** Public authorities must register their use in the EU AI database (Art. 49(3)) before deployment; they may not use unregistered high-risk systems
9. **GDPR:** Conduct a data protection impact assessment where required under GDPR Art. 35
10. **Fundamental Rights Impact Assessment:** Required for public authorities before deploying certain high-risk systems under Art. 27
---

## Arts. 43–49 — Conformity Assessment and CE Marking

### Art. 43 — Conformity Assessment Paths

**Annex III — Point 1 Systems (Biometrics):** Provider chooses between:
- **(A) Self-assessment** — Internal control via Annex VI procedure; OR
- **(B) Notified body** — Third-party assessment under Annex VII (QMS review + technical documentation assessment)

Notified body (B) is **mandatory** when:
- No harmonised standards covering the system exist
- Standards exist but provider has not applied them (or only partially)
- Common specifications are unavailable or not used
- Standards published with restrictions that limit presumption of conformity

**Annex III — Points 2–8 Systems (Areas 2–8):** Self-assessment only — no notified body assessment available.

**Annex I Products (safety components):** Conformity assessment integrated into the existing conformity procedure for the product under Annex I legislation.

**Law enforcement / immigration / EU institutions:** Market surveillance authority acts as notified body.

**Substantial modifications:** Require full reassessment; except predetermined learning modifications documented in original technical file and declared in conformity declaration.
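The path selection above reduces to a small decision function. A sketch encoding those rules (the function, its parameters, and the summary strings are illustrative simplifications of the Art. 43 logic):

```python
def conformity_path(annex_iii_point: int,
                    harmonised_standards_fully_applied: bool) -> str:
    """Return the conformity assessment route for an Annex III high-risk system.

    Sketch of the Art. 43 rules summarised above: only biometrics (point 1)
    ever involves a notified body, and then by provider choice, unless
    harmonised standards / common specifications are missing or not fully
    applied, in which case the notified body route becomes mandatory.
    """
    if annex_iii_point == 1:
        if not harmonised_standards_fully_applied:
            return "notified body (Annex VII) -- mandatory"
        return "provider's choice: internal control (Annex VI) or notified body (Annex VII)"
    return "internal control (Annex VI) -- self-assessment only"

print(conformity_path(1, False))  # notified body mandatory
print(conformity_path(4, True))   # self-assessment only
```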
### Art. 47 — EU Declaration of Conformity
- Drawn up by the provider (or authorised representative); the provider assumes full responsibility
- Must confirm compliance with all applicable AI Act requirements (Section 2)
- Must identify: system name, provider, version, intended purpose, conformity assessment procedure followed, standards applied
- Machine-readable format; translated into the languages required by national authorities
- Maintained for **10 years** from market placement

### Art. 48 — CE Marking
- Visible, legible, indelible affixation on the system, packaging, or documentation
- Affixed only after successful conformity assessment and drawing up of the EU Declaration of Conformity
- Subject to the general principles of CE marking (Regulation (EC) 765/2008)
- Affixed before market placement; re-affixed if the system is substantially modified

### Art. 49 — EU AI Database Registration

**Provider registration (Art. 49(1)):**
- Before market placement (Annex III systems — Art. 6(2))
- Information includes: provider/representative identity; system description, intended purpose, accuracy; conformity assessment procedure; notified body involved (if applicable)
- Provider registration covers all deployers and users of that system

**Public authority deployer registration (Art. 49(3)–(4)):**
- Before deployment of registered high-risk systems
- Additional system-specific information for public authority use

**Database operation:** Operational from 2 August 2026 (Commission responsibility, Art. 71).

**Publicly accessible vs. restricted information:** Most information is public; some law enforcement/immigration information is accessible only to competent authorities.
---

## Art. 27 — Fundamental Rights Impact Assessment (FRIA)

**Who:** Public authorities and bodies deploying high-risk AI systems in Annex III Areas 1, 2, 4, 5(b–d), 6, 7, and 8.

**Content:**
- Description of the deployer's processes in which the system will be used
- Time period, frequency, and number of persons affected
- The specific categories of persons likely to be affected
- Specific risks of harm to categories of affected persons
- Human oversight measures (Art. 14) planned
- Measures to address fundamental rights risks

**Relationship to GDPR DPIA:** Where GDPR DPIA is also required, the FRIA may be conducted alongside or integrated into the DPIA.