bmad-plus 0.4.3 → 0.5.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +48 -0
- package/README.md +4 -3
- package/package.json +5 -1
- package/readme-international/README.de.md +2 -2
- package/readme-international/README.es.md +2 -2
- package/readme-international/README.fr.md +2 -2
- package/src/bmad-plus/module.yaml +43 -12
- package/src/bmad-plus/packs/pack-shield/README.md +110 -0
- package/src/bmad-plus/packs/pack-shield/categories/accessibility-esg/csrd-agent.md +262 -0
- package/src/bmad-plus/packs/pack-shield/categories/accessibility-esg/section508-agent.md +179 -0
- package/src/bmad-plus/packs/pack-shield/categories/accessibility-esg/wcag-agent.md +201 -0
- package/src/bmad-plus/packs/pack-shield/categories/ai-governance/eu-ai-act-agent.md +97 -0
- package/src/bmad-plus/packs/pack-shield/categories/ai-governance/iso42001-agent.md +251 -0
- package/src/bmad-plus/packs/pack-shield/categories/ai-governance/nist-ai-rmf-agent.md +133 -0
- package/src/bmad-plus/packs/pack-shield/categories/cybersecurity/cis-controls-agent.md +221 -0
- package/src/bmad-plus/packs/pack-shield/categories/cybersecurity/ism-agent.md +150 -0
- package/src/bmad-plus/packs/pack-shield/categories/cybersecurity/iso27001-agent.md +167 -0
- package/src/bmad-plus/packs/pack-shield/categories/cybersecurity/nis2-agent.md +83 -0
- package/src/bmad-plus/packs/pack-shield/categories/cybersecurity/nist-800-53-agent.md +250 -0
- package/src/bmad-plus/packs/pack-shield/categories/cybersecurity/nist-csf-agent.md +218 -0
- package/src/bmad-plus/packs/pack-shield/categories/data-privacy/ccpa-agent.md +94 -0
- package/src/bmad-plus/packs/pack-shield/categories/data-privacy/dpdpa-agent.md +136 -0
- package/src/bmad-plus/packs/pack-shield/categories/data-privacy/gdpr-agent.md +296 -0
- package/src/bmad-plus/packs/pack-shield/categories/data-privacy/iso27701-agent.md +134 -0
- package/src/bmad-plus/packs/pack-shield/categories/data-privacy/lgpd-agent.md +129 -0
- package/src/bmad-plus/packs/pack-shield/categories/defense-export/cmmc-agent.md +127 -0
- package/src/bmad-plus/packs/pack-shield/categories/defense-export/ear-agent.md +272 -0
- package/src/bmad-plus/packs/pack-shield/categories/defense-export/itar-agent.md +202 -0
- package/src/bmad-plus/packs/pack-shield/categories/defense-export/tsa-agent.md +367 -0
- package/src/bmad-plus/packs/pack-shield/categories/industry-compliance/dora-agent.md +510 -0
- package/src/bmad-plus/packs/pack-shield/categories/industry-compliance/fedramp-agent.md +247 -0
- package/src/bmad-plus/packs/pack-shield/categories/industry-compliance/hipaa-agent.md +173 -0
- package/src/bmad-plus/packs/pack-shield/categories/industry-compliance/pci-dss-agent.md +239 -0
- package/src/bmad-plus/packs/pack-shield/categories/industry-compliance/soc2-agent.md +266 -0
- package/src/bmad-plus/packs/pack-shield/categories/industry-compliance/swift-csp-agent.md +164 -0
- package/src/bmad-plus/packs/pack-shield/categories/workflows/ai-act-classifier.md +131 -0
- package/src/bmad-plus/packs/pack-shield/categories/workflows/ai-act-fria.md +155 -0
- package/src/bmad-plus/packs/pack-shield/categories/workflows/ai-act-incidents.md +187 -0
- package/src/bmad-plus/packs/pack-shield/categories/workflows/ai-act-roles.md +113 -0
- package/src/bmad-plus/packs/pack-shield/categories/workflows/breach-sentinel.md +197 -0
- package/src/bmad-plus/packs/pack-shield/categories/workflows/cookie-policy-gen.md +180 -0
- package/src/bmad-plus/packs/pack-shield/categories/workflows/dpia-sentinel.md +235 -0
- package/src/bmad-plus/packs/pack-shield/categories/workflows/legitimate-interest.md +159 -0
- package/src/bmad-plus/packs/pack-shield/categories/workflows/privacy-advisor.md +133 -0
- package/src/bmad-plus/packs/pack-shield/categories/workflows/privacy-notice-gen.md +160 -0
- package/src/bmad-plus/packs/pack-shield/categories/workflows/privacy-policy-gen.md +135 -0
- package/src/bmad-plus/packs/pack-shield/references/ccpa/ccpa-gdpr-comparison.md +117 -0
- package/src/bmad-plus/packs/pack-shield/references/ccpa/consumer-rights-workflows.md +177 -0
- package/src/bmad-plus/packs/pack-shield/references/cis-controls/framework-mappings.md +162 -0
- package/src/bmad-plus/packs/pack-shield/references/cis-controls/implementation-guidance.md +235 -0
- package/src/bmad-plus/packs/pack-shield/references/cis-controls/safeguards-detail.md +252 -0
- package/src/bmad-plus/packs/pack-shield/references/cmmc/cmmc-assessment.md +170 -0
- package/src/bmad-plus/packs/pack-shield/references/cmmc/cmmc-levels.md +113 -0
- package/src/bmad-plus/packs/pack-shield/references/cmmc/cmmc-practices.md +211 -0
- package/src/bmad-plus/packs/pack-shield/references/csrd/compliance-program.md +281 -0
- package/src/bmad-plus/packs/pack-shield/references/csrd/double-materiality.md +253 -0
- package/src/bmad-plus/packs/pack-shield/references/csrd/esrs-standards.md +401 -0
- package/src/bmad-plus/packs/pack-shield/references/dora/article-reference.md +441 -0
- package/src/bmad-plus/packs/pack-shield/references/dora/incident-classification.md +297 -0
- package/src/bmad-plus/packs/pack-shield/references/dora/rts-its-guide.md +306 -0
- package/src/bmad-plus/packs/pack-shield/references/dora/third-party-risk.md +349 -0
- package/src/bmad-plus/packs/pack-shield/references/dpdpa/gdpr-comparison.md +173 -0
- package/src/bmad-plus/packs/pack-shield/references/dpdpa/rights-and-obligations.md +426 -0
- package/src/bmad-plus/packs/pack-shield/references/dpdpa/rules-2025.md +599 -0
- package/src/bmad-plus/packs/pack-shield/references/dpdpa/sections-reference.md +319 -0
- package/src/bmad-plus/packs/pack-shield/references/ear/ccl-eccn-guide.md +250 -0
- package/src/bmad-plus/packs/pack-shield/references/ear/compliance-program.md +280 -0
- package/src/bmad-plus/packs/pack-shield/references/ear/license-exceptions.md +207 -0
- package/src/bmad-plus/packs/pack-shield/references/eu-ai-act/gpai-governance.md +267 -0
- package/src/bmad-plus/packs/pack-shield/references/eu-ai-act/obligations-high-risk.md +287 -0
- package/src/bmad-plus/packs/pack-shield/references/eu-ai-act/risk-classification.md +182 -0
- package/src/bmad-plus/packs/pack-shield/references/fedramp/appendices-guide.md +209 -0
- package/src/bmad-plus/packs/pack-shield/references/fedramp/control-families.md +281 -0
- package/src/bmad-plus/packs/pack-shield/references/fedramp/poam-guide.md +93 -0
- package/src/bmad-plus/packs/pack-shield/references/fedramp/readiness-checklist.md +134 -0
- package/src/bmad-plus/packs/pack-shield/references/fedramp/sap-sar-guide.md +86 -0
- package/src/bmad-plus/packs/pack-shield/references/fedramp/ssp-guide.md +129 -0
- package/src/bmad-plus/packs/pack-shield/references/gdpr-compliance/documents.md +192 -0
- package/src/bmad-plus/packs/pack-shield/references/gdpr-compliance/dpa-template.md +121 -0
- package/src/bmad-plus/packs/pack-shield/references/gdpr-compliance/privacy-notice.md +87 -0
- package/src/bmad-plus/packs/pack-shield/references/hipaa-compliance/breach-notification.md +293 -0
- package/src/bmad-plus/packs/pack-shield/references/hipaa-compliance/privacy-rule.md +276 -0
- package/src/bmad-plus/packs/pack-shield/references/hipaa-compliance/security-rule.md +299 -0
- package/src/bmad-plus/packs/pack-shield/references/hipaa-compliance/templates.md +568 -0
- package/src/bmad-plus/packs/pack-shield/references/ism/control-applicability.md +181 -0
- package/src/bmad-plus/packs/pack-shield/references/ism/guidelines-overview.md +183 -0
- package/src/bmad-plus/packs/pack-shield/references/iso27001/annex-a-2013.md +203 -0
- package/src/bmad-plus/packs/pack-shield/references/iso27001/annex-a-2022.md +132 -0
- package/src/bmad-plus/packs/pack-shield/references/iso27001/control-mapping.md +153 -0
- package/src/bmad-plus/packs/pack-shield/references/iso27701/annex-a-controls.md +195 -0
- package/src/bmad-plus/packs/pack-shield/references/iso27701/regulatory-mapping.md +229 -0
- package/src/bmad-plus/packs/pack-shield/references/iso27701/transition-guide.md +219 -0
- package/src/bmad-plus/packs/pack-shield/references/iso42001/iso42001-ai-risk-assessment.md +258 -0
- package/src/bmad-plus/packs/pack-shield/references/iso42001/iso42001-clauses-requirements.md +279 -0
- package/src/bmad-plus/packs/pack-shield/references/iso42001/iso42001-controls-annex-a.md +155 -0
- package/src/bmad-plus/packs/pack-shield/references/itar/compliance-program.md +174 -0
- package/src/bmad-plus/packs/pack-shield/references/itar/licensing-guide.md +146 -0
- package/src/bmad-plus/packs/pack-shield/references/itar/usml-categories.md +93 -0
- package/src/bmad-plus/packs/pack-shield/references/lgpd/anpd-enforcement.md +147 -0
- package/src/bmad-plus/packs/pack-shield/references/lgpd/compliance-program.md +272 -0
- package/src/bmad-plus/packs/pack-shield/references/lgpd/lgpd-articles.md +271 -0
- package/src/bmad-plus/packs/pack-shield/references/nis2/article-21-measures.md +153 -0
- package/src/bmad-plus/packs/pack-shield/references/nis2/iso27001-nis2-mapping.md +68 -0
- package/src/bmad-plus/packs/pack-shield/references/nist-800-53/assessment-rmf.md +349 -0
- package/src/bmad-plus/packs/pack-shield/references/nist-800-53/baselines-tailoring.md +277 -0
- package/src/bmad-plus/packs/pack-shield/references/nist-800-53/control-families.md +450 -0
- package/src/bmad-plus/packs/pack-shield/references/nist-ai-rmf/rmf-core.md +361 -0
- package/src/bmad-plus/packs/pack-shield/references/nist-ai-rmf/rmf-profiles.md +192 -0
- package/src/bmad-plus/packs/pack-shield/references/nist-csf/csf-10-to-20-mapping.md +143 -0
- package/src/bmad-plus/packs/pack-shield/references/nist-csf/csf-20-functions-categories.md +278 -0
- package/src/bmad-plus/packs/pack-shield/references/nist-csf/csf-implementation-tiers.md +135 -0
- package/src/bmad-plus/packs/pack-shield/references/pci-compliance/pci-dss-requirements.md +366 -0
- package/src/bmad-plus/packs/pack-shield/references/pci-compliance/pci-dss-saq-guide.md +217 -0
- package/src/bmad-plus/packs/pack-shield/references/pci-compliance/pci-dss-v4-changes.md +190 -0
- package/src/bmad-plus/packs/pack-shield/references/section-508/wcag-mapping.md +160 -0
- package/src/bmad-plus/packs/pack-shield/references/soc2/controls.md +241 -0
- package/src/bmad-plus/packs/pack-shield/references/soc2/evidence.md +236 -0
- package/src/bmad-plus/packs/pack-shield/references/soc2/policies.md +254 -0
- package/src/bmad-plus/packs/pack-shield/references/soc2/vendor.md +276 -0
- package/src/bmad-plus/packs/pack-shield/references/swift-csp/swift-assessment.md +202 -0
- package/src/bmad-plus/packs/pack-shield/references/swift-csp/swift-controls.md +545 -0
- package/src/bmad-plus/packs/pack-shield/references/tsa-compliance/tsa-crmp-requirements.md +359 -0
- package/src/bmad-plus/packs/pack-shield/references/tsa-compliance/tsa-directives-overview.md +187 -0
- package/src/bmad-plus/packs/pack-shield/references/tsa-compliance/tsa-incident-reporting.md +187 -0
- package/src/bmad-plus/packs/pack-shield/references/wcag/criteria-detail.md +510 -0
- package/src/bmad-plus/packs/pack-shield/shared/audit-report-template.md +103 -0
- package/src/bmad-plus/packs/pack-shield/shared/cross-framework-mapper.md +103 -0
- package/src/bmad-plus/packs/pack-shield/shared/gap-analysis-template.md +83 -0
- package/src/bmad-plus/packs/pack-shield/shield-orchestrator.md +229 -0
- package/src/bmad-plus/packs/pack-shield/upstream-sync.yaml +68 -0
- package/tools/cli/commands/install.js +22 -9
- package/tools/cli/commands/update.js +4 -2
- package/tools/cli/i18n.js +514 -394
package/src/bmad-plus/packs/pack-shield/references/nist-ai-rmf/rmf-core.md

@@ -0,0 +1,361 @@

# NIST AI RMF 1.0 — Full Category and Subcategory Reference

Source: NIST AI 100-1 (January 2023) and the companion NIST AI RMF Playbook

---

## GOVERN Function (6 Categories, 22 Subcategories)

GOVERN establishes the organizational culture, policies, accountability, and risk tolerance that underpin all other functions. It should be addressed first and revisited continuously.

### GV-1: Policies, Processes, Procedures, and Practices in Place

**Purpose:** Ensure the organization has formalized policies and processes for AI risk management across the full AI lifecycle.

| Subcategory | Description |
|-------------|-------------|
| GV-1.1 | AI risk management is integrated into the organization's broader enterprise risk management (ERM) processes |
| GV-1.2 | The characteristics of trustworthy AI are integrated into organizational policies, processes, and practices |
| GV-1.3 | Organizational risk tolerance for AI is established, communicated, and reflected in AI policies |
| GV-1.4 | Organizational teams are committed to a culture of risk awareness and continuous improvement |
| GV-1.5 | Organizational policies for AI risk management are reviewed and updated on a periodic cadence |
| GV-1.6 | Policies for complying with applicable AI laws, regulations, and standards are established |
| GV-1.7 | Processes for regular review of AI risk policies to incorporate emerging AI risks are established |

**Suggested Actions:**
- Publish an organization-wide AI Risk Management Policy signed by senior leadership
- Define AI risk appetite statements (e.g., acceptable false positive rates, bias thresholds)
- Incorporate AI risk into existing ERM committee agendas and quarterly reviews
- Establish a policy review cycle (minimum annual) with a designated AI risk owner

---

### GV-2: Accountability Structures

**Purpose:** Assign clear ownership of AI risk management decisions at the organizational level.

| Subcategory | Description |
|-------------|-------------|
| GV-2.1 | Roles and responsibilities for AI risk management across organizational levels are documented |
| GV-2.2 | The organization designates senior officials accountable for AI risk outcomes |
| GV-2.3 | Executive leadership understands AI risk and fosters an accountable culture |

**Suggested Actions:**
- Appoint an AI Risk Owner or Chief AI Officer with board-level reporting
- Define a RACI for AI development, deployment, and monitoring decisions
- Include AI risk in executive performance goals and leadership dashboards

---

### GV-3: Roles and Responsibilities

**Purpose:** Identify and define all roles involved in AI design, development, deployment, and evaluation.

| Subcategory | Description |
|-------------|-------------|
| GV-3.1 | AI risk management roles span the entire AI lifecycle from design through decommission |
| GV-3.2 | AI risk responsibilities are defined for development teams, operators, and deployers |
| GV-3.3 | Responsibilities for AI risk are assigned to both technical and non-technical roles |

**Suggested Actions:**
- Create an AI roles register mapping each lifecycle stage to a responsible team or individual
- Ensure business owners (not just engineers) are accountable for deployed AI outcomes
- Define responsibilities for external AI vendors and third-party model providers

---

### GV-4: Cross-Functional Team Collaboration

**Purpose:** Ensure AI risk management involves diverse perspectives across the organization.

| Subcategory | Description |
|-------------|-------------|
| GV-4.1 | Cross-functional AI risk teams include AI/ML, legal, privacy, security, HR, and ethics functions |
| GV-4.2 | Processes for communicating AI risks between teams are documented |
| GV-4.3 | Mechanisms for escalating AI risk concerns are established |

**Suggested Actions:**
- Establish an AI Risk Working Group with quarterly cross-functional reviews
- Create an AI risk escalation path from development teams to executive leadership
- Include privacy, legal, and security representatives in AI system design reviews

---

### GV-5: Organizational Risk Tolerance for AI

**Purpose:** Communicate AI risk tolerance and link it to operational decisions.

| Subcategory | Description |
|-------------|-------------|
| GV-5.1 | AI risk tolerance is defined and reflects organizational values |
| GV-5.2 | AI risk tolerance is reviewed when new AI systems are deployed or contexts change |
| GV-5.3 | Risk tolerance statements inform go/no-go decisions for AI system deployment |

**Suggested Actions:**
- Define risk tolerance per AI system category (e.g., low-stakes recommendations vs. high-stakes decisions affecting individuals)
- Create a deployment checklist that validates an AI system against stated risk tolerance before launch
- Link risk tolerance to specific bias and accuracy thresholds in testing requirements

---

### GV-6: AI Risk Aligned to Laws, Regulations, and Principles

**Purpose:** Ensure AI risks and risk management practices align with applicable laws, ethical principles, and industry standards.

| Subcategory | Description |
|-------------|-------------|
| GV-6.1 | Legal and regulatory requirements for AI are identified and tracked |
| GV-6.2 | AI risk management processes are aligned with applicable ethical principles |
| GV-6.3 | The organization engages with emerging AI regulations on a proactive basis |

**Suggested Actions:**
- Maintain a regulatory register for applicable AI laws (EU AI Act, state AI laws, sector-specific requirements)
- Align AI risk policies to NIST AI 100-1, ISO/IEC 42001, and relevant sector frameworks (FINRA, HIPAA, etc.)
- Designate a legal/compliance representative on the AI governance committee

---

## MAP Function (5 Categories, 19 Subcategories)

MAP establishes context before risks are measured or managed. A well-executed MAP prevents investing resources in the wrong risk treatments.

### MP-1: Context Is Established

**Purpose:** Understand the intended use, operating environment, and affected populations of each AI system.

| Subcategory | Description |
|-------------|-------------|
| MP-1.1 | The organization's mission and goals related to AI are documented |
| MP-1.2 | Intended uses of the AI system are documented and bounded |
| MP-1.3 | The AI system's operating environment and constraints are defined |
| MP-1.4 | Affected individuals, groups, communities, and organizations are identified |
| MP-1.5 | Potential harms, misuses, and unintended uses are scoped |
| MP-1.6 | Legal, regulatory, and contractual constraints are identified |

**Suggested Actions:**
- Produce an AI System Description Document for each deployed system covering purpose, inputs, outputs, decision authority, and operator vs. user roles
- Identify affected populations at the beginning of design — not at deployment
- Document prohibited use cases explicitly

---

### MP-2: Scientific Understanding Applied

**Purpose:** Apply current scientific understanding of AI capabilities and limitations to design and risk assessment.

| Subcategory | Description |
|-------------|-------------|
| MP-2.1 | AI and ML capabilities and limitations are documented for the specific system type |
| MP-2.2 | Assumptions and constraints of the AI system's training data are documented |
| MP-2.3 | Uncertainty and variability in AI outputs are characterized |

**Suggested Actions:**
- Document model card or system card information including training data sources, known biases, and performance bounds
- Quantify output uncertainty (confidence intervals, calibration metrics) where applicable
- Review relevant literature on known failure modes for the model architecture in use

---

### MP-3: Risks and Benefits Mapped to Stakeholders

**Purpose:** Identify who benefits from the AI system and who bears its risks — these are often different groups.

| Subcategory | Description |
|-------------|-------------|
| MP-3.1 | Benefits and risks are documented for each identified stakeholder group |
| MP-3.2 | Affected communities are engaged where feasible to understand perceived risks and benefits |
| MP-3.3 | The distribution of benefits vs. risks across stakeholder groups is evaluated |
| MP-3.4 | Feedback mechanisms for affected individuals to report harm are established |

**Suggested Actions:**
- Create a stakeholder risk/benefit matrix: rows = stakeholder group, columns = risk type and benefit type
- Implement a feedback channel (e.g., complaint form, audit log review) for users to report unexpected outcomes
- Conduct an equity analysis: which groups are disproportionately affected by errors?

---

### MP-4: Risks Prioritized

**Purpose:** Prioritize identified risks to focus MEASURE and MANAGE resources effectively.

| Subcategory | Description |
|-------------|-------------|
| MP-4.1 | Risk prioritization criteria are established (e.g., severity, breadth, reversibility) |
| MP-4.2 | Risks are ranked and documented in the AI risk register |
| MP-4.3 | Highest-priority risks are escalated to GOVERN for risk tolerance review |

**Suggested Actions:**
- Use a severity × breadth × reversibility scoring model for prioritization
- Flag any risk that affects a protected class, creates legal exposure, or is irreversible as high-priority
- Review prioritization at each model version update or significant context change
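The severity × breadth × reversibility model suggested above can be sketched in a few lines. This is an illustrative scoring scheme, not one prescribed by NIST AI 100-1; the 1–5 scales and the band thresholds are assumptions for the sketch.

```python
# Hypothetical severity x breadth x reversibility scoring sketch.
# Scales and thresholds are illustrative, not prescribed by NIST AI 100-1.

def priority_score(severity: int, breadth: int, reversibility: int) -> int:
    """Each factor is rated 1 (low) to 5 (high); reversibility is rated
    so that 5 = hardest to reverse. A higher product = higher priority."""
    for factor in (severity, breadth, reversibility):
        if not 1 <= factor <= 5:
            raise ValueError("factors must be rated 1-5")
    return severity * breadth * reversibility


def priority_band(score: int, affects_protected_class: bool = False) -> str:
    # Per the suggested actions, a risk touching a protected class is
    # treated as high priority regardless of its numeric score.
    if affects_protected_class or score >= 60:
        return "high"
    if score >= 20:
        return "medium"
    return "low"
```

A risk scored 4 (severity) × 3 (breadth) × 5 (hard to reverse) lands at 60 and would be banded high.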

---

### MP-5: Likelihood and Impact Characterized

**Purpose:** Characterize the probability and potential severity of identified harms.

| Subcategory | Description |
|-------------|-------------|
| MP-5.1 | Likelihood of harm is estimated using historical data, expert judgment, or red-teaming |
| MP-5.2 | Potential impact is assessed across physical, psychological, financial, and reputational dimensions |
| MP-5.3 | Cumulative and systemic risks (e.g., societal effects of widespread deployment) are considered |

**Suggested Actions:**
- Conduct red-team exercises and adversarial testing to estimate real-world failure rates
- Assess impact across harm dimensions: physical safety, financial, psychological, reputational, societal
- For large-scale deployments, model aggregate societal effects (e.g., labor market impact of an automated hiring tool)

---

## MEASURE Function (4 Categories, 16 Subcategories)

MEASURE employs quantitative and qualitative tools to evaluate the AI risks identified in MAP.

### MS-1: Measurement Approaches Identified

**Purpose:** Identify appropriate methods and tools for measuring AI risks.

| Subcategory | Description |
|-------------|-------------|
| MS-1.1 | Metrics for each identified risk are defined (technical, operational, and societal) |
| MS-1.2 | Measurement approaches are appropriate for the AI system type and deployment context |
| MS-1.3 | Gaps in measurement capabilities are documented and addressed |

**Suggested Actions:**
- Define metrics for each trustworthiness property: accuracy, fairness (demographic parity, equalized odds), robustness (adversarial accuracy), explainability (LIME/SHAP scores), privacy (differential privacy ε)
- Document measurement tools and their known limitations
- Identify where human evaluation is required instead of (or in addition to) automated metrics

---

### MS-2: AI Systems Evaluated for Trustworthiness

**Purpose:** Evaluate AI systems against the trustworthiness properties throughout the lifecycle.

| Subcategory | Description |
|-------------|-------------|
| MS-2.1 | AI systems are evaluated pre-deployment for technical performance and safety |
| MS-2.2 | Bias and fairness testing is conducted across demographic groups |
| MS-2.3 | Explainability and interpretability requirements are tested and documented |
| MS-2.4 | Security and privacy of the AI system are assessed |
| MS-2.5 | Human oversight mechanisms are tested and validated |
| MS-2.6 | Evaluation results are documented and shared with relevant stakeholders |

**Suggested Actions:**
- Require a pre-deployment evaluation report covering all seven trustworthiness properties
- Run disaggregated performance testing across demographic subgroups (age, gender, race, geography)
- Test adversarial robustness using standard benchmark datasets relevant to the model type
- Document SHAP/LIME explanations for models affecting high-stakes individual decisions
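Disaggregated testing of the kind listed above can start with something as simple as per-group selection rates. A minimal sketch, assuming binary predictions and a hypothetical 0.1 gap threshold (the group labels and threshold are illustrative, not from the framework):

```python
# Illustrative disaggregated-metrics check (MS-2.2): selection rate per
# demographic group and the resulting demographic parity gap.
from collections import defaultdict


def selection_rates(records):
    """records: iterable of (group_label, predicted_positive: bool)."""
    pos, total = defaultdict(int), defaultdict(int)
    for group, predicted in records:
        total[group] += 1
        pos[group] += int(predicted)
    return {g: pos[g] / total[g] for g in total}


def demographic_parity_gap(records):
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())


# Toy data: group A is selected at 0.5, group B at 0.25.
records = [("A", True), ("A", True), ("A", False), ("A", False),
           ("B", True), ("B", False), ("B", False), ("B", False)]
gap = demographic_parity_gap(records)   # 0.5 - 0.25 = 0.25
needs_review = gap > 0.1                # hypothetical review threshold
```

In practice a fairness library (e.g., Fairlearn) would compute this alongside equalized odds and per-group accuracy.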

---

### MS-3: AI Risk Tracked Over Time

**Purpose:** Monitor AI risk continuously after deployment to detect drift, degradation, or new harms.

| Subcategory | Description |
|-------------|-------------|
| MS-3.1 | Ongoing monitoring metrics are defined and implemented post-deployment |
| MS-3.2 | Model drift and performance degradation are detected and trigger review |
| MS-3.3 | New risks identified post-deployment are fed back into MAP |
| MS-3.4 | External signals (regulatory updates, academic findings, media reports) inform monitoring |

**Suggested Actions:**
- Implement model monitoring dashboards tracking accuracy, fairness metrics, and input data distribution
- Set alert thresholds that trigger human review (e.g., accuracy drops >5%, demographic parity gap exceeds threshold)
- Assign a model owner responsible for monthly monitoring reviews
- Subscribe to NIST NVD and relevant AI safety/bias research feeds
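The alert thresholds above can be expressed as a small guard function. The 5% accuracy-drop and 0.1 parity-gap defaults mirror the example thresholds in the list; they are assumptions, not NIST requirements.

```python
# Sketch of MS-3 alerting: flag a model for human review when accuracy
# degrades past a baseline delta or the fairness gap exceeds a bound.
# All threshold values are illustrative defaults.

def monitoring_alerts(baseline_accuracy: float,
                      current_accuracy: float,
                      parity_gap: float,
                      max_accuracy_drop: float = 0.05,
                      max_parity_gap: float = 0.1) -> list:
    alerts = []
    if baseline_accuracy - current_accuracy > max_accuracy_drop:
        alerts.append("accuracy_degradation")   # MS-3.2: drift detected
    if parity_gap > max_parity_gap:
        alerts.append("fairness_gap")
    return alerts
```

Any non-empty result would route to the model owner for the monthly review named above.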

---

### MS-4: Feedback Informs MANAGE

**Purpose:** Ensure measurement results directly inform risk treatment decisions in MANAGE.

| Subcategory | Description |
|-------------|-------------|
| MS-4.1 | Measurement outputs are communicated to decision-makers responsible for MANAGE |
| MS-4.2 | Measurement limitations and uncertainties are communicated alongside results |
| MS-4.3 | Measurement results are used to update the AI risk register and treatment plans |

**Suggested Actions:**
- Create a measurement-to-action protocol: define which measurement findings trigger which MANAGE actions
- Include measurement uncertainty caveats in all AI risk reports
- Automate risk register updates from monitoring dashboards where feasible

---

## MANAGE Function (4 Categories, 15 Subcategories)

MANAGE addresses identified AI risks through treatment, monitoring, and improvement.

### MG-1: Risks Prioritized and Documented

**Purpose:** Ensure the most impactful AI risks receive treatment resources first.

| Subcategory | Description |
|-------------|-------------|
| MG-1.1 | AI risk register entries are prioritized and assigned treatment owners |
| MG-1.2 | Risk prioritization reflects organizational risk tolerance (GV-1.3) |
| MG-1.3 | Residual risks after treatment are documented and accepted by the appropriate authority |

**Suggested Actions:**
- Assign a treatment owner, target date, and treatment approach for every risk register entry
- Require senior approval for accepting residual risks above the defined risk tolerance threshold
- Review residual risk acceptance annually or when significant system changes occur

---

### MG-2: Strategies Planned and Actioned

**Purpose:** Develop and execute risk treatment strategies that reduce AI risk to acceptable levels.

| Subcategory | Description |
|-------------|-------------|
| MG-2.1 | Risk treatment options are identified (mitigate, transfer, avoid, accept) |
| MG-2.2 | Treatment strategies are resourced and implemented |
| MG-2.3 | Emergency interventions (e.g., system shutdown) are defined for critical failures |
| MG-2.4 | Benefits of AI systems are preserved while reducing risks |

**Suggested Actions:**
- For each high-priority risk, identify one or more treatment options: technical (retrain, constrain, add human review), operational (restrict the use case), contractual (indemnification), or avoidance (decommission)
- Define a "kill switch" or emergency shutdown procedure for AI systems affecting safety
- Document benefit-risk tradeoffs for any accepted risk

---

### MG-3: Risk Responses Monitored and Adjusted

**Purpose:** Ensure risk treatments remain effective over time and adapt to changing conditions.

| Subcategory | Description |
|-------------|-------------|
| MG-3.1 | Effectiveness of risk treatments is monitored using defined metrics |
| MG-3.2 | AI incidents are documented, reported, and investigated |
| MG-3.3 | Lessons learned from incidents are applied to future risk management |
| MG-3.4 | Stakeholders are notified of significant AI risks or incidents |

**Suggested Actions:**
- Implement an AI incident log with severity classification (low/medium/high/critical)
- Define notification thresholds: internal escalation, external customer notification, regulatory disclosure
- Conduct post-incident reviews and update the risk register and GOVERN policies
- Share anonymized incident learnings with industry where appropriate (e.g., AI incident databases)
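
The first two actions above can be sketched as a minimal structure. The severity scale is the one named in the action; the field names and notification thresholds are illustrative assumptions, not AI RMF requirements.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class Incident:
    system: str
    summary: str
    severity: Severity
    occurred: date
    resolved: bool = False


def required_notifications(incident: Incident) -> list[str]:
    # Illustrative thresholds (assumption, tune per policy): HIGH triggers
    # customer notification, CRITICAL additionally triggers disclosure review.
    actions = ["internal escalation"]  # every logged incident escalates internally
    if incident.severity.value >= Severity.HIGH.value:
        actions.append("customer notification")
    if incident.severity is Severity.CRITICAL:
        actions.append("regulatory disclosure review")
    return actions


log: list[Incident] = [
    Incident("credit-model-v2", "Score drift beyond threshold",
             Severity.HIGH, date(2024, 5, 2)),
]
print(required_notifications(log[0]))  # ['internal escalation', 'customer notification']
```
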

---

### MG-4: Risk Treatment Reviewed and Improved

**Purpose:** Close the loop — feed treatment outcomes back into GOVERN and MAP for continuous improvement.

| Subcategory | Description |
|-------------|-------------|
| MG-4.1 | AI risk management processes are periodically reviewed for effectiveness |
| MG-4.2 | Improvements to AI risk management are identified and implemented |
| MG-4.3 | Lessons learned inform updates to organizational AI risk policies |
| MG-4.4 | The organization's AI risk profile is reviewed when significant changes occur |

**Suggested Actions:**
- Schedule quarterly AI risk programme reviews covering all four functions
- Use external assessment or third-party audit every 1–2 years to validate programme effectiveness
- Update GOVERN policies and MAP context documents following every major incident or model update
# NIST AI RMF — AI Risk Profiles, Metrics, and Cross-Framework Mapping

---

## AI Risk Profile Concept

An **AI Risk Profile** is an organization's customization of the AI RMF to reflect:
- Its specific AI use cases and deployment contexts
- Applicable laws and regulations
- Its defined risk tolerance
- The trustworthiness properties most relevant to its AI systems

The AI RMF defines two profile types:

| Profile Type | Description | Use |
|-------------|-------------|-----|
| **Current Profile** | Where the organization is today — which categories are implemented and to what degree | Baseline assessment |
| **Target Profile** | Where the organization wants to be — desired state for each category | Gap analysis and roadmap |

The gap between the Current and Target Profiles drives the risk management roadmap.

### How to Build an AI Risk Profile

1. **Scope** — Define which AI systems are in scope (all AI, specific high-risk systems, or a single system)
2. **Assess Current State** — Rate each of the 19 categories: Not Started (0) / Partial (1) / Implemented (2) / Optimized (3)
3. **Set Target State** — Define the desired maturity level for each category based on risk tolerance and regulatory requirements
4. **Gap Analysis** — Categories where Target > Current are gaps requiring action
5. **Prioritise** — Weight gaps by the risk they represent; address the highest-risk gaps first
6. **Roadmap** — Assign owners, timelines, and resources to close each gap
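
Steps 2–5 above can be sketched in a few lines. The category names, ratings, and risk weights below are hypothetical examples, not values prescribed by the framework.

```python
# Current/Target maturity per AI RMF category: 0=Not Started .. 3=Optimized.
# Categories and ratings are hypothetical examples.
current = {"GOVERN-1": 2, "MAP-1": 1, "MEASURE-2": 0, "MANAGE-3": 1}
target  = {"GOVERN-1": 3, "MAP-1": 3, "MEASURE-2": 2, "MANAGE-3": 3}

# Illustrative risk weights used to prioritise gaps (assumption, not from the RMF).
risk_weight = {"GOVERN-1": 1, "MAP-1": 2, "MEASURE-2": 3, "MANAGE-3": 3}

# Step 4: categories where Target > Current are gaps requiring action.
gaps = {c: target[c] - current[c] for c in current if target[c] > current[c]}

# Step 5: order the roadmap by gap size weighted by risk.
roadmap = sorted(gaps, key=lambda c: gaps[c] * risk_weight[c], reverse=True)
print(roadmap)  # highest-priority gaps first
```
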

---

## Trustworthy AI Characteristics — Metrics and Indicators

### 1. Accuracy and Validity
| Metric | Description |
|--------|-------------|
| Precision / Recall / F1 | Classification accuracy metrics |
| AUC-ROC | Discriminative performance |
| Mean Absolute Error (MAE) | Regression performance |
| Calibration Error | Alignment between predicted probability and actual frequency |
| Out-of-distribution (OOD) performance | Performance on inputs outside the training distribution |
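
The Calibration Error row can be illustrated with a minimal expected calibration error (ECE) computation; the bin count and toy data are assumptions.

```python
# Minimal expected calibration error (ECE): bin predictions by confidence,
# then compare mean predicted probability with observed accuracy per bin.
def expected_calibration_error(probs, labels, n_bins=5):
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the last bin
        bins[idx].append((p, y))
    ece, n = 0.0, len(probs)
    for b in bins:
        if not b:
            continue
        avg_conf = sum(p for p, _ in b) / len(b)
        accuracy = sum(y for _, y in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece


# Well-calibrated toy data: predictions of 0.9 that are correct 90% of the time.
probs = [0.9] * 10
labels = [1] * 9 + [0]
print(round(expected_calibration_error(probs, labels), 3))  # 0.0
```
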

### 2. Fairness and Bias Management
| Metric | Description |
|--------|-------------|
| Demographic Parity | Equal positive prediction rates across groups |
| Equalized Odds | Equal true positive and false positive rates across groups |
| Counterfactual Fairness | Would the outcome change if a sensitive attribute changed? |
| Disparate Impact Ratio | Ratio of positive outcome rates between groups (a ratio ≥ 0.8 satisfies the EEOC "four-fifths rule") |
| Disaggregated performance reporting | Performance broken down by demographic subgroup |
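
The Disparate Impact Ratio row can be computed directly; the group outcomes below are toy data, and the 0.8 threshold is the four-fifths rule from the table.

```python
# Disparate impact ratio for a binary decision across two groups.
def positive_rate(decisions):
    return sum(decisions) / len(decisions)


def disparate_impact(group_a, group_b):
    """Ratio of positive-outcome rates; >= 0.8 passes the four-fifths rule."""
    rate_a, rate_b = positive_rate(group_a), positive_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)


group_a = [1, 1, 0, 1, 0]  # 60% positive outcomes (toy data)
group_b = [1, 0, 0, 1, 0]  # 40% positive outcomes (toy data)
ratio = disparate_impact(group_a, group_b)
print(round(ratio, 2), "pass" if ratio >= 0.8 else "fail")  # 0.67 fail
```
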

### 3. Explainability and Interpretability
| Method | Description |
|--------|-------------|
| SHAP (SHapley Additive exPlanations) | Global and local feature attribution |
| LIME (Local Interpretable Model-agnostic Explanations) | Local approximation of complex models |
| Counterfactual explanations | "What would need to change for a different outcome?" |
| Model cards | Standardized summary of model performance, intended use, and limitations |
| Saliency maps | Visual highlighting of input features driving image model decisions |

### 4. Robustness and Reliability
| Metric | Description |
|--------|-------------|
| Adversarial accuracy | Performance under evasion attacks (FGSM, PGD) |
| Poisoning resilience | Resistance to training data manipulation |
| Input perturbation sensitivity | Performance stability under small input variations |
| Availability under load | System uptime and latency under peak conditions |
| Model drift detection | Statistical tests (PSI, KS test) on input data distribution over time |
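
The drift-detection row mentions PSI; a minimal Population Stability Index over pre-binned score histograms might look like the sketch below. The histograms are toy data, and the "PSI > 0.2 signals material drift" rule of thumb is a common convention, not a framework requirement.

```python
import math

# Population Stability Index (PSI) over pre-binned score distributions.
def psi(expected, actual, eps=1e-6):
    e_total, a_total = sum(expected), sum(actual)
    value = 0.0
    for e, a in zip(expected, actual):
        e_pct = max(e / e_total, eps)  # eps guards against empty bins
        a_pct = max(a / a_total, eps)
        value += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return value


baseline = [100, 300, 400, 200]   # training-time score histogram (toy)
current  = [150, 250, 350, 250]   # production score histogram (toy)
print(round(psi(baseline, current), 4))  # 0.0472 — below the 0.2 drift threshold
```
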

### 5. Privacy
| Approach | Description |
|----------|-------------|
| Differential Privacy (ε) | Mathematical privacy guarantee; lower ε = stronger privacy |
| k-Anonymity | Minimum group size in training data to prevent re-identification |
| Federated learning | Training on decentralized data without centralizing raw data |
| Membership inference attack resistance | Resistance to inferring whether a record was in the training data |
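
The k-Anonymity row can be checked mechanically: every combination of quasi-identifiers must appear at least k times. The records and quasi-identifier choice below are hypothetical.

```python
from collections import Counter

# k-anonymity: the smallest group of records sharing the same
# quasi-identifier values determines k for the dataset.
def k_anonymity(records, quasi_identifiers):
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())


records = [  # hypothetical training records
    {"zip": "90210", "age_band": "30-39", "diagnosis": "A"},
    {"zip": "90210", "age_band": "30-39", "diagnosis": "B"},
    {"zip": "10001", "age_band": "40-49", "diagnosis": "A"},
]
k = k_anonymity(records, ["zip", "age_band"])
print(k)  # the (10001, 40-49) group has a single record, so k = 1
```

A k of 1 means at least one individual is uniquely identifiable from the quasi-identifiers alone; generalizing or suppressing values raises k.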

### 6. Security
| Threat | Description |
|--------|-------------|
| Evasion attacks | Adversarial inputs crafted to fool the deployed model |
| Poisoning attacks | Malicious training data injected to corrupt model behaviour |
| Model extraction / inversion | Reverse-engineering the model or reconstructing training data from outputs |
| Prompt injection (LLMs) | Malicious input that hijacks model behaviour through instructions |

---

## Sector-Specific AI Risk Considerations

| Sector | Key AI Risk Priorities | Relevant Regulations |
|--------|----------------------|---------------------|
| **Healthcare** | Safety (diagnosis errors), bias across patient populations, explainability for clinical decisions | HIPAA, FDA AI/ML-based SaMD guidance, EU AI Act (high-risk: medical devices) |
| **Financial Services** | Credit decision fairness, model explainability (adverse action notices), fraud detection accuracy | ECOA, Fair Housing Act, CFPB guidance, EU AI Act (high-risk: credit scoring) |
| **HR / Recruitment** | Hiring bias, EEOC disparate impact, explainability of screening decisions | EEOC guidance, NYC Local Law 144, EU AI Act (high-risk: employment) |
| **Criminal Justice** | Recidivism prediction bias, due process, transparency | EU AI Act (prohibited: real-time biometric identification, social scoring) |
| **Government / Public Sector** | Transparency, equal access, civil rights compliance | Executive Order 13960, EO 14110, EU AI Act (public authority deployments) |
| **Education** | Bias in admissions/grading, student privacy | FERPA, COPPA, state AI-in-education laws |
| **Autonomous Systems** | Physical safety, fault tolerance, human override | ISO 26262 (automotive), DO-178C (aviation), EU AI Act (high-risk) |

---

## Cross-Framework Mapping

### NIST AI RMF ↔ ISO/IEC 42001:2023

| AI RMF Function/Category | ISO 42001 Equivalent |
|--------------------------|---------------------|
| GOVERN 1 (Policies in place) | Clause 5 (Leadership), Clause 6 (Planning), A.2 (AI policy) |
| GOVERN 2 (Accountability) | Clause 5.3 (Roles and responsibilities), A.2.3 |
| GOVERN 3 (Roles) | Clause 5.3, A.2.5 (Responsibilities for AI system impact) |
| GOVERN 4 (Cross-functional teams) | Clause 7.1 (Resources), A.2.5 |
| GOVERN 5 (Risk tolerance) | Clause 6.1 (Risk and opportunity), A.5.2 (AI risk assessment) |
| MAP 1 (Context) | Clause 4 (Context of the organization), A.3 (Internal/external context) |
| MAP 2 (Scientific understanding) | A.6 (AI system lifecycle) |
| MAP 3 (Stakeholder risk/benefit) | Clause 4.2 (Interested parties), A.8.4 (Impact assessment) |
| MAP 5 (Likelihood/impact) | A.5.2 (AI risk assessment methodology) |
| MEASURE 2 (System evaluation) | A.6.2 (AI system design), A.10 (Use of AI systems) |
| MEASURE 3 (Ongoing monitoring) | Clause 9.1 (Monitoring and measurement), A.6.2.5 |
| MANAGE 2 (Treatment strategies) | Clause 6.1.3 (AI risk treatment), A.5.3 |
| MANAGE 3 (Incident response) | A.9 (Performance evaluation), Clause 10 (Improvement) |
| MANAGE 4 (Review and improve) | Clause 10.2 (Nonconformity), Clause 9.3 (Management review) |

---

### NIST AI RMF ↔ EU AI Act (Regulation (EU) 2024/1689)

| AI RMF Function | EU AI Act Requirement |
|----------------|----------------------|
| GOVERN 1 (AI risk policies) | Art. 9 (Risk management system) for high-risk AI |
| GOVERN 2/3 (Accountability) | Art. 16 (Obligations of high-risk AI providers), Art. 26 (Deployer obligations) |
| MAP 1 (Intended use) | Art. 9(2) — risk management must cover the intended purpose and reasonably foreseeable misuse |
| MAP 3 (Stakeholder mapping) | Art. 9(2)(b) — identification and analysis of known and foreseeable risks |
| MEASURE 2 (System evaluation) | Art. 10 (Data governance), Art. 15 (Accuracy, robustness, cybersecurity) |
| MEASURE 3 (Ongoing monitoring) | Art. 72 (Post-market monitoring), Art. 26(5) — deployer monitoring obligations |
| MANAGE 3 (Incident response) | Art. 73 (Reporting of serious incidents to market surveillance) |
| All functions | Annex IV (Technical documentation requirements for high-risk AI systems) |

**Key difference:** The EU AI Act is mandatory for in-scope providers and deployers; the NIST AI RMF is voluntary. Organizations subject to the EU AI Act can use the NIST AI RMF as a structured methodology to help meet Art. 9's requirement for an "appropriate" risk management system, though following the RMF does not by itself guarantee compliance.

---

### NIST AI RMF ↔ NIST CSF 2.0

| AI RMF Function | NIST CSF 2.0 Function |
|----------------|----------------------|
| GOVERN | GV (Govern) — directly analogous |
| MAP | ID (Identify) — risk identification |
| MEASURE | DE (Detect) + ID (Identify) |
| MANAGE | RS (Respond) + RC (Recover) + PR (Protect) |

**Relationship:** NIST CSF 2.0 covers cybersecurity risk broadly; the NIST AI RMF extends this specifically to AI risks including bias, fairness, explainability, and societal harms, which are outside the CSF's scope. Organizations should implement both: the CSF for cybersecurity risk, the AI RMF for AI-specific risks.

---

### NIST AI RMF ↔ NIST Privacy Framework

| AI RMF Category | Privacy Framework Core |
|----------------|----------------------|
| GOVERN (Regulatory alignment) | GV.PO-P (Governance policies, processes, and procedures) |
| MAP 1 (Context) | CT.DM-P (Data processing management) |
| MAP 3 (Affected individuals) | CT.DP-P (Disassociated processing) |
| MEASURE 2 (Privacy evaluation) | CM.PO-P (Communication policies) |
| MANAGE 3 (Incident response) | Deferred to the companion NIST CSF (RS.CO, response communications); the Privacy Framework Core has no Respond function |

---

## AI RMF Implementation Tiers

Borrowing the NIST CSF Implementation Tiers concept, AI RMF adoption can be described at four organizational maturity levels:

| Tier | Name | Description |
|------|------|-------------|
| 1 | **Partial** | Ad hoc AI risk practices; limited awareness; reactive to AI incidents |
| 2 | **Risk Informed** | Approved AI risk policies exist; practices not fully organization-wide; awareness at management level |
| 3 | **Repeatable** | AI risk management formally documented, consistently applied, regularly reviewed |
| 4 | **Adaptive** | Organization learns from AI risk experience; proactively updates practices; contributes to sector knowledge |

Most organizations begin at Tier 1–2. Target Tier 3 for regulated contexts; Tier 4 for AI-intensive industries.

---

## Common Gap Patterns and Remediation Priorities

| Gap Pattern | Likely Cause | Recommended Action |
|-------------|-------------|-------------------|
| GOVERN complete but MAP/MEASURE weak | Policies written but not operationalized | Run system-level AI risk assessments using MAP for each deployed AI system |
| MAP done but MEASURE absent | Risk identified but not measured | Instrument deployed models with monitoring; define metrics for each identified risk |
| MEASURE present but no MANAGE actions | Measurements not connected to treatment | Establish measurement-to-action protocols; assign risk owners to register entries |
| Inconsistent across business units | No centralized AI risk programme | Establish a central AI governance function with a cross-BU working group |
| Strong technical controls, weak societal risk view | Engineering-led programme | Add legal, ethics, and affected-community perspectives to MAP 3 and MEASURE 2 |
| No lifecycle coverage | Only deployment-phase focus | Extend AI RMF coverage to design, development, and decommissioning stages |