@open-agreements/open-agreements 0.2.2 → 0.3.1
- package/README.md +30 -0
- package/content/templates/closing-checklist/template.docx +0 -0
- package/content/templates/common-paper-ai-addendum/README.md +18 -0
- package/content/templates/common-paper-ai-addendum/metadata.yaml +136 -0
- package/content/templates/common-paper-ai-addendum/replacements.json +5 -0
- package/content/templates/common-paper-ai-addendum/selections.json +62 -0
- package/content/templates/common-paper-ai-addendum/template.docx +0 -0
- package/content/templates/common-paper-ai-addendum-in-app/metadata.yaml +88 -0
- package/content/templates/common-paper-ai-addendum-in-app/replacements.json +5 -0
- package/content/templates/common-paper-ai-addendum-in-app/selections.json +62 -0
- package/content/templates/common-paper-amendment/README.md +18 -0
- package/content/templates/common-paper-amendment/metadata.yaml +48 -0
- package/content/templates/common-paper-amendment/template.docx +0 -0
- package/content/templates/common-paper-business-associate-agreement/README.md +20 -1
- package/content/templates/common-paper-business-associate-agreement/metadata.yaml +111 -3
- package/content/templates/common-paper-business-associate-agreement/replacements.json +2 -1
- package/content/templates/common-paper-business-associate-agreement/selections.json +38 -0
- package/content/templates/common-paper-business-associate-agreement/template.docx +0 -0
- package/content/templates/common-paper-cloud-service-agreement/README.md +18 -0
- package/content/templates/common-paper-cloud-service-agreement/metadata.yaml +48 -0
- package/content/templates/common-paper-cloud-service-agreement/template.docx +0 -0
- package/content/templates/common-paper-csa-with-ai/README.md +18 -0
- package/content/templates/common-paper-csa-with-ai/metadata.yaml +462 -2
- package/content/templates/common-paper-csa-with-ai/replacements.json +5 -2
- package/content/templates/common-paper-csa-with-ai/selections.json +291 -0
- package/content/templates/common-paper-csa-with-ai/template.docx +0 -0
- package/content/templates/common-paper-csa-with-sla/README.md +18 -0
- package/content/templates/common-paper-csa-with-sla/metadata.yaml +387 -2
- package/content/templates/common-paper-csa-with-sla/replacements.json +4 -2
- package/content/templates/common-paper-csa-with-sla/selections.json +257 -0
- package/content/templates/common-paper-csa-with-sla/template.docx +0 -0
- package/content/templates/common-paper-csa-without-sla/README.md +18 -0
- package/content/templates/common-paper-csa-without-sla/metadata.yaml +380 -2
- package/content/templates/common-paper-csa-without-sla/replacements.json +5 -2
- package/content/templates/common-paper-csa-without-sla/selections.json +250 -0
- package/content/templates/common-paper-csa-without-sla/template.docx +0 -0
- package/content/templates/common-paper-data-processing-agreement/README.md +16 -0
- package/content/templates/common-paper-data-processing-agreement/metadata.yaml +397 -3
- package/content/templates/common-paper-data-processing-agreement/replacements.json +2 -1
- package/content/templates/common-paper-data-processing-agreement/selections.json +211 -0
- package/content/templates/common-paper-data-processing-agreement/template.docx +0 -0
- package/content/templates/common-paper-design-partner-agreement/README.md +18 -0
- package/content/templates/common-paper-design-partner-agreement/metadata.yaml +99 -3
- package/content/templates/common-paper-design-partner-agreement/selections.json +27 -0
- package/content/templates/common-paper-design-partner-agreement/template.docx +0 -0
- package/content/templates/common-paper-independent-contractor-agreement/README.md +18 -0
- package/content/templates/common-paper-independent-contractor-agreement/clean.json +8 -0
- package/content/templates/common-paper-independent-contractor-agreement/metadata.yaml +52 -0
- package/content/templates/common-paper-independent-contractor-agreement/replacements.json +3 -0
- package/content/templates/common-paper-independent-contractor-agreement/template.docx +0 -0
- package/content/templates/common-paper-letter-of-intent/README.md +18 -0
- package/content/templates/common-paper-letter-of-intent/metadata.yaml +48 -0
- package/content/templates/common-paper-letter-of-intent/template.docx +0 -0
- package/content/templates/common-paper-mutual-nda/README.md +29 -7
- package/content/templates/common-paper-mutual-nda/metadata.yaml +48 -0
- package/content/templates/common-paper-mutual-nda/template.docx +0 -0
- package/content/templates/common-paper-one-way-nda/README.md +13 -0
- package/content/templates/common-paper-one-way-nda/metadata.yaml +24 -0
- package/content/templates/common-paper-one-way-nda/selections.json +38 -0
- package/content/templates/common-paper-one-way-nda/template.docx +0 -0
- package/content/templates/common-paper-order-form/README.md +18 -0
- package/content/templates/common-paper-order-form/metadata.yaml +115 -3
- package/content/templates/common-paper-order-form/replacements.json +5 -2
- package/content/templates/common-paper-order-form/selections.json +56 -0
- package/content/templates/common-paper-order-form/template.docx +0 -0
- package/content/templates/common-paper-order-form-with-sla/README.md +18 -0
- package/content/templates/common-paper-order-form-with-sla/metadata.yaml +149 -3
- package/content/templates/common-paper-order-form-with-sla/replacements.json +6 -2
- package/content/templates/common-paper-order-form-with-sla/selections.json +64 -0
- package/content/templates/common-paper-order-form-with-sla/template.docx +0 -0
- package/content/templates/common-paper-partnership-agreement/README.md +18 -0
- package/content/templates/common-paper-partnership-agreement/metadata.yaml +293 -4
- package/content/templates/common-paper-partnership-agreement/replacements.json +5 -2
- package/content/templates/common-paper-partnership-agreement/selections.json +138 -0
- package/content/templates/common-paper-partnership-agreement/template.docx +0 -0
- package/content/templates/common-paper-pilot-agreement/README.md +18 -0
- package/content/templates/common-paper-pilot-agreement/metadata.yaml +48 -0
- package/content/templates/common-paper-pilot-agreement/template.docx +0 -0
- package/content/templates/common-paper-professional-services-agreement/README.md +18 -0
- package/content/templates/common-paper-professional-services-agreement/metadata.yaml +338 -4
- package/content/templates/common-paper-professional-services-agreement/replacements.json +7 -4
- package/content/templates/common-paper-professional-services-agreement/selections.json +207 -0
- package/content/templates/common-paper-professional-services-agreement/template.docx +0 -0
- package/content/templates/common-paper-statement-of-work/README.md +18 -0
- package/content/templates/common-paper-statement-of-work/metadata.yaml +110 -2
- package/content/templates/common-paper-statement-of-work/replacements.json +4 -1
- package/content/templates/common-paper-statement-of-work/selections.json +55 -0
- package/content/templates/common-paper-statement-of-work/template.docx +0 -0
- package/content/templates/common-paper-term-sheet/README.md +18 -0
- package/content/templates/common-paper-term-sheet/metadata.yaml +48 -0
- package/content/templates/common-paper-term-sheet/template.docx +0 -0
- package/content/templates/working-group-list/template.docx +0 -0
- package/dist/commands/checklist.d.ts.map +1 -1
- package/dist/commands/checklist.js +2 -1
- package/dist/commands/checklist.js.map +1 -1
- package/dist/commands/list.d.ts.map +1 -1
- package/dist/commands/list.js +1 -46
- package/dist/commands/list.js.map +1 -1
- package/dist/core/checklist/format-checklist-docx.d.ts +10 -0
- package/dist/core/checklist/format-checklist-docx.d.ts.map +1 -0
- package/dist/core/checklist/format-checklist-docx.js +321 -0
- package/dist/core/checklist/format-checklist-docx.js.map +1 -0
- package/dist/core/checklist/index.d.ts +1 -0
- package/dist/core/checklist/index.d.ts.map +1 -1
- package/dist/core/checklist/index.js +7 -3
- package/dist/core/checklist/index.js.map +1 -1
- package/dist/core/engine.d.ts +1 -0
- package/dist/core/engine.d.ts.map +1 -1
- package/dist/core/engine.js +72 -11
- package/dist/core/engine.js.map +1 -1
- package/dist/core/selector.d.ts +2 -0
- package/dist/core/selector.d.ts.map +1 -1
- package/dist/core/selector.js +181 -39
- package/dist/core/selector.js.map +1 -1
- package/dist/core/template-listing.d.ts +40 -0
- package/dist/core/template-listing.d.ts.map +1 -0
- package/dist/core/template-listing.js +91 -0
- package/dist/core/template-listing.js.map +1 -0
- package/dist/core/validation/template.d.ts.map +1 -1
- package/dist/core/validation/template.js +10 -2
- package/dist/core/validation/template.js.map +1 -1
- package/dist/index.d.ts +2 -0
- package/dist/index.d.ts.map +1 -1
- package/dist/index.js +4 -0
- package/dist/index.js.map +1 -1
- package/package.json +8 -2
- package/skills/iso-27001-evidence-collection/CONNECTORS.md +25 -9
- package/skills/iso-27001-evidence-collection/SKILL.md +10 -6
- package/skills/iso-27001-internal-audit/CONNECTORS.md +25 -9
- package/skills/iso-27001-internal-audit/SKILL.md +12 -9
- package/skills/soc2-readiness/CONNECTORS.md +25 -9
- package/skills/soc2-readiness/SKILL.md +17 -5
- package/skills/soc2-readiness/rules/change-vendor-management.md +104 -0
- package/skills/soc2-readiness/rules/communication-info.md +85 -0
- package/skills/soc2-readiness/rules/control-activities.md +95 -0
- package/skills/soc2-readiness/rules/control-environment.md +126 -0
- package/skills/soc2-readiness/rules/logical-access.md +264 -0
- package/skills/soc2-readiness/rules/monitoring-activities.md +66 -0
- package/skills/soc2-readiness/rules/optional-categories.md +264 -0
- package/skills/soc2-readiness/rules/privacy-criteria.md +359 -0
- package/skills/soc2-readiness/rules/risk-assessment.md +100 -0
- package/skills/soc2-readiness/rules/system-operations.md +170 -0
- package/skills/soc2-readiness/rules/trust-services.md +0 -230
@@ -0,0 +1,264 @@
# Optional Categories — Availability, Processing Integrity, Confidentiality

Per-criterion audit guidance for non-Security trust categories. Include these when your scope requires them (see decision tree in SKILL.md).

## A 1.1 — Availability monitoring

**Priority**: High | **NIST**: SC-5, SI-4 | **ISO**: A.8.6, A.8.16

Auditors verify that the organization monitors system capacity and availability against commitments made to customers. If your SLA promises 99.9% uptime, you need data showing you measured and met that target. Monitoring must be proactive (alerting before outages), not just reactive (noticing when customers complain).

**What auditors test**:
- Uptime monitoring configured for all customer-facing services with alerting thresholds
- Capacity metrics tracked: CPU, memory, storage, network — with alerts before exhaustion
- Status page maintained for customers showing current and historical availability
- SLA tracking: actual availability measured and compared against contractual commitments
- Capacity planning process: evidence of scaling decisions made before capacity limits are reached

**Evidence to prepare**:
```bash
# GCP: uptime checks
gcloud monitoring uptime list-configs --format=json | jq '.[] | {displayName, monitoredResource, period}'

# GCP: alerting policies for availability
gcloud monitoring policies list --format=json | jq '.[] | select(.displayName | test("uptime|availability|latency"; "i")) | .displayName'

# Azure: availability metrics
az monitor metrics list --resource {resource_id} --metric "Availability" --output json
```
- Status page URL and historical uptime data
- SLA attainment report for the audit period
- Capacity planning documentation or auto-scaling configuration
- Incident records for any availability disruptions during the audit period

**Startup pitfalls**:
- SLA promised to customers but no mechanism to measure actual uptime
- Monitoring exists for the application but not for its dependencies (database, cache, CDN)
- No status page — customers learn about outages from Twitter before the company communicates

---

## A 1.2 — Recovery infrastructure

**Priority**: High | **NIST**: CP-2, CP-6, CP-7 | **ISO**: A.5.30, A.8.14

Auditors verify that infrastructure supports recovery from disruptions — redundancy, failover, and geographic distribution proportionate to availability commitments. The question isn't whether you're multi-region; it's whether your architecture matches your stated RTO/RPO.

**What auditors test**:
- Production infrastructure has redundancy appropriate to SLA commitments
- Multi-AZ or multi-region deployment for critical services (verify configuration, not just documentation)
- Auto-scaling configured to handle demand spikes without manual intervention
- Failover mechanisms tested: load balancer health checks, database replica promotion
- Environmental protections: cloud provider's physical redundancy covered by their SOC 2 report

**Evidence to prepare**:
```bash
# GCP: instance groups and auto-scaling
gcloud compute instance-groups managed list --format=json | jq '.[] | {name, zone, targetSize}'
gcloud compute instance-groups managed describe {group} --zone={zone} --format=json | jq '.autoscaler'

# GCP: Cloud SQL high availability
gcloud sql instances describe {instance} --format=json | jq '{availabilityType, region, gceZone, secondaryGceZone}'

# Azure: availability sets and zones
az vm availability-set list --output json | jq '.[] | {name, platformFaultDomainCount}'
```
- Architecture diagram showing redundancy and failover paths
- Auto-scaling configuration documentation
- Cloud provider SOC 2 report sections covering physical and environmental protections

---

## A 1.3 — Recovery testing

**Priority**: High | **NIST**: CP-4 | **ISO**: A.5.30

Auditors verify that recovery capabilities are tested — not just designed. An untested DR plan is assumed to be broken. Expect auditors to ask for the date, scope, and results of your most recent recovery test.

**What auditors test**:
- At least one recovery test conducted during the audit period
- Test scope covers critical systems (not just a non-critical staging restore)
- Test results documented: actual recovery time vs. RTO, data loss vs. RPO, issues encountered
- Lessons learned captured and applied (if the test revealed problems, they were fixed)
- Test participants include operations team members who would execute a real recovery

**Evidence to prepare**:
- Recovery test report: date, scope, participants, procedure followed
- Test metrics: actual recovery time, data point recovered to, success/failure criteria
- Issues log from test execution with resolution status
- Post-test action items and their completion status
- Year-over-year comparison showing improvement (if multiple tests)

**Startup pitfalls**:
- "We restored a backup once" doesn't count if it wasn't documented with timing and results
- Test only covers database restore, not full application recovery
- DR plan tested in staging but production has a different configuration
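
A recovery drill that produces auditable timing evidence can be as simple as a wrapper script. A minimal sketch, assuming a one-hour RTO; `RESTORE_CMD` and `RTO_SECONDS` are placeholders you would replace with your real restore procedure and commitment:

```shell
#!/usr/bin/env bash
# Hypothetical drill wrapper: times a restore procedure and records the
# result against a stated RTO. RESTORE_CMD and RTO_SECONDS are placeholders.
set -euo pipefail

RTO_SECONDS=3600                        # assumed RTO: 1 hour
RESTORE_CMD=${RESTORE_CMD:-"sleep 2"}   # stand-in for the real restore

start=$(date +%s)
bash -c "$RESTORE_CMD"
elapsed=$(( $(date +%s) - start ))

echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) recovery drill: ${elapsed}s (RTO ${RTO_SECONDS}s)" | tee -a drill-log.txt
if [ "$elapsed" -le "$RTO_SECONDS" ]; then
  echo "RESULT: within RTO" | tee -a drill-log.txt
else
  echo "RESULT: RTO missed" | tee -a drill-log.txt
fi
```

Committing `drill-log.txt` (or filing it with the test report) yields the dated timing record auditors ask for.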

---

## PI 1.1 — Processing completeness

**Priority**: Medium | **NIST**: SI-10 | **ISO**: A.8.28

Auditors verify that the system processes all transactions completely — nothing is lost, duplicated, or partially processed. For data processing services, this means input validation, transaction tracking, and reconciliation controls.

**What auditors test**:
- Input validation: system rejects malformed data and reports errors to the sender
- Transaction completeness: mechanism to detect dropped or stuck transactions (dead letter queues, retry tracking)
- Batch processing: start/end record counts are reconciled
- Error handling: failed transactions are logged, alerted, and reprocessed or escalated
- Idempotency: reprocessing the same input doesn't create duplicates

**Evidence to prepare**:
- Input validation rules documentation for key data interfaces
- Transaction monitoring dashboard showing throughput and error rates
- Dead letter queue or error queue monitoring configuration
- Reconciliation procedures for batch processing
- Sample error reports showing detection and resolution
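
The batch record-count reconciliation can start as a few lines of shell. A sketch with invented file names and data, assuming one record per line:

```shell
# Sketch: reconcile input vs. output record counts for a batch run.
# File names and contents are illustrative stand-ins.
printf 'rec1\nrec2\nrec3\n' > batch_input.txt
printf 'rec1\nrec2\nrec3\n' > batch_output.txt

in_count=$(wc -l < batch_input.txt)
out_count=$(wc -l < batch_output.txt)

if [ "$in_count" -eq "$out_count" ]; then
  echo "reconciled: ${in_count} in, ${out_count} out"
else
  echo "MISMATCH: ${in_count} in vs ${out_count} out" >&2
  exit 1
fi
```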

---

## PI 1.2 — Processing accuracy

**Priority**: Medium | **NIST**: SI-10, SI-11 | **ISO**: A.8.28

Auditors verify that processing produces accurate results — calculations are correct, data transformations are faithful, and outputs match expected values. Accuracy controls prevent garbage-in/garbage-out scenarios.

**What auditors test**:
- Data validation at processing boundaries: input checks, transformation verification, output validation
- Automated testing: unit tests covering calculation logic, integration tests for data pipelines
- Reference data integrity: lookup tables, configuration values, and mappings are version-controlled
- Reconciliation: output totals reconciled against input totals for batch or aggregate processing
- Exception reporting: anomalous results flagged for human review

**Evidence to prepare**:
- Test suite covering processing accuracy (test results from CI/CD)
- Data validation rules and boundary checks documentation
- Reconciliation reports for the audit period
- Exception reports and resolution records
- Change log for reference data modifications
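
For the totals reconciliation above, a minimal sum check is often enough to start. A sketch with invented amounts, assuming one numeric value per input line:

```shell
# Sketch: reconcile the sum of input amounts against the reported output
# total. Values and file names are illustrative.
printf '10\n20\n30\n' > input_amounts.txt
echo '60' > reported_total.txt

in_total=$(awk '{s += $1} END {print s}' input_amounts.txt)
out_total=$(cat reported_total.txt)

if [ "$in_total" -eq "$out_total" ]; then
  echo "totals reconcile at ${in_total}"
else
  echo "EXCEPTION: input ${in_total} vs output ${out_total}" >&2
  exit 1
fi
```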

---

## PI 1.3 — Processing timeliness

**Priority**: Medium | **NIST**: SI-10 | **ISO**: A.8.28

Auditors verify that processing occurs within committed timeframes. If your service promises real-time processing or next-day batch completion, you need evidence of measuring and meeting those targets.

**What auditors test**:
- Processing SLAs defined for each service or data flow
- Latency monitoring: actual processing times measured and tracked
- Alerting on SLA breaches: team is notified when processing exceeds committed timeframes
- Historical compliance: percentage of transactions processed within SLA during the audit period
- Escalation process for sustained processing delays

**Evidence to prepare**:
- Processing SLA definitions per service
- Latency monitoring dashboard or metrics export
- SLA compliance report for the audit period
- Alert configuration for processing delay thresholds
- Incident records for any SLA breaches and their resolution
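
The historical-compliance percentage can be computed straight from a latency log. A sketch assuming a 5-second SLA and one latency (in seconds) per line; both the threshold and the data are invented for illustration:

```shell
# Sketch: share of transactions processed within an assumed 5s SLA,
# computed from an illustrative latency log.
printf '1.2\n3.4\n6.1\n2.0\n' > latencies.txt

awk -v sla=5 '
  { n++; if ($1 <= sla) ok++ }
  END { printf "%d of %d within SLA (%.1f%%)\n", ok, n, 100 * ok / n }
' latencies.txt
```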

---

## PI 1.4 — Output completeness and accuracy

**Priority**: Medium | **NIST**: SI-11 | **ISO**: A.8.28

Auditors verify that system outputs are complete, accurate, and delivered to the intended recipients. This is the end-to-end check — even if processing is correct internally, outputs must reach the right destination in the right format.

**What auditors test**:
- Output validation: system verifies outputs before delivery (record counts, checksums, format checks)
- Delivery confirmation: evidence that outputs reached intended recipients
- Output access control: only authorized recipients receive the output
- Output reconciliation: recipients can verify completeness against expected results
- Error handling for failed deliveries: retry, alert, and escalation mechanisms

**Evidence to prepare**:
- Output validation rules and procedures
- Delivery confirmation logs (API responses, email delivery receipts, file transfer logs)
- Output access control configuration
- Reconciliation procedures between sender and recipient
- Failed delivery handling procedures and sample incident records
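
Output validation by checksum is cheap to implement and easy to evidence. A sketch with a stand-in output file; in practice the producer ships the digest alongside the delivery and the recipient runs the verification step before accepting it:

```shell
# Sketch: producer records a digest of the output file; recipient
# verifies it before accepting delivery. report.csv is a stand-in.
printf 'id,amount\n1,10\n2,20\n' > report.csv

sha256sum report.csv > report.csv.sha256   # producer side
sha256sum -c report.csv.sha256             # recipient side
```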

---

## PI 1.5 — Data matching and reconciliation

**Priority**: Medium | **NIST**: SI-10 | **ISO**: A.8.28

Auditors verify that the organization reconciles data across systems and processing stages. When data flows between systems, discrepancies should be detected and resolved. This is especially relevant for financial data processing, multi-system architectures, and data synchronization.

**What auditors test**:
- Cross-system reconciliation procedures defined for key data flows
- Automated reconciliation: scheduled jobs that compare records across systems
- Exception reporting: discrepancies flagged for investigation and resolution
- Resolution tracking: reconciliation exceptions are logged, investigated, and resolved within defined timeframes
- Reconciliation frequency matches the criticality of the data flow

**Evidence to prepare**:
- Reconciliation procedure documentation for each key data flow
- Automated reconciliation job configuration and schedule
- Sample reconciliation reports showing matches and exceptions
- Exception resolution records for the audit period
- Reconciliation completion metrics (percentage matched, exceptions outstanding)
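
An automated cross-system reconciliation job can start as a sorted-ID diff. A sketch with invented IDs, assuming each system can export one record ID per line:

```shell
# Sketch: compare record IDs exported from two systems; anything unique
# to one side is a reconciliation exception. IDs are illustrative.
printf '101\n102\n103\n' | sort > system_a.ids
printf '101\n103\n104\n' | sort > system_b.ids

comm -23 system_a.ids system_b.ids > only_in_a.txt   # in A, missing from B
comm -13 system_a.ids system_b.ids > only_in_b.txt   # in B, missing from A

exceptions=$(cat only_in_a.txt only_in_b.txt | wc -l)
echo "reconciliation exceptions: ${exceptions}"
```

Scheduling this against real exports and alerting when the exception count is nonzero covers both the "automated reconciliation" and "exception reporting" points above.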

---

## C 1.1 — Confidential information protection

**Priority**: High | **NIST**: AC-21, SC-28 | **ISO**: A.5.14, A.8.24

Auditors verify that information classified as confidential is protected throughout its lifecycle — collection, processing, storage, and transmission. Protection must be proportionate to the classification level and consistent across all systems where confidential data resides.

**What auditors test**:
- Data classification scheme exists: at least 2-3 levels (e.g., public, internal, confidential)
- Confidential data identified and inventoried: the organization knows where its confidential data lives
- Encryption at rest for all stores containing confidential data (databases, file storage, backups)
- Access restricted based on classification: not everyone can access confidential data
- DLP or data handling procedures: controls to prevent unauthorized sharing or exfiltration

**Evidence to prepare**:
```bash
# GCP: encryption configuration for Cloud SQL
gcloud sql instances describe {instance} --format=json | jq '{diskEncryptionConfiguration, diskEncryptionStatus}'

# GCP: bucket encryption
gcloud storage buckets describe gs://{bucket} --format=json | jq '.encryption'

# Azure: storage encryption
az storage account show --name {account} --query '{encryption: encryption.services}' --output json
```
- Data classification policy with definitions for each level
- Data inventory or data flow diagram showing where confidential data resides
- Encryption configuration evidence for all stores with confidential data
- Access control lists for confidential data repositories
- Data handling procedures for sharing or transferring confidential information

**Startup pitfalls**:
- No data classification — everything treated the same regardless of sensitivity
- Classification scheme exists on paper but not implemented in systems (no labels, no access differentiation)
- Customer data not identified as confidential — stored in general-purpose repositories accessible to all engineers

---

## C 1.2 — Confidential information disposal

**Priority**: Medium | **NIST**: MP-6, SI-12 | **ISO**: A.7.14, A.8.10

Auditors verify that confidential information is disposed of securely when retention periods expire or when the information is no longer needed. Disposal must be verifiable — "we deleted it" without a log or procedure is insufficient.

**What auditors test**:
- Data retention policy defines retention periods for different data types
- Disposal procedures documented: what method is used for different media types
- Disposal records: evidence of data deletion or destruction (logs, certificates, screenshots)
- Automated retention enforcement: scheduled deletion jobs for data past retention period
- Verification: disposal method renders data irrecoverable (cryptographic erasure, overwrite, physical destruction)

**Evidence to prepare**:
- Data retention schedule (data type, retention period, disposal method)
- Disposal records from the audit period (database purge logs, media destruction certificates)
- Automated retention policy configuration (object lifecycle rules, database TTL settings)
- Hardware disposal records with certificates of destruction
- Policy covering disposal standards (reference NIST 800-88 for media sanitization)
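
In cloud object storage, automated retention enforcement is usually an object lifecycle rule. A sketch for GCS, assuming a 365-day retention period; the age value and bucket name are placeholders to adapt to your retention schedule:

```shell
# Sketch: GCS lifecycle rule that deletes objects older than an assumed
# 365-day retention period.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 365}
    }
  ]
}
EOF

# Apply to a bucket (placeholder name), then export the applied
# configuration as disposal-control evidence:
# gcloud storage buckets update gs://{bucket} --lifecycle-file=lifecycle.json
# gcloud storage buckets describe gs://{bucket} --format=json
```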

@@ -0,0 +1,359 @@
# Privacy Criteria — P 1.1 through P 8.1

Per-criterion audit guidance for the Privacy trust category. Include when the service processes personally identifiable information (PII).

## P 1.1 — Privacy notice at collection

**Priority**: High | **NIST**: — | **ISO**: A.5.34

Auditors verify that individuals are informed about the organization's privacy practices at or before the point of data collection. The privacy notice must be accessible, understandable, and presented before (not after) personal data is collected.

**What auditors test**:
- Privacy notice is presented at or before collection (not buried in a Terms of Service footer)
- Notice is accessible: prominent link on sign-up forms, data collection pages, and mobile apps
- Notice is understandable: written in plain language, not dense legalese
- Notice timing: presented before the user submits personal data, with opt-in where required
- Multi-channel: if data is collected via web, mobile, API, and phone — each channel has appropriate notice

**Evidence to prepare**:
- Privacy notice URL and screenshots showing placement at collection points
- Privacy notice text (current version with effective date)
- Screenshots of collection forms showing privacy notice link/checkbox before the submit button
- Mobile app privacy notice presentation (screenshot or screen recording)
- Privacy notice version history showing updates during the audit period

**Startup pitfalls**:
- Privacy notice exists but is only linked in the website footer — not at the actual collection point
- Notice says "we may collect" but doesn't specify what is actually collected
- Mobile app collects data without presenting the notice until after account creation

---

## P 1.2 — Cover all required disclosures

**Priority**: High | **NIST**: — | **ISO**: A.5.34

Auditors verify that the privacy notice covers all required disclosure topics — what data is collected, how it's used, who it's shared with, how long it's retained, and how individuals can exercise their rights. An incomplete notice is a finding even if a notice exists.

**What auditors test**:
- Notice identifies specific types of personal data collected (not just "we collect your information")
- Purpose of collection stated for each data type
- Categories of third parties with whom data is shared
- Retention periods disclosed (or criteria for determining retention)
- Individual rights described: access, correction, deletion, opt-out
- Contact information for privacy inquiries
- Cross-border transfer disclosures (if applicable)

**Evidence to prepare**:
- Privacy notice completeness checklist (mapping each required disclosure to a notice section)
- Data inventory mapping data types to collection purposes
- Third-party sharing inventory (categories, not individual vendor names)
- Comparison of privacy notice against applicable regulatory requirements (GDPR Articles 13/14, CCPA, etc.)

---

## P 2.1 — Consent and choice

**Priority**: High | **NIST**: — | **ISO**: A.5.34

Auditors verify that individuals have meaningful choice about how their personal data is used. Consent must be informed, specific, and freely given — pre-checked boxes and forced consent bundled with service access are red flags.

**What auditors test**:
- Consent mechanisms: opt-in for sensitive data, opt-out for marketing communications
- Consent is granular: individuals can consent to some uses while declining others
- Consent records: the organization can demonstrate when and how consent was obtained
- Withdrawal mechanism: individuals can revoke consent as easily as they gave it
- No "dark patterns": consent is not obtained through deceptive design

**Evidence to prepare**:
- Consent management platform configuration or consent collection screenshots
- Consent records sample showing timestamp, user, and specific consents granted
- Opt-out mechanism documentation (unsubscribe links, preference centers, account settings)
- Cookie consent banner configuration (if applicable)
- Consent withdrawal process documentation and sample withdrawal records

**Startup pitfalls**:
- Single "I agree to everything" checkbox — no granularity for different uses
- No consent records — impossible to prove what the user agreed to
- Opt-out requires emailing support instead of a self-service mechanism
|
|
79
|
+
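The consent-record evidence above implies a minimal data shape: per-purpose grants with timestamps and a withdrawal trail. A sketch in Python, assuming hypothetical field names (`purpose`, `source`, `withdrawn_at`) rather than any specific consent platform's schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical granular consent record: one entry per purpose, so a
# user can accept analytics while declining marketing.
@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                      # e.g. "marketing_email", "analytics"
    granted: bool
    source: str                       # where consent was captured (form, banner)
    recorded_at: datetime
    withdrawn_at: Optional[datetime] = None

    def is_active(self) -> bool:
        # Consent counts only if it was granted and not later withdrawn.
        return self.granted and self.withdrawn_at is None

def withdraw(record: ConsentRecord) -> ConsentRecord:
    # Withdrawal stamps the record rather than deleting it, preserving
    # the audit trail of what the user originally agreed to.
    record.withdrawn_at = datetime.now(timezone.utc)
    return record

# Example: marketing consent granted then withdrawn; analytics stays active.
marketing = ConsentRecord("u-123", "marketing_email", True,
                          "signup_form_v2", datetime.now(timezone.utc))
analytics = ConsentRecord("u-123", "analytics", True,
                          "cookie_banner", datetime.now(timezone.utc))
withdraw(marketing)
```

Keeping withdrawn records (instead of deleting them) is what lets you answer the auditor's "demonstrate when and how consent was obtained" question even after the user opts out.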
---

## P 3.1 — Collection limited to purpose

**Priority**: Medium | **NIST**: — | **ISO**: A.5.34

Auditors verify that the organization collects only the personal data necessary for the stated purposes — no more. Data minimization is the principle; collecting "just in case" data is a finding.

**What auditors test**:
- Data collection is limited to what's described in the privacy notice
- Each data field collected has a documented business purpose
- Optional vs. required fields are clearly distinguished in collection forms
- Data minimization reviews: periodic assessment of whether all collected data is still necessary
- New data collection requires privacy review before implementation

**Evidence to prepare**:
- Data inventory mapping each data element to its collection purpose
- Collection form screenshots showing required vs. optional fields
- Privacy impact assessment records for new features or data collection changes
- Data minimization review records (if conducted during audit period)

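The data inventory in the evidence list above can double as an automated check: any field a form collects that has no documented purpose is a minimization gap. A minimal sketch, with an illustrative inventory (the field names and purposes are hypothetical):

```python
# Hypothetical data inventory: each collected field maps to a documented
# business purpose; anything collected outside this inventory is a gap.
DATA_INVENTORY = {
    "email":        "account creation and login",
    "display_name": "profile personalization",
    "billing_zip":  "payment processing",
}

def collection_gaps(collected_fields):
    """Return fields collected without a documented purpose."""
    return sorted(f for f in collected_fields if f not in DATA_INVENTORY)

# A signup form that quietly collects a phone number would be flagged:
gaps = collection_gaps(["email", "display_name", "phone_number"])
```

Run against each collection form's field list, a check like this turns the "each data field has a documented business purpose" test into something you can put in CI rather than a manual review.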
---

## P 3.2 — Implicit and explicit consent

**Priority**: Medium | **NIST**: — | **ISO**: A.5.34

Auditors verify that the type of consent obtained matches the sensitivity of the data and the applicable regulatory requirements. Explicit consent (affirmative action) is required for sensitive data; implicit consent (reasonable inference from context) may suffice for basic processing.

**What auditors test**:
- Sensitive data categories identified (health, financial, biometric, children's data)
- Explicit consent obtained for sensitive data processing (not implied from continued use)
- Consent type appropriate for jurisdiction: GDPR requires explicit consent for certain processing; CCPA focuses on opt-out rights
- Consent renewal: long-running processing doesn't rely on stale consent from years ago
- Records distinguish between explicit and implicit consent

**Evidence to prepare**:
- Sensitive data inventory with consent type required for each
- Explicit consent collection mechanism (screenshots, UI flows)
- Consent validity tracking (when consent was obtained, when it expires or should be renewed)
- Jurisdiction-specific consent requirements matrix

---

## P 4.1 — Purpose-limited use and retention

**Priority**: High | **NIST**: — | **ISO**: A.5.34

Auditors verify that personal data is used only for the purposes disclosed at collection and retained only as long as necessary. Purpose creep — using data for new purposes without notice or consent — is a significant finding.

**What auditors test**:
- Data use matches the purposes stated in the privacy notice
- New uses of existing data trigger privacy review and updated notice/consent
- Access to personal data is restricted to personnel who need it for the stated purpose
- Purpose limitation enforced technically (access controls, data segregation) not just by policy
- Retention limits enforced: data is actually deleted when the retention period expires

**Evidence to prepare**:
- Purpose-to-use mapping (each purpose in privacy notice → actual processing activities)
- Privacy impact assessments for any new uses of personal data during the audit period
- Access control configuration for personal data stores
- Data retention schedule with evidence of enforcement (deletion logs, purge records)
- Sample of automated retention enforcement (database TTL, object lifecycle rules)

**Startup pitfalls**:
- Analytics tracking collects data for purposes not disclosed in the privacy notice
- Retention period defined in policy but no automated enforcement — data accumulates indefinitely
- Personal data accessible to all engineers via shared database access

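The purpose-to-use mapping described above is mechanically checkable: every processing activity must cite a purpose the privacy notice actually discloses. A sketch with hypothetical purposes and activities (the names are illustrative only):

```python
# Hypothetical purposes disclosed in the privacy notice.
NOTICE_PURPOSES = {"service delivery", "billing", "security monitoring"}

# Hypothetical register of processing activities, each citing a purpose.
PROCESSING_ACTIVITIES = [
    {"activity": "send invoices",   "purpose": "billing"},
    {"activity": "fraud detection", "purpose": "security monitoring"},
    {"activity": "train ad model",  "purpose": "advertising"},  # purpose creep
]

def purpose_creep(activities, disclosed):
    """Return activities whose cited purpose is not disclosed in the notice."""
    return [a["activity"] for a in activities if a["purpose"] not in disclosed]

undisclosed = purpose_creep(PROCESSING_ACTIVITIES, NOTICE_PURPOSES)
```

Anything this check surfaces is exactly the "new use without notice or consent" finding the section warns about; the fix is either a notice update with fresh consent, or stopping the processing.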
---

## P 4.2 — Retention and disposal schedule

**Priority**: Medium | **NIST**: — | **ISO**: A.5.34

Auditors verify that the organization has defined retention periods for personal data categories and disposes of data when those periods expire. The retention schedule should be based on legal requirements, business need, and individual expectations.

**What auditors test**:
- Retention schedule exists covering all personal data categories
- Retention periods are justified (legal requirement, contractual obligation, or documented business need)
- Disposal method is specified per data type (deletion, anonymization, physical destruction)
- Automated enforcement: scheduled jobs that purge expired data
- Disposal verification: evidence that data was actually disposed of per schedule

**Evidence to prepare**:
- Data retention schedule (data type, retention period, justification, disposal method)
- Automated retention enforcement configuration (database TTL, object lifecycle policies)
- Disposal execution logs from the audit period
- Exceptions log: data retained beyond schedule with documented justification
- Anonymization procedures (if used instead of deletion)

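A scheduled purge job like the one the "automated enforcement" test asks for can be very small. A sketch using an in-memory SQLite table; the table, column, and category names are hypothetical, and the returned row count is what you would write to the disposal log:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Hypothetical retention schedule: data category -> retention in days.
RETENTION_DAYS = {"support_tickets": 365, "marketing_leads": 90}

def purge_expired(conn, table, days, now=None):
    """Delete rows past retention; return the count for the disposal log."""
    now = now or datetime.now(timezone.utc)
    cutoff = (now - timedelta(days=days)).isoformat()
    # ISO-8601 timestamps in a uniform format compare correctly as strings.
    cur = conn.execute(f"DELETE FROM {table} WHERE created_at < ?", (cutoff,))
    conn.commit()
    return cur.rowcount  # evidence: how many records were disposed of

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE marketing_leads (id INTEGER, created_at TEXT)")
old = (datetime.now(timezone.utc) - timedelta(days=120)).isoformat()
new = datetime.now(timezone.utc).isoformat()
conn.executemany("INSERT INTO marketing_leads VALUES (?, ?)",
                 [(1, old), (2, new)])
purged = purge_expired(conn, "marketing_leads",
                       RETENTION_DAYS["marketing_leads"])
```

The key design point for auditors is the return value: logging the purge count per run turns "we have a policy" into "here is the disposal execution log for the audit period".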
---

## P 5.1 — Right of access

**Priority**: High | **NIST**: — | **ISO**: A.5.34

Auditors verify that individuals can request and receive a copy of their personal data. The access process must be documented, accessible, and responsive within regulatory timeframes (one month under GDPR and 45 days under CCPA, each extendable once under defined conditions).

**What auditors test**:
- Data subject access request (DSAR) process documented and accessible to individuals
- Request channel: easy to find (privacy page, account settings, email address)
- Identity verification: process to confirm the requester is the data subject
- Response completeness: all personal data across all systems is included in the response
- Response timeliness: fulfilled within applicable regulatory timeframes
- Request tracking: log of requests, dates, and response status

**Evidence to prepare**:
- DSAR procedure document
- Request intake form or mechanism (web form, email address, account setting)
- Identity verification process documentation
- DSAR tracking log from the audit period (if requests were received)
- Sample DSAR response format (redacted)
- Data discovery process: how all personal data across systems is located for a request

**Startup pitfalls**:
- No defined DSAR process — requests handled ad-hoc by whoever receives the email
- Data scattered across systems — unable to locate all personal data for a complete response
- No identity verification — anyone can request another person's data

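The DSAR tracking log above needs, at minimum, a due date per request and an overdue flag. A sketch with simplified deadlines (GDPR's one month approximated here as 30 days, CCPA as 45; extensions are ignored for the sketch):

```python
from datetime import date, timedelta

# Simplified deadline table; real deadlines depend on jurisdiction and
# available extensions, so treat these numbers as illustrative.
DEADLINE_DAYS = {"gdpr": 30, "ccpa": 45}

def dsar_due_date(received: date, regime: str) -> date:
    """Date by which the DSAR response must be sent."""
    return received + timedelta(days=DEADLINE_DAYS[regime])

def overdue(received: date, regime: str, today: date) -> bool:
    """True if the request has passed its response deadline."""
    return today > dsar_due_date(received, regime)

received = date(2024, 3, 1)
gdpr_due = dsar_due_date(received, "gdpr")
```

Even a spreadsheet computing these two columns satisfies the "request tracking" test; the point is that dates and status exist somewhere auditable, not that the tooling is sophisticated.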
---

## P 5.2 — Correction, amendment, and deletion

**Priority**: High | **NIST**: — | **ISO**: A.5.34

Auditors verify that individuals can request correction of inaccurate data and deletion of their data. The right to deletion is subject to exceptions (legal holds, regulatory requirements), but the process must exist and be documented.

**What auditors test**:
- Correction and deletion request processes documented alongside access requests
- Deletion scope: all systems, backups, and third-party copies addressed
- Exceptions documented: when deletion cannot be fulfilled (legal hold, regulatory retention)
- Confirmation provided to the individual when action is completed
- Third-party notification: vendors who received the data are notified of corrections/deletions

**Evidence to prepare**:
- Correction and deletion procedure documentation
- Deletion scope checklist (all systems where personal data resides)
- Exceptions matrix (when deletion cannot be fulfilled, with legal basis)
- Sample correction/deletion confirmation to individual
- Third-party notification procedure and evidence (if deletions require downstream action)

---

## P 6.1 — Disclosure to third parties

**Priority**: High | **NIST**: — | **ISO**: A.5.34

Auditors verify that personal data is shared with third parties only for disclosed purposes, with appropriate safeguards, and with the individual's knowledge. Every third party receiving personal data should be covered by a data processing agreement or equivalent contract.

**What auditors test**:
- Third-party data sharing inventory: all vendors and partners receiving personal data
- Sharing matches privacy notice disclosures (no undisclosed sharing)
- Data processing agreements (DPAs) in place with all processors of personal data
- Contractual security requirements: DPAs include data protection obligations
- Due diligence: third-party security posture assessed before sharing

**Evidence to prepare**:
- Third-party data sharing inventory (vendor name, data shared, purpose, DPA status)
- Executed DPAs with all data processors
- DPA template showing required security and privacy clauses
- Third-party security assessment records for vendors receiving personal data
- Privacy notice section disclosing third-party sharing categories

---

## P 6.2 — Authorized third-party disclosures

**Priority**: Medium | **NIST**: — | **ISO**: A.5.34

Auditors verify that disclosures to third parties are authorized — either by the individual's consent, contractual necessity, or legal obligation. Unauthorized sharing is a breach of privacy commitments.

**What auditors test**:
- Authorization basis documented for each third-party sharing arrangement
- Consent-based sharing: consent records link to specific third-party disclosures
- Contract-based sharing: agreements specify permitted data uses
- Legal disclosures: process for responding to law enforcement and legal requests
- Audit trail: records of what data was shared with whom and when

**Evidence to prepare**:
- Authorization basis matrix (each sharing arrangement → consent/contract/legal basis)
- Consent records linked to third-party sharing (if consent-based)
- Law enforcement request response procedure
- Data sharing logs or audit trail for the audit period
- Sample of authorized sharing records

---

## P 6.3 — Unauthorized disclosure notification

**Priority**: High | **NIST**: — | **ISO**: A.5.34

Auditors verify that the organization has a process to detect and notify affected individuals and regulators when unauthorized disclosure of personal data occurs. This connects to incident response (CC 7.3) but with privacy-specific notification requirements.

**What auditors test**:
- Breach notification procedure specific to personal data incidents
- Notification timelines defined per jurisdiction (72 hours for GDPR, varies by US state)
- Notification content: what information is included in breach notifications
- Notification channels: how individuals are reached (email, letter, public notice)
- Regulatory notification requirements mapped by jurisdiction and data type
- Breach assessment process: how to determine if notification is required

**Evidence to prepare**:
- Privacy breach notification procedure (standalone or section of IR plan)
- Notification timeline matrix by jurisdiction
- Breach notification template (for individuals and regulators)
- Breach assessment criteria (when notification is triggered vs. when it isn't)
- Sample breach notification (if an incident occurred during the audit period)

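The notification timeline matrix above is just a jurisdiction-to-deadline table plus date arithmetic from the moment of awareness. A sketch; the GDPR 72-hour window is real, while the second entry is a deliberately hypothetical placeholder because US state windows genuinely vary:

```python
from datetime import datetime, timedelta

# Jurisdiction -> regulator-notification window in hours, measured from
# when the organization becomes aware of the breach. The "us_state_example"
# value is a placeholder, not any specific state's actual requirement.
NOTIFY_WINDOW_HOURS = {"gdpr": 72, "us_state_example": 720}

def notification_deadline(aware_at: datetime, jurisdiction: str) -> datetime:
    """Latest time the regulator notification may be sent."""
    return aware_at + timedelta(hours=NOTIFY_WINDOW_HOURS[jurisdiction])

aware = datetime(2024, 6, 10, 9, 0)
deadline = notification_deadline(aware, "gdpr")
```

Note the clock starts at awareness, not at the incident itself, which is why the breach assessment process in the list above needs to timestamp when the organization concluded a notifiable breach occurred.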
---

## P 6.7 — Sub-processor oversight

**Priority**: High | **NIST**: — | **ISO**: A.5.34

Auditors verify that the organization maintains oversight of sub-processors — third parties engaged by your processors to further process personal data. The chain of custody for personal data must be tracked and controlled.

**What auditors test**:
- Sub-processor inventory: processors have disclosed their sub-processors
- Sub-processor changes: notification process when processors add or change sub-processors
- Contractual flow-down: DPAs require processors to impose equivalent obligations on sub-processors
- Individual notification: individuals informed of sub-processor use (typically via privacy notice or DPA)
- Sub-processor security: assessment or reliance on processor's assessment of sub-processor security

**Evidence to prepare**:
- Sub-processor lists from key data processors (usually published on processor's website)
- DPA clauses requiring sub-processor notification and equivalent protections
- Sub-processor change notification records from the audit period
- Process for reviewing and approving sub-processor changes
- Privacy notice or DPA section disclosing sub-processor use

---

## P 7.1 — Data quality assurance

**Priority**: Medium | **NIST**: — | **ISO**: A.5.34

Auditors verify that the organization maintains the accuracy and completeness of personal data throughout its lifecycle. Inaccurate personal data that leads to adverse decisions about individuals is a privacy failure.

**What auditors test**:
- Data quality procedures: input validation, duplicate detection, data cleansing
- Individuals can update their own data: self-service profile management or update request process
- Data accuracy verification: periodic review of data accuracy for critical personal data sets
- Third-party data quality: if personal data is received from third parties, accuracy is verified
- Correction propagation: when data is corrected, all copies and downstream systems are updated

**Evidence to prepare**:
- Input validation rules for personal data fields
- Self-service data management features (account settings screenshots)
- Data quality monitoring metrics (duplicate rates, validation failure rates)
- Correction propagation procedure for multi-system environments
- Data quality review records from the audit period

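The input validation rules in the evidence list above can be documented as executable checks. A minimal sketch; the field names are hypothetical and the patterns are deliberately simple illustrations, not production-grade validators:

```python
import re

# Hypothetical validation rules for personal data fields. A real rule set
# would be broader (international postal codes, phone formats, etc.).
RULES = {
    "email":       re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "postal_code": re.compile(r"^\d{5}$"),  # US ZIP only, for the sketch
}

def validate(record: dict) -> list:
    """Return the names of fields that fail their validation rule."""
    return [f for f, rx in RULES.items()
            if f in record and not rx.fullmatch(record[f])]

bad = validate({"email": "not-an-email", "postal_code": "12345"})
ok  = validate({"email": "a@b.co",       "postal_code": "12345"})
```

The validation-failure rate from a check like this is exactly the "data quality monitoring metric" the evidence list asks for, so logging failures per field gives you audit evidence for free.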
---

## P 8.1 — Monitoring and enforcement

**Priority**: Medium | **NIST**: — | **ISO**: A.5.34

Auditors verify that the organization monitors compliance with its privacy commitments and has mechanisms to enforce policies. Privacy isn't self-executing — someone must be watching, measuring, and acting on deviations.

**What auditors test**:
- Privacy program governance: named privacy owner or DPO with defined responsibilities
- Privacy compliance monitoring: regular assessments of privacy practice alignment with commitments
- Privacy incident tracking: mechanism to detect and respond to privacy violations
- Employee training: privacy-specific awareness training beyond general security training
- Enforcement: consequences for privacy policy violations (connects to CC 1.5 accountability)

**Evidence to prepare**:
- Privacy program charter or DPO appointment documentation
- Privacy compliance assessment records from the audit period
- Privacy incident log (including near-misses and minor violations)
- Privacy-specific training records with completion rates
- Privacy policy violation response procedures
- Privacy metrics reported to management (DSAR volumes, incidents, training completion)

**Startup pitfalls**:
- No named privacy owner — responsibility diffused across the organization
- Privacy treated as a legal-only concern — no operational monitoring or enforcement
- Privacy training bundled into security training with no privacy-specific content