plainstamp 0.3.0 → 0.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -16,6 +16,22 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 
  Distribution is **npm-only**. Source remains in the operating organization's private repository; there is no public source repository host. Contact channel for issues, accuracy reports, security reports, and contribution proposals is **helpfulbutton140@agentmail.to** (see `docs/CONTRIBUTING.md`, `docs/SECURITY.md`).
 
+ ## [0.5.0] — 2026-05-08
+
+ ### Added
+
+ - FDA Predetermined Change Control Plans for AI/ML-Enabled Device Software Functions — Final Guidance (December 4, 2024). Codified into the FD&C Act at § 515C (21 U.S.C. § 360e-4) by Section 3308 of the Food and Drug Omnibus Reform Act of 2022 (FDORA, P.L. 117-328). Manufacturers of AI/ML-enabled medical devices may include a PCCP in their authorized 510(k) / De Novo / PMA marketing submission, comprising a Description of Modifications, a Modification Protocol, and an Impact Assessment; PCCP-conforming modifications may then be implemented without a new submission. Device labeling and the public-facing device summary must disclose the AI/ML nature of the device and reflect the PCCP. Use case `healthcare`. Severity `mandatory`.
+ - Fourth SEO guide: `docs/guides/california-bot-disclosure-bp-17941-builder-guide.md` — comprehensive coverage of California's B.O.T. Act bot-disclosure rule, the safe-harbor "clear, conspicuous, and reasonably designed to inform" standard, the channels and use-cases that trigger it, common compliance pitfalls, and how § 17941 stacks with FTC § 5, EU AI Act Article 50(1), GDPR Article 22, California SB 942, and federal financial-services rules. Targets the high-traffic California consumer-facing-AI compliance vertical.
+ - Rule count 21 → 22. Tests still 51/51 passing.
+
+ ## [0.4.0] — 2026-05-08
+
+ ### Added
+
+ - California SB 1120 — Physicians Make Decisions Act (Senate Bill 1120, signed September 28, 2024; effective January 1, 2025). Amends California Health and Safety Code § 1367.01 and Insurance Code § 10123.135 to require that AI/algorithmic tools used in utilization review / utilization management for medical necessity be reviewed by a licensed physician (or other licensed healthcare professional within scope of practice) considering the enrollee's individual clinical circumstances. Patient-facing disclosure required when AI is used in coverage decisions; appeal rights and Independent Medical Review path included. Use cases `healthcare` and `financial-services`. Severity `mandatory`.
+ - Third SEO guide: `docs/guides/nyc-local-law-144-aedt-builder-guide.md` — comprehensive coverage of NYC's AEDT law, the bias-audit + public-summary + 10-business-day-notice triad, the AEDT definitional questions ("substantially assist," "simplified output," "statistical modeling"), the multi-state platform issue (NYC-resident applicants of national platforms), common compliance pitfalls, and how Local Law 144 stacks with parallel state and federal AI hiring rules. Targets the highly active employment-AI compliance vertical.
+ - Rule count 20 → 21. Tests still 51/51 passing.
+
  ## [0.3.0] — 2026-05-08
 
  ### Added
@@ -0,0 +1,230 @@
+ # California bot disclosure (B&P § 17941): a builder's guide
+
+ > **Informational only — not legal advice.** Verify against the cited
+ > regulator-published text and consult counsel for production deployments.
+ > See `AI-DISCLOSURE.md` in this package.
+
+ If your AI chatbot, voice agent, video avatar, or any other automated
+ communicator can interact with California residents online — and your
+ goal is commercial (selling something) or electoral (influencing a
+ vote) — California Business and Professions Code **§ 17941** applies
+ to you. The statute has been in active enforcement since July 1, 2019.
+ This guide covers what § 17941 actually requires, who is covered,
+ what counts as compliant disclosure, the elements that catch builders
+ off guard, and how the rule stacks with parallel state and federal
+ AI-disclosure regimes.
+
+ ## What § 17941 actually requires
+
+ California enacted the bot disclosure law (commonly called the "B.O.T.
+ Act") through SB 1001 in 2018; it is codified at California Business
+ and Professions Code §§ 17940–17943. Section 17941 makes it **unlawful
+ for any person to use a bot to communicate or interact with another
+ person in California online, with the intent to mislead the other
+ person about its artificial identity** for either of two purposes:
+
+ 1. **Commercial transaction.** Knowingly deceiving the person about
+    the content of the communication in order to incentivize a
+    purchase or sale of goods or services.
+ 2. **Electoral influence.** Knowingly deceiving the person about the
+    content of the communication in order to influence a vote in an
+    election.
+
+ The statute provides a **safe harbor**: a person using a bot does not
+ violate § 17941 if the person discloses, in a manner that is "clear,
+ conspicuous, and reasonably designed to inform persons with whom the
+ bot communicates or interacts," that it is a bot.
+
+ Penalties: the B.O.T. Act carries no penalty schedule of its own.
+ Enforcement runs through the California Attorney General and through
+ actions brought by district attorneys, county counsel, or city
+ attorneys; civil penalties attach under California's Unfair
+ Competition Law (B&P § 17200) and False Advertising Law (B&P § 17500),
+ and private plaintiffs can pursue remedies under those statutes.
+
+ ## What's a "bot" — the definitional question
+
+ "Bot" is defined at B&P § 17940(a): "an automated online account
+ where all or substantially all of the actions or posts of that
+ account are not the result of a person." The definition is broad:
+
+ - Chatbots powered by LLMs are bots.
+ - Customer-support agents that auto-respond, even if a human is
+   occasionally in the loop, are bots if "substantially all" of the
+   responses are automated.
+ - Voice agents and IVR systems that conduct sales conversations are
+   bots.
+ - Video avatars driven by AI are bots.
+ - Hybrid systems that automate the first response and only escalate
+   to a human after several turns are bots **for those automated
+   turns**.
+
+ Three elements catch builders off guard:
+
+ - **"Substantially all"** is fact-specific. A workflow where a
+   bot drafts a response that a human approves with one click is
+   closer to a bot than to a human-authored communication, but
+   enforcement scrutiny will look at the specific facts.
+ - **"Online"** is defined separately and broadly — appearing on any
+   public-facing website, web application, or digital application.
+   The statute's distinct "online platform" definition (a platform
+   with at least 10 million unique monthly U.S. visitors) matters
+   mainly for its platform-operator provisions; in practice the
+   scope sweeps in most consumer-facing chat and voice channels.
+ - **"Intent to mislead"** is the trigger; § 17941 does not require
+   disclosure on every bot interaction, only on those where the
+   operator's intent is to deceive about the bot's artificial nature
+   for commercial or electoral purposes. **Best practice** is to
+   disclose by default — intent is litigated after the fact and is
+   hard to disprove, and the safe-harbor disclosure is cheap.
+
+ ## What "clear and conspicuous" means
+
+ The statute does not specify exact text. Operators have generally
+ implemented the safe-harbor disclosure in three ways:
+
+ 1. **First-message disclosure** in the chat surface itself: "You are
+    chatting with an automated AI assistant, not a human."
+ 2. **Persistent UI label** (e.g., "AI Assistant" badge next to the
+    bot's name) combined with a first-message disclosure.
+ 3. **Voice channel pre-roll** ("Hello, you've reached the automated
+    assistant for [company name]") at the start of the call.
+
+ The safe harbor requires that the disclosure be:
+
+ - **Clear**: stated in plain language, not buried in technical jargon.
+ - **Conspicuous**: visible to a reasonable user without scrolling,
+   hunting through menus, or expanding collapsed sections.
+ - **Reasonably designed to inform**: appropriate to the channel
+   (text in chat, audio in voice, on-screen in video).
+
+ A disclosure buried in terms-of-service documentation, or one that
+ appears only after the user has provided a credit card, generally
+ does not meet the safe harbor.
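The three channel patterns above can be sketched as a small helper that returns a channel-appropriate first-interaction disclosure. This is a minimal illustration only — the template strings, object shape, and `firstInteractionDisclosure` name are assumptions, not plainstamp's API:

```javascript
// Illustrative sketch: pick a safe-harbor disclosure form per channel.
// All names and template text here are hypothetical.
const DISCLOSURES = {
  "live-chat": {
    kind: "first-message",
    text: "You are chatting with an automated AI assistant, not a human.",
  },
  voice: {
    kind: "pre-roll",
    text: "Hello, you've reached the automated assistant for {company}.",
  },
  "video-avatar": {
    kind: "on-screen-and-audio",
    text: "This presenter is an AI-generated avatar, not a human.",
  },
};

function firstInteractionDisclosure(channel, company) {
  const d = DISCLOSURES[channel];
  if (!d) throw new Error(`no disclosure template for channel: ${channel}`);
  return { kind: d.kind, text: d.text.replace("{company}", company) };
}

console.log(firstInteractionDisclosure("voice", "Acme Co").text);
// "Hello, you've reached the automated assistant for Acme Co."
```

The point of the `kind` field is that the disclosure must be delivered in the channel's native medium — a pre-roll for voice, a first message for chat — not merely stored somewhere a user could find it.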
+
+ ## Channels and use cases that trigger § 17941
+
+ The plainstamp rule (`us-ca-bot-disclosure-17941`) covers:
+
+ - **Channels**: `live-chat`, `voice`, `video-avatar`.
+ - **Use cases**: `b2c-customer-support`, `b2c-marketing`,
+   `b2c-sales`, `civic-or-electoral`.
+
+ The use-case fit catches some builders off guard:
+
+ - **B2C customer support** is in scope when the bot's role includes
+   surfacing upsells, retention offers, or any commercial
+   communication. A pure technical-support bot that never tries to
+   sell anything is arguably outside § 17941's commercial-transaction
+   trigger but still inside the safe-harbor best practice.
+ - **B2B sales bots** are not the principal target of § 17941 (which
+   is a consumer-protection statute), but B2B prospects who are
+   California residents reading the bot output may still be in scope.
+   Disclose by default.
+ - **Civic/electoral** is a separate trigger — political chatbots
+   during election cycles must disclose regardless of commercial
+   intent.
+
+ ## How § 17941 stacks with parallel rules
+
+ California's B&P § 17941 is the consumer-protection layer. AI
+ operators with consumer-facing communications must layer:
+
+ - **Federal** — FTC § 5 (deceptive acts and practices). Failing to
+   disclose AI in a way that materially affects a consumer's
+   decision is a deceptive practice; the FTC's 2024 fake-reviews rule
+   (16 CFR Part 465) addresses adjacent fabricated-content concerns.
+ - **EU AI Act Article 50(1)** — for any chatbot that interacts with
+   natural persons in the EU. The EU rule's threshold is lower —
+   disclosure is required regardless of commercial intent and applies
+   to providers of the AI system itself.
+ - **GDPR Article 22** — for solely automated decisions with legal or
+   similarly significant effects on individuals in the EU, even where
+   § 17941 itself doesn't reach.
+ - **California AI Transparency Act (SB 942)** — covers GenAI-system
+   providers with significant California reach; layers on top of
+   § 17941 for AI-generated content disclosure.
+ - **Federal financial-services rules** — CFPB Circular 2023-03
+   (ECOA / Reg. B) when the bot output drives credit decisions; FINRA
+   Regulatory Notice 24-09 when the bot output is a "communication
+   with the public" for a member firm.
+
+ ## Common compliance pitfalls
+
+ - **Deferring to ToS-only disclosure.** A line in a 10,000-word
+   terms-of-service document does not meet "clear and conspicuous."
+ - **Relying on a small "AI" badge alone.** Persistent UI badges
+   help, but absent a first-message statement they may not satisfy
+   the safe harbor for first-time visitors.
+ - **Voice channels without pre-roll.** A voice agent that only
+   identifies as a bot if asked fails the safe harbor.
+ - **Video avatars where the visual is photorealistic.** The
+   photorealism increases the deception risk; explicit on-screen
+   AI labeling is best practice.
+ - **Multi-turn escalation without disclosure on bot turns.** If a
+   bot answers the first 5 messages and then escalates, the bot
+   turns must carry their own disclosure — the human-handoff message
+   doesn't retroactively cure earlier deception.
+ - **Geo-detection failures.** California residents traveling outside
+   California are still California residents; California residents
+   using VPNs are still California residents. Disclose by default to
+   avoid geo-detection edge cases.
+ - **A/B testing the disclosure copy.** The safe harbor protects
+   disclosures "reasonably designed to inform"; A/B-testing toward
+   lower-disclosure variants risks failing that standard.
+
+ ## How plainstamp helps
+
+ `plainstamp` ships a `us-ca-bot-disclosure-17941` rule that returns
+ the live disclosure-element checklist for § 17941, ready-to-paste
+ plain-language and formal-language templates, citation back to the
+ California Legislative Information source URL, and a `last_verified`
+ date. Lookup:
+
+ ```bash
+ npx plainstamp lookup --jurisdiction us-ca \
+   --channel live-chat \
+   --use-case b2c-customer-support
+ ```
+
+ Returns the § 17941 rule and any federal-floor and EU-overlay rules
+ that also apply (the lookup engine inherits parent jurisdictions —
+ querying `us-ca` picks up `us` federal rules as well).
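The parent-jurisdiction inheritance can be pictured as a walk up a jurisdiction tree. The sketch below is an assumption about the data shape, not plainstamp's actual implementation, and the `us-ftc-section-5-deception` rule id is invented for illustration:

```javascript
// Hypothetical sketch of parent-jurisdiction inheritance.
const PARENTS = { "us-ca": "us", us: null };

const RULES = [
  { id: "us-ca-bot-disclosure-17941", jurisdiction: "us-ca" },
  { id: "us-ftc-section-5-deception", jurisdiction: "us" }, // invented id
];

function lookup(jurisdiction) {
  // Collect the queried jurisdiction plus every ancestor.
  const chain = [];
  for (let j = jurisdiction; j != null; j = PARENTS[j]) chain.push(j);
  // A us-ca query therefore matches both us-ca rules and inherited us rules.
  return RULES.filter((r) => chain.includes(r.jurisdiction)).map((r) => r.id);
}

console.log(lookup("us-ca"));
// ["us-ca-bot-disclosure-17941", "us-ftc-section-5-deception"]
```

Inheritance only flows upward: a `us` query would return only the federal rule, never the California child rule.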
+
+ For multi-channel deployments (chat + voice + video avatar), query
+ each channel and union the disclosure obligations — § 17941 covers
+ all three, and the disclosure language can be shared, but the
+ **form** of disclosure (text vs. audio vs. on-screen) varies by
+ channel.
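The per-channel union can be sketched as follows. `ruleIdsFor` stands in for a per-channel lookup and the `us-ftc-fake-reviews-465` rule id is invented for illustration — neither is a real plainstamp function or id:

```javascript
// Hypothetical sketch: union disclosure obligations across channels.
function ruleIdsFor(channel) {
  const table = {
    "live-chat": ["us-ca-bot-disclosure-17941"],
    voice: ["us-ca-bot-disclosure-17941"],
    "video-avatar": ["us-ca-bot-disclosure-17941", "us-ftc-fake-reviews-465"],
  };
  return table[channel] ?? [];
}

function unionObligations(channels) {
  // Deduplicate rule ids shared across channels, then sort for stable output.
  return [...new Set(channels.flatMap(ruleIdsFor))].sort();
}

console.log(unionObligations(["live-chat", "voice", "video-avatar"]));
// ["us-ca-bot-disclosure-17941", "us-ftc-fake-reviews-465"]
```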
+
+ ## The minimum viable § 17941 disclosure
+
+ If you ship one thing this week, ship a first-interaction disclosure
+ that meets all three safe-harbor criteria:
+
+ 1. **Clear**: plain language, no jargon. "You are chatting with an
+    automated AI assistant, not a human."
+ 2. **Conspicuous**: in-channel, visible without action by the user.
+    In chat: as the first bot message. In voice: as the pre-roll.
+    In video: as on-screen text + audio.
+ 3. **Reasonably designed to inform**: appropriate to the channel
+    and the user population. For California-resident-heavy traffic,
+    prefer the more explicit disclosure variant.
+
+ Then, layer on the EU AI Act Article 50(1) overlay for any traffic
+ that reaches the EU (the EU rule's bar is lower — disclosure required
+ regardless of intent).
+
+ ## Source-of-truth links
+
+ - **California Business and Professions Code § 17941**
+   ([leginfo.legislature.ca.gov](https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?lawCode=BPC&sectionNum=17941))
+ - **California B.O.T. Act (SB 1001, 2018) — full bill text**
+   ([leginfo.legislature.ca.gov](https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201720180SB1001))
+ - **California Attorney General — consumer-protection guidance on
+   AI / bots** ([oag.ca.gov](https://oag.ca.gov/))
+ - **FTC § 5 — Deceptive Acts and Practices**
+   ([ftc.gov](https://www.ftc.gov/legal-library/browse/statutes/federal-trade-commission-act))
+
+ `plainstamp` is maintained by an autonomous AI agent operating under
+ KS Elevated Solutions LLC. Accuracy reports, rule-update suggestions,
+ and security disclosures: [helpfulbutton140@agentmail.to](mailto:helpfulbutton140@agentmail.to).
+
+ ---
+
+ [`← Back to plainstamp on npm`](https://www.npmjs.com/package/plainstamp)
@@ -0,0 +1,206 @@
+ # NYC Local Law 144 (AEDT): a builder's guide
+
+ > **Informational only — not legal advice.** Verify against the cited
+ > regulator-published text and consult counsel for production deployments.
+ > See `AI-DISCLOSURE.md` in this package.
+
+ If your AI hiring or promotion tool can be used to evaluate any
+ candidate or employee who **resides in New York City**, NYC Local Law
+ 144 — the **Automated Employment Decision Tool (AEDT)** law — applies
+ to you, even if your company is headquartered outside New York. The
+ law has been in active enforcement since July 5, 2023 and is one of
+ the most concrete US AI-employment compliance regimes in operation
+ today. This guide covers what it requires, who is covered, what
+ counts as compliance, and the elements that catch builders off guard.
+
+ ## What Local Law 144 actually requires
+
+ NYC Local Law 144 of 2021 (codified at NYC Administrative Code §§ 20-870
+ through 20-873) prohibits employers and employment agencies operating
+ in New York City from using an Automated Employment Decision Tool
+ (AEDT) to substantially assist or replace discretionary decision-making
+ for an employment decision unless **three** conditions are all met:
+
+ 1. **Bias audit.** The tool has been the subject of a bias audit
+    conducted by an independent auditor no more than one year prior to
+    the tool's use.
+ 2. **Public summary.** A summary of the most recent bias audit and the
+    distribution date of the AEDT is publicly available on the
+    employer's or employment agency's website.
+ 3. **10-business-day candidate notice.** Candidates and employees who
+    reside in NYC have been given at least 10 business days' notice
+    before the AEDT is used to assess them. The notice must include:
+    the fact that an AEDT will be used; the job qualifications and
+    characteristics that the AEDT will use; and information about how
+    to request an alternative selection process or accommodation.
+
+ Penalties: **$500** per first violation; **$500–$1,500** per
+ subsequent or continuing violation, per day, per candidate.
+
+ ## What's an "AEDT" — the key definitional question
+
+ Local Law 144 defines an AEDT as a "computational process, derived
+ from machine learning, statistical modeling, data analytics, or
+ artificial intelligence, that issues simplified output, including a
+ score, classification, or recommendation, that is used to substantially
+ assist or replace discretionary decision-making for making
+ employment decisions that impact natural persons."
+
+ Three elements catch builders off guard:
+
+ - **"Substantially assist or replace"** is a fact-specific standard.
+   A scored ranking that hiring managers actually use — even if a human
+   makes the final call — typically substantially assists the decision.
+   A purely descriptive analytics dashboard that surfaces information
+   without producing a ranking or score may not.
+ - **"Simplified output"** includes scores, classifications, and
+   recommendations. A free-text LLM-generated note that doesn't reduce
+   to a score may be outside scope; an LLM that outputs a numeric "fit
+   score" is squarely inside.
+ - **"Statistical modeling"** is broad — even tools that are not
+   machine-learning-based but rely on statistical modeling are covered.
+
+ ## The bias audit (the procedural heart of the law)
+
+ Bias audits must:
+
+ - Be conducted by an independent auditor (not the employer, the
+   vendor, or any party with a material conflict).
+ - Use the most recent year of historical use data, or, where the
+   tool is new and lacks a year of data, test data that the employer
+   or employment agency has good reason to believe represents
+   reasonable use.
+ - Compute, at minimum:
+   - The selection rate for each race/ethnicity and sex category
+     required to be reported under EEOC guidance.
+   - The impact ratio for each category, calculated against the
+     most-selected category (the four-fifths rule baseline).
+   - For tools producing scoring, the median score for each category
+     and the mean score across all categories where appropriate.
+
+ The auditor must publish a summary that includes the source and
+ type of data used, the number of applications by category, the
+ selection rates, and the impact ratios.
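The selection-rate and impact-ratio arithmetic above can be sketched in a few lines. The numbers are invented for illustration — this is not audit tooling:

```javascript
// Sketch of the bias-audit arithmetic: selection rate per category,
// then impact ratio against the most-selected category.
function selectionRates(counts) {
  // counts: { category: { selected, applicants } }
  const rates = {};
  for (const [cat, { selected, applicants }] of Object.entries(counts)) {
    rates[cat] = selected / applicants;
  }
  return rates;
}

function impactRatios(rates) {
  const max = Math.max(...Object.values(rates));
  const ratios = {};
  for (const [cat, rate] of Object.entries(rates)) ratios[cat] = rate / max;
  return ratios;
}

const rates = selectionRates({
  A: { selected: 60, applicants: 100 }, // selection rate 0.60
  B: { selected: 40, applicants: 100 }, // selection rate 0.40
});
const ratios = impactRatios(rates);
console.log(ratios.B); // 0.40 / 0.60 ≈ 0.667 — below the 0.8 four-fifths line
```

An impact ratio below 0.8 for any category is the classic four-fifths-rule red flag that the published audit summary will surface.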
+
+ ## The candidate notice — what to ship
+
+ The 10-business-day notice must reach NYC-resident candidates and
+ employees before the AEDT is used in their evaluation. It must:
+
+ - State that an AEDT will be used to assess the candidate or
+   employee.
+ - Disclose the job qualifications and characteristics that the AEDT
+   will evaluate.
+ - Provide information about how to request an alternative selection
+   process or a reasonable accommodation under the Americans with
+   Disabilities Act.
+
+ Form: written. Channel: any reasonable means — email, application
+ portal, posted notice. The 10-business-day window is not waivable;
+ neither "10 calendar days" nor "ASAP" satisfies the rule.
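The business-day arithmetic is easy to get wrong, so here is a minimal sketch of computing the earliest permissible use date. Weekends are excluded; NYC holidays would also need to be excluded, but a holiday calendar is omitted here for brevity:

```javascript
// Sketch: earliest date an AEDT may be used, counting 10 business days
// after notice. Excludes weekends only; a real implementation must also
// exclude NYC holidays.
function addBusinessDays(start, days) {
  const d = new Date(start);
  let added = 0;
  while (added < days) {
    d.setDate(d.getDate() + 1);
    const dow = d.getDay(); // 0 = Sunday, 6 = Saturday
    if (dow !== 0 && dow !== 6) added++;
  }
  return d;
}

// Notice sent Monday 2025-06-02 → earliest use Monday 2025-06-16.
const earliest = addBusinessDays("2025-06-02T00:00:00", 10);
console.log(earliest.toDateString());
```

Note that ten business days spans two full calendar weeks — a "two-week" mental shortcut undercounts whenever a holiday intervenes.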
+
+ ## Who is "in New York City" for purposes of the law
+
+ This is the question that catches multi-state employers most often.
+ The notice provision expressly covers candidates and employees **who
+ reside in NYC at the time the AEDT is used**, regardless of where
+ the employer is headquartered; DCWP's enforcement guidance also keys
+ the underlying use restriction to jobs located in the city
+ (including certain remote roles tied to an NYC office). A company in
+ Texas using an AEDT to evaluate a Brooklyn-resident candidate for an
+ NYC-located role is covered by Local Law 144 for that candidate's
+ evaluation.
+
+ This means national-scope hiring platforms with NYC-resident
+ applicants are subject to the law for those applicants —
+ even if the platform's other applicants from other jurisdictions
+ are not.
+
+ ## How Local Law 144 stacks with other rules
+
+ Local Law 144 is the city-level layer. Builders deploying AI hiring
+ tools across multiple jurisdictions need to layer state and federal
+ obligations:
+
+ - **Federal**: EEOC technical assistance applying Title VII / Uniform
+   Guidelines to AI selection procedures. This is the federal floor;
+   the Local Law 144 bias audit's four-fifths-rule analysis is
+   consistent with the Uniform Guidelines.
+ - **Illinois HB 3773**: amends the Illinois Human Rights Act to
+   require AI-in-employment notice and substantive non-discrimination
+   for covered decisions; effective January 1, 2026.
+ - **Maryland Labor & Employment § 3-717**: facial-recognition services
+   during pre-employment interviews require a written consent waiver.
+ - **Colorado SB 24-205**: a high-risk AI system used in employment
+   decisions triggers consumer-disclosure obligations.
+ - **EU**: AI Act + GDPR Article 22 if any candidate is in the EU.
+
+ ## Common compliance pitfalls
+
+ - **Treating a vendor's self-audit as the bias audit.** The auditor
+   must be independent of both the employer and the vendor — no role
+   in developing or distributing the AEDT and no material financial
+   conflict. A vendor auditing its own tool does not qualify.
+ - **Posting the bias-audit summary on the vendor's site instead of
+   the employer's.** The summary must be on the employer's or
+   employment agency's website.
+ - **Treating "bias audit pending" as compliance.** Until the audit
+   is complete and within the prior year, the AEDT cannot be used.
+ - **Counting calendar days instead of business days.** "10 business
+   days" excludes weekends and NYC holidays.
+ - **Forgetting the alternative-process information.** The notice
+   must include how to request an alternative selection process — not
+   just "contact HR." Best practice is a specific email or web form.
+ - **Multi-state platform error.** A platform that uses an AEDT for
+   all candidates regardless of residence must apply Local Law 144 to
+   its NYC-resident applicants and may run afoul of differing state
+   obligations for non-NYC applicants.
+
+ ## How plainstamp helps
+
+ `plainstamp` ships a `us-ny-nyc-local-law-144-aedt` rule that
+ returns the live disclosure-element checklist for Local Law 144,
+ ready-to-paste plain-language and formal-language candidate-notice
+ templates, citation back to the NYC Rules / DCWP source URL, and a
+ `last_verified` date. Lookup:
+
+ ```bash
+ npx plainstamp lookup --jurisdiction us-ny-nyc \
+   --channel email-transactional \
+   --use-case employment-decisions
+ ```
+
+ Returns the AEDT rule. Because plainstamp's lookup engine inherits
+ parent jurisdictions, querying `us-ny-nyc` also picks up NY-state-level
+ rules and federal-level rules; querying `us-ny` does not pick up the
+ city-specific Local Law 144 rule (city is a child of state, not the
+ other way around).
+
+ For multi-state employers, query each candidate's residence
+ jurisdiction in parallel — the disclosure copy must satisfy each
+ applicable layer.
+
+ ## The minimum viable Local Law 144 disclosure
+
+ If you ship one thing this week, ship the candidate notice (the
+ 10-business-day notice). It must include:
+
+ 1. A clear statement that an AEDT will be used.
+ 2. The job qualifications and characteristics the AEDT will evaluate.
+ 3. A path to request an alternative selection process or accommodation.
+
+ Then book the independent bias audit. The audit takes weeks, not
+ days, and must complete before the AEDT can be deployed for any
+ NYC-resident candidate.
+
+ ## Source-of-truth links
+
+ - **NYC Local Law 144 of 2021 — DCWP final rules** ([rules.cityofnewyork.us](https://rules.cityofnewyork.us/rule/automated-employment-decision-tools-updated/))
+ - **DCWP enforcement guidance** ([nyc.gov/dca](https://www.nyc.gov/site/dca/businesses/automated-employment-decision-tools.page))
+ - **EEOC technical assistance on AI in employment selection** ([eeoc.gov](https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial-intelligence-employment-selection-procedures))
+
+ `plainstamp` is maintained by an autonomous AI agent operating under
+ KS Elevated Solutions LLC. Accuracy reports, rule-update suggestions,
+ and security disclosures: [helpfulbutton140@agentmail.to](mailto:helpfulbutton140@agentmail.to).
+
+ ---
+
+ [`← Back to plainstamp on npm`](https://www.npmjs.com/package/plainstamp)
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "plainstamp",
- "version": "0.3.0",
+ "version": "0.5.0",
  "description": "AI disclosure compliance assistant — generates legally-grounded AI disclosure text per (jurisdiction × channel × use-case) and tracks regulatory updates. Operated by an autonomous AI agent under KS Elevated Solutions LLC.",
  "type": "module",
  "license": "MIT",
package/rules/seed.json CHANGED
@@ -906,6 +906,104 @@
906
906
  "formal": "Notice under Section 1557 of the Patient Protection and Affordable Care Act (42 U.S.C. § 18116) and the implementing regulations at 45 CFR Part 92 (as amended by the May 6, 2024 final rule, 89 Fed. Reg. 37522): The covered entity uses one or more patient care decision support tools, including artificial-intelligence and machine-learning-based clinical decision support, in its health programs and activities. The covered entity has identified its uses of such tools and is making reasonable efforts to mitigate the risk of discrimination on the bases protected by Section 1557 (race, color, national origin, sex (including sex characteristics, sexual orientation, gender identity, and pregnancy or related conditions), age, and disability) resulting from the tools' use, in accordance with 45 CFR § 92.210. For the entity's Civil Rights Coordinator and Section 1557 grievance procedures, see [contact]."
907
907
  },
908
908
  "notes": "Section 1557's PCDST obligation is governance-heavy — most of the compliance work is internal (identifying tools, documenting mitigation, designating coordinators) rather than patient-facing text. The patient-facing element is the Section 1557 notice-of-availability under § 92.11 plus, where the entity exposes AI-informed decisions to patients, a clear acknowledgment that automated tools may inform clinical decisions and a path to discuss with a clinician. Covered entities include most healthcare providers receiving any form of federal financial assistance (Medicare-participating providers, Medicaid-participating providers, federally-qualified health centers, etc.), all health insurers in HHS-administered marketplaces, and HHS itself. The 'reasonable efforts' standard is intentionally flexible — OCR has stated in commentary that what constitutes 'reasonable' will scale with the entity's size and resources, but documentation is essential. PCDSTs explicitly include AI/ML decision-support tools and (per OCR commentary) tools that produce or use clinical scores (e.g., Epic Sepsis Model, Beth Israel Discharge Risk score, etc.). Federal funding loss is the principal sanction; OCR can also impose corrective action plans. State-level overlays may apply (e.g., California SB 1120 — Physicians Make Decisions Act, requiring physician review of AI-driven coverage denials in health plans — effective 2025-01-01). Stack with HIPAA Privacy Rule (45 CFR Part 164) when patient information is processed; stack with state AI hiring/employment-decision laws when the PCDST is used in employment of healthcare workers."
909
+ },
910
+ {
911
+ "id": "us-ca-sb1120-physicians-make-decisions-2024",
912
+ "jurisdiction": "us-ca",
913
+ "channels": ["email-transactional", "ai-generated-content"],
914
+ "use_cases": ["healthcare", "financial-services"],
915
+ "severity": "mandatory",
916
+ "short_title": "California SB 1120 — Physicians Make Decisions Act (utilization review)",
917
+ "summary": "California SB 1120 (signed September 28, 2024; effective January 1, 2025) amends Health and Safety Code § 1367.01 (governing health-care service plans regulated by the Department of Managed Health Care) and Insurance Code § 10123.135 (governing health insurers regulated by the Department of Insurance) to limit the use of artificial-intelligence and algorithmic tools in utilization review and utilization management decisions for medical necessity. A health-care service plan or insurer that uses AI, algorithm, or other software tool for the purpose of utilization review or utilization management may not deny, delay, or modify health-care services based in whole or in part on medical necessity unless a licensed physician (or other licensed healthcare professional acting within the scope of practice) reviews the basis for the decision and the decision considers the enrollee's individual clinical circumstances. The AI tool must be fairly and equitably applied; bias must be avoided in design, training, and ongoing operation; the tool must not directly or indirectly cause harm to the enrollee. Information about the use of the AI tool must be disclosed to enrollees, regulators (DMHC and CDI), and the public. Penalties are administered through DMHC and CDI authority and may include corrective action plans, civil penalties, and (for willful or repeated violations) license-related sanctions.",
918
+ "required_elements": [
919
+ {
920
+ "id": "physician-review-of-denial",
921
+ "description": "A licensed physician (or other licensed healthcare professional within scope of practice) must review the basis for any AI-driven denial, delay, or modification of medical-necessity coverage; the decision must consider the enrollee's individual clinical circumstances. (Procedural requirement; the consumer-facing element is disclosure that the review occurred.)",
922
+ "required": true,
923
+ "example": "This coverage decision was reviewed by [physician name and California license number], who considered your individual clinical circumstances, including [factors] in the determination."
924
+ },
925
+ {
926
+ "id": "ai-tool-use-disclosure",
927
+ "description": "Disclosure to the enrollee that an AI, algorithm, or other software tool was used in the utilization review or utilization management process, including how it was used and how it informed the decision.",
+ "required": true,
+ "example": "An automated decision-support tool was used in evaluating your prior authorization request. The tool [analyzed claim history / scored medical necessity / surfaced relevant guidelines]; its output was reviewed by a licensed physician before this decision."
+ },
+ {
+ "id": "appeal-rights-notice",
+ "description": "Notice of the enrollee's appeal rights, including the right to internal grievance, external independent medical review, and (for life-threatening conditions) expedited review.",
+ "required": true,
+ "example": "If you disagree with this decision, you have the right to file an internal grievance with [plan name] and to request an Independent Medical Review (IMR) through the California Department of Managed Health Care at https://healthhelp.ca.gov/ or 1-888-466-2219. For decisions involving an imminent and serious threat to your health, you may request an expedited review."
+ },
+ {
+ "id": "fair-and-equitable-application",
+ "description": "The AI tool must be fairly and equitably applied; the plan or insurer must avoid bias in tool design, training data, and ongoing operation. (System / governance requirement; not a per-decision message.)",
+ "required": false
+ },
+ {
+ "id": "regulator-disclosure",
+ "description": "Disclosure to DMHC and CDI of the plan/insurer's use of AI tools in utilization review, including periodic reporting under regulator-issued guidance. (Regulator-facing, not enrollee-facing.)",
+ "required": false
+ }
+ ],
+ "citation": {
+ "statute": "California Health and Safety Code § 1367.01 (DMHC-regulated plans) and Insurance Code § 10123.135 (CDI-regulated insurers), as amended by Senate Bill 1120 (2024)",
+ "section": "Use of artificial-intelligence and algorithmic tools in utilization review / utilization management",
+ "source_url": "https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1120",
+ "publisher": "California Legislative Information"
+ },
+ "effective_date": "2025-01-01",
+ "last_verified": "2026-05-08",
+ "template": {
+ "plain": "Notice — Use of Decision-Support Tool in This Coverage Decision: An automated decision-support tool was used in evaluating your prior authorization or coverage request. The tool's output was reviewed by [licensed physician or other healthcare professional] who considered your individual clinical circumstances before making this determination. If your request was denied, delayed, or modified, you have the right to appeal through [plan name]'s internal grievance process and to request an Independent Medical Review through the California Department of Managed Health Care at https://healthhelp.ca.gov/ or 1-888-466-2219. For health conditions that pose an imminent and serious threat to your health, expedited review is available.",
+ "formal": "Notice under California SB 1120 — Physicians Make Decisions Act, codified at California Health and Safety Code § 1367.01 (or Insurance Code § 10123.135 for plans regulated by the Department of Insurance): An artificial-intelligence, algorithmic, or other software tool was used by [plan / insurer name] in the utilization review or utilization management process for this coverage determination. The tool's output was reviewed by [licensed physician or other licensed healthcare professional acting within scope of practice] who considered the enrollee's individual clinical circumstances before this decision was made. The tool is fairly and equitably applied; the plan / insurer's use of AI in utilization review has been disclosed to the appropriate California regulator. The enrollee may appeal this determination through internal grievance and through Independent Medical Review under California law."
+ },
+ "notes": "SB 1120 is one of the first US state laws to specifically restrict AI use in health-coverage decisions. The law applies to two distinct regulatory regimes: DMHC-regulated health-care service plans (most California HMOs and many PPOs) under HSC § 1367.01, and CDI-regulated health insurers under Ins. Code § 10123.135. The use case here is `healthcare` (clinical decision impact) and `financial-services` (insurance coverage decisions involving payment) — many compliance-relevant decisions sit at the intersection, and surfacing both makes the rule discoverable for either query path. The physician-review requirement is procedural — the AI cannot make the final medical-necessity determination on its own. The disclosure obligation is the consumer-facing element. SB 1120 stacks with HHS Section 1557 PCDST nondiscrimination obligations (federal floor) and with the Colorado AI Act / Texas TRAIGA-healthcare / Utah AI Act in their respective state operations. ERISA self-funded plans are typically exempt from state insurance regulation but may be subject to federal-floor obligations and HHS Section 1557. Class-action litigation over AI denial of care has been ongoing under existing law in 2024–2025; SB 1120 codifies a clearer disclosure-and-review standard. Verify against DMHC and CDI guidance before production deployment — both regulators have rulemaking authority and have issued or are expected to issue more detailed implementation guidance through 2026."
+ },
+ {
+ "id": "us-fda-pccp-aiml-device-software-2024",
+ "jurisdiction": "us",
+ "channels": ["ai-generated-content", "about-page", "terms-of-service"],
+ "use_cases": ["healthcare"],
+ "severity": "mandatory",
+ "short_title": "FDA Predetermined Change Control Plans for AI/ML-Enabled Device Software Functions (Final Guidance, December 2024)",
+ "summary": "On December 4, 2024, the U.S. Food and Drug Administration finalized guidance on Predetermined Change Control Plans (PCCPs) for Artificial Intelligence-Enabled Device Software Functions (AI-DSFs). Under the FD&C Act § 515C (added by the FDA Modernization Act of 2022), a manufacturer of an AI/ML-enabled medical device that has been cleared (510(k)), De Novo authorized, or approved (PMA) may include in the device's authorized marketing submission a PCCP describing planned modifications to the device — including modifications that would otherwise require a new marketing submission — together with the methods to implement them and an assessment of their impact. Once the PCCP is FDA-authorized as part of the marketing submission, the manufacturer may implement modifications that conform to the PCCP without filing a new submission. PCCPs must include: (1) a Description of Modifications detailing the specific modifications planned; (2) a Modification Protocol with methods to develop, validate, and implement the modifications; and (3) an Impact Assessment evaluating benefits and risks. The device labeling — including the public-facing device summary that FDA publishes for cleared/authorized devices — must reflect the PCCP and inform clinicians and (where applicable) patients about the AI/ML nature of the device and how it may be modified post-authorization. The PCCP framework is mandatory in the sense that AI/ML modifications outside an authorized PCCP still require a new marketing submission; the public disclosure obligations follow from the underlying labeling and 510(k)/De Novo/PMA disclosure regimes administered by FDA's Center for Devices and Radiological Health (CDRH). Penalties for non-compliance with FDA device requirements can include warning letters, seizure, injunction, civil monetary penalties, and criminal prosecution under the FD&C Act.",
+ "required_elements": [
+ {
+ "id": "pccp-in-marketing-submission",
+ "description": "Authorized PCCP in the device's marketing submission (510(k), De Novo, or PMA), comprising a Description of Modifications, a Modification Protocol, and an Impact Assessment. (Pre-market regulatory requirement; must be FDA-authorized before any PCCP-covered modifications are implemented.)",
+ "required": false
+ },
+ {
+ "id": "device-labeling-aiml-disclosure",
+ "description": "Device labeling must disclose that the device is an AI/ML-enabled device software function, summarize the PCCP (where present), and inform users that the device may be modified within the bounds of the authorized PCCP without a new marketing submission.",
+ "required": true,
+ "example": "This device incorporates an artificial intelligence / machine-learning algorithm. The device's authorized marketing submission includes a Predetermined Change Control Plan (PCCP) under FD&C Act § 515C; the manufacturer may implement modifications conforming to the PCCP without a new marketing submission. For the current PCCP scope and version, see [manufacturer device summary URL]."
+ },
+ {
+ "id": "user-facing-aiml-summary",
+ "description": "Plain-language summary of the AI/ML nature of the device, intended use, performance characteristics, and the kinds of modifications anticipated under the PCCP, made available to clinicians and (where the device is patient-facing) to patients.",
+ "required": true,
+ "example": "This device uses machine learning to [intended task]. The model's performance has been validated for [population / indication]. Under our authorized PCCP, future updates may [list of anticipated modification types]. Users should consult the latest device summary at [URL] for the current model version and validation data."
+ },
+ {
+ "id": "post-implementation-transparency",
+ "description": "Post-implementation transparency: when a PCCP-conforming modification is implemented, the manufacturer must update device labeling and the public-facing device summary to reflect the modification and its impact, and must document the modification under the PCCP's Modification Protocol.",
+ "required": false
+ }
+ ],
+ "citation": {
+ "statute": "Federal Food, Drug, and Cosmetic Act § 515C (21 U.S.C. § 360e-4), as added by Section 3308 of the Food and Drug Omnibus Reform Act of 2022 (FDORA, P.L. 117-328, Division FF, Title III)",
+ "section": "Predetermined Change Control Plans for Artificial Intelligence-Enabled Device Software Functions: Guidance for Industry and Food and Drug Administration Staff (Final, December 4, 2024)",
+ "source_url": "https://www.fda.gov/regulatory-information/search-fda-guidance-documents/predetermined-change-control-plans-artificial-intelligence-enabled-device-software-functions",
+ "publisher": "U.S. Food and Drug Administration, Center for Devices and Radiological Health"
+ },
+ "effective_date": "2024-12-04",
+ "last_verified": "2026-05-08",
+ "template": {
+ "plain": "Notice — AI/ML-Enabled Medical Device: This device incorporates an artificial intelligence or machine-learning algorithm. The device has been authorized for marketing by the U.S. Food and Drug Administration under [510(k) / De Novo / PMA number]. The manufacturer's authorized marketing submission includes a Predetermined Change Control Plan (PCCP) describing the modifications that may be implemented to the device's algorithm without a new FDA submission. For the current PCCP scope, the device's intended use, validated performance, and the latest model version, see the manufacturer's device summary at [URL]. Discuss any clinical decisions informed by this device with your healthcare provider.",
+ "formal": "Notice under FD&C Act § 515C (21 U.S.C. § 360e-4) and FDA's Predetermined Change Control Plans for Artificial Intelligence-Enabled Device Software Functions (Final Guidance, December 4, 2024): The device identified herein is an artificial intelligence-enabled device software function (AI-DSF) authorized by FDA under [submission type and reference number]. The manufacturer's authorized marketing submission includes a Predetermined Change Control Plan (PCCP) comprising a Description of Modifications, a Modification Protocol, and an Impact Assessment. PCCP-conforming modifications may be implemented without a new marketing submission; modifications outside the authorized PCCP require a new submission per applicable FDA regulations. The device's labeling reflects the PCCP; the manufacturer's public device summary at [URL] reflects the current model version, validation data, and the cumulative record of PCCP-conforming modifications implemented to date."
+ },
+ "notes": "PCCP is the FDA's response to the 'locked algorithm' problem for AI/ML medical devices: prior to FDORA § 515C (2022), any change to the algorithm of a cleared/authorized AI/ML device that affected safety or effectiveness typically required a new 510(k) / De Novo / PMA submission, which made iterative model improvement impractical. The PCCP framework lets manufacturers pre-authorize a bounded set of modifications and the validation methods for each. The December 2024 final guidance applies to all medical devices regardless of pathway (510(k), De Novo, PMA) and supersedes the April 2023 draft. Disclosure scope: the FDA-required labeling under 21 CFR Part 801 (device labeling) and the public-facing 510(k) summary / De Novo decision summary / PMA approval order published on FDA's website constitute the public disclosure surface; manufacturers typically also publish device-summary pages on their own websites with current model version and validation data. Use case is `healthcare`. Stack with HHS Section 1557 PCDST nondiscrimination obligations and with state-level rules like California SB 1120 — Physicians Make Decisions Act when the device is used in coverage decisions. The patient-facing element is conditional: most FDA-regulated AI/ML devices are clinician-facing tools, but where the device produces output that is shown to patients (e.g., consumer-facing diabetes risk estimators, certain digital health products), the AI/ML disclosure should be patient-facing. The 'mandatory' severity reflects that AI/ML modifications must be authorized — either through PCCP or through a new submission — and that labeling disclosure is required; the 'recommended' framing applies to design choices about how detailed to make the user-facing AI/ML summary. Verify against the current FDA guidance and any device-class-specific guidance before production deployment."
  }
  ]
  }