plainstamp 0.0.1 → 0.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -6,19 +6,37 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
6
6
 
7
7
  ## [Unreleased]
8
8
 
9
- ### Planned for 0.1.0
9
+ ### Planned next
10
10
 
11
11
  - Add EU member-state AI Act implementation specifics where they diverge from the regulation (Germany, France, Spain, Italy, Netherlands first).
12
- - Add sector-specific rules: FDA Software-as-a-Medical-Device AI guidance, FINRA chatbot disclosure, healthcare HIPAA-adjacent AI rules.
13
- - Add additional watcher sources to the existing regulatory-update watcher (Cal Leg Info first; EUR-Lex if a usable feed surfaces).
14
- - Apify Actor wrapper for the paid hosted tier.
15
- - Cloudflare Workers deployment of the MCP server for free-tier hosted access.
12
+ - Add a third watcher source (Cal Leg Info first; EUR-Lex if a usable feed surfaces).
13
+ - Cloudflare Workers deployment of the MCP server for free-tier hosted access; this deployment gates the smithery.ai / pulsemcp.com / official MCP registry submissions, all of which require a hosted MCP endpoint or GitHub-verified namespace ownership.
16
14
 
17
15
  ### Distribution
18
16
 
19
- Distribution is **npm-only**. Source remains in the operating organization's private repository; there is no public source repository host. Contact channel for issues, accuracy reports, security reports, and contribution proposals is **helpfulbutton140@agentmail.to** (see `README.md`, `docs/CONTRIBUTING.md`, `docs/SECURITY.md`).
17
+ Distribution is **npm-only**. Source remains in the operating organization's private repository; there is no public source repository host. Contact channel for issues, accuracy reports, security reports, and contribution proposals is **helpfulbutton140@agentmail.to** (see `docs/CONTRIBUTING.md`, `docs/SECURITY.md`).
20
18
 
21
- ### Added since 0.0.1
19
+ ## [0.2.0] — 2026-05-08
20
+
21
+ ### Added
22
+
23
+ - FINRA Regulatory Notice 24-09 — AI in customer communications. Member-firm obligations under FINRA Rules 2210 (communications), 2090 (KYC), 2111 (suitability), 3110 (supervision), 4511 (records), 3220 (gifts) all apply to AI-driven customer communications and recommendations; firms remain responsible for third-party AI vendor outputs. Use case `financial-services`. Issued 2024-06-27.
24
+ - New SEO-leaning guide: `docs/guides/eu-ai-act-article-50-chatbot-disclosure.md` — long-form builder-focused guide on Article 50 disclosure requirements, the August 2026 application date, the Omnibus VII provisional agreement, and how the rule stacks with GDPR Article 22 and EU Member-State implementations. Ships in the npm package and renders on the npm package page (which is well-indexed).
25
+ - Package `files` array now includes `docs/guides` so SEO-leaning content ships with the published artifact.
26
+ - Keywords expanded: `gdpr`, `finra`, `cfpb`, `eeoc`, `regtech` added to support discovery via npm search and search-engine indexing of the npm package page.
27
+ - Rule count 18 → 19. Tests still 51/51 passing.
28
+
29
+ ## [0.1.0] — 2026-05-08
30
+
31
+ ### Added
32
+
33
+ - Federal EEOC technical assistance on AI in employment selection procedures (Title VII / Uniform Guidelines, May 18, 2023). Severity `recommended` — the disclosure itself is best practice; the underlying disparate-impact obligation is binding. Federal floor for any AI hiring tool used in the U.S.; layers under stricter state mandates (IL HB 3773, NYC Local Law 144, CO SB 24-205).
34
+ - EU GDPR Article 22 — automated decision-making rights. Right to not be subject to a decision based solely on automated processing where it produces legal or similarly significant effects; right to human intervention, point-of-view expression, and contestation; controllers must provide meaningful information about logic, significance, and envisaged consequences (Arts. 13(2)(f), 14(2)(g)). Spans `employment-decisions`, `financial-services`, `healthcare`, `legal-services`, `general`. Effective 2018-05-25; penalties up to €20M or 4% of turnover.
35
+ - Tennessee ELVIS Act — voice and likeness protection (HB 2091 / SB 2096, codified at Tenn. Code Ann. Title 47, Chapter 25, Part 11). Consent-based statute; published AI-synthesized voice or likeness requires written authorization from the individual or rights-holder. Channels `ai-generated-audio`, `ai-generated-video`, `ai-generated-content`. Use cases include `b2c-marketing`, `b2b-marketing`, `civic-or-electoral`, `general`. Effective 2024-07-01.
36
+ - CFPB Circular 2023-03 — adverse-action notices for AI/ML credit decisions under ECOA / Regulation B. Specific principal reasons must be provided per applicant; generic boilerplate codes are insufficient; if the AI/ML model cannot be explained well enough to identify the specific reasons that drove the decision in this applicant's case, the model likely cannot lawfully be used. Channel `email-transactional` + `ai-generated-content`; use case `financial-services`. Issued 2023-09-19; ongoing CFPB enforcement priority.
37
+ - Rule count 14 → 18. Jurisdictions 8 → 11 (added `us-tn`). Tests 51/51 passing.
38
+
39
+ ### Added since 0.0.1 (rolled into 0.1.0 history)
22
40
 
23
41
  - Brand committed: working slug `disclo` retired in favor of `plainstamp` after a namespace availability check (github.com/disclo is taken by an unrelated $6.75M-funded HR/workforce SaaS).
24
42
  - Colorado AI Act (SB 24-205) — consumer-interaction disclosure; effective 2026-06-30 after a delay from 2026-02-01.
package/docs/guides/eu-ai-act-article-50-chatbot-disclosure.md ADDED
@@ -0,0 +1,161 @@
1
+ # EU AI Act Article 50: a builder's guide to chatbot disclosure
2
+
3
+ > **Informational only — not legal advice.** Verify against the cited
4
+ > regulator-published text and consult counsel for production deployments.
5
+ > See `AI-DISCLOSURE.md` in this package.
6
+
7
+ If your product talks to people in the EU and an AI is doing the talking,
8
+ Article 50 of the EU AI Act applies to you. This guide covers what the
9
+ rule actually says, when it applies, what counts as compliance, and the
10
+ deadline pressure most teams aren't tracking yet.
11
+
12
+ ## What Article 50 actually requires
13
+
14
+ Article 50(1) of Regulation (EU) 2024/1689 says:
15
+
16
+ > Providers of AI systems intended to interact directly with natural
17
+ > persons must design and develop them in such a way that the natural
18
+ > persons concerned are informed that they are interacting with an AI
19
+ > system.
20
+
21
+ There is one exception: if the fact that the user is talking to an AI
22
+ is "obvious from the point of view of a reasonably well-informed person
23
+ taking into account the circumstances and the context of use," the
24
+ disclosure is not required. The bar for "obvious" is high — a chat
25
+ window labeled "AI Assistant" probably qualifies; a chat window
26
+ labeled "Customer Support" does not, even if the bot sounds robotic.
27
+
28
+ Article 50(2) layers a separate obligation: any AI-generated synthetic
29
+ audio, image, video, or text must be marked as artificially generated
30
+ or manipulated, in a machine-readable format. The text-content
31
+ sub-clause has narrow exemptions (assistive editing, no substantive
32
+ change, etc.) that we cover later.
33
+
34
+ ## Who is the "provider"
35
+
36
+ The Act distinguishes **providers** (who develop or place the AI system
37
+ on the market) from **deployers** (who use it). Article 50 falls
38
+ primarily on providers — but the deployer obligations under Article 50(4)
39
+ on emotion-recognition / biometric systems and on deepfakes still apply
40
+ where relevant.
41
+
42
+ For a typical SaaS chatbot: the company that builds the chatbot model
43
+ or wraps an LLM into a product is the provider. The customer that
44
+ embeds the chatbot on their site is a deployer. Both have obligations
45
+ under different Article 50 paragraphs.
46
+
47
+ ## When the obligation kicks in
48
+
49
+ Article 50 applies as soon as a natural person begins interacting with
50
+ the AI system. Practically, this means **the disclosure must appear at
51
+ the start of the conversation**, before the AI has produced any
52
+ substantive output that a user might rely on.
53
+
54
+ A persistent banner reading "You are chatting with an AI assistant" at
55
+ the top of the chat surface satisfies this for most chat UIs. A
56
+ voice-channel disclosure must be spoken at session start. A
57
+ video-avatar disclosure typically combines a spoken introduction with a
58
+ visible on-screen indicator.
59
+
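+ As a sketch of that session-start requirement, the snippet below
+ (hypothetical helper names, not part of plainstamp) emits the
+ disclosure as the first event of a session, keyed by channel, before
+ any model output is sent:
+
+ ```ts
+ // Hypothetical sketch: the Art. 50(1) disclosure goes out before any AI output.
+ type Channel = "live-chat" | "voice" | "video-avatar";
+
+ const DISCLOSURES: Record<Channel, string> = {
+   "live-chat": "You are chatting with an AI assistant.",
+   "voice": "This call is handled by an AI assistant.",
+   "video-avatar": "This presenter is an AI-generated avatar.",
+ };
+
+ // Call once when the session opens, before the first model response.
+ function openSession(channel: Channel, send: (message: string) => void): void {
+   send(DISCLOSURES[channel]); // first thing the user sees or hears
+ }
+ ```
+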
60
+ ## The "machine-readable" requirement (Art. 50(2))
61
+
62
+ For AI-generated synthetic content, the marking must be
63
+ machine-readable. The Act doesn't mandate a specific technical standard, but
64
+ the European Commission has signaled that watermarking schemes
65
+ compliant with C2PA, the SynthID variants, and similar provenance
66
+ metadata will be acceptable. As of 2026, the Commission is finalizing
67
+ implementing acts that will narrow the technical options.
68
+
69
+ If you're producing AI-generated images, audio, or video at scale,
70
+ adopt a watermarking standard now — retrofitting watermarks across an
71
+ existing content corpus is materially harder than baking them into the
72
+ generation pipeline.
73
+
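+ For illustration only (the real C2PA manifest is richer, and is
+ embedded and cryptographically signed rather than written alongside
+ the file; the field names below are made up), a provenance record
+ attached at generation time might look like this:
+
+ ```ts
+ import { createHash } from "node:crypto";
+ import { writeFileSync } from "node:fs";
+
+ // Illustrative only: NOT the C2PA manifest format. Field names are invented.
+ interface ProvenanceRecord {
+   generator: string;   // model or service that produced the asset
+   aiGenerated: true;   // the machine-readable flag Art. 50(2) asks for
+   createdAt: string;   // ISO-8601 timestamp
+   assetSha256: string; // binds the record to the exact bytes produced
+ }
+
+ // Write the asset plus a provenance sidecar at generation time.
+ function markAsset(path: string, bytes: Buffer, generator: string): void {
+   const record: ProvenanceRecord = {
+     generator,
+     aiGenerated: true,
+     createdAt: new Date().toISOString(),
+     assetSha256: createHash("sha256").update(bytes).digest("hex"),
+   };
+   writeFileSync(path, bytes);
+   writeFileSync(`${path}.provenance.json`, JSON.stringify(record, null, 2));
+ }
+ ```
+
+ A sidecar file is the weakest form of binding; embedded, signed
+ manifests of the kind C2PA standardizes survive copying and
+ re-encoding far better. The point stands either way: mark content in
+ the generation pipeline, not after the fact.
+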
74
+ ## Penalties and timing
75
+
76
+ Article 50 obligations apply from **August 2, 2026**. Penalties under
77
+ Article 99 of the Act for Article 50 violations can reach **€15 million
78
+ or 3% of global annual turnover, whichever is higher**.
79
+
80
+ A separate provisional agreement under the EU's Omnibus VII package
81
+ (provisional agreement 2026-05-07) reduced the transparency-solutions
82
+ grace period from 6 months to 3 months, moving the practical
83
+ compliance deadline for Article 50(2) machine-readable marking
84
+ implementations to **December 2, 2026**. Re-verify against the final
85
+ adopted text — Omnibus VII's provisional agreement may shift before
86
+ formal adoption.
87
+
88
+ ## How Article 50 stacks with other EU rules
89
+
90
+ Article 50 doesn't operate in isolation. Builders should also check:
91
+
92
+ - **GDPR Article 22** — if the AI conversation feeds into an automated
93
+ decision producing legal or similarly significant effects, the
94
+ data-subject rights to human intervention, point-of-view expression,
95
+ and contestation apply on top of the chatbot disclosure.
96
+ - **GDPR Articles 13(2)(f) and 14(2)(g)** — when personal data is
97
+ collected during the AI interaction, the controller must inform the
98
+ data subject about the existence of automated decision-making, the
99
+ logic involved, and the envisaged consequences.
100
+ - **EU AI Act Article 13** — high-risk AI systems have separate
101
+ transparency obligations to deployers (instructions for use,
102
+ expected outputs, characteristics and limitations).
103
+ - **Digital Services Act (Regulation (EU) 2022/2065)** — provider
104
+ obligations around content moderation transparency layer over the
105
+ chatbot disclosure when the AI is moderating user-generated content.
106
+ - **Member-state implementations** — Germany's BDSG, France's Loi
107
+ Informatique et Libertés, and Spain's AESIA framework all add
108
+ national-level safeguards on top of the EU regulation. Verify the
109
+ rules of every Member State your service operates in.
110
+
111
+ ## How plainstamp helps
112
+
113
+ `plainstamp` ships with `eu-ai-act-art50-chatbot` and
114
+ `eu-ai-act-art50-genai-content` rules that surface the live text of
115
+ Article 50, the required disclosure elements, and ready-to-paste
116
+ plain-language and formal-language disclosure templates. Each rule cites the
117
+ EUR-Lex source URL and carries a `last_verified` date so you know
118
+ whether the text you're reading is current.
119
+
120
+ A typical lookup:
121
+
122
+ ```bash
123
+ npx plainstamp lookup --jurisdiction eu \
124
+   --channel live-chat \
125
+   --use-case b2c-customer-support
126
+ ```
127
+
128
+ returns the rule, the disclosure-element checklist, and template text
129
+ you can drop into your chat surface. For deployers running across
130
+ multiple jurisdictions, the same query against `us-ca`, `us-co`,
131
+ `us-il`, `us-tx`, `us-ut`, etc. will surface the parallel state-level
132
+ obligations that often layer on top.
133
+
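+ A minimal sketch of that multi-jurisdiction sweep, shelling out to the
+ same CLI invocation shown above (the output format and the exact
+ jurisdiction coverage are assumptions to verify against your installed
+ version):
+
+ ```ts
+ import { execFileSync } from "node:child_process";
+
+ // Same flags as the single-jurisdiction lookup above.
+ const jurisdictions = ["eu", "us-ca", "us-co", "us-il", "us-tx", "us-ut"];
+
+ for (const jurisdiction of jurisdictions) {
+   const output = execFileSync("npx", [
+     "plainstamp", "lookup",
+     "--jurisdiction", jurisdiction,
+     "--channel", "live-chat",
+     "--use-case", "b2c-customer-support",
+   ], { encoding: "utf8" });
+   console.log(`--- ${jurisdiction} ---\n${output}`);
+ }
+ ```
+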
134
+ ## The minimum viable Article 50 disclosure
135
+
136
+ If you ship one thing this week, ship a chat-surface header that
137
+ includes:
138
+
139
+ 1. A clear statement that the user is interacting with an AI ("You
140
+ are chatting with an AI assistant").
141
+ 2. A path to escalate to a human (where applicable to your service
142
+ model and required by sectoral rules — e.g., financial-services
143
+ rules in many jurisdictions require an escalation path).
144
+ 3. A link to your privacy notice covering AI data use.
145
+
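+ A minimal sketch of that header in browser code (the copy, the URL,
+ and the escalation hook are placeholders, not template text shipped by
+ the package):
+
+ ```ts
+ // Placeholders throughout; adapt the copy and URL to your product.
+ function disclosureHeader(privacyUrl: string, escalateToHuman: () => void): HTMLElement {
+   const header = document.createElement("header");
+
+   const statement = document.createElement("span");
+   statement.textContent = "You are chatting with an AI assistant. "; // element 1
+
+   const escalate = document.createElement("button");                 // element 2
+   escalate.textContent = "Talk to a human";
+   escalate.addEventListener("click", escalateToHuman);
+
+   const privacyLink = document.createElement("a");                    // element 3
+   privacyLink.href = privacyUrl;
+   privacyLink.textContent = "How we use AI and your data";
+
+   header.append(statement, escalate, privacyLink);
+   return header;
+ }
+ ```
+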
146
+ Then, if you process AI-generated synthetic media, prioritize
147
+ machine-readable marking for the Art. 50(2) deadline.
148
+
149
+ ## Source-of-truth links
150
+
151
+ - **Regulation (EU) 2024/1689 — full text on EUR-Lex** ([eur-lex.europa.eu](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689))
152
+ - **GDPR Article 22 on EUR-Lex** ([eur-lex.europa.eu](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679))
153
+ - **EDPB Guidelines on Automated Decision-Making (WP251rev.01)** — apply alongside Art. 22 obligations.
154
+
155
+ `plainstamp` is maintained by an autonomous AI agent operating under
156
+ KS Elevated Solutions LLC. Accuracy reports, rule-update suggestions,
157
+ and security disclosures: [helpfulbutton140@agentmail.to](mailto:helpfulbutton140@agentmail.to).
158
+
159
+ ---
160
+
161
+ [`← Back to plainstamp on npm`](https://www.npmjs.com/package/plainstamp)
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "plainstamp",
3
- "version": "0.0.1",
3
+ "version": "0.2.0",
4
4
  "description": "AI disclosure compliance assistant — generates legally-grounded AI disclosure text per (jurisdiction × channel × use-case) and tracks regulatory updates. Operated by an autonomous AI agent under KS Elevated Solutions LLC.",
5
5
  "type": "module",
6
6
  "license": "MIT",
@@ -17,6 +17,7 @@
17
17
  "files": [
18
18
  "dist",
19
19
  "rules",
20
+ "docs/guides",
20
21
  "README.md",
21
22
  "AI-DISCLOSURE.md",
22
23
  "CHANGELOG.md",
@@ -53,7 +54,12 @@
53
54
  "compliance",
54
55
  "ccpa",
55
56
  "eu-ai-act",
57
+ "gdpr",
56
58
  "ftc",
59
+ "finra",
60
+ "cfpb",
61
+ "eeoc",
62
+ "regtech",
57
63
  "agent",
58
64
  "autonomous-ai"
59
65
  ]
package/rules/seed.json CHANGED
@@ -615,6 +615,250 @@
615
615
  "formal": "Consent Waiver under Maryland Labor and Employment Article § 3-717 (HB 1202, Chapter 446 of the 2020 Laws of Maryland): The applicant identified by name and interview date below consents to the use of facial-recognition services by the employer during the pre-employment interview, and acknowledges having read this waiver. Applicant: [NAME]. Date of Interview: [DATE]. Employer: [EMPLOYER]. Signature: _______________ Date: ____________________"
616
616
  },
617
617
  "notes": "The statute is narrow — it applies to facial-recognition services that build a machine-interpretable pattern of facial features, used during interviews. AI hiring tools that record but do not analyze face patterns may be outside scope; tools that score expressions or compute similarity to other faces are inside scope. When in doubt, obtain the waiver — the cost is one form versus the cost of an LE-Article-3-717 violation claim. The waiver requirement runs in parallel with separate disclosure obligations under the IL HB 3773 and NYC Local Law 144 rules — multi-jurisdictional employers using AI interview tools need to satisfy each applicable obligation."
618
+ },
619
+ {
620
+ "id": "us-eeoc-title-vii-ai-employment-2023",
621
+ "jurisdiction": "us",
622
+ "channels": ["email-transactional", "ai-generated-content", "about-page"],
623
+ "use_cases": ["employment-decisions"],
624
+ "severity": "recommended",
625
+ "short_title": "EEOC Title VII technical assistance — AI selection procedures (2023)",
626
+ "summary": "The U.S. Equal Employment Opportunity Commission issued technical assistance on May 18, 2023 addressing the application of Title VII of the Civil Rights Act of 1964 to automated systems and AI used in employment-related selection procedures. The guidance reaffirms that the Uniform Guidelines on Employee Selection Procedures (1978) apply to AI/algorithmic tools used to make hiring, promotion, transfer, or firing decisions: such tools are 'selection procedures' under the Uniform Guidelines and are subject to the four-fifths rule for measuring adverse impact. Employers remain liable for discriminatory outcomes from AI tools they use, even tools developed by third-party vendors. The EEOC recommends — but does not strictly mandate — that employers (a) audit AI tools for adverse impact before deployment and on an ongoing basis, (b) be transparent with applicants and employees about the use of AI tools, and (c) provide reasonable accommodations and alternative selection procedures on request. This is interpretive guidance, not a regulation; substantive Title VII liability for disparate-impact discrimination is the binding obligation.",
627
+ "required_elements": [
628
+ {
629
+ "id": "ai-tool-use-notice",
630
+ "description": "Notice to applicants and employees that an AI or algorithmic tool will be used in the selection procedure (recommended).",
631
+ "required": true,
632
+ "example": "Notice: This employer uses an automated decision-making tool to assist in evaluating applications. Use of this tool will form part of the selection process for this role."
633
+ },
634
+ {
635
+ "id": "alternative-process-availability",
636
+ "description": "Information about how to request an alternative, non-AI selection process or a reasonable accommodation under the Americans with Disabilities Act.",
637
+ "required": true,
638
+ "example": "If you would prefer an alternative selection process, or require a reasonable accommodation under the ADA, please contact the employer's human resources team."
639
+ },
640
+ {
641
+ "id": "four-fifths-adverse-impact-audit",
642
+ "description": "Periodic adverse-impact audit of the AI selection tool against the four-fifths rule of the Uniform Guidelines (1978). (System / governance requirement, not in-message disclosure.)",
643
+ "required": false
644
+ }
645
+ ],
646
+ "citation": {
647
+ "statute": "Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e et seq., interpreted via Uniform Guidelines on Employee Selection Procedures (1978), 29 CFR Part 1607",
648
+ "section": "EEOC Technical Assistance: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII (May 18, 2023)",
649
+ "source_url": "https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial-intelligence",
650
+ "publisher": "U.S. Equal Employment Opportunity Commission"
651
+ },
652
+ "effective_date": "2023-05-18",
653
+ "last_verified": "2026-05-08",
654
+ "template": {
655
+ "plain": "Notice: This employer uses an automated decision-making (AI) tool to assist in evaluating applications and employment decisions. The tool's outputs are reviewed by human decision-makers and are subject to the federal Title VII non-discrimination requirements. If you would prefer an alternative, non-AI selection process, or require a reasonable accommodation under the Americans with Disabilities Act, please contact our human resources team.",
656
+ "formal": "Notice under EEOC technical assistance applying Title VII of the Civil Rights Act of 1964 (42 U.S.C. § 2000e et seq.) and the Uniform Guidelines on Employee Selection Procedures (29 CFR Part 1607) to AI selection procedures: This employer uses an automated decision-making tool as part of one or more employment-related selection procedures (which may include hiring, promotion, transfer, or termination decisions). Such tools are subject to the same disparate-impact analysis as any other selection procedure, including the four-fifths rule for measuring adverse impact. Applicants and employees may request an alternative selection procedure or reasonable accommodation under the Americans with Disabilities Act."
657
+ },
658
+ "notes": "EEOC technical assistance is interpretive guidance — not a regulation. The binding obligation is Title VII's prohibition on disparate-impact discrimination, which has been law since the 1971 Griggs v. Duke Power decision. The 2023 guidance simply confirms that AI/algorithmic selection tools are 'selection procedures' under the Uniform Guidelines and subject to the same scrutiny. Severity is `recommended` because the disclosure itself is best-practice, not mandated; the underlying disparate-impact obligation is non-negotiable. The state-level mandates (IL HB 3773, NYC Local Law 144, CO SB 24-205) are stricter than this federal guidance and supersede it where they apply. Employers using AI hiring tools across multiple states should treat the federal EEOC guidance as a floor and the strictest applicable state rule as the ceiling. Note that the EEOC also issued separate technical assistance under the ADA (May 12, 2022) addressing reasonable-accommodation obligations for applicants who cannot effectively interface with AI selection tools — that guidance complements this one and should be consulted alongside it."
659
+ },
660
+ {
661
+ "id": "eu-gdpr-art22-automated-decisions",
662
+ "jurisdiction": "eu",
663
+ "channels": ["email-transactional", "ai-generated-content", "privacy-policy"],
664
+ "use_cases": [
665
+ "employment-decisions",
666
+ "financial-services",
667
+ "healthcare",
668
+ "legal-services",
669
+ "general"
670
+ ],
671
+ "severity": "mandatory",
672
+ "short_title": "EU GDPR Article 22 — automated decision-making rights",
673
+ "summary": "Under the EU General Data Protection Regulation (Regulation (EU) 2016/679), Article 22(1) gives data subjects the right not to be subject to a decision based solely on automated processing — including profiling — that produces legal effects concerning them or similarly significantly affects them. Exceptions in Article 22(2) permit such decisions if (a) necessary for entering into or performing a contract, (b) authorized by Union or Member-State law that provides safeguards, or (c) based on the data subject's explicit consent. Where one of these exceptions applies, the controller must implement suitable measures to safeguard the data subject's rights and freedoms, including at minimum the right to obtain human intervention, to express their point of view, and to contest the decision (Art. 22(3)). Articles 13(2)(f) and 14(2)(g) require the controller to provide, at the time data is collected, meaningful information about the logic involved in any such automated decision-making and the significance and envisaged consequences of such processing for the data subject. Penalties under Art. 83(5): up to €20 million or 4% of global annual turnover, whichever is higher.",
674
+ "required_elements": [
675
+ {
676
+ "id": "automated-decision-notice",
677
+ "description": "Notice that the data subject is being subjected to automated decision-making, including profiling, that produces legal or similarly significant effects.",
678
+ "required": true,
679
+ "example": "Notice: This decision was made by an automated system, including profiling, and produces effects relating to your application or account that are significant to you."
680
+ },
681
+ {
682
+ "id": "logic-involved",
683
+ "description": "Meaningful information about the logic involved in the automated decision (the type of inputs and the way they are weighted, not the underlying source code or proprietary model parameters).",
684
+ "required": true,
685
+ "example": "The decision is based on inputs you provided in your application, your prior interaction history with us, and a credit score from an authorized bureau, weighted to predict outcome likelihood."
686
+ },
687
+ {
688
+ "id": "significance-and-consequences",
689
+ "description": "Information about the significance and envisaged consequences of the automated processing for the data subject.",
690
+ "required": true,
691
+ "example": "An adverse decision means your application will not proceed; you may reapply after 30 days, or request a human review now."
692
+ },
693
+ {
694
+ "id": "right-to-human-intervention",
695
+ "description": "Right to obtain human intervention on the part of the controller, to express the data subject's point of view, and to contest the decision.",
696
+ "required": true,
697
+ "example": "You have the right to request that a human review this decision, to provide additional context for consideration, and to contest the decision. To exercise these rights, contact our data-protection team at [contact]."
698
+ },
699
+ {
700
+ "id": "lawful-basis-disclosure",
701
+ "description": "Disclosure of the Article 22(2) lawful basis under which the automated decision is made (contract, EU/Member-State law, or explicit consent). (Information requirement, not single in-message text.)",
702
+ "required": false
703
+ }
704
+ ],
705
+ "citation": {
706
+ "statute": "Regulation (EU) 2016/679 (General Data Protection Regulation)",
707
+ "section": "Article 22 — automated individual decision-making, including profiling; in conjunction with Articles 13(2)(f) and 14(2)(g)",
708
+ "source_url": "https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679",
709
+ "publisher": "Publications Office of the European Union (EUR-Lex)"
710
+ },
711
+ "effective_date": "2018-05-25",
712
+ "last_verified": "2026-05-08",
713
+ "template": {
714
+ "plain": "This decision was made by an automated system. The decision considers [inputs / categories of data] and produces effects relating to [employment / credit / insurance / other significant outcome]. You have the right to request human review of this decision, to express your point of view, and to contest the decision — contact us at [data-protection address]. For more on the logic involved and the consequences of this automated processing, see our privacy notice at [URL].",
715
+ "formal": "Notice under Article 22 of Regulation (EU) 2016/679 (GDPR): This decision is based solely on automated processing, including profiling, that produces legal effects or similarly significant effects concerning you. The lawful basis for this automated decision is [contract performance / EU or Member-State law / your explicit consent — Article 22(2)(a), (b), or (c)]. Meaningful information about the logic involved: [description of inputs, weights at high level, decision threshold]. The significance and envisaged consequences of the processing are: [description]. You have the right under Article 22(3) to obtain human intervention by the controller, to express your point of view, and to contest this decision. To exercise these rights, contact the data-protection team at [contact]. You also have the right to lodge a complaint with your supervisory authority."
716
+ },
717
+ "notes": "Article 22 applies only to decisions based 'solely' on automated processing. Decisions where a human meaningfully reviews the AI output before it takes effect are NOT solely automated and are outside Article 22's scope, although other GDPR transparency obligations (Arts. 13–14) still apply. The EDPB's Guidelines on Automated Decision-Making (WP251rev.01) clarify that 'meaningful' human review must be substantive — rubber-stamping the AI's recommendation is not enough. The Schufa Holding judgment (CJEU C-634/21, 2023) confirmed that automated credit scoring constitutes a decision under Art. 22 even when the score is then passed to a human-operated lender — because the score itself drives the outcome. EU Member States may impose additional safeguards (e.g., France's Loi Informatique et Libertés, Germany's BDSG § 37); developers should layer Member-State requirements on top. Sectoral overlaps: in employment-decisions use, Article 22 stacks with the EU AI Act's Article 50 chatbot disclosure (where chat is used) and any Member-State implementations; in financial-services, with the EU AI Act's high-risk classification of credit-scoring systems."
718
+ },
719
+ {
720
+ "id": "us-tn-elvis-act-voice-likeness-2024",
721
+ "jurisdiction": "us-tn",
722
+ "channels": [
723
+ "ai-generated-audio",
724
+ "ai-generated-video",
725
+ "ai-generated-content"
726
+ ],
727
+ "use_cases": [
728
+ "b2c-marketing",
729
+ "b2b-marketing",
730
+ "civic-or-electoral",
731
+ "general"
732
+ ],
733
+ "severity": "mandatory",
734
+ "short_title": "Tennessee ELVIS Act — voice and likeness protection (HB 2091 / SB 2096, 2024)",
735
+ "summary": "The Tennessee Ensuring Likeness, Voice, and Image Security Act of 2024 (the 'ELVIS Act') amends Tennessee Code Annotated Title 47, Chapter 25, Part 11 to extend Tennessee's right-of-publicity statute to a person's VOICE in addition to their name, photograph, and likeness. It is unlawful for any person, with knowledge that an individual's voice or likeness is being used without authorization, to publish, perform, distribute, transmit, or otherwise make available to the public an algorithm, software, tool, or other technology, service, or device the primary purpose or function of which is the production of a particular individual's voice or likeness without that individual's authorization. Civil remedies include injunctive relief, treble damages, and attorney's fees; the act creates a Class A misdemeanor for criminal violations and gives standing to the individual, their estate, or any person/entity holding exclusive license to the individual's voice or likeness. Effective July 1, 2024.",
736
+ "required_elements": [
737
+ {
738
+ "id": "ai-voice-likeness-authorization",
739
+ "description": "Authorization (license, consent, or other express permission) from the individual whose voice or likeness is being synthesized, BEFORE the AI-generated voice or likeness is published, performed, distributed, or otherwise made available to the public. (Authorization-not-disclosure rule: the obligation is to obtain consent first; disclosure of the AI nature of the content is a parallel best practice but not the statutory cure.)",
740
+ "required": true,
741
+ "example": "I, [individual name or authorized rights-holder], grant [licensee] permission to use my voice / likeness in AI-generated audio / video for the purposes of [scope] for the period of [term]. Signed: __________ Date: __________"
742
+ },
743
+ {
744
+ "id": "ai-generated-content-label",
745
+ "description": "Where authorization is granted, accompanying clear and conspicuous label that the published content includes AI-synthesized voice or likeness (best practice; complementary to EU AI Act Art. 50(2) and aligned with general FTC endorsement guidance).",
746
+ "required": true,
747
+ "example": "This audio (or video) includes an AI-synthesized voice of [individual] used with their permission."
748
+ },
749
+ {
750
+ "id": "no-tool-distribution-without-authorization",
751
+ "description": "Prohibition on publishing or distributing tools whose primary purpose is producing a particular individual's voice or likeness without authorization. (System / product-design requirement, not per-message disclosure.)",
752
+ "required": false
753
+ }
754
+ ],
755
+ "citation": {
756
+ "statute": "Tennessee Code Annotated, Title 47, Chapter 25, Part 11 (as amended by the Ensuring Likeness, Voice, and Image Security Act of 2024 — HB 2091 / SB 2096, Public Chapter 588)",
757
+ "section": "Personal Rights Protection Act, as amended by the ELVIS Act",
758
+ "source_url": "https://wapp.capitol.tn.gov/apps/BillInfo/Default.aspx?BillNumber=HB2091&ga=113",
759
+ "publisher": "Tennessee General Assembly"
760
+ },
761
+ "effective_date": "2024-07-01",
762
+ "last_verified": "2026-05-08",
763
+ "template": {
764
+ "plain": "AI Voice / Likeness Notice (Tennessee ELVIS Act): The audio (or video) includes an AI-synthesized voice or likeness of [individual]. Use of that voice or likeness has been authorized in writing by [the individual / their authorized rights-holder] for the scope and term of this communication.",
765
+ "formal": "Notice under the Tennessee Ensuring Likeness, Voice, and Image Security Act of 2024 (HB 2091 / SB 2096, codified at Tenn. Code Ann. Title 47, Chapter 25, Part 11): The published material includes AI-synthesized voice and/or likeness of [individual], used pursuant to written authorization from [the individual or the authorized exclusive rights-holder] dated [date]. The synthesis was performed by [system / service] for the limited purpose of [purpose]. Inquiries about the underlying authorization may be directed to [contact]."
766
+ },
767
+ "notes": "The ELVIS Act is a CONSENT-BASED statute, not solely a disclosure statute. The legal cure for AI voice or likeness use is authorization from the individual; a disclosure label without authorization does NOT cure a violation. The act applies whenever the published content reaches the public AND the actor knew the voice or likeness was being used without authorization — the knowledge standard creates real exposure for any service publishing user-generated AI synthesis content. The act has both civil (treble damages, attorney's fees, injunctive relief) and criminal (Class A misdemeanor) liability tracks. Tool publishers (the providers of voice-cloning or face-swap tools) face independent liability where the tool's primary purpose is producing a particular individual's voice or likeness without authorization — generic voice-synthesis tools that allow the user to clone arbitrary voices may not be covered, but tools marketed around a specific celebrity's voice clearly are. ELVIS Act-style protection is also emerging in California (AB 2602, AB 2655, AB 1836) and at the federal level via the proposed NO FAKES Act; multi-jurisdictional rights-clearance workflows should consider Tennessee + California + (eventually) federal in parallel."
768
+ },
769
+ {
770
+ "id": "us-cfpb-circular-2023-03-ai-adverse-action",
771
+ "jurisdiction": "us",
772
+ "channels": ["email-transactional", "ai-generated-content"],
773
+ "use_cases": ["financial-services"],
774
+ "severity": "mandatory",
775
+ "short_title": "CFPB Circular 2023-03 — adverse-action notices for AI credit decisions (ECOA / Regulation B)",
776
+ "summary": "The Consumer Financial Protection Bureau, in Circular 2023-03 (issued September 19, 2023), confirmed that creditors using complex algorithms or artificial intelligence to make credit decisions must still provide statements of specific reasons for adverse actions as required by the Equal Credit Opportunity Act (ECOA, 15 U.S.C. § 1691(d)) and Regulation B (12 CFR § 1002.9). Creditors cannot use the technological complexity of an AI/ML model as a defense for failing to identify the specific principal reasons that adversely affected the applicant. Generic or boilerplate reasons (e.g., 'failed credit-decision model') are insufficient; the creditor must identify the particular factors specific to the applicant's situation. If a creditor cannot accurately identify the specific reasons for an AI-driven adverse decision, the creditor likely cannot lawfully use the model. ECOA penalties include actual damages, punitive damages up to $10,000 per individual action / 1% of net worth in class actions, and attorney's fees; ongoing enforcement priority for the CFPB through 2026.",
777
+ "required_elements": [
778
+ {
779
+ "id": "specific-principal-reasons",
780
+ "description": "Statement of the specific principal reasons for the adverse credit action — the particular, applicant-specific factors that drove the decision; not generic or boilerplate explanations.",
781
+ "required": true,
782
+ "example": "Specific reasons your application was declined: (1) recent delinquencies on existing accounts; (2) high ratio of unsecured debt to monthly income; (3) short length of credit history. These factors most adversely affected the decision in your case."
783
+ },
784
+ {
785
+ "id": "right-to-statement-of-reasons",
786
+ "description": "Notice of the applicant's right to a statement of specific reasons for the adverse action and the timing for requesting it.",
787
+ "required": true,
788
+ "example": "If you would like a written statement of the specific reasons for this adverse action, you must request it within 60 days. We will provide the statement within 30 days of your request."
789
+ },
790
+ {
791
+ "id": "ecoa-equal-credit-notice",
792
+ "description": "ECOA equal-credit notice — the standard statement of the prohibited bases for credit discrimination.",
793
+ "required": true,
794
+ "example": "The federal Equal Credit Opportunity Act prohibits creditors from discriminating against credit applicants on the basis of race, color, religion, national origin, sex, marital status, age (provided the applicant has the capacity to enter into a binding contract), because all or part of the applicant's income derives from any public assistance program, or because the applicant has in good faith exercised any right under the Consumer Credit Protection Act. The federal agency that administers compliance with this law concerning this creditor is [agency and address]."
795
+ },
796
+ {
797
+ "id": "ai-driven-decision-explainability",
798
+ "description": "If the adverse action was driven by an AI / ML model, the creditor's underlying obligation to be able to identify the specific reasons for the model's output (model explainability requirement, governance-side rather than per-message text).",
799
+ "required": false
800
+ }
801
+ ],
802
+ "citation": {
803
+ "statute": "Equal Credit Opportunity Act, 15 U.S.C. § 1691(d); Regulation B, 12 CFR § 1002.9; interpreted via CFPB Circular 2023-03 (Adverse action notification requirements and the proper use of the CFPB's sample forms provided in Regulation B)",
804
+ "section": "Adverse-action notices for AI/ML credit decisions",
805
+ "source_url": "https://www.consumerfinance.gov/compliance/circulars/circular-2023-03-adverse-action-notification-requirements-and-the-proper-use-of-the-cfpbs-sample-forms-provided-in-regulation-b/",
806
+ "publisher": "Consumer Financial Protection Bureau"
807
+ },
808
+ "effective_date": "2023-09-19",
809
+ "last_verified": "2026-05-08",
810
+ "template": {
811
+ "plain": "Adverse Credit Decision Notice. We have decided not to approve your application. Specific reasons for this decision: (1) [reason 1 specific to your application]; (2) [reason 2]; (3) [reason 3]. These factors most adversely affected the decision in your case. Note: Federal law prohibits creditors from discriminating against credit applicants on the bases listed below. The federal agency administering this creditor's compliance with the Equal Credit Opportunity Act is [agency, address]. Prohibited bases: race, color, religion, national origin, sex, marital status, age (where the applicant has contract-binding capacity), receipt of income from any public-assistance program, or good-faith exercise of any Consumer Credit Protection Act right. If you would like a written statement of the specific reasons for this adverse action, you must request it within 60 days; we will provide it within 30 days of your request.",
812
+ "formal": "Notice of Adverse Action under the Equal Credit Opportunity Act (15 U.S.C. § 1691(d)) and Regulation B (12 CFR § 1002.9), as further interpreted by CFPB Circular 2023-03 in the context of artificial-intelligence and machine-learning credit decisions: The application identified by reference number [REF] has been adversely acted upon. The specific principal reasons that most adversely affected the decision in this case, as identified by the creditor's review of the AI/ML model output, are: (1) [reason]; (2) [reason]; (3) [reason]. The applicant may request a written statement of the specific reasons within 60 days of this notice; the creditor will provide such statement within 30 days of receipt of the request. Federal law prohibits creditors from discriminating against credit applicants on prohibited bases enumerated in 15 U.S.C. § 1691(a). The federal agency administering compliance with the ECOA concerning this creditor is [agency, address]."
813
+ },
814
+ "notes": "CFPB Circular 2023-03 makes explicit a position the CFPB had taken in supervisory guidance for years: the technological complexity of an AI/ML model is not a defense for failing to provide ECOA-compliant adverse-action reasons. Creditors must identify the specific factors that affected THIS APPLICANT'S decision — not generic factors that influence the model in general. Practical implications for AI-credit fintechs: (1) the model itself must be explainable to a level that supports per-applicant reason codes — if the model cannot do this, the model cannot be deployed for credit decisions; (2) the reason codes must be checked for accuracy, not just plausibility — using post-hoc SHAP / LIME explanations as the source of reason codes is acceptable IF the creditor has validated that those explanations actually reflect what drove the decision in each case; (3) generic or boilerplate codes ('credit application incomplete', 'failed model threshold') are insufficient — the codes must point to applicant-specific factors. ECOA's statutory penalties combined with ongoing CFPB enforcement priority make this a high-stakes obligation. Note: Regulation B's adverse-action requirements run in parallel with the FCRA's adverse-action requirements (15 U.S.C. § 1681m) when the decision was based in whole or in part on a consumer report — both sets of obligations apply to the same notice."
815
+ },
816
+ {
817
+ "id": "us-finra-rn-24-09-ai-customer-communications",
818
+ "jurisdiction": "us",
819
+ "channels": ["live-chat", "voice", "email-marketing", "ai-generated-content"],
820
+ "use_cases": ["financial-services"],
821
+ "severity": "mandatory",
822
+ "short_title": "FINRA Regulatory Notice 24-09 — AI in customer communications",
823
+ "summary": "FINRA Regulatory Notice 24-09 (June 27, 2024) addresses member firm use of generative artificial intelligence and other large language model technologies in their securities business. The Notice does not create new rules; it confirms that existing FINRA rules apply to AI-driven customer communications and reminds member firms of their obligations: (a) Rule 3110 — supervisory systems reasonably designed to achieve compliance with applicable rules apply to AI tools used by associated persons or in customer-facing roles; (b) Rule 2210 — communications with the public, including any communication generated by an AI tool, must be fair, balanced, not misleading, and (where applicable) supervised, principal-approved, or filed with FINRA; (c) Rule 2090 (Know Your Customer) and Rule 2111 (suitability) — AI-generated recommendations are subject to the same suitability and KYC obligations as human-generated ones; (d) Rule 4511 — books-and-records obligations apply to AI inputs and outputs that constitute communications with customers; (e) Rule 3220 — gifts and gratuities standards apply to AI-generated promotional materials. Member firms remain responsible for AI tool outputs even when the tool is provided by a third-party vendor. Notice 24-09 also flags risks including hallucination, bias, data privacy, and intellectual-property concerns; firms should address these in written supervisory procedures.",
824
+ "required_elements": [
825
+ {
826
+ "id": "ai-communication-supervision",
827
+ "description": "AI-generated communications with the public are subject to FINRA Rule 2210 standards (fair, balanced, not misleading) and the firm's existing principal-review / pre-approval / filing workflow as applicable to the communication type.",
828
+ "required": true,
829
+ "example": "All customer-facing communications generated by the AI assistant are reviewed by a qualified principal under FINRA Rule 2210 before delivery and retained per the firm's books-and-records policy under Rule 4511."
830
+ },
831
+ {
832
+ "id": "ai-recommendation-suitability",
833
+ "description": "AI-generated investment recommendations or advice are subject to FINRA Rule 2111 suitability obligations on the same terms as human-generated recommendations; firm WSPs must address how AI-generated recommendations are reviewed for suitability.",
834
+ "required": true,
835
+ "example": "Any investment recommendation generated by the AI tool for a customer account is subject to a Rule 2111 suitability review against the customer's investment profile under the firm's written supervisory procedures."
836
+ },
837
+ {
838
+ "id": "third-party-vendor-responsibility",
839
+ "description": "Firm responsibility for AI tool outputs persists when the tool is operated by a third-party vendor; vendor due diligence and oversight are part of the firm's Rule 3110 supervisory obligation.",
840
+ "required": true,
841
+ "example": "AI tools operated by third-party vendors are vetted, monitored, and supervised by the firm under FINRA Rule 3110; the firm remains responsible for any communications, recommendations, or records generated by those tools in connection with its securities business."
842
+ },
843
+ {
844
+ "id": "wsp-ai-coverage",
845
+ "description": "Written supervisory procedures address AI tool use, including risk areas of hallucination, bias, data privacy, and IP. (System / governance requirement, not per-message text.)",
846
+ "required": false
847
+ }
848
+ ],
849
+ "citation": {
850
+ "statute": "FINRA Rules 2210, 2090, 2111, 3110, 4511, 3220 (existing); FINRA Regulatory Notice 24-09, 'FINRA Reminds Member Firms of Their Obligations When Using Generative Artificial Intelligence and Large Language Models' (June 27, 2024)",
851
+ "section": "Member-firm obligations when using AI in securities business",
852
+ "source_url": "https://www.finra.org/rules-guidance/notices/24-09",
853
+ "publisher": "Financial Industry Regulatory Authority"
854
+ },
855
+ "effective_date": "2024-06-27",
856
+ "last_verified": "2026-05-08",
857
+ "template": {
858
+ "plain": "Notice — Customer Communication via AI Tool: This message (or recommendation) was prepared with the assistance of an artificial-intelligence tool and is subject to the same review and supervision standards as any communication delivered by [Member Firm]. The communication is reviewed under FINRA Rule 2210 standards and, where applicable, has been reviewed by a qualified principal. Any investment recommendation in this communication remains subject to the firm's suitability analysis under FINRA Rule 2111 against your investment profile. If you have questions about this communication or the role of AI in producing it, contact [contact].",
859
+ "formal": "Notice under FINRA Regulatory Notice 24-09 and Rules 2210, 2090, 2111, 3110, 4511, and 3220: This communication was generated, in whole or in part, with the assistance of artificial-intelligence technology. The member firm has reviewed and supervised this communication under its written supervisory procedures consistent with FINRA Rule 3110, and the communication satisfies the standards of FINRA Rule 2210 governing communications with the public. Any investment recommendation contained herein has been evaluated for suitability under FINRA Rule 2111 against the customer's investment profile under FINRA Rule 2090. The firm retains records of this communication under FINRA Rule 4511. The member firm remains responsible for AI tool outputs whether the tool is internally operated or provided by a third-party vendor."
860
+ },
861
+ "notes": "FINRA Regulatory Notice 24-09 is reminder-and-clarification guidance — it does not create new rules. The binding obligations are the existing FINRA rules (2210, 2090, 2111, 3110, 4511, 3220), which apply by their existing terms to AI-driven communications, recommendations, and records. Member firms (broker-dealers and their associated persons) are bound; non-member firms are not directly bound by FINRA rules but may face parallel obligations under SEC rules (e.g., Rule 17a-4 books-and-records, Investment Advisers Act fiduciary duty for IA-registered firms) — this rule's `jurisdiction` is `us` because FINRA is a self-regulatory organization with national scope, not a single-state regulator. The 2023 SEC Staff Bulletin on conflicts of interest for AI/PDA-using broker-dealers and investment advisers (and the SEC's proposed PDA rule, Rel. No. 34-97990) layers additional obligations specifically around conflicts; firms with PDA / AI advisory tools should consult both. FINRA expects firms to update their WSPs to specifically address AI tool use; using AI without WSP coverage is an immediate Rule 3110 supervision deficiency. Firms should also be aware of state-level adverse-action and disclosure overlays (e.g., NYDFS's October 2024 cybersecurity / AI guidance for licensed entities)."
618
862
  }
619
863
  ]
620
864
  }