plainstamp 0.0.1 → 0.1.0

package/CHANGELOG.md CHANGED
@@ -6,19 +6,29 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 
  ## [Unreleased]
 
- ### Planned for 0.1.0
+ ### Planned next
 
  - Add EU member-state AI Act implementation specifics where they diverge from the regulation (Germany, France, Spain, Italy, Netherlands first).
  - Add sector-specific rules: FDA Software-as-a-Medical-Device AI guidance, FINRA chatbot disclosure, healthcare HIPAA-adjacent AI rules.
- - Add additional watcher sources to the existing regulatory-update watcher (Cal Leg Info first; EUR-Lex if a usable feed surfaces).
- - Apify Actor wrapper for the paid hosted tier.
+ - Add a third watcher source (Cal Leg Info first; EUR-Lex if a usable feed surfaces).
  - Cloudflare Workers deployment of the MCP server for free-tier hosted access.
+ - Get plainstamp listed on MCP registries (Anthropic registry, mcp-market, MCP Hive).
 
  ### Distribution
 
  Distribution is **npm-only**. Source remains in the operating organization's private repository; there is no public source repository host. The contact channel for issues, accuracy reports, security reports, and contribution proposals is **helpfulbutton140@agentmail.to** (see `README.md`, `docs/CONTRIBUTING.md`, `docs/SECURITY.md`).
 
- ### Added since 0.0.1
+ ## [0.1.0] — 2026-05-08
+
+ ### Added
+
+ - Federal EEOC technical assistance on AI in employment selection procedures (Title VII / Uniform Guidelines, May 18, 2023). Severity `recommended` — the disclosure itself is best practice; the underlying disparate-impact obligation is binding. Federal floor for any AI hiring tool used in the U.S.; layers under stricter state mandates (IL HB 3773, NYC Local Law 144, CO SB 24-205).
+ - EU GDPR Article 22 — automated decision-making rights. Right not to be subject to a decision based solely on automated processing where it produces legal or similarly significant effects; right to human intervention, point-of-view expression, and contestation; controllers must provide meaningful information about logic, significance, and envisaged consequences (Arts. 13(2)(f), 14(2)(g)). Spans `employment-decisions`, `financial-services`, `healthcare`, `legal-services`, `general`. Effective 2018-05-25; penalties up to €20M or 4% of turnover.
+ - Tennessee ELVIS Act — voice and likeness protection (HB 2091 / SB 2096, codified at Tenn. Code Ann. Title 47, Chapter 25, Part 11). Consent-based statute; publishing an AI-synthesized voice or likeness requires written authorization from the individual or rights-holder. Channels `ai-generated-audio`, `ai-generated-video`, `ai-generated-content`. Use cases include `b2c-marketing`, `b2b-marketing`, `civic-or-electoral`, `general`. Effective 2024-07-01.
+ - CFPB Circular 2023-03 — adverse-action notices for AI/ML credit decisions under ECOA / Regulation B. Specific principal reasons must be provided per applicant; generic boilerplate codes are insufficient; if the AI/ML model cannot be explained well enough to identify the specific reasons that drove the decision in this applicant's case, the model likely cannot lawfully be used. Channels `email-transactional` and `ai-generated-content`; use case `financial-services`. Issued 2023-09-19; ongoing CFPB enforcement priority.
+ - Rule count 14 → 18. Jurisdictions 8 → 11 (added `us-tn`). Tests 51/51 passing.
+
+ ### Added since 0.0.1 (rolled into 0.1.0 history)
 
  - Brand committed: working slug `disclo` retired in favor of `plainstamp` after a namespace availability check (github.com/disclo is taken by an unrelated $6.75M-funded HR/workforce SaaS).
  - Colorado AI Act (SB 24-205) — consumer-interaction disclosure; effective 2026-06-30 after a delay from 2026-02-01.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "plainstamp",
- "version": "0.0.1",
+ "version": "0.1.0",
  "description": "AI disclosure compliance assistant — generates legally-grounded AI disclosure text per (jurisdiction × channel × use-case) and tracks regulatory updates. Operated by an autonomous AI agent under KS Elevated Solutions LLC.",
  "type": "module",
  "license": "MIT",
package/rules/seed.json CHANGED
@@ -615,6 +615,203 @@
  "formal": "Consent Waiver under Maryland Labor and Employment Article § 3-717 (HB 1202, Chapter 446 of the 2020 Laws of Maryland): The applicant identified by name and interview date below consents to the use of facial-recognition services by the employer during the pre-employment interview, and acknowledges having read this waiver. Applicant: [NAME]. Date of Interview: [DATE]. Employer: [EMPLOYER]. Signature: _______________ Date: ____________________"
  },
  "notes": "The statute is narrow — it applies to facial-recognition services that build a machine-interpretable pattern of facial features, used during interviews. AI hiring tools that record but do not analyze face patterns may be outside scope; tools that score expressions or compute similarity to other faces are inside scope. When in doubt, obtain the waiver — the cost is one form versus the cost of an LE-Article-3-717 violation claim. The waiver requirement runs in parallel with separate disclosure obligations under the IL HB 3773 and NYC Local Law 144 rules — multi-jurisdictional employers using AI interview tools need to satisfy each applicable obligation."
+ },
+ {
+ "id": "us-eeoc-title-vii-ai-employment-2023",
+ "jurisdiction": "us",
+ "channels": ["email-transactional", "ai-generated-content", "about-page"],
+ "use_cases": ["employment-decisions"],
+ "severity": "recommended",
+ "short_title": "EEOC Title VII technical assistance — AI selection procedures (2023)",
+ "summary": "The U.S. Equal Employment Opportunity Commission issued technical assistance on May 18, 2023 addressing the application of Title VII of the Civil Rights Act of 1964 to automated systems and AI used in employment-related selection procedures. The guidance reaffirms that the Uniform Guidelines on Employee Selection Procedures (1978) apply to AI/algorithmic tools used to make hiring, promotion, transfer, or firing decisions: such tools are 'selection procedures' under the Uniform Guidelines and are subject to the four-fifths rule for measuring adverse impact. Employers remain liable for discriminatory outcomes from AI tools they use, even tools developed by third-party vendors. The EEOC recommends — but does not strictly mandate — that employers (a) audit AI tools for adverse impact before deployment and on an ongoing basis, (b) be transparent with applicants and employees about the use of AI tools, and (c) provide reasonable accommodations and alternative selection procedures on request. This is interpretive guidance, not a regulation; substantive Title VII liability for disparate-impact discrimination is the binding obligation.",
+ "required_elements": [
+ {
+ "id": "ai-tool-use-notice",
+ "description": "Notice to applicants and employees that an AI or algorithmic tool will be used in the selection procedure (recommended).",
+ "required": true,
+ "example": "Notice: This employer uses an automated decision-making tool to assist in evaluating applications. Use of this tool will form part of the selection process for this role."
+ },
+ {
+ "id": "alternative-process-availability",
+ "description": "Information about how to request an alternative, non-AI selection process or a reasonable accommodation under the Americans with Disabilities Act.",
+ "required": true,
+ "example": "If you would prefer an alternative selection process, or require a reasonable accommodation under the ADA, please contact the employer's human resources team."
+ },
+ {
+ "id": "four-fifths-adverse-impact-audit",
+ "description": "Periodic adverse-impact audit of the AI selection tool against the four-fifths rule of the Uniform Guidelines (1978). (System / governance requirement, not in-message disclosure.)",
+ "required": false
+ }
+ ],
+ "citation": {
+ "statute": "Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e et seq., interpreted via Uniform Guidelines on Employee Selection Procedures (1978), 29 CFR Part 1607",
+ "section": "EEOC Technical Assistance: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII (May 18, 2023)",
+ "source_url": "https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial-intelligence",
+ "publisher": "U.S. Equal Employment Opportunity Commission"
+ },
+ "effective_date": "2023-05-18",
+ "last_verified": "2026-05-08",
+ "template": {
+ "plain": "Notice: This employer uses an automated decision-making (AI) tool to assist in evaluating applications and employment decisions. The tool's outputs are reviewed by human decision-makers and are subject to the federal Title VII non-discrimination requirements. If you would prefer an alternative, non-AI selection process, or require a reasonable accommodation under the Americans with Disabilities Act, please contact our human resources team.",
+ "formal": "Notice under EEOC technical assistance applying Title VII of the Civil Rights Act of 1964 (42 U.S.C. § 2000e et seq.) and the Uniform Guidelines on Employee Selection Procedures (29 CFR Part 1607) to AI selection procedures: This employer uses an automated decision-making tool as part of one or more employment-related selection procedures (which may include hiring, promotion, transfer, or termination decisions). Such tools are subject to the same disparate-impact analysis as any other selection procedure, including the four-fifths rule for measuring adverse impact. Applicants and employees may request an alternative selection procedure or reasonable accommodation under the Americans with Disabilities Act."
+ },
+ "notes": "EEOC technical assistance is interpretive guidance — not a regulation. The binding obligation is Title VII's prohibition on disparate-impact discrimination, which has been law since the 1971 Griggs v. Duke Power decision. The 2023 guidance simply confirms that AI/algorithmic selection tools are 'selection procedures' under the Uniform Guidelines and subject to the same scrutiny. Severity is `recommended` because the disclosure itself is best-practice, not mandated; the underlying disparate-impact obligation is non-negotiable. The state-level mandates (IL HB 3773, NYC Local Law 144, CO SB 24-205) are stricter than this federal guidance and supersede it where they apply. Employers using AI hiring tools across multiple states should treat the federal EEOC guidance as a floor and the strictest applicable state rule as the ceiling. Note that the EEOC also issued separate technical assistance under the ADA (May 12, 2022) addressing reasonable-accommodation obligations for applicants who cannot effectively interface with AI selection tools — that guidance complements this one and should be consulted alongside it."
+ },
+ {
+ "id": "eu-gdpr-art22-automated-decisions",
+ "jurisdiction": "eu",
+ "channels": ["email-transactional", "ai-generated-content", "privacy-policy"],
+ "use_cases": [
+ "employment-decisions",
+ "financial-services",
+ "healthcare",
+ "legal-services",
+ "general"
+ ],
+ "severity": "mandatory",
+ "short_title": "EU GDPR Article 22 — automated decision-making rights",
+ "summary": "Under the EU General Data Protection Regulation (Regulation (EU) 2016/679), Article 22(1) gives data subjects the right not to be subject to a decision based solely on automated processing — including profiling — that produces legal effects concerning them or similarly significantly affects them. Exceptions in Article 22(2) permit such decisions if (a) necessary for entering into or performing a contract, (b) authorized by Union or Member-State law that provides safeguards, or (c) based on the data subject's explicit consent. Where one of these exceptions applies, the controller must implement suitable measures to safeguard the data subject's rights and freedoms, including at minimum the right to obtain human intervention, to express their point of view, and to contest the decision (Art. 22(3)). Articles 13(2)(f) and 14(2)(g) require the controller to provide, at the time data is collected, meaningful information about the logic involved in any such automated decision-making and the significance and envisaged consequences of such processing for the data subject. Penalties under Art. 83(5): up to €20 million or 4% of global annual turnover, whichever is higher.",
+ "required_elements": [
+ {
+ "id": "automated-decision-notice",
+ "description": "Notice that the data subject is being subjected to automated decision-making, including profiling, that produces legal or similarly significant effects.",
+ "required": true,
+ "example": "Notice: This decision was made by an automated system, including profiling, and produces effects relating to your application or account that are significant to you."
+ },
+ {
+ "id": "logic-involved",
+ "description": "Meaningful information about the logic involved in the automated decision (the type of inputs and the way they are weighted, not the underlying source code or proprietary model parameters).",
+ "required": true,
+ "example": "The decision is based on inputs you provided in your application, your prior interaction history with us, and a credit score from an authorized bureau, weighted to predict outcome likelihood."
+ },
+ {
+ "id": "significance-and-consequences",
+ "description": "Information about the significance and envisaged consequences of the automated processing for the data subject.",
+ "required": true,
+ "example": "An adverse decision means your application will not proceed; you may reapply after 30 days, or request a human review now."
+ },
+ {
+ "id": "right-to-human-intervention",
+ "description": "Right to obtain human intervention on the part of the controller, to express the data subject's point of view, and to contest the decision.",
+ "required": true,
+ "example": "You have the right to request that a human review this decision, to provide additional context for consideration, and to contest the decision. To exercise these rights, contact our data-protection team at [contact]."
+ },
+ {
+ "id": "lawful-basis-disclosure",
+ "description": "Disclosure of the Article 22(2) lawful basis under which the automated decision is made (contract, EU/Member-State law, or explicit consent). (Information requirement, not single in-message text.)",
+ "required": false
+ }
+ ],
+ "citation": {
+ "statute": "Regulation (EU) 2016/679 (General Data Protection Regulation)",
+ "section": "Article 22 — automated individual decision-making, including profiling; in conjunction with Articles 13(2)(f) and 14(2)(g)",
+ "source_url": "https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679",
+ "publisher": "Publications Office of the European Union (EUR-Lex)"
+ },
+ "effective_date": "2018-05-25",
+ "last_verified": "2026-05-08",
+ "template": {
+ "plain": "This decision was made by an automated system. The decision considers [inputs / categories of data] and produces effects relating to [employment / credit / insurance / other significant outcome]. You have the right to request human review of this decision, to express your point of view, and to contest the decision — contact us at [data-protection address]. For more on the logic involved and the consequences of this automated processing, see our privacy notice at [URL].",
+ "formal": "Notice under Article 22 of Regulation (EU) 2016/679 (GDPR): This decision is based solely on automated processing, including profiling, that produces legal effects or similarly significant effects concerning you. The lawful basis for this automated decision is [contract performance / EU or Member-State law / your explicit consent — Article 22(2)(a), (b), or (c)]. Meaningful information about the logic involved: [description of inputs, weights at high level, decision threshold]. The significance and envisaged consequences of the processing are: [description]. You have the right under Article 22(3) to obtain human intervention by the controller, to express your point of view, and to contest this decision. To exercise these rights, contact the data-protection team at [contact]. You also have the right to lodge a complaint with your supervisory authority."
+ },
+ "notes": "Article 22 applies only to decisions based 'solely' on automated processing. Decisions where a human meaningfully reviews the AI output before it takes effect are NOT solely automated and are outside Article 22's scope, although other GDPR transparency obligations (Arts. 13–14) still apply. The EDPB's Guidelines on Automated Decision-Making (WP251rev.01) clarify that 'meaningful' human review must be substantive — rubber-stamping the AI's recommendation is not enough. The Schufa Holding judgment (CJEU C-634/21, 2023) confirmed that automated credit scoring constitutes a decision under Art. 22 even when the score is then passed to a human-operated lender — because the score itself drives the outcome. EU Member States may impose additional safeguards (e.g., France's Loi Informatique et Libertés, Germany's BDSG § 37); developers should layer Member-State requirements on top. Sectoral overlaps: in employment-decisions use, Article 22 stacks with the EU AI Act's Article 50 chatbot disclosure (where chat is used) and any Member-State implementations; in financial-services, with the EU AI Act's high-risk classification of credit-scoring systems."
+ },
+ {
+ "id": "us-tn-elvis-act-voice-likeness-2024",
+ "jurisdiction": "us-tn",
+ "channels": [
+ "ai-generated-audio",
+ "ai-generated-video",
+ "ai-generated-content"
+ ],
+ "use_cases": [
+ "b2c-marketing",
+ "b2b-marketing",
+ "civic-or-electoral",
+ "general"
+ ],
+ "severity": "mandatory",
+ "short_title": "Tennessee ELVIS Act — voice and likeness protection (HB 2091 / SB 2096, 2024)",
+ "summary": "The Tennessee Ensuring Likeness, Voice, and Image Security Act of 2024 (the 'ELVIS Act') amends Tennessee Code Annotated Title 47, Chapter 25, Part 11 to extend Tennessee's right-of-publicity statute to a person's VOICE in addition to their name, photograph, and likeness. It is unlawful for any person, with knowledge that an individual's voice or likeness is being used without authorization, to publish, perform, distribute, transmit, or otherwise make available to the public an algorithm, software, tool, or other technology, service, or device the primary purpose or function of which is the production of a particular individual's voice or likeness without that individual's authorization. Civil remedies include injunctive relief, treble damages, and attorney's fees; the act creates a Class A misdemeanor for criminal violations and gives standing to the individual, their estate, or any person/entity holding exclusive license to the individual's voice or likeness. Effective July 1, 2024.",
+ "required_elements": [
+ {
+ "id": "ai-voice-likeness-authorization",
+ "description": "Authorization (license, consent, or other express permission) from the individual whose voice or likeness is being synthesized, BEFORE the AI-generated voice or likeness is published, performed, distributed, or otherwise made available to the public. (Authorization-not-disclosure rule: the obligation is to obtain consent first; disclosure of the AI nature of the content is a parallel best practice but not the statutory cure.)",
+ "required": true,
+ "example": "I, [individual name or authorized rights-holder], grant [licensee] permission to use my voice / likeness in AI-generated audio / video for the purposes of [scope] for the period of [term]. Signed: __________ Date: __________"
+ },
+ {
+ "id": "ai-generated-content-label",
+ "description": "Where authorization is granted, accompanying clear and conspicuous label that the published content includes AI-synthesized voice or likeness (best practice; complementary to EU AI Act Art. 50(2) and aligned with general FTC endorsement guidance).",
+ "required": true,
+ "example": "This audio (or video) includes an AI-synthesized voice of [individual] used with their permission."
+ },
+ {
+ "id": "no-tool-distribution-without-authorization",
+ "description": "Prohibition on publishing or distributing tools whose primary purpose is producing a particular individual's voice or likeness without authorization. (System / product-design requirement, not per-message disclosure.)",
+ "required": false
+ }
+ ],
+ "citation": {
+ "statute": "Tennessee Code Annotated, Title 47, Chapter 25, Part 11 (as amended by the Ensuring Likeness, Voice, and Image Security Act of 2024 — HB 2091 / SB 2096, Public Chapter 588)",
+ "section": "Personal Rights Protection Act, as amended by the ELVIS Act",
+ "source_url": "https://wapp.capitol.tn.gov/apps/BillInfo/Default.aspx?BillNumber=HB2091&ga=113",
+ "publisher": "Tennessee General Assembly"
+ },
+ "effective_date": "2024-07-01",
+ "last_verified": "2026-05-08",
+ "template": {
+ "plain": "AI Voice / Likeness Notice (Tennessee ELVIS Act): The audio (or video) includes an AI-synthesized voice or likeness of [individual]. Use of that voice or likeness has been authorized in writing by [the individual / their authorized rights-holder] for the scope and term of this communication.",
+ "formal": "Notice under the Tennessee Ensuring Likeness, Voice, and Image Security Act of 2024 (HB 2091 / SB 2096, codified at Tenn. Code Ann. Title 47, Chapter 25, Part 11): The published material includes AI-synthesized voice and/or likeness of [individual], used pursuant to written authorization from [the individual or the authorized exclusive rights-holder] dated [date]. The synthesis was performed by [system / service] for the limited purpose of [purpose]. Inquiries about the underlying authorization may be directed to [contact]."
+ },
+ "notes": "The ELVIS Act is a CONSENT-BASED statute, not solely a disclosure statute. The legal cure for AI voice or likeness use is authorization from the individual; a disclosure label without authorization does NOT cure a violation. The act applies whenever the published content reaches the public AND the actor knew the voice or likeness was being used without authorization — the knowledge standard creates real exposure for any service publishing user-generated AI synthesis content. The act has both civil (treble damages, attorney's fees, injunctive relief) and criminal (Class A misdemeanor) liability tracks. Tool publishers (the providers of voice-cloning or face-swap tools) face independent liability where the tool's primary purpose is producing a particular individual's voice or likeness without authorization — generic voice-synthesis tools that allow the user to clone arbitrary voices may not be covered, but tools marketed around a specific celebrity's voice clearly are. ELVIS Act-style protection is also emerging in California (AB 2602, AB 2655, AB 1836) and at the federal level via the proposed NO FAKES Act; multi-jurisdictional rights-clearance workflows should consider Tennessee + California + (eventually) federal in parallel."
+ },
+ {
+ "id": "us-cfpb-circular-2023-03-ai-adverse-action",
+ "jurisdiction": "us",
+ "channels": ["email-transactional", "ai-generated-content"],
+ "use_cases": ["financial-services"],
+ "severity": "mandatory",
+ "short_title": "CFPB Circular 2023-03 — adverse-action notices for AI credit decisions (ECOA / Regulation B)",
+ "summary": "The Consumer Financial Protection Bureau, in Circular 2023-03 (issued September 19, 2023), confirmed that creditors using complex algorithms or artificial intelligence to make credit decisions must still provide statements of specific reasons for adverse actions as required by the Equal Credit Opportunity Act (ECOA, 15 U.S.C. § 1691(d)) and Regulation B (12 CFR § 1002.9). Creditors cannot use the technological complexity of an AI/ML model as a defense for failing to identify the specific principal reasons that adversely affected the applicant. Generic or boilerplate reasons (e.g., 'failed credit-decision model') are insufficient; the creditor must identify the particular factors specific to the applicant's situation. If a creditor cannot accurately identify the specific reasons for an AI-driven adverse decision, the creditor likely cannot lawfully use the model. ECOA penalties include actual damages, punitive damages up to $10,000 per individual action / 1% of net worth in class actions, and attorney's fees; ongoing enforcement priority for the CFPB through 2026.",
+ "required_elements": [
+ {
+ "id": "specific-principal-reasons",
+ "description": "Statement of the specific principal reasons for the adverse credit action — the particular, applicant-specific factors that drove the decision; not generic or boilerplate explanations.",
+ "required": true,
+ "example": "Specific reasons your application was declined: (1) recent delinquencies on existing accounts; (2) high ratio of unsecured debt to monthly income; (3) short length of credit history. These factors most adversely affected the decision in your case."
+ },
+ {
+ "id": "right-to-statement-of-reasons",
+ "description": "Notice of the applicant's right to a statement of specific reasons for the adverse action and the timing for requesting it.",
+ "required": true,
+ "example": "If you would like a written statement of the specific reasons for this adverse action, you must request it within 60 days. We will provide the statement within 30 days of your request."
+ },
+ {
+ "id": "ecoa-equal-credit-notice",
+ "description": "ECOA equal-credit notice — the standard statement of the prohibited bases for credit discrimination.",
+ "required": true,
+ "example": "The federal Equal Credit Opportunity Act prohibits creditors from discriminating against credit applicants on the basis of race, color, religion, national origin, sex, marital status, age (provided the applicant has the capacity to enter into a binding contract), because all or part of the applicant's income derives from any public assistance program, or because the applicant has in good faith exercised any right under the Consumer Credit Protection Act. The federal agency that administers compliance with this law concerning this creditor is [agency and address]."
+ },
+ {
+ "id": "ai-driven-decision-explainability",
+ "description": "If the adverse action was driven by an AI / ML model, the creditor's underlying obligation to be able to identify the specific reasons for the model's output (model explainability requirement, governance-side rather than per-message text).",
+ "required": false
+ }
+ ],
+ "citation": {
+ "statute": "Equal Credit Opportunity Act, 15 U.S.C. § 1691(d); Regulation B, 12 CFR § 1002.9; interpreted via CFPB Circular 2023-03 (Adverse action notification requirements and the proper use of the CFPB's sample forms provided in Regulation B)",
+ "section": "Adverse-action notices for AI/ML credit decisions",
+ "source_url": "https://www.consumerfinance.gov/compliance/circulars/circular-2023-03-adverse-action-notification-requirements-and-the-proper-use-of-the-cfpbs-sample-forms-provided-in-regulation-b/",
+ "publisher": "Consumer Financial Protection Bureau"
+ },
+ "effective_date": "2023-09-19",
+ "last_verified": "2026-05-08",
+ "template": {
+ "plain": "Adverse Credit Decision Notice. We have decided not to approve your application. Specific reasons for this decision: (1) [reason 1 specific to your application]; (2) [reason 2]; (3) [reason 3]. These factors most adversely affected the decision in your case. Note: Federal law prohibits creditors from discriminating against credit applicants on the bases listed below. The federal agency administering this creditor's compliance with the Equal Credit Opportunity Act is [agency, address]. Prohibited bases: race, color, religion, national origin, sex, marital status, age (where the applicant has contract-binding capacity), receipt of income from any public-assistance program, or good-faith exercise of any Consumer Credit Protection Act right. If you would like a written statement of the specific reasons for this adverse action, you must request it within 60 days; we will provide it within 30 days of your request.",
+ "formal": "Notice of Adverse Action under the Equal Credit Opportunity Act (15 U.S.C. § 1691(d)) and Regulation B (12 CFR § 1002.9), as further interpreted by CFPB Circular 2023-03 in the context of artificial-intelligence and machine-learning credit decisions: The application identified by reference number [REF] has been adversely acted upon. The specific principal reasons that most adversely affected the decision in this case, as identified by the creditor's review of the AI/ML model output, are: (1) [reason]; (2) [reason]; (3) [reason]. The applicant may request a written statement of the specific reasons within 60 days of this notice; the creditor will provide such statement within 30 days of receipt of the request. Federal law prohibits creditors from discriminating against credit applicants on prohibited bases enumerated in 15 U.S.C. § 1691(a). The federal agency administering compliance with the ECOA concerning this creditor is [agency, address]."
+ },
+ "notes": "CFPB Circular 2023-03 makes explicit a position the CFPB had taken in supervisory guidance for years: the technological complexity of an AI/ML model is not a defense for failing to provide ECOA-compliant adverse-action reasons. Creditors must identify the specific factors that affected THIS APPLICANT'S decision — not generic factors that influence the model in general. Practical implications for AI-credit fintechs: (1) the model itself must be explainable to a level that supports per-applicant reason codes — if the model cannot do this, the model cannot be deployed for credit decisions; (2) the reason codes must be checked for accuracy, not just plausibility — using post-hoc SHAP / LIME explanations as the source of reason codes is acceptable IF the creditor has validated that those explanations actually reflect what drove the decision in each case; (3) generic or boilerplate codes ('credit application incomplete', 'failed model threshold') are insufficient — the codes must point to applicant-specific factors. ECOA's statutory penalties combined with ongoing CFPB enforcement priority make this a high-stakes obligation. Note: Regulation B's adverse-action requirements run in parallel with the FCRA's adverse-action requirements (15 U.S.C. § 1681m) when the decision was based in whole or in part on a consumer report — both sets of obligations apply to the same notice."
  }
  ]
  }
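
The seed.json entries above key each rule on jurisdiction, `channels`, and `use_cases`. As a rough sketch of how a consumer of this package might select applicable rules from a seed.json-shaped array: the function and variable names below are illustrative, not plainstamp's actual exported API, and the `rules` array is a trimmed stand-in for the real file.

```javascript
// Illustrative rule lookup over a seed.json-shaped array.
// Trimmed stand-in entries modeled on the rules added in 0.1.0.
const rules = [
  {
    id: "us-cfpb-circular-2023-03-ai-adverse-action",
    jurisdiction: "us",
    channels: ["email-transactional", "ai-generated-content"],
    use_cases: ["financial-services"],
    severity: "mandatory",
  },
  {
    id: "us-tn-elvis-act-voice-likeness-2024",
    jurisdiction: "us-tn",
    channels: ["ai-generated-audio", "ai-generated-video", "ai-generated-content"],
    use_cases: ["b2c-marketing", "b2b-marketing", "civic-or-electoral", "general"],
    severity: "mandatory",
  },
];

// A state-level audience (e.g. `us-tn`) is also covered by federal (`us`)
// rules, so match the jurisdiction itself plus its parent prefix.
function applicableRules(rules, { jurisdiction, channel, useCase }) {
  const covered = new Set([jurisdiction, jurisdiction.split("-")[0]]);
  return rules.filter(
    (r) =>
      covered.has(r.jurisdiction) &&
      r.channels.includes(channel) &&
      r.use_cases.includes(useCase)
  );
}

const hits = applicableRules(rules, {
  jurisdiction: "us-tn",
  channel: "ai-generated-content",
  useCase: "financial-services",
});
// The federal CFPB rule applies in Tennessee too; the TN ELVIS rule
// is filtered out because its use_cases do not include financial-services.
console.log(hits.map((r) => r.id));
```

The prefix-expansion step reflects the "federal floor" layering described in the changelog entry for the EEOC rule; a real resolver would also need to handle severity ordering and the `eu` / member-state split.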