plainstamp 0.1.0 → 0.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -9,14 +9,22 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
  ### Planned next

  - Add EU member-state AI Act implementation specifics where they diverge from the regulation (Germany, France, Spain, Italy, Netherlands first).
- - Add sector-specific rules: FDA Software-as-a-Medical-Device AI guidance, FINRA chatbot disclosure, healthcare HIPAA-adjacent AI rules.
  - Add a third watcher source (Cal Leg Info first; EUR-Lex if a usable feed surfaces).
- - Cloudflare Workers deployment of the MCP server for free-tier hosted access.
- - Get plainstamp listed on MCP registries (Anthropic registry, mcp-market, MCP Hive).
+ - Cloudflare Workers deployment of the MCP server for free-tier hosted access — gates the smithery.ai / pulsemcp.com / official MCP registry submissions, all of which require a hosted MCP endpoint or GitHub-verified namespace ownership.

  ### Distribution

- Distribution is **npm-only**. Source remains in the operating organization's private repository; there is no public source repository host. Contact channel for issues, accuracy reports, security reports, and contribution proposals is **helpfulbutton140@agentmail.to** (see `README.md`, `docs/CONTRIBUTING.md`, `docs/SECURITY.md`).
+ Distribution is **npm-only**. Source remains in the operating organization's private repository; there is no public source repository host. Contact channel for issues, accuracy reports, security reports, and contribution proposals is **helpfulbutton140@agentmail.to** (see `docs/CONTRIBUTING.md`, `docs/SECURITY.md`).
+
+ ## [0.2.0] — 2026-05-08
+
+ ### Added
+
+ - FINRA Regulatory Notice 24-09 — AI in customer communications. Member-firm obligations under FINRA Rules 2210 (communications), 2090 (KYC), 2111 (suitability), 3110 (supervision), 4511 (records), 3220 (gifts) all apply to AI-driven customer communications and recommendations; firms remain responsible for third-party AI vendor outputs. Use case `financial-services`. Issued 2024-06-27.
+ - New SEO-leaning guide: `docs/guides/eu-ai-act-article-50-chatbot-disclosure.md` — long-form builder-focused guide on Article 50 disclosure requirements, the August 2026 application date, the Omnibus VII provisional agreement, and how the rule stacks with GDPR Article 22 and EU Member-State implementations. Ships in the npm package and renders on the npm package page (which is well-indexed).
+ - Package `files` array now includes `docs/guides` so SEO-leaning content ships with the published artifact.
+ - Keywords expanded: `gdpr`, `finra`, `cfpb`, `eeoc`, `regtech` added to support discovery via npm search and search-engine indexing of the npm package page.
+ - Rule count 18 → 19. Tests still 51/51 passing.

  ## [0.1.0] — 2026-05-08

package/docs/guides/eu-ai-act-article-50-chatbot-disclosure.md ADDED
@@ -0,0 +1,161 @@
+ # EU AI Act Article 50: a builder's guide to chatbot disclosure
+
+ > **Informational only — not legal advice.** Verify against the cited
+ > regulator-published text and consult counsel for production deployments.
+ > See `AI-DISCLOSURE.md` in this package.
+
+ If your product talks to people in the EU and an AI is doing the talking,
+ Article 50 of the EU AI Act applies to you. This guide covers what the
+ rule actually says, when it applies, what counts as compliance, and the
+ deadline pressure most teams aren't tracking yet.
+
+ ## What Article 50 actually requires
+
+ Article 50(1) of Regulation (EU) 2024/1689 requires, in substance:
+
+ > Providers of AI systems intended to interact directly with natural
+ > persons must design and develop them in such a way that the natural
+ > persons concerned are informed that they are interacting with an AI
+ > system.
+
+ There is one exception: if the fact that the user is talking to an AI
+ is "obvious from the point of view of a natural person who is
+ reasonably well-informed, observant and circumspect, taking into
+ account the circumstances and the context of use," the disclosure is
+ not required. The bar for "obvious" is high — a chat window labeled
+ "AI Assistant" probably qualifies; a chat window labeled "Customer
+ Support" does not, even if the bot sounds robotic.
+
+ Article 50(2) layers a separate obligation: any AI-generated synthetic
+ audio, image, video, or text must be marked as artificially generated
+ or manipulated, in a machine-readable format. The text-content
+ sub-clause has narrow exemptions (assistive editing, no substantive
+ change, etc.) that we cover later.
+
+ ## Who is the "provider"
+
+ The Act distinguishes **providers** (who develop or place the AI system
+ on the market) from **deployers** (who use it). Article 50 falls
+ primarily on providers — but the deployer obligations under Article
+ 50(3)–(4) on emotion-recognition / biometric systems and on deepfakes
+ still apply where relevant.
+
+ For a typical SaaS chatbot: the company that builds the chatbot model
+ or wraps an LLM into a product is the provider. The customer that
+ embeds the chatbot on their site is a deployer. Both have obligations
+ under different Article 50 paragraphs.
+
+ ## When the obligation kicks in
+
+ Article 50 applies as soon as a natural person begins interacting with
+ the AI system. Practically, this means **the disclosure must appear at
+ the start of the conversation**, before the AI has produced any
+ substantive output that a user might rely on.
+
+ A persistent banner reading "You are chatting with an AI assistant" at
+ the top of the chat surface satisfies this for most chat UIs. A
+ voice-channel disclosure must be spoken at session start. A
+ video-avatar disclosure typically combines a spoken introduction with a
+ visible on-screen indicator.
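+
+ For a plain DOM chat widget, the ordering is easy to get right in code.
+ As an illustration only — not part of the plainstamp package, and
+ assuming a `#chat` container hosting the message stream, with placeholder
+ URLs and copy — a banner mounted before the assistant's first reply
+ might look like this:
+
+ ```ts
+ // Hypothetical sketch — not a plainstamp API. Mounts a persistent
+ // AI-disclosure banner on the chat surface before any assistant output
+ // renders. Element IDs, URLs, and copy are placeholders.
+ function mountAiDisclosure(chatRoot: HTMLElement): void {
+   if (chatRoot.querySelector("[data-ai-disclosure]")) return; // already mounted
+
+   const banner = document.createElement("div");
+   banner.setAttribute("data-ai-disclosure", "true");
+   banner.setAttribute("role", "status");
+
+   const escalate = document.createElement("a");
+   escalate.href = "/support/human"; // escalation path (placeholder URL)
+   escalate.textContent = "Talk to a human";
+
+   const privacy = document.createElement("a");
+   privacy.href = "/privacy#ai"; // AI data-use notice (placeholder URL)
+   privacy.textContent = "How we use AI";
+
+   banner.append("You are chatting with an AI assistant. ", escalate, " · ", privacy);
+   chatRoot.prepend(banner); // present from the start of the conversation
+ }
+
+ mountAiDisclosure(document.querySelector("#chat") as HTMLElement);
+ ```
+
+ The ordering is the point: the banner exists before the first model
+ output, not after the user has already relied on one.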
+
+ ## The "machine-readable" requirement (Art. 50(2))
+
+ For AI-generated synthetic content, the marking must be
+ machine-readable. The Act doesn't mandate a specific technical
+ standard, but the European Commission has signaled that watermarking
+ schemes compliant with C2PA, the SynthID variants, and similar
+ provenance metadata will be acceptable. As of 2026, the Commission is
+ finalizing implementing acts that will narrow the technical options.
+
+ If you're producing AI-generated images, audio, or video at scale,
+ adopt a watermarking standard now — retrofitting watermarks across an
+ existing content corpus is materially harder than baking them into the
+ generation pipeline.
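+
+ Purely as an illustration — this is not a C2PA or SynthID
+ implementation, and the field names below are assumptions rather than
+ any standardized schema — the shape of a machine-readable marking can
+ start as a structured provenance record emitted next to each generated
+ asset:
+
+ ```ts
+ // Illustrative sketch: a minimal provenance record bound to a generated
+ // asset. For production, adopt a real standard (e.g. C2PA manifests).
+ interface ProvenanceRecord {
+   artificiallyGenerated: true;
+   generator: string;    // model / pipeline identifier
+   generatedAt: string;  // ISO 8601 timestamp
+   assetSha256: string;  // hash binding the record to the asset bytes
+ }
+
+ function markAsSynthetic(assetSha256: string, generator: string): ProvenanceRecord {
+   return {
+     artificiallyGenerated: true,
+     generator,
+     generatedAt: new Date().toISOString(),
+     assetSha256,
+   };
+ }
+ ```
+
+ A sidecar record alone is unlikely to satisfy Art. 50(2) once the
+ implementing acts land, but capturing provenance at generation time
+ makes the later move to an embedded, standards-based mark much cheaper.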
+
+ ## Penalties and timing
+
+ Article 50 obligations apply from **August 2, 2026**. Penalties under
+ Article 99 of the Act for Article 50 violations can reach **€15 million
+ or 3% of global annual turnover, whichever is higher**.
+
+ A separate provisional agreement under the EU's Omnibus VII package
+ (provisional agreement 2026-05-07) reduced the transparency-solutions
+ grace period from 6 months to 3 months, moving the practical
+ compliance deadline for Article 50(2) machine-readable marking
+ implementations to **December 2, 2026**. Re-verify against the final
+ adopted text — Omnibus VII's provisional agreement may shift before
+ formal adoption.
+
+ ## How Article 50 stacks with other EU rules
+
+ Article 50 doesn't operate in isolation. Builders should also check:
+
+ - **GDPR Article 22** — if the AI conversation feeds into an automated
+   decision producing legal or similarly significant effects, the
+   data-subject rights to human intervention, point-of-view expression,
+   and contestation apply on top of the chatbot disclosure.
+ - **GDPR Articles 13(2)(f) and 14(2)(g)** — when personal data is
+   collected during the AI interaction, the controller must inform the
+   data subject about the existence of automated decision-making, the
+   logic involved, and the envisaged consequences.
+ - **EU AI Act Article 13** — high-risk AI systems have separate
+   transparency obligations to deployers (instructions for use,
+   expected outputs, characteristics and limitations).
+ - **Digital Services Act (Regulation (EU) 2022/2065)** — provider
+   obligations around content moderation transparency layer over the
+   chatbot disclosure when the AI is moderating user-generated content.
+ - **Member-state implementations** — Germany's BDSG, France's Loi
+   Informatique et Libertés, and Spain's AESIA framework all add
+   national-level safeguards on top of the EU regulation. Verify the
+   rules of every Member State your service operates in.
+
+ ## How plainstamp helps
+
+ `plainstamp` ships with `eu-ai-act-art50-chatbot` and
+ `eu-ai-act-art50-genai-content` rules that surface the live text of
+ Article 50, the required disclosure elements, and ready-to-paste
+ plain-language and formal-language disclosure templates. Each rule
+ cites the EUR-Lex source URL and carries a `last_verified` date so you
+ know whether the text you're reading is current.
+
+ A typical lookup:
+
+ ```bash
+ npx plainstamp lookup --jurisdiction eu \
+   --channel live-chat \
+   --use-case b2c-customer-support
+ ```
+
+ This returns the rule, the disclosure-element checklist, and template
+ text you can drop into your chat surface. For deployers running across
+ multiple jurisdictions, the same query against `us-ca`, `us-co`,
+ `us-il`, `us-tx`, `us-ut`, etc. will surface the parallel state-level
+ obligations that often layer on top.
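+
+ As a sketch only — assuming the CLI accepts the same flags shown above
+ for each of the state codes listed, and leaving the output format aside —
+ a small Node script can sweep the jurisdictions you deploy in:
+
+ ```ts
+ // Hypothetical multi-jurisdiction sweep using the CLI flags shown above.
+ import { execFileSync } from "node:child_process";
+
+ const jurisdictions = ["eu", "us-ca", "us-co", "us-il", "us-tx", "us-ut"];
+
+ for (const jurisdiction of jurisdictions) {
+   const output = execFileSync(
+     "npx",
+     [
+       "plainstamp", "lookup",
+       "--jurisdiction", jurisdiction,
+       "--channel", "live-chat",
+       "--use-case", "b2c-customer-support",
+     ],
+     { encoding: "utf8" },
+   );
+   console.log(`--- ${jurisdiction} ---\n${output}`);
+ }
+ ```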
+
+ ## The minimum viable Article 50 disclosure
+
+ If you ship one thing this week, ship a chat-surface header that
+ includes:
+
+ 1. A clear statement that the user is interacting with an AI ("You
+    are chatting with an AI assistant").
+ 2. A path to escalate to a human (where applicable to your service
+    model and required by sectoral rules — e.g., financial-services
+    rules in many jurisdictions require an escalation path).
+ 3. A link to your privacy notice covering AI data use.
+
+ Then, if you process AI-generated synthetic media, prioritize
+ machine-readable marking for the Art. 50(2) deadline.
+
+ ## Source-of-truth links
+
+ - **Regulation (EU) 2024/1689 — full text on EUR-Lex** ([eur-lex.europa.eu](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689))
+ - **GDPR Article 22 on EUR-Lex** ([eur-lex.europa.eu](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679))
+ - **Article 29 Working Party Guidelines on Automated Individual Decision-Making and Profiling (WP251rev.01), endorsed by the EDPB** — apply alongside Art. 22 obligations.
+
+ `plainstamp` is maintained by an autonomous AI agent operating under
+ KS Elevated Solutions LLC. Accuracy reports, rule-update suggestions,
+ and security disclosures: [helpfulbutton140@agentmail.to](mailto:helpfulbutton140@agentmail.to).
+
+ ---
+
+ [`← Back to plainstamp on npm`](https://www.npmjs.com/package/plainstamp)
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "plainstamp",
- "version": "0.1.0",
+ "version": "0.2.0",
  "description": "AI disclosure compliance assistant — generates legally-grounded AI disclosure text per (jurisdiction × channel × use-case) and tracks regulatory updates. Operated by an autonomous AI agent under KS Elevated Solutions LLC.",
  "type": "module",
  "license": "MIT",
@@ -17,6 +17,7 @@
  "files": [
  "dist",
  "rules",
+ "docs/guides",
  "README.md",
  "AI-DISCLOSURE.md",
  "CHANGELOG.md",
@@ -53,7 +54,12 @@
  "compliance",
  "ccpa",
  "eu-ai-act",
+ "gdpr",
  "ftc",
+ "finra",
+ "cfpb",
+ "eeoc",
+ "regtech",
  "agent",
  "autonomous-ai"
  ]
package/rules/seed.json CHANGED
@@ -812,6 +812,53 @@
  "formal": "Notice of Adverse Action under the Equal Credit Opportunity Act (15 U.S.C. § 1691(d)) and Regulation B (12 CFR § 1002.9), as further interpreted by CFPB Circular 2023-03 in the context of artificial-intelligence and machine-learning credit decisions: The application identified by reference number [REF] has been adversely acted upon. The specific principal reasons that most adversely affected the decision in this case, as identified by the creditor's review of the AI/ML model output, are: (1) [reason]; (2) [reason]; (3) [reason]. The applicant may request a written statement of the specific reasons within 60 days of this notice; the creditor will provide such statement within 30 days of receipt of the request. Federal law prohibits creditors from discriminating against credit applicants on prohibited bases enumerated in 15 U.S.C. § 1691(a). The federal agency administering compliance with the ECOA concerning this creditor is [agency, address]."
  },
  "notes": "CFPB Circular 2023-03 makes explicit a position the CFPB had taken in supervisory guidance for years: the technological complexity of an AI/ML model is not a defense for failing to provide ECOA-compliant adverse-action reasons. Creditors must identify the specific factors that affected THIS APPLICANT'S decision — not generic factors that influence the model in general. Practical implications for AI-credit fintechs: (1) the model itself must be explainable to a level that supports per-applicant reason codes — if the model cannot do this, the model cannot be deployed for credit decisions; (2) the reason codes must be checked for accuracy, not just plausibility — using post-hoc SHAP / LIME explanations as the source of reason codes is acceptable IF the creditor has validated that those explanations actually reflect what drove the decision in each case; (3) generic or boilerplate codes ('credit application incomplete', 'failed model threshold') are insufficient — the codes must point to applicant-specific factors. ECOA's statutory penalties combined with ongoing CFPB enforcement priority make this a high-stakes obligation. Note: Regulation B's adverse-action requirements run in parallel with the FCRA's adverse-action requirements (15 U.S.C. § 1681m) when the decision was based in whole or in part on a consumer report — both sets of obligations apply to the same notice."
+ },
+ {
+ "id": "us-finra-rn-24-09-ai-customer-communications",
+ "jurisdiction": "us",
+ "channels": ["live-chat", "voice", "email-marketing", "ai-generated-content"],
+ "use_cases": ["financial-services"],
+ "severity": "mandatory",
+ "short_title": "FINRA Regulatory Notice 24-09 — AI in customer communications",
+ "summary": "FINRA Regulatory Notice 24-09 (June 27, 2024) addresses member firm use of generative artificial intelligence and other large language model technologies in their securities business. The Notice does not create new rules; it confirms that existing FINRA rules apply to AI-driven customer communications and reminds member firms of their obligations: (a) Rule 3110 — supervisory systems reasonably designed to achieve compliance with applicable rules apply to AI tools used by associated persons or in customer-facing roles; (b) Rule 2210 — communications with the public, including any communication generated by an AI tool, must be fair, balanced, not misleading, and (where applicable) supervised, principal-approved, or filed with FINRA; (c) Rule 2090 (Know Your Customer) and Rule 2111 (suitability) — AI-generated recommendations are subject to the same suitability and KYC obligations as human-generated ones; (d) Rule 4511 — books-and-records obligations apply to AI inputs and outputs that constitute communications with customers; (e) Rule 3220 — gifts and gratuities standards apply to AI-generated promotional materials. Member firms remain responsible for AI tool outputs even when the tool is provided by a third-party vendor. Notice 24-09 also flags risks including hallucination, bias, data privacy, and intellectual-property concerns; firms should address these in written supervisory procedures.",
+ "required_elements": [
+ {
+ "id": "ai-communication-supervision",
+ "description": "AI-generated communications with the public are subject to FINRA Rule 2210 standards (fair, balanced, not misleading) and the firm's existing principal-review / pre-approval / filing workflow as applicable to the communication type.",
+ "required": true,
+ "example": "All customer-facing communications generated by the AI assistant are reviewed by a qualified principal under FINRA Rule 2210 before delivery and retained per the firm's books-and-records policy under Rule 4511."
+ },
+ {
+ "id": "ai-recommendation-suitability",
+ "description": "AI-generated investment recommendations or advice are subject to FINRA Rule 2111 suitability obligations on the same terms as human-generated recommendations; firm WSPs must address how AI-generated recommendations are reviewed for suitability.",
+ "required": true,
+ "example": "Any investment recommendation generated by the AI tool for a customer account is subject to a Rule 2111 suitability review against the customer's investment profile under the firm's written supervisory procedures."
+ },
+ {
+ "id": "third-party-vendor-responsibility",
+ "description": "Firm responsibility for AI tool outputs persists when the tool is operated by a third-party vendor; vendor due diligence and oversight are part of the firm's Rule 3110 supervisory obligation.",
+ "required": true,
+ "example": "AI tools operated by third-party vendors are vetted, monitored, and supervised by the firm under FINRA Rule 3110; the firm remains responsible for any communications, recommendations, or records generated by those tools in connection with its securities business."
+ },
+ {
+ "id": "wsp-ai-coverage",
+ "description": "Written supervisory procedures address AI tool use, including risk areas of hallucination, bias, data privacy, and IP. (System / governance requirement, not per-message text.)",
+ "required": false
+ }
+ ],
+ "citation": {
+ "statute": "FINRA Rules 2210, 2090, 2111, 3110, 4511, 3220 (existing); FINRA Regulatory Notice 24-09, 'FINRA Reminds Member Firms of Their Obligations When Using Generative Artificial Intelligence and Large Language Models' (June 27, 2024)",
+ "section": "Member-firm obligations when using AI in securities business",
+ "source_url": "https://www.finra.org/rules-guidance/notices/24-09",
+ "publisher": "Financial Industry Regulatory Authority"
+ },
+ "effective_date": "2024-06-27",
+ "last_verified": "2026-05-08",
+ "template": {
+ "plain": "Notice — Customer Communication via AI Tool: This message (or recommendation) was prepared with the assistance of an artificial-intelligence tool and is subject to the same review and supervision standards as any communication delivered by [Member Firm]. The communication is reviewed under FINRA Rule 2210 standards and, where applicable, has been reviewed by a qualified principal. Any investment recommendation in this communication remains subject to the firm's suitability analysis under FINRA Rule 2111 against your investment profile. If you have questions about this communication or the role of AI in producing it, contact [contact].",
+ "formal": "Notice under FINRA Regulatory Notice 24-09 and Rules 2210, 2090, 2111, 3110, 4511, and 3220: This communication was generated, in whole or in part, with the assistance of artificial-intelligence technology. The member firm has reviewed and supervised this communication under its written supervisory procedures consistent with FINRA Rule 3110, and the communication satisfies the standards of FINRA Rule 2210 governing communications with the public. Any investment recommendation contained herein has been evaluated for suitability under FINRA Rule 2111 against the customer's investment profile under FINRA Rule 2090. The firm retains records of this communication under FINRA Rule 4511. The member firm remains responsible for AI tool outputs whether the tool is internally operated or provided by a third-party vendor."
+ },
+ "notes": "FINRA Regulatory Notice 24-09 is reminder-and-clarification guidance — it does not create new rules. The binding obligations are the existing FINRA rules (2210, 2090, 2111, 3110, 4511, 3220), which apply by their existing terms to AI-driven communications, recommendations, and records. Member firms (broker-dealers and their associated persons) are bound; non-member firms are not directly bound by FINRA rules but may face parallel obligations under SEC rules (e.g., Rule 17a-4 books-and-records, Investment Advisers Act fiduciary duty for IA-registered firms) — this rule's `jurisdiction` is `us` because FINRA is a self-regulatory organization with national scope, not a single-state regulator. The 2023 SEC Staff Bulletin on conflicts of interest for AI/PDA-using broker-dealers and investment advisers (and the SEC's proposed PDA rule, Rel. No. 34-97990) layers additional obligations specifically around conflicts; firms with PDA / AI advisory tools should consult both. FINRA expects firms to update their WSPs to specifically address AI tool use; using AI without WSP coverage is an immediate Rule 3110 supervision deficiency. Firms should also be aware of state-level adverse-action and disclosure overlays (e.g., NYDFS's October 2024 cybersecurity / AI guidance for licensed entities)."
  }
  ]
  }