@murumets-ee/notifications 0.12.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,94 @@
+ Elastic License 2.0 (ELv2)
+
+ URL: https://www.elastic.co/licensing/elastic-license
+
+ ## Acceptance
+
+ By using the software, you agree to all of the terms and conditions below.
+
+ ## Copyright License
+
+ The licensor grants you a non-exclusive, royalty-free, worldwide,
+ non-sublicensable, non-transferable license to use, copy, distribute, make
+ available, and prepare derivative works of the software, in each case subject
+ to the limitations and conditions below.
+
+ ## Limitations
+
+ You may not provide the software to third parties as a hosted or managed
+ service, where the service provides users with access to any substantial set
+ of the features or functionality of the software.
+
+ You may not move, change, disable, or circumvent the license key functionality
+ in the software, and you may not remove or obscure any functionality in the
+ software that is protected by the license key.
+
+ You may not alter, remove, or obscure any licensing, copyright, or other
+ notices of the licensor in the software. Any use of the licensor's trademarks
+ is subject to applicable law.
+
+ ## Patents
+
+ The licensor grants you a license, under any patent claims the licensor can
+ license, or becomes able to license, to make, have made, use, sell, offer for
+ sale, import and have imported the software, in each case subject to the
+ limitations and conditions in this license. This license does not cover any
+ patent claims that you cause to be infringed by modifications or additions to
+ the software. If you or your company make any written claim that the software
+ infringes or contributes to infringement of any patent, your patent license
+ for the software granted under these terms ends immediately. If your company
+ makes such a claim, your patent license ends immediately for work on behalf
+ of your company.
+
+ ## Notices
+
+ You must ensure that anyone who gets a copy of any part of the software from
+ you also gets a copy of these terms.
+
+ If you modify the software, you must include in any modified copies of the
+ software prominent notices stating that you have modified the software.
+
+ ## No Other Rights
+
+ These terms do not imply any licenses other than those expressly granted in
+ these terms.
+
+ ## Termination
+
+ If you use the software in violation of these terms, such use is not licensed,
+ and your licenses will automatically terminate. If the licensor provides you
+ with a notice of your violation, and you cease all violation of this license
+ no later than 30 days after you receive that notice, your licenses will be
+ reinstated retroactively. However, if you violate these terms after such
+ reinstatement, any additional violation of these terms will cause your
+ licenses to terminate automatically and permanently.
+
+ ## No Liability
+
+ As far as the law allows, the software comes as is, without any warranty or
+ condition, and the licensor will not be liable to you for any damages arising
+ out of these terms or the use or nature of the software, under any kind of
+ legal claim.
+
+ ## Definitions
+
+ The **licensor** is the entity offering these terms, and the **software** is
+ the software the licensor makes available under these terms, including any
+ portion of it.
+
+ **you** refers to the individual or entity agreeing to these terms.
+
+ **your company** is any legal entity, sole proprietorship, or other kind of
+ organization that you work for, plus all organizations that have control over,
+ are under the control of, or are under common control with that organization.
+ **control** means ownership of substantially all the assets of an entity, or
+ the power to direct the management and policies of an entity (for example, by
+ voting right, contract, or otherwise). Control can be direct or indirect.
+
+ **your licenses** are all the licenses granted to you for the software under
+ these terms.
+
+ **use** means anything you do with the software requiring one of your
+ licenses.
+
+ **trademark** means trademarks, service marks, and similar rights.
package/README.md ADDED
@@ -0,0 +1,102 @@
+ # @murumets-ee/notifications
+
+ User-addressed, multi-channel notifications — in-app + email — with persistence,
+ read state, per-user preferences, group-collapse, and live realtime delivery.
+
+ Part of [Lumi CMS](https://github.com/murumets-ee/lumi-cms) — a modular, type-safe CMS toolkit for Node.js.
+
+ ## Install
+
+ ```bash
+ npm install @murumets-ee/notifications
+ ```
+
+ ## Concept
+
+ Three pieces:
+
+ 1. **Plugins call `defineNotificationType(...)`** once to declare a kind of
+    notification (`ticketing.message.created`, etc.) — its supported channels,
+    default per-channel preferences, render function, and optional group key.
+ 2. **Publishers call `notify(...)`** with the type, recipients (explicit user
+    ids or async resolver), and a typed payload. The framework resolves
+    per-user preferences, renders only the surviving channels, persists per
+    recipient, fans out via realtime, and enqueues email.
+ 3. **Admin UI** (in `@murumets-ee/admin-ui`) exposes the bell + drawer +
+    per-user preferences page. The preferences page is auto-derived from the
+    type registry — no bespoke wiring per plugin.
+
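The preference-resolution step in (2) can be sketched as follows. This is a minimal, self-contained model with hypothetical local types, not the package's exported API; the fallback rule it demonstrates (missing entry falls back to the channel's declared default) is the one the package's preferences schema describes.

```ts
// Hypothetical local shapes — illustrative only, not the library's types.
interface NotificationTypeLike {
  id: string
  channels: { [channel: string]: { default: boolean } | undefined }
}

// prefs: notification type id → channel id → enabled.
type Prefs = Record<string, Record<string, boolean>>

function enabledChannels(type: NotificationTypeLike, prefs: Prefs): string[] {
  const perType = prefs[type.id] ?? {}
  const out: string[] = []
  for (const [channel, decl] of Object.entries(type.channels)) {
    if (!decl) continue
    // Explicit user choice wins; a missing entry falls back to the
    // channel's declared default (?? skips only null/undefined, not false).
    if (perType[channel] ?? decl.default) out.push(channel)
  }
  return out
}

const orderPlaced: NotificationTypeLike = {
  id: 'orders.placed',
  channels: { inApp: { default: true }, email: { default: true } },
}

// User muted email for this type; inApp falls back to its default.
console.log(enabledChannels(orderPlaced, { 'orders.placed': { email: false } }))
// → [ 'inApp' ]
```

A recipient whose resolved list comes back empty is skipped entirely — nothing is rendered or persisted for them.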
+ ## Channels
+
+ - **In-app** — always enabled. The notification row insert/update is the
+   delivery; a realtime event fans out to the recipient's connected admin
+   tabs so the bell badge updates without polling.
+ - **Email** — auto-enables when both `@murumets-ee/mail` (with a configured
+   provider) and `@murumets-ee/queue` are in your plugins array, **AND
+   `mail()` is registered before `notifications()` in that array**. The
+   channel enqueues a `notifications:send-email` queue job (a single
+   INSERT into `toolkit_jobs`); the job re-renders from the row's
+   payload, calls `mail.send`, and flips the row's per-channel state to
+   `delivered` / `failed` / `skipped_unverified`. Mail I/O is async — the
+   publisher's request returns once the queue row is written, never
+   waiting on the provider round-trip.
+
+ > Plugin order matters: notifications' init eagerly calls
+ > `getMailConfig()`. If `mail()` is registered after `notifications()`,
+ > the call throws, the email channel is disabled (warn-logged but not
+ > fatal), and your in-app notifications still flow but no email is
+ > ever sent. The lumi-config canonical order is `queue()` →
+ > `mail()` → `notifications()`.
+
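Spelled out as a plugins array, that canonical order looks roughly like this. A hedged sketch only: the surrounding config helper (`lumiConfig` here) and the option shapes are placeholders — the ordering is the part this README actually prescribes.

```ts
// Sketch — `lumiConfig` is a placeholder name; only the plugin order
// (queue before mail before notifications) comes from this README.
export default lumiConfig({
  plugins: [
    queue(),          // required for the email channel's queue job
    mail(),           // provider config elided; must precede notifications()
    notifications(),  // email channel auto-enables given the two above
  ],
})
```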
+ Override the from-address (or opt out explicitly) via the plugin options:
+
+ ```ts
+ notifications({
+   channels: {
+     email: {
+       from: 'no-reply@app.example',
+       // enabled: false, // explicit opt-out even if mail is configured
+     },
+   },
+ })
+ ```
+
+ ## Tx-aware notify
+
+ Pass a Drizzle transaction handle on `notify({ tx })` to bind every per-recipient
+ row INSERT/UPDATE — and the email channel's `queue.enqueue(...)` — to your
+ caller's transaction. Postgres MVCC hides uncommitted writes from the worker
+ until commit, and a publisher rollback removes both the notification rows and
+ the queued send-email job atomically.
+
+ ```ts
+ import { notify } from '@murumets-ee/notifications'
+
+ await db.transaction(async (tx) => {
+   const order = await orders.create({ ... }, { tx })
+   await notify({
+     type: OrderPlaced,
+     recipients: { userIds: [order.userId] },
+     payload: { orderId: order.id },
+     tx,
+   })
+   // Rollback → no order row, no notification row, no email job. Atomic.
+ })
+ ```
+
+ Calls without `tx` keep the original auto-commit semantics — every existing
+ caller (ticketing, queue terminal notifier, etc.) is unaffected. See
+ `PLAN-OUTBOX.md` Phase 5 (a.k.a. `PLAN-NOTIFICATIONS.md` PR F) for the design,
+ and `tests/tx-threading.integration.test.ts` for the contract specification.
+
+ **Realtime is intentionally not bound to the tx.** In-app fan-out fires
+ synchronously inside `channel.send()`, so connected receivers may briefly see
+ a `notification.created` event whose row is rolled back moments later. This is
+ the deliberate split between durable (outbox) and ephemeral (realtime)
+ transports — see `PLAN-REALTIME.md` §2 / `PLAN-OUTBOX.md` §2. If a real
+ consumer surfaces a need for "suppress realtime on rollback," defer the in-app
+ publish until commit at that point; v1 doesn't.
+
+ ## License
+
+ [Elastic License 2.0 (ELv2)](../../LICENSE)
package/dist/admin.d.mts ADDED
@@ -0,0 +1,29 @@
+ import { AdminRoute } from "@murumets-ee/core";
+ import { PostgresJsDatabase } from "drizzle-orm/postgres-js";
+
+ //#region src/admin.d.ts
+ interface DecodedCursor {
+   createdAt: Date;
+   id: string;
+ }
+ /**
+  * Decode a base64url(JSON) cursor and validate that `createdAt` parses to a
+  * real Date. Rejects malformed input with a 400 — `new Date('foo')` would
+  * otherwise produce an `Invalid Date` and bubble to PG as a 500.
+  *
+  * Returns `null` when the cursor is rejected; the caller surfaces a 400.
+  */
+ declare function decodeCursor(raw: string): DecodedCursor | null;
+ declare function encodeCursor(c: DecodedCursor): string;
+ interface NotificationsRoutesConfig {
+   /** Override the read-write DB connection (tests). Production resolves via getApp(). */
+   db?: PostgresJsDatabase;
+ }
+ declare function notificationsRoutes(config?: NotificationsRoutesConfig): AdminRoute;
+ declare const __test__: {
+   decodeCursor: typeof decodeCursor;
+   encodeCursor: typeof encodeCursor;
+ };
+ //#endregion
+ export { __test__, notificationsRoutes };
+ //# sourceMappingURL=admin.d.mts.map
package/dist/admin.d.mts.map ADDED
@@ -0,0 +1 @@
+ {"version":3,"file":"admin.d.mts","names":[],"sources":["../src/admin.ts"],"mappings":";;;;UA4DU,aAAA;EACR,SAAA,EAAW,IAAA;EACX,EAAA;AAAA;;;;;;;;iBAUO,YAAA,CAAa,GAAA,WAAc,aAAA;AAAA,iBAgB3B,YAAA,CAAa,CAAA,EAAG,aAAA;AAAA,UAOf,yBAAA;;EAER,EAAA,GAAK,kBAAA;AAAA;AAAA,iBAGS,mBAAA,CAAoB,MAAA,GAAQ,yBAAA,GAAiC,UAAA;AAAA,cAgMhE,QAAA;uBAAyC,YAAA;uBAAA,YAAA;AAAA"}
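The cursor scheme those declarations document is fully visible in the bundled source: base64url-encoded JSON over `{ createdAt: ISO string, id: UUID }`. Below is a self-contained re-sketch of the round-trip (standalone functions using Node's `Buffer`, not the package's exports) including the Invalid-Date guard the doc comment calls out:

```ts
// Re-sketch of dist/admin.mjs's cursor helpers — illustrative, not the exports.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i

interface DecodedCursor {
  createdAt: Date
  id: string
}

function encodeCursor(c: DecodedCursor): string {
  return Buffer.from(
    JSON.stringify({ createdAt: c.createdAt.toISOString(), id: c.id }),
    'utf8',
  ).toString('base64url')
}

function decodeCursor(raw: string): DecodedCursor | null {
  let parsed: unknown
  try {
    parsed = JSON.parse(Buffer.from(raw, 'base64url').toString('utf8'))
  } catch {
    return null // not base64url(JSON)
  }
  if (typeof parsed !== 'object' || parsed === null) return null
  const obj = parsed as { createdAt?: unknown; id?: unknown }
  if (typeof obj.createdAt !== 'string' || typeof obj.id !== 'string') return null
  if (!UUID_RE.test(obj.id)) return null
  const date = new Date(obj.createdAt)
  // `new Date('foo')` yields an Invalid Date whose getTime() is NaN.
  // Rejecting it here is what turns a garbage cursor into a 400, not a 500.
  if (!Number.isFinite(date.getTime())) return null
  return { createdAt: date, id: obj.id }
}

const cursor = encodeCursor({
  createdAt: new Date('2025-01-01T00:00:00.000Z'),
  id: '0b26f2a1-5b6c-4a1d-9c2e-7f3a4d5e6f70',
})
console.log(decodeCursor(cursor)?.id) // round-trips the UUID
console.log(decodeCursor('not-a-cursor')) // → null
```

Every rejection path returns `null` rather than throwing, so the route handler can map all malformed cursors to a single `Invalid cursor` 400.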
package/dist/admin.mjs ADDED
@@ -0,0 +1,2 @@
+ import{s as e}from"./recipients-DDN8AJzX.mjs";import{d as t,f as n,n as r}from"./client-CtklhnNF.mjs";import{z as i}from"zod";const a=/^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;function o(e,t=200){return new Response(JSON.stringify(e),{status:t,headers:{"Content-Type":`application/json`}})}function s(e,t){return o({error:e},t)}const c=i.object({limit:i.coerce.number().int().min(1).max(100).default(30),cursor:i.string().optional(),filter:i.enum([`unread`,`all`,`archived`]).default(`all`)});function l(e){let t;try{t=JSON.parse(Buffer.from(e,`base64url`).toString(`utf8`))}catch{return null}if(typeof t!=`object`||!t)return null;let n=t;if(typeof n.createdAt!=`string`||typeof n.id!=`string`||!a.test(n.id))return null;let r=new Date(n.createdAt);return Number.isFinite(r.getTime())?{createdAt:r,id:n.id}:null}function u(e){return Buffer.from(JSON.stringify({createdAt:e.createdAt.toISOString(),id:e.id}),`utf8`).toString(`base64url`)}function d(i={}){let d=async()=>{if(i.db)return i.db;let{getApp:e}=await import(`@murumets-ee/core`);return e().db.readWrite},f=async(t,n)=>{let r=new URL(t.url),i=c.safeParse(Object.fromEntries(r.searchParams));if(!i.success){let e=i.error.issues[0];return s(e?`${e.path.join(`.`)||`query`}: ${e.message}`:`Invalid query`,400)}let{limit:a,cursor:f,filter:p}=i.data,m=e.makeClient(await d()),h=p===`unread`?{readAt:{isNull:!0},archivedAt:{isNull:!0}}:p===`archived`?{archivedAt:{isNotNull:!0}}:{archivedAt:{isNull:!0}},g=null;if(f){let e=l(f);if(!e)return s(`Invalid cursor`,400);g={$or:[{createdAt:{lt:e.createdAt}},{createdAt:e.createdAt,id:{lt:e.id}}]}}let _=g?{$and:[{recipientUserId:n.user.id},h,g]}:{$and:[{recipientUserId:n.user.id},h]},v=await m.findMany({where:_,orderBy:[{column:`createdAt`,dir:`desc`},{column:`id`,dir:`desc`}],limit:a+1}),y=v.length>a,b=v.slice(0,a),x=b[b.length-1];return o({items:b,nextCursor:y&&x?u({createdAt:x.createdAt,id:x.id}):null})},p=async(e,t)=>{let n=r();return n?o({count:await 
n.unreadCount(t.user.id)}):s(`Notifications not initialized`,500)},m=async(e,t)=>{let i=t.segments[1];if(!i||!a.test(i))return s(`Invalid notification id`,400);let c=r();if(!c)return s(`Notifications not initialized`,500);let l=await c.markRead(t.user.id,i);if(!l)return t.audit?.({action:`notifications.read_failed`,entityType:`notification`,entityId:i,userId:t.user.id,metadata:{reason:`not_found_or_not_owner`}}),s(`Not found`,404);let u=await c.unreadCount(t.user.id);return l.mutated&&(n(t.user.id,{id:l.id,type:l.type,unreadCount:u}),t.audit?.({action:`notifications.read`,entityType:`notification`,entityId:i,userId:t.user.id})),o({id:l.id,readAt:l.readAt,unreadCount:u})},h=async(e,t)=>{let i=r();if(!i)return s(`Notifications not initialized`,500);let a=await i.markAllRead(t.user.id),c=await i.unreadCount(t.user.id);return n(t.user.id,{unreadCount:c}),t.audit?.({action:`notifications.mark_all_read`,userId:t.user.id,metadata:{affected:a}}),o({affected:a,unreadCount:c})},g=async(e,n)=>{let i=n.segments[1];if(!i||!a.test(i))return s(`Invalid notification id`,400);let c=r();if(!c)return s(`Notifications not initialized`,500);let l=await c.archive(n.user.id,i);if(!l)return n.audit?.({action:`notifications.archive_failed`,entityType:`notification`,entityId:i,userId:n.user.id,metadata:{reason:`not_found_or_not_owner`}}),s(`Not found`,404);let u=await c.unreadCount(n.user.id);return l.mutated&&(t(n.user.id,{id:l.id,type:l.type,unreadCount:u}),n.audit?.({action:`notifications.archive`,entityType:`notification`,entityId:i,userId:n.user.id})),o({id:l.id,unreadCount:u})};return{prefix:`notifications`,resource:`notifications`,actions:[`view`,`update`],handlers:{GET:async(e,t)=>{let n=t.segments;return n.length===0?f(e,t):n.length===1&&n[0]===`unread-count`?p(e,t):s(`Not found`,404)},POST:async(e,t)=>{let n=t.segments;return n.length===1&&n[0]===`mark-all-read`?h(e,t):n.length===2&&n[1]===`read`?m(e,t):n.length===2&&n[1]===`archive`?g(e,t):s(`Not found`,404)}}}}const 
f={decodeCursor:l,encodeCursor:u};export{f as __test__,d as notificationsRoutes};
+ //# sourceMappingURL=admin.mjs.map
package/dist/admin.mjs.map ADDED
@@ -0,0 +1 @@
+ {"version":3,"file":"admin.mjs","names":[],"sources":["../src/admin.ts"],"sourcesContent":["/**\n * Notifications admin API routes — per-user list, unread count, read state.\n *\n * Mounted under prefix `notifications`. URL surface:\n *\n * GET /api/admin/notifications ?limit=&cursor=&filter=unread|all|archived\n * GET /api/admin/notifications/unread-count\n * POST /api/admin/notifications/:id/read\n * POST /api/admin/notifications/mark-all-read\n * POST /api/admin/notifications/:id/archive\n *\n * Authorization model — recipient identity:\n * Every endpoint scopes to `recipientUserId === sessionUserId`. There is\n * no admin override in v1 — adding one needs a new permission resource\n * (`notifications:read_others`) and a new audit category. See\n * PLAN-NOTIFICATIONS §4 (point 4) and §8.\n *\n * The `resource: 'notifications'` declaration on the AdminRoute means the\n * admin handler runs the `notifications:view` / `notifications:update`\n * gate before our handler sees the request — but those gates are\n * *necessary*, not *sufficient*. 
Per-row recipient scoping is enforced\n * inside each handler.\n */\n\nimport type { AdminRoute, AdminRouteHandler } from '@murumets-ee/core'\nimport type { WhereClause } from '@murumets-ee/db'\nimport type { PostgresJsDatabase } from 'drizzle-orm/postgres-js'\nimport { z } from 'zod'\nimport { getActiveNotificationClient } from './client.js'\nimport { notificationsTable } from './notifications-table.js'\nimport { publishNotificationArchived, publishNotificationRead } from './realtime/publish.js'\n\nconst UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i\nconst DEFAULT_LIMIT = 30\nconst MAX_LIMIT = 100\n\ntype NotificationCols = (typeof notificationsTable)['schema']['columns']\ntype NotificationsWhere = WhereClause<NotificationCols>\n\n// ---------------------------------------------------------------------------\n// HTTP helpers\n// ---------------------------------------------------------------------------\n\nfunction json(data: unknown, status = 200): Response {\n return new Response(JSON.stringify(data), {\n status,\n headers: { 'Content-Type': 'application/json' },\n })\n}\n\nfunction errorJson(message: string, status: number): Response {\n return json({ error: message }, status)\n}\n\nconst listQuerySchema = z.object({\n limit: z.coerce.number().int().min(1).max(MAX_LIMIT).default(DEFAULT_LIMIT),\n cursor: z.string().optional(),\n filter: z.enum(['unread', 'all', 'archived']).default('all'),\n})\n\ninterface DecodedCursor {\n createdAt: Date\n id: string\n}\n\n/**\n * Decode a base64url(JSON) cursor and validate that `createdAt` parses to a\n * real Date. 
Rejects malformed input with a 400 — `new Date('foo')` would\n * otherwise produce an `Invalid Date` and bubble to PG as a 500.\n *\n * Returns `null` when the cursor is rejected; the caller surfaces a 400.\n */\nfunction decodeCursor(raw: string): DecodedCursor | null {\n let parsed: unknown\n try {\n parsed = JSON.parse(Buffer.from(raw, 'base64url').toString('utf8'))\n } catch {\n return null\n }\n if (typeof parsed !== 'object' || parsed === null) return null\n const obj = parsed as { createdAt?: unknown; id?: unknown }\n if (typeof obj.createdAt !== 'string' || typeof obj.id !== 'string') return null\n if (!UUID_RE.test(obj.id)) return null\n const date = new Date(obj.createdAt)\n if (!Number.isFinite(date.getTime())) return null\n return { createdAt: date, id: obj.id }\n}\n\nfunction encodeCursor(c: DecodedCursor): string {\n return Buffer.from(\n JSON.stringify({ createdAt: c.createdAt.toISOString(), id: c.id }),\n 'utf8',\n ).toString('base64url')\n}\n\ninterface NotificationsRoutesConfig {\n /** Override the read-write DB connection (tests). Production resolves via getApp(). */\n db?: PostgresJsDatabase\n}\n\nexport function notificationsRoutes(config: NotificationsRoutesConfig = {}): AdminRoute {\n // DB is resolved per-request via dynamic getApp() — avoids loading\n // `'server-only'` at config-load time (jiti) and matches the pattern\n // used by `@murumets-ee/queue/admin`. Tests inject `config.db` to skip\n // the dynamic-import dance.\n const getDb = async (): Promise<PostgresJsDatabase> => {\n if (config.db) return config.db\n const { getApp } = await import('@murumets-ee/core')\n return getApp().db.readWrite\n }\n\n const handleList: AdminRouteHandler = async (req, ctx) => {\n const url = new URL(req.url)\n const parseResult = listQuerySchema.safeParse(Object.fromEntries(url.searchParams))\n if (!parseResult.success) {\n const issue = parseResult.error.issues[0]\n return errorJson(\n issue ? 
`${issue.path.join('.') || 'query'}: ${issue.message}` : 'Invalid query',\n 400,\n )\n }\n const { limit, cursor, filter } = parseResult.data\n\n const t = notificationsTable.makeClient(await getDb())\n\n // Filter clause — `unread` excludes archived (matches the bell-badge\n // semantic: archived items are not \"unread\" even if readAt is NULL).\n const filterWhere: NotificationsWhere =\n filter === 'unread'\n ? { readAt: { isNull: true }, archivedAt: { isNull: true } }\n : filter === 'archived'\n ? { archivedAt: { isNotNull: true } }\n : { archivedAt: { isNull: true } }\n\n // Cursor — `(createdAt, id)` lexicographic compare so two rows with\n // identical createdAt aren't skipped across pages. Rendered as:\n // createdAt < c.createdAt OR (createdAt = c.createdAt AND id < c.id)\n let cursorWhere: NotificationsWhere | null = null\n if (cursor) {\n const decoded = decodeCursor(cursor)\n if (!decoded) return errorJson('Invalid cursor', 400)\n cursorWhere = {\n $or: [\n { createdAt: { lt: decoded.createdAt } },\n { createdAt: decoded.createdAt, id: { lt: decoded.id } },\n ],\n }\n }\n\n const where: NotificationsWhere = cursorWhere\n ? { $and: [{ recipientUserId: ctx.user.id }, filterWhere, cursorWhere] }\n : { $and: [{ recipientUserId: ctx.user.id }, filterWhere] }\n\n const rows = await t.findMany({\n where,\n orderBy: [\n { column: 'createdAt', dir: 'desc' },\n { column: 'id', dir: 'desc' },\n ],\n limit: limit + 1,\n })\n\n const hasMore = rows.length > limit\n const items = rows.slice(0, limit)\n const last = items[items.length - 1]\n const nextCursor =\n hasMore && last ? 
encodeCursor({ createdAt: last.createdAt, id: last.id }) : null\n\n return json({ items, nextCursor })\n }\n\n const handleUnreadCount: AdminRouteHandler = async (_req, ctx) => {\n const client = getActiveNotificationClient()\n if (!client) return errorJson('Notifications not initialized', 500)\n const count = await client.unreadCount(ctx.user.id)\n return json({ count })\n }\n\n const handleMarkRead: AdminRouteHandler = async (_req, ctx) => {\n const id = ctx.segments[1]\n if (!id || !UUID_RE.test(id)) return errorJson('Invalid notification id', 400)\n const client = getActiveNotificationClient()\n if (!client) return errorJson('Notifications not initialized', 500)\n\n const result = await client.markRead(ctx.user.id, id)\n if (!result) {\n // Distinguishable in the audit log so cross-user probing patterns\n // are observable, per CLAUDE.md §Security observability rule. No\n // 403 here — that would leak existence; 404 stays.\n ctx.audit?.({\n action: 'notifications.read_failed',\n entityType: 'notification',\n entityId: id,\n userId: ctx.user.id,\n metadata: { reason: 'not_found_or_not_owner' },\n })\n return errorJson('Not found', 404)\n }\n\n const unreadCount = await client.unreadCount(ctx.user.id)\n if (result.mutated) {\n // Only fire realtime when state actually changed — repeated reads\n // from a multi-tab race shouldn't fan out duplicate events.\n publishNotificationRead(ctx.user.id, {\n id: result.id,\n type: result.type,\n unreadCount,\n })\n ctx.audit?.({\n action: 'notifications.read',\n entityType: 'notification',\n entityId: id,\n userId: ctx.user.id,\n })\n }\n return json({ id: result.id, readAt: result.readAt, unreadCount })\n }\n\n const handleMarkAllRead: AdminRouteHandler = async (_req, ctx) => {\n const client = getActiveNotificationClient()\n if (!client) return errorJson('Notifications not initialized', 500)\n const affected = await client.markAllRead(ctx.user.id)\n // Re-fetch unreadCount in case a notify() raced between the mark-all\n // 
UPDATE and now (would otherwise lie with a hardcoded 0).\n const unreadCount = await client.unreadCount(ctx.user.id)\n // Bulk event — no single id/type to point at; the connected tab\n // reconciles by re-fetching the drawer (or zeroing the badge).\n publishNotificationRead(ctx.user.id, { unreadCount })\n ctx.audit?.({\n action: 'notifications.mark_all_read',\n userId: ctx.user.id,\n metadata: { affected },\n })\n return json({ affected, unreadCount })\n }\n\n const handleArchive: AdminRouteHandler = async (_req, ctx) => {\n const id = ctx.segments[1]\n if (!id || !UUID_RE.test(id)) return errorJson('Invalid notification id', 400)\n const client = getActiveNotificationClient()\n if (!client) return errorJson('Notifications not initialized', 500)\n const result = await client.archive(ctx.user.id, id)\n if (!result) {\n ctx.audit?.({\n action: 'notifications.archive_failed',\n entityType: 'notification',\n entityId: id,\n userId: ctx.user.id,\n metadata: { reason: 'not_found_or_not_owner' },\n })\n return errorJson('Not found', 404)\n }\n const unreadCount = await client.unreadCount(ctx.user.id)\n if (result.mutated) {\n publishNotificationArchived(ctx.user.id, {\n id: result.id,\n type: result.type,\n unreadCount,\n })\n ctx.audit?.({\n action: 'notifications.archive',\n entityType: 'notification',\n entityId: id,\n userId: ctx.user.id,\n })\n }\n return json({ id: result.id, unreadCount })\n }\n\n return {\n prefix: 'notifications',\n resource: 'notifications',\n actions: ['view', 'update'],\n handlers: {\n GET: async (req, ctx) => {\n const segs = ctx.segments\n if (segs.length === 0) return handleList(req, ctx)\n if (segs.length === 1 && segs[0] === 'unread-count') return handleUnreadCount(req, ctx)\n return errorJson('Not found', 404)\n },\n POST: async (req, ctx) => {\n const segs = ctx.segments\n if (segs.length === 1 && segs[0] === 'mark-all-read') return handleMarkAllRead(req, ctx)\n if (segs.length === 2 && segs[1] === 'read') return handleMarkRead(req, 
ctx)\n if (segs.length === 2 && segs[1] === 'archive') return handleArchive(req, ctx)\n return errorJson('Not found', 404)\n },\n },\n }\n}\n\n// Internals exported for unit tests.\nexport const __test__ = { decodeCursor, encodeCursor }\n"],"mappings":"8HAgCA,MAAM,EAAU,kEAWhB,SAAS,EAAK,EAAe,EAAS,IAAe,CACnD,OAAO,IAAI,SAAS,KAAK,UAAU,EAAK,CAAE,CACxC,SACA,QAAS,CAAE,eAAgB,mBAAoB,CAChD,CAAC,CAGJ,SAAS,EAAU,EAAiB,EAA0B,CAC5D,OAAO,EAAK,CAAE,MAAO,EAAS,CAAE,EAAO,CAGzC,MAAM,EAAkB,EAAE,OAAO,CAC/B,MAAO,EAAE,OAAO,QAAQ,CAAC,KAAK,CAAC,IAAI,EAAE,CAAC,IAAI,IAAU,CAAC,QAAQ,GAAc,CAC3E,OAAQ,EAAE,QAAQ,CAAC,UAAU,CAC7B,OAAQ,EAAE,KAAK,CAAC,SAAU,MAAO,WAAW,CAAC,CAAC,QAAQ,MAAM,CAC7D,CAAC,CAcF,SAAS,EAAa,EAAmC,CACvD,IAAI,EACJ,GAAI,CACF,EAAS,KAAK,MAAM,OAAO,KAAK,EAAK,YAAY,CAAC,SAAS,OAAO,CAAC,MAC7D,CACN,OAAO,KAET,GAAI,OAAO,GAAW,WAAY,EAAiB,OAAO,KAC1D,IAAM,EAAM,EAEZ,GADI,OAAO,EAAI,WAAc,UAAY,OAAO,EAAI,IAAO,UACvD,CAAC,EAAQ,KAAK,EAAI,GAAG,CAAE,OAAO,KAClC,IAAM,EAAO,IAAI,KAAK,EAAI,UAAU,CAEpC,OADK,OAAO,SAAS,EAAK,SAAS,CAAC,CAC7B,CAAE,UAAW,EAAM,GAAI,EAAI,GAAI,CADO,KAI/C,SAAS,EAAa,EAA0B,CAC9C,OAAO,OAAO,KACZ,KAAK,UAAU,CAAE,UAAW,EAAE,UAAU,aAAa,CAAE,GAAI,EAAE,GAAI,CAAC,CAClE,OACD,CAAC,SAAS,YAAY,CAQzB,SAAgB,EAAoB,EAAoC,EAAE,CAAc,CAKtF,IAAM,EAAQ,SAAyC,CACrD,GAAI,EAAO,GAAI,OAAO,EAAO,GAC7B,GAAM,CAAE,UAAW,MAAM,OAAO,qBAChC,OAAO,GAAQ,CAAC,GAAG,WAGf,EAAgC,MAAO,EAAK,IAAQ,CACxD,IAAM,EAAM,IAAI,IAAI,EAAI,IAAI,CACtB,EAAc,EAAgB,UAAU,OAAO,YAAY,EAAI,aAAa,CAAC,CACnF,GAAI,CAAC,EAAY,QAAS,CACxB,IAAM,EAAQ,EAAY,MAAM,OAAO,GACvC,OAAO,EACL,EAAQ,GAAG,EAAM,KAAK,KAAK,IAAI,EAAI,QAAQ,IAAI,EAAM,UAAY,gBACjE,IACD,CAEH,GAAM,CAAE,QAAO,SAAQ,UAAW,EAAY,KAExC,EAAI,EAAmB,WAAW,MAAM,GAAO,CAAC,CAIhD,EACJ,IAAW,SACP,CAAE,OAAQ,CAAE,OAAQ,GAAM,CAAE,WAAY,CAAE,OAAQ,GAAM,CAAE,CAC1D,IAAW,WACT,CAAE,WAAY,CAAE,UAAW,GAAM,CAAE,CACnC,CAAE,WAAY,CAAE,OAAQ,GAAM,CAAE,CAKpC,EAAyC,KAC7C,GAAI,EAAQ,CACV,IAAM,EAAU,EAAa,EAAO,CACpC,GAAI,CAAC,EAAS,OAAO,EAAU,iBAAkB,IAAI,CACrD,EAAc,CACZ,IAAK,CACH,CAAE,UAAW,CAAE,GAAI,EAAQ,UAAW,CAAE,CACxC,CAAE,UAAW,EAAQ,UAAW,GAAI,CAAE,GAAI,EAAQ,GAAI,CAA
E,CACzD,CACF,CAGH,IAAM,EAA4B,EAC9B,CAAE,KAAM,CAAC,CAAE,gBAAiB,EAAI,KAAK,GAAI,CAAE,EAAa,EAAY,CAAE,CACtE,CAAE,KAAM,CAAC,CAAE,gBAAiB,EAAI,KAAK,GAAI,CAAE,EAAY,CAAE,CAEvD,EAAO,MAAM,EAAE,SAAS,CAC5B,QACA,QAAS,CACP,CAAE,OAAQ,YAAa,IAAK,OAAQ,CACpC,CAAE,OAAQ,KAAM,IAAK,OAAQ,CAC9B,CACD,MAAO,EAAQ,EAChB,CAAC,CAEI,EAAU,EAAK,OAAS,EACxB,EAAQ,EAAK,MAAM,EAAG,EAAM,CAC5B,EAAO,EAAM,EAAM,OAAS,GAIlC,OAAO,EAAK,CAAE,QAAO,WAFnB,GAAW,EAAO,EAAa,CAAE,UAAW,EAAK,UAAW,GAAI,EAAK,GAAI,CAAC,CAAG,KAE9C,CAAC,EAG9B,EAAuC,MAAO,EAAM,IAAQ,CAChE,IAAM,EAAS,GAA6B,CAG5C,OAFK,EAEE,EAAK,CAAE,MAAA,MADM,EAAO,YAAY,EAAI,KAAK,GAAG,CAC9B,CAAC,CAFF,EAAU,gCAAiC,IAAI,EAK/D,EAAoC,MAAO,EAAM,IAAQ,CAC7D,IAAM,EAAK,EAAI,SAAS,GACxB,GAAI,CAAC,GAAM,CAAC,EAAQ,KAAK,EAAG,CAAE,OAAO,EAAU,0BAA2B,IAAI,CAC9E,IAAM,EAAS,GAA6B,CAC5C,GAAI,CAAC,EAAQ,OAAO,EAAU,gCAAiC,IAAI,CAEnE,IAAM,EAAS,MAAM,EAAO,SAAS,EAAI,KAAK,GAAI,EAAG,CACrD,GAAI,CAAC,EAWH,OAPA,EAAI,QAAQ,CACV,OAAQ,4BACR,WAAY,eACZ,SAAU,EACV,OAAQ,EAAI,KAAK,GACjB,SAAU,CAAE,OAAQ,yBAA0B,CAC/C,CAAC,CACK,EAAU,YAAa,IAAI,CAGpC,IAAM,EAAc,MAAM,EAAO,YAAY,EAAI,KAAK,GAAG,CAgBzD,OAfI,EAAO,UAGT,EAAwB,EAAI,KAAK,GAAI,CACnC,GAAI,EAAO,GACX,KAAM,EAAO,KACb,cACD,CAAC,CACF,EAAI,QAAQ,CACV,OAAQ,qBACR,WAAY,eACZ,SAAU,EACV,OAAQ,EAAI,KAAK,GAClB,CAAC,EAEG,EAAK,CAAE,GAAI,EAAO,GAAI,OAAQ,EAAO,OAAQ,cAAa,CAAC,EAG9D,EAAuC,MAAO,EAAM,IAAQ,CAChE,IAAM,EAAS,GAA6B,CAC5C,GAAI,CAAC,EAAQ,OAAO,EAAU,gCAAiC,IAAI,CACnE,IAAM,EAAW,MAAM,EAAO,YAAY,EAAI,KAAK,GAAG,CAGhD,EAAc,MAAM,EAAO,YAAY,EAAI,KAAK,GAAG,CASzD,OANA,EAAwB,EAAI,KAAK,GAAI,CAAE,cAAa,CAAC,CACrD,EAAI,QAAQ,CACV,OAAQ,8BACR,OAAQ,EAAI,KAAK,GACjB,SAAU,CAAE,WAAU,CACvB,CAAC,CACK,EAAK,CAAE,WAAU,cAAa,CAAC,EAGlC,EAAmC,MAAO,EAAM,IAAQ,CAC5D,IAAM,EAAK,EAAI,SAAS,GACxB,GAAI,CAAC,GAAM,CAAC,EAAQ,KAAK,EAAG,CAAE,OAAO,EAAU,0BAA2B,IAAI,CAC9E,IAAM,EAAS,GAA6B,CAC5C,GAAI,CAAC,EAAQ,OAAO,EAAU,gCAAiC,IAAI,CACnE,IAAM,EAAS,MAAM,EAAO,QAAQ,EAAI,KAAK,GAAI,EAAG,CACpD,GAAI,CAAC,EAQH,OAPA,EAAI,QAAQ,CACV,OAAQ,+BACR,WAAY,eACZ,SAAU,EACV,OAAQ,EAAI,KAAK,GACjB,SAAU,CAAE,OAAQ,yBAA0B,CAC/C,CAAC,CACK,EAAU,YAAa,IAAI,CAEpC,IAAM,EAA
c,MAAM,EAAO,YAAY,EAAI,KAAK,GAAG,CAczD,OAbI,EAAO,UACT,EAA4B,EAAI,KAAK,GAAI,CACvC,GAAI,EAAO,GACX,KAAM,EAAO,KACb,cACD,CAAC,CACF,EAAI,QAAQ,CACV,OAAQ,wBACR,WAAY,eACZ,SAAU,EACV,OAAQ,EAAI,KAAK,GAClB,CAAC,EAEG,EAAK,CAAE,GAAI,EAAO,GAAI,cAAa,CAAC,EAG7C,MAAO,CACL,OAAQ,gBACR,SAAU,gBACV,QAAS,CAAC,OAAQ,SAAS,CAC3B,SAAU,CACR,IAAK,MAAO,EAAK,IAAQ,CACvB,IAAM,EAAO,EAAI,SAGjB,OAFI,EAAK,SAAW,EAAU,EAAW,EAAK,EAAI,CAC9C,EAAK,SAAW,GAAK,EAAK,KAAO,eAAuB,EAAkB,EAAK,EAAI,CAChF,EAAU,YAAa,IAAI,EAEpC,KAAM,MAAO,EAAK,IAAQ,CACxB,IAAM,EAAO,EAAI,SAIjB,OAHI,EAAK,SAAW,GAAK,EAAK,KAAO,gBAAwB,EAAkB,EAAK,EAAI,CACpF,EAAK,SAAW,GAAK,EAAK,KAAO,OAAe,EAAe,EAAK,EAAI,CACxE,EAAK,SAAW,GAAK,EAAK,KAAO,UAAkB,EAAc,EAAK,EAAI,CACvE,EAAU,YAAa,IAAI,EAErC,CACF,CAIH,MAAa,EAAW,CAAE,eAAc,eAAc"}
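The list endpoint's pagination, per the source comments embedded in the map above, keys on a `(createdAt, id)` tuple rather than an offset: fetch `limit + 1` rows ordered `(createdAt DESC, id DESC)` and keep only rows strictly "after" the cursor, with `id` breaking `createdAt` ties so same-timestamp rows are never skipped across pages. A minimal in-memory model of that predicate and the `hasMore`/`nextCursor` bookkeeping — local types, not the package's API:

```ts
interface Row {
  id: string
  createdAt: Date
}

// True when `row` sorts strictly after `cursor` in (createdAt DESC, id DESC)
// order: createdAt < cursor.createdAt, or equal createdAt with id < cursor.id.
function afterCursor(row: Row, cursor: Row): boolean {
  if (row.createdAt.getTime() !== cursor.createdAt.getTime()) {
    return row.createdAt.getTime() < cursor.createdAt.getTime()
  }
  return row.id < cursor.id
}

function page(rows: Row[], limit: number, cursor?: Row) {
  const ordered = [...rows].sort(
    (a, b) =>
      b.createdAt.getTime() - a.createdAt.getTime() || (a.id < b.id ? 1 : -1),
  )
  const filtered = cursor ? ordered.filter((r) => afterCursor(r, cursor)) : ordered
  const slice = filtered.slice(0, limit + 1) // limit+1 probes for another page
  const hasMore = slice.length > limit
  const items = slice.slice(0, limit)
  return { items, nextCursor: hasMore ? items[items.length - 1] : null }
}

const t = new Date('2025-01-01T00:00:00Z')
const rows: Row[] = [
  { id: 'c', createdAt: t },
  { id: 'b', createdAt: t }, // same timestamp — id breaks the tie
  { id: 'a', createdAt: new Date('2024-12-31T00:00:00Z') },
]
const p1 = page(rows, 2)
console.log(p1.items.map((r) => r.id)) // → [ 'c', 'b' ]
const p2 = page(rows, 2, p1.nextCursor!)
console.log(p2.items.map((r) => r.id)) // → [ 'a' ]
```

An offset cursor would re-count rows inserted between requests; the keyset stays stable because it compares against the last row actually delivered.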
@@ -0,0 +1,2 @@
+ import{n as e}from"./types-Dy_AGX6X.mjs";import{n as t,s as n,t as r}from"./recipients-DDN8AJzX.mjs";import{z as i}from"zod";import{createLogger as a}from"@murumets-ee/logging";import{defineSettings as o}from"@murumets-ee/settings/define";const s=a({name:`notifications:realtime`});let c=null;function l(e,t){e instanceof Error&&e.name===`RealtimeOversizeError`?s.error({err:e,topic:t},`realtime publish failed — payload oversize`):s.warn({err:e,topic:t},`realtime publish failed — event dropped`)}function u(e,t,n){if(c){try{c({topic:e,payload:n,scope:{userId:t}})}catch(t){l(t,e)}return}import(`@murumets-ee/core/realtime`).then(r=>{c=r.publishEvent,c({topic:e,payload:n,scope:{userId:t}})}).catch(t=>{l(t,e)})}function d(t,n){u(e.created,t,n)}function f(t,n){u(e.updated,t,n)}function p(t,n){u(e.read,t,n)}function m(t,n){u(e.archived,t,n)}const h={id:`inApp`,async send(e){let t={id:e.notificationId,type:e.typeId,unreadCount:e.unreadCount};return e.kind===`created`?d(e.recipient.id,t):f(e.recipient.id,t),{channel:`inApp`,state:`delivered`,at:new Date().toISOString()}}},g=i.record(i.string(),i.record(i.string(),i.boolean())),_=o({namespace:`notifications.preferences`,scope:`user`,label:`Notification preferences`,iconName:`bell`,hideFromMenu:!0,schema:{prefs:{type:`json`,label:`Per-type, per-channel preferences`,description:`Map of notification type id → channel id → enabled. Missing entries fall back to type defaults.`,default:{},schema:g}}});function v(e,t){let n=[],r=t[e.id]??{};for(let t of Object.entries(e.channels)){let e=t[0],i=t[1];i&&(r[e]??i.default)&&n.push(e)}return n}const y=/^(\d+)\/(\d+)([smh])$/,b=new Map;function x(e){let t=b.get(e);if(t)return t;let n=y.exec(e);if(!n)throw Error(`Invalid throttle spec "${e}": expected "<count>/<window>" where window is "<n>[s|m|h]" (e.g. "5/15m"). Pattern: ${y.source}`);let[,r,i,a]=n;if(!r||!i||!a)throw Error(`Invalid throttle spec "${e}"`);let o=Number.parseInt(r,10),s=Number.parseInt(i,10);if(o<=0||s<=0)throw Error(`Invalid throttle spec "${e}": count and window must be positive`);let c={s:1e3,m:6e4,h:36e5}[a];if(c===void 0)throw Error(`Invalid throttle spec "${e}": unknown unit "${a}"`);let l={count:o,windowMs:s*c};return b.set(e,l),l}var S=class{events=new Map;nowFn;constructor(e=Date.now){this.nowFn=e}tryConsume(e,t){let n=x(t),r=this.nowFn(),i=r-n.windowMs,a=this.events.get(e),o=a?a.filter(e=>e>i):[];return o.length>=n.count?(o.length===0?this.events.delete(e):this.events.set(e,o),!1):(o.push(r),this.events.set(e,o),!0)}static buildKey(e,t,n){return`${e}:${t}:${n}`}reset(){this.events.clear()}},C=class{db;logger;defaultLocale;recipientCap;channels;preferencesResolver;throttleStore;notificationsClient;constructor(e){this.db=e.db,this.logger=e.logger,this.defaultLocale=e.defaultLocale??`en`,this.recipientCap=e.recipientCap??1e3,this.channels=e.channels??{inApp:h},this.preferencesResolver=e.preferencesResolver??(async()=>({})),this.throttleStore=e.throttleStore??new S,this.notificationsClient=n.makeClient(e.db)}resolveNotificationsClient(e){return e?n.makeClient(e):this.notificationsClient}async notify(e){let{type:n,recipients:i,payload:a,tx:o}=e,s=await t(i,this.recipientCap),c=await r(this.db,s,this.defaultLocale),l=this.resolveNotificationsClient(o),u=new Map,d=[];for(let e of s){let t=c.get(e);if(!t){this.logger?.warn({userId:e,typeId:n.id},`notify(): recipient user not found, skipping`);continue}let r=u.get(e);if(!r){try{r=await this.preferencesResolver(e)}catch(t){this.logger?.warn({err:t,userId:e,typeId:n.id},`notify(): preferences resolver failed, applying type defaults`),r={}}u.set(e,r)}let i=n,s=a,f=v(i,r);if(f.length===0){this.logger?.debug({userId:e,typeId:n.id},`notify(): all channels disabled by user preferences, skipping`);continue}let p=i.render({payload:s,recipient:t,locale:t.locale}),m=await this.persistAndDispatch({type:i,recipient:t,payload:s,enabledChannels:f,rendered:p,notificationsClient:l,tx:o});m&&d.push(m)}return{notificationIds:d}}async persistAndDispatch(e){let{type:t,recipient:n,payload:r,enabledChannels:i,rendered:a,notificationsClient:o,tx:s}=e,c=t.groupBy?t.groupBy(r):null,l=`created`,u;if(c!==null){let e=await o.findOne({recipientUserId:n.id,type:t.id,groupKey:c,readAt:{isNull:!0}});if(e){let i=await o.update({id:e.id,recipientUserId:n.id},{payload:r,updatedAt:new Date});i?(u=i.id,l=`updated`):u=(await o.insert({recipientUserId:n.id,type:t.id,payload:r,channels:[],groupKey:c})).id}else u=(await o.insert({recipientUserId:n.id,type:t.id,payload:r,channels:[],groupKey:c})).id}else u=(await o.insert({recipientUserId:n.id,type:t.id,payload:r,channels:[],groupKey:null})).id;let d=await o.count({recipientUserId:n.id,readAt:{isNull:!0},archivedAt:{isNull:!0}}),f=[];for(let e of i){let r=t.channels[e],i=this.channels[e];if(!i||!r){this.logger?.warn({typeId:t.id,channelId:e},`notify(): channel declared but not registered, skipping`);continue}if(e===`email`&&!n.email){f.push({channel:e,state:`skipped_unverified`,at:new Date().toISOString()});continue}if(r.throttle){let i=S.buildKey(n.id,t.id,e);if(!this.throttleStore.tryConsume(i,r.throttle)){f.push({channel:e,state:`throttled`,at:new Date().toISOString(),detail:`throttle ${r.throttle} exceeded`});continue}}try{let e=await i.send({notificationId:u,kind:l,typeId:t.id,recipient:n,rendered:a,unreadCount:d,tx:s});f.push(e)}catch(n){this.logger?.error({err:n,typeId:t.id,channelId:e},`notify(): channel send threw — recording as failed`),f.push({channel:e,state:`failed`,at:new Date().toISOString(),detail:n instanceof Error?n.message:`unknown error`})}}return f.length>0&&await o.update({id:u,recipientUserId:n.id},{channels:f,updatedAt:new Date}),u}async unreadCount(e){return this.notificationsClient.count({recipientUserId:e,readAt:{isNull:!0},archivedAt:{isNull:!0}})}async markRead(e,t){let n=new Date,r=await this.notificationsClient.update({id:t,recipientUserId:e,readAt:{isNull:!0}},{readAt:n,updatedAt:n});if(r)return{id:r.id,type:r.type,readAt:r.readAt??n,mutated:!0};let i=await this.notificationsClient.findOne({id:t,recipientUserId:e});return!i||!i.readAt?null:{id:i.id,type:i.type,readAt:i.readAt,mutated:!1}}async markAllRead(e){let t=new Date;return(await this.notificationsClient.updateMany({recipientUserId:e,readAt:{isNull:!0}},{readAt:t,updatedAt:t})).length}async archive(e,t){let n=new Date,r=await this.notificationsClient.update({id:t,recipientUserId:e,archivedAt:{isNull:!0}},{archivedAt:n,updatedAt:n});if(r)return{id:r.id,type:r.type,mutated:!0};let i=await this.notificationsClient.findOne({id:t,recipientUserId:e});return!i||!i.archivedAt?null:{id:i.id,type:i.type,mutated:!1}}};const w=Symbol.for(`@murumets-ee/notifications:active-client`);function T(e){let t=globalThis;e===null?delete t[w]:t[w]=e}function E(){return globalThis[w]??null}async function D(e){let t=E();if(!t)throw Error(`notify(): no active NotificationClient. Add notifications() to your plugins array, or pass a NotificationClient instance directly.`);return t.notify(e)}export{S as a,_ as c,m as d,p as f,T as i,v as l,E as n,x as o,D as r,g as s,C as t,h as u};
2
+ //# sourceMappingURL=client-CtklhnNF.mjs.map
@@ -0,0 +1 @@
1
+ {"version":3,"file":"client-CtklhnNF.mjs","names":[],"sources":["../src/realtime/publish.ts","../src/channels/in-app.ts","../src/preferences.ts","../src/throttle.ts","../src/client.ts"],"sourcesContent":["/**\n * Typed realtime publishers for notification events.\n *\n * Every notification topic is scoped to the recipient — a connected admin\n * tab only ever sees events for the user it's authenticated as. The scope\n * shape is `{ userId }`, evaluated against the SSE handler's session per\n * `PLAN-REALTIME` §4.\n *\n * Publish failures are caught and logged — never thrown. The notification\n * row INSERT is the durable record; missing the realtime push means the\n * user's bell badge is stale until the next REST poll, not that data is lost.\n *\n * Why dynamic import: `@murumets-ee/core/realtime` carries\n * `import 'server-only'`. Static value-import would throw under non-RSC\n * Node loaders (jiti, tsx) used by `lumi migrate` / CI scripts. Same\n * pattern as `packages/queue/src/realtime/publish.ts`.\n */\n\nimport { createLogger } from '@murumets-ee/logging'\nimport {\n NOTIFICATION_TOPICS,\n type NotificationRealtimePayload,\n type NotificationTopic,\n} from '../types.js'\n\nconst logger = createLogger({ name: 'notifications:realtime' })\n\nlet cachedPublish: typeof import('@murumets-ee/core/realtime').publishEvent | null = null\n\nfunction logPublishError(err: unknown, topic: string): void {\n if (err instanceof Error && err.name === 'RealtimeOversizeError') {\n logger.error({ err, topic }, 'realtime publish failed — payload oversize')\n } else {\n logger.warn({ err, topic }, 'realtime publish failed — event dropped')\n }\n}\n\nfunction publishSafe(\n topic: NotificationTopic,\n userId: string,\n payload: NotificationRealtimePayload,\n): void {\n if (cachedPublish) {\n try {\n cachedPublish({ topic, payload, scope: { userId } })\n } catch (err) {\n logPublishError(err, topic)\n }\n return\n }\n import('@murumets-ee/core/realtime')\n .then((mod) => {\n 
cachedPublish = mod.publishEvent\n cachedPublish({ topic, payload, scope: { userId } })\n })\n .catch((err: unknown) => {\n logPublishError(err, topic)\n })\n}\n\nexport function publishNotificationCreated(\n userId: string,\n payload: NotificationRealtimePayload,\n): void {\n publishSafe(NOTIFICATION_TOPICS.created, userId, payload)\n}\n\nexport function publishNotificationUpdated(\n userId: string,\n payload: NotificationRealtimePayload,\n): void {\n publishSafe(NOTIFICATION_TOPICS.updated, userId, payload)\n}\n\nexport function publishNotificationRead(\n userId: string,\n payload: NotificationRealtimePayload,\n): void {\n publishSafe(NOTIFICATION_TOPICS.read, userId, payload)\n}\n\nexport function publishNotificationArchived(\n userId: string,\n payload: NotificationRealtimePayload,\n): void {\n publishSafe(NOTIFICATION_TOPICS.archived, userId, payload)\n}\n\n/**\n * @internal — for tests. Pre-resolve the dynamic import so subsequent\n * publishes are fully synchronous.\n */\nexport async function _warmRealtimePublisher(): Promise<void> {\n if (cachedPublish) return\n const mod = await import('@murumets-ee/core/realtime')\n cachedPublish = mod.publishEvent\n}\n\n/** @internal — for tests. Reset the cached publisher (and replace if provided). */\nexport function _resetRealtimePublisher(\n replacement?: typeof import('@murumets-ee/core/realtime').publishEvent | null,\n): void {\n cachedPublish = replacement ?? null\n}\n","/**\n * In-app channel — degenerate. The notification row INSERT/UPDATE is the\n * \"delivery\" and `notification.created` / `notification.updated` realtime\n * publish drives the live bell badge.\n *\n * `send()` is called AFTER the row has been written, so its only job is\n * to fan out the realtime event and return a `delivered` record. 
There's\n * no separate transport to call here — the notify pipeline handles the\n * row write itself.\n */\n\nimport { publishNotificationCreated, publishNotificationUpdated } from '../realtime/publish.js'\nimport type { ChannelDeliveryRecord } from '../types.js'\nimport type { ChannelSendContext, NotificationChannel } from './types.js'\n\nexport const inAppChannel: NotificationChannel = {\n id: 'inApp',\n async send(ctx: ChannelSendContext): Promise<ChannelDeliveryRecord> {\n const payload = {\n id: ctx.notificationId,\n type: ctx.typeId,\n unreadCount: ctx.unreadCount,\n }\n if (ctx.kind === 'created') {\n publishNotificationCreated(ctx.recipient.id, payload)\n } else {\n publishNotificationUpdated(ctx.recipient.id, payload)\n }\n return {\n channel: 'inApp',\n state: 'delivered',\n at: new Date().toISOString(),\n }\n },\n}\n","/**\n * Per-user preferences — resolution + the SettingsDefinition shape stored\n * under namespace `notifications.preferences` (user-scope).\n *\n * v1 stores the whole preferences blob as a single `prefs` json setting:\n *\n * prefs = { [typeId]: { [channelId]: boolean } }\n *\n * Resolution at notify time: missing key → fall back to\n * `type.channels[channelId].default`. Channels resolved to `false` are\n * skipped entirely (not rendered, not enqueued, not row-tracked).\n *\n * v2 (PLAN-NOTIFICATIONS Phase 3 / PR C) will auto-derive a richer admin\n * UI from the type registry. The storage shape stays the same — only the\n * editing surface changes.\n */\n\nimport { defineSettings } from '@murumets-ee/settings/define'\nimport { z } from 'zod'\nimport type { NotificationChannelId, NotificationPreferences, NotificationType } from './types.js'\n\n/**\n * Zod schema for the `prefs` json blob. Loose-by-design: keys are type\n * ids (validated in notify()), inner record values are channel ids →\n * boolean. 
Unknown channel ids are tolerated so a user's prefs survive\n * a plugin renaming or removing a channel.\n */\nexport const notificationPreferencesSchema: z.ZodType<NotificationPreferences> = z.record(\n z.string(),\n z.record(z.string(), z.boolean()),\n)\n\n/**\n * Settings definition for `notifications.preferences`. Contributed via\n * the notifications plugin's `shared.settings` so the existing settings\n * infra handles validation, persistence, audit, and route mounting.\n */\nexport const notificationPreferencesSettings = defineSettings({\n namespace: 'notifications.preferences',\n scope: 'user',\n label: 'Notification preferences',\n iconName: 'bell',\n // Hide from the global sidebar — a user-scoped preferences page is\n // surfaced from \"My preferences\" instead, not from the org-wide settings\n // sidebar where global namespaces live.\n hideFromMenu: true,\n schema: {\n prefs: {\n type: 'json' as const,\n label: 'Per-type, per-channel preferences',\n description:\n 'Map of notification type id → channel id → enabled. Missing entries fall back to type defaults.',\n default: {} as NotificationPreferences,\n schema: notificationPreferencesSchema,\n },\n },\n})\n\n/**\n * Resolve the effective enabled-channel list for a given (user, type),\n * applying user prefs over type defaults.\n *\n * Returns the channels declared by the type that are enabled for the\n * user. Order is preserved by `Object.keys(type.channels)` for stable\n * snapshot tests.\n */\nexport function resolveEnabledChannels(\n type: NotificationType,\n userPrefs: NotificationPreferences,\n): NotificationChannelId[] {\n const enabled: NotificationChannelId[] = []\n const userTypePrefs = userPrefs[type.id] ?? {}\n // Iterate `Object.entries` and narrow per-entry. 
The previous\n // `Object.keys(type.channels) as NotificationChannelId[]` lied if a\n // future `type.channels` carries a key outside the union; entries-form\n // makes that visible at the point of read.\n for (const entry of Object.entries(type.channels)) {\n const channelId = entry[0] as NotificationChannelId\n const def = entry[1]\n if (!def) continue\n const userChoice = userTypePrefs[channelId]\n const isEnabled = userChoice ?? def.default\n if (isEnabled) enabled.push(channelId)\n }\n return enabled\n}\n","/**\n * Per-channel in-memory throttle - sliding window per (userId, typeId, channel).\n *\n * v1 lives in process memory; multi-process slip is acceptable for the v1\n * targets (\"5 emails per 15 min per user\"). PLAN-NOTIFICATIONS section 6 Q2\n * calls out the trade-off and the upgrade path (Postgres-backed counter) if\n * the slip ever shows up in production.\n *\n * Format `<count>/<window>` - window is `<n><s|m|h>` (e.g. `5/15m`).\n *\n * Over-cap returns `false` from `tryConsume`. The caller (notify pipeline)\n * records a `throttled` ChannelDeliveryRecord on the row instead of\n * enqueueing the channel work.\n */\n\nconst THROTTLE_RE = /^(\\d+)\\/(\\d+)([smh])$/\n\ninterface ParsedThrottle {\n /** Max events allowed inside the window. */\n count: number\n /** Window length in milliseconds. */\n windowMs: number\n}\n\n/**\n * Cache parsed specs - `tryConsume` is hot and the regex match + Number\n * parse cost adds up at scale. Cache key is the spec string itself.\n */\nconst PARSE_CACHE = new Map<string, ParsedThrottle>()\n\nexport function parseThrottle(spec: string): ParsedThrottle {\n const cached = PARSE_CACHE.get(spec)\n if (cached) return cached\n\n const match = THROTTLE_RE.exec(spec)\n if (!match) {\n throw new Error(\n `Invalid throttle spec \"${spec}\": expected \"<count>/<window>\" where ` +\n `window is \"<n>[s|m|h]\" (e.g. \"5/15m\"). 
Pattern: ${THROTTLE_RE.source}`,\n )\n }\n // Captures from the regex above are guaranteed present once the test\n // succeeds; tighten the types for noUncheckedIndexedAccess.\n const [, countRaw, nRaw, unit] = match\n if (!countRaw || !nRaw || !unit) {\n throw new Error(`Invalid throttle spec \"${spec}\"`)\n }\n const count = Number.parseInt(countRaw, 10)\n const n = Number.parseInt(nRaw, 10)\n if (count <= 0 || n <= 0) {\n throw new Error(`Invalid throttle spec \"${spec}\": count and window must be positive`)\n }\n const unitMs: Record<string, number> = { s: 1000, m: 60_000, h: 3_600_000 }\n const factor = unitMs[unit]\n if (factor === undefined) {\n throw new Error(`Invalid throttle spec \"${spec}\": unknown unit \"${unit}\"`)\n }\n const parsed: ParsedThrottle = { count, windowMs: n * factor }\n PARSE_CACHE.set(spec, parsed)\n return parsed\n}\n\n/** @internal - for tests. */\nexport function clearThrottleParseCache(): void {\n PARSE_CACHE.clear()\n}\n\n/**\n * Tracks per-(userId, typeId, channel) event timestamps in memory.\n *\n * Memory bound: each key keeps at most `count` timestamps; entries with\n * no recent activity are evicted on the next consume call against that\n * key. The unbounded growth concern is therefore the *number of distinct\n * keys*, not per-key length. 
v2 may add a periodic sweeper if the key\n * count ever shows up in heap profiles.\n */\nexport class ThrottleStore {\n private readonly events = new Map<string, number[]>()\n private readonly nowFn: () => number\n\n constructor(nowFn: () => number = Date.now) {\n this.nowFn = nowFn\n }\n\n /**\n * @returns `true` if the event fits in the window (and is recorded);\n * `false` if it would exceed the cap (no record made - caller should\n * mark the delivery as `throttled` and skip the channel work).\n */\n tryConsume(key: string, spec: string): boolean {\n const parsed = parseThrottle(spec)\n const now = this.nowFn()\n const windowStart = now - parsed.windowMs\n\n const existing = this.events.get(key)\n const pruned = existing ? existing.filter((t) => t > windowStart) : []\n\n if (pruned.length >= parsed.count) {\n // Save the pruned list to keep memory bounded - even a refusal\n // entitles us to drop expired entries from this key.\n if (pruned.length === 0) {\n this.events.delete(key)\n } else {\n this.events.set(key, pruned)\n }\n return false\n }\n\n pruned.push(now)\n this.events.set(key, pruned)\n return true\n }\n\n /**\n * Build a stable throttle key from the per-recipient identity tuple.\n * Uses `:` as a separator - none of the components (better-auth\n * user ids, dot-namespaced type ids, camelCase channel ids) ever\n * contain `:`, so collisions are impossible.\n */\n static buildKey(userId: string, typeId: string, channel: string): string {\n return `${userId}:${typeId}:${channel}`\n }\n\n /** @internal - for tests. */\n reset(): void {\n this.events.clear()\n }\n}\n","/**\n * `NotificationClient` — the server-side implementation of `notify(...)`.\n *\n * The free-function `notify(...)` exported from the package barrel is a\n * thin wrapper that resolves the active client from `globalThis` (set by\n * the `notifications()` plugin's init hook). 
Tests instantiate\n * `NotificationClient` directly so a custom DB / channel set / throttle\n * store can be plugged in.\n *\n * v1 responsibilities (per PLAN-NOTIFICATIONS §3.3):\n * 1. Resolve recipients (await `resolveUsers` if provided, validate cap).\n * 2. For each recipient:\n * a. Resolve effective preferences for this type.\n * b. Render only the surviving channels.\n * c. Lookup `groupKey` collision against unread rows; UPDATE if hit,\n * else INSERT.\n * d. For each surviving channel: throttle gate, then call channel.send().\n * 3. Persist per-channel delivery state on the row.\n * 4. Return the list of notification ids.\n *\n * Tx-threading (PLAN-OUTBOX Phase 5 / PR E — a.k.a. PLAN-NOTIFICATIONS PR F):\n * callers may pass `options.tx` (a Drizzle transaction handle). When set,\n * every notification row INSERT/UPDATE and every email-channel queue enqueue\n * participates in the caller's transaction — Postgres MVCC then makes\n * the row invisible to other connections (worker included) until commit,\n * and a rollback removes both the row and the queued email job atomically.\n * Closes the v1 §3.3 \"ghost notifications on rollback\" limitation.\n *\n * Realtime publishes are intentionally NOT bound to the tx: realtime is\n * ephemeral by design (PLAN-REALTIME §2; PLAN-OUTBOX §2). The in-app\n * channel's `send()` runs synchronously inside the publisher's tx and\n * fires `publishEvent` BEFORE the tx commits — meaning a connected receiver\n * may briefly observe a `notification.created` event whose row is rolled\n * back moments later. 
The receiver's REST refresh then finds nothing.\n * That stale-event window is the deliberate trade-off; suppressing\n * realtime on rollback would require deferring the publish until the\n * commit boundary, and no consumer has surfaced that need.\n */\n\nimport type { Logger } from '@murumets-ee/core'\nimport type { PostgresJsDatabase } from 'drizzle-orm/postgres-js'\nimport { inAppChannel } from './channels/in-app.js'\nimport type { NotificationChannel } from './channels/types.js'\nimport { notificationsTable } from './notifications-table.js'\nimport { resolveEnabledChannels } from './preferences.js'\nimport { fetchRecipientContexts, resolveRecipientIds } from './recipients.js'\nimport { ThrottleStore } from './throttle.js'\nimport {\n type ChannelDeliveryRecord,\n DEFAULT_RECIPIENT_CAP,\n type NotificationChannelId,\n type NotificationPreferences,\n type NotificationType,\n type NotifyOptions,\n type NotifyResult,\n type RecipientContext,\n} from './types.js'\n\n/**\n * Resolver for per-user preferences. Allows the plugin to wire up a\n * read-through cache without forcing the client to depend on a specific\n * settings client implementation.\n */\nexport type PreferencesResolver = (userId: string) => Promise<NotificationPreferences>\n\nexport interface NotificationClientConfig {\n db: PostgresJsDatabase\n logger?: Logger\n /** Default locale fallback when a recipient has no per-user locale. */\n defaultLocale?: string\n /** Recipient cap for `resolveUsers()`. Defaults to 1000. */\n recipientCap?: number\n /** Channel implementations. Keyed by channel id. Defaults to `inApp` only. */\n channels?: Record<string, NotificationChannel>\n /** Resolver for per-user preferences. Defaults to \"always empty\" → type defaults apply. */\n preferencesResolver?: PreferencesResolver\n /** In-memory throttle store. Tests inject a deterministic clock. 
*/\n throttleStore?: ThrottleStore\n}\n\nexport class NotificationClient {\n private readonly db: PostgresJsDatabase\n private readonly logger?: Logger | undefined\n private readonly defaultLocale: string\n private readonly recipientCap: number\n private readonly channels: Record<string, NotificationChannel>\n private readonly preferencesResolver: PreferencesResolver\n private readonly throttleStore: ThrottleStore\n private readonly notificationsClient: ReturnType<typeof notificationsTable.makeClient>\n\n constructor(config: NotificationClientConfig) {\n this.db = config.db\n this.logger = config.logger\n this.defaultLocale = config.defaultLocale ?? 'en'\n this.recipientCap = config.recipientCap ?? DEFAULT_RECIPIENT_CAP\n this.channels = config.channels ?? { inApp: inAppChannel }\n this.preferencesResolver = config.preferencesResolver ?? (async () => ({}))\n this.throttleStore = config.throttleStore ?? new ThrottleStore()\n this.notificationsClient = notificationsTable.makeClient(config.db)\n }\n\n /**\n * Resolve the table client to use for notification row writes. Mirrors\n * `QueueClient.resolveJobsClient` (PR A of PLAN-OUTBOX). When `tx` is\n * provided, every INSERT/UPDATE on `toolkit_notifications` participates\n * in the caller's transaction; otherwise the package's owned client is\n * used. Recipient lookup against the auth `user` table is a SELECT and\n * does not need tx-binding.\n */\n private resolveNotificationsClient(\n tx: PostgresJsDatabase | undefined,\n ): ReturnType<typeof notificationsTable.makeClient> {\n return tx ? notificationsTable.makeClient(tx) : this.notificationsClient\n }\n\n /**\n * Publish a notification to one or more recipients. 
See PLAN-NOTIFICATIONS §3.3.\n *\n * When `options.tx` is set, every row write and every email-channel\n * queue enqueue participates in the caller's transaction — see the\n * file-level comment for the full contract.\n */\n async notify<TPayload extends Record<string, unknown>>(\n options: NotifyOptions<TPayload>,\n ): Promise<NotifyResult> {\n const { type, recipients, payload, tx } = options\n\n // The passed `type` object is the authoritative source for `render` +\n // `channels` + `groupBy` — the registry is only used by the auto-derived\n // preferences UI deriver (PR C). Trust the call-site value here so the\n // path is the same in tests (which often skip registry population) and\n // in production. The type's `defineNotificationType` builder already\n // validates id/channels at construction, so we don't re-check here.\n\n const userIds = await resolveRecipientIds(recipients, this.recipientCap)\n // Recipient lookup is a SELECT — no need to bind to tx. Reading auth\n // user rows pre-commit doesn't change the tx-attached writes' atomicity.\n const recipientMap = await fetchRecipientContexts(this.db, userIds, this.defaultLocale)\n const notificationsClient = this.resolveNotificationsClient(tx)\n const userPrefsCache = new Map<string, NotificationPreferences>()\n const notificationIds: string[] = []\n\n for (const userId of userIds) {\n const recipient = recipientMap.get(userId)\n if (!recipient) {\n // The publisher targeted a user that no longer exists. 
Log and\n // skip — fan-out to deleted users is not a hard error (raceable\n // between resolveUsers and DB read).\n this.logger?.warn(\n { userId, typeId: type.id },\n 'notify(): recipient user not found, skipping',\n )\n continue\n }\n\n // Per-user preferences (cached by the request — same user appearing\n // twice in `userIds` only triggers one settings read).\n let prefs = userPrefsCache.get(userId)\n if (!prefs) {\n try {\n prefs = await this.preferencesResolver(userId)\n } catch (err) {\n this.logger?.warn(\n { err, userId, typeId: type.id },\n 'notify(): preferences resolver failed, applying type defaults',\n )\n prefs = {}\n }\n userPrefsCache.set(userId, prefs)\n }\n\n // Erase TPayload covariance so we can store this in untyped slots\n // (`persistAndDispatch`, channel registry). The runtime type is\n // unaffected — the call boundary above is where TypeScript verifies\n // payload conformance to the type's declared shape.\n const erasedType = type as unknown as NotificationType\n const erasedPayload = payload as unknown as Record<string, unknown>\n\n const enabledChannels = resolveEnabledChannels(erasedType, prefs)\n if (enabledChannels.length === 0) {\n this.logger?.debug(\n { userId, typeId: type.id },\n 'notify(): all channels disabled by user preferences, skipping',\n )\n continue\n }\n\n const rendered = erasedType.render({\n payload: erasedPayload,\n recipient,\n locale: recipient.locale,\n })\n\n const result = await this.persistAndDispatch({\n type: erasedType,\n recipient,\n payload: erasedPayload,\n enabledChannels,\n rendered,\n notificationsClient,\n tx,\n })\n\n if (result) notificationIds.push(result)\n }\n\n return { notificationIds }\n }\n\n /**\n * Per-recipient: collapse-or-insert the row, then dispatch each surviving\n * channel under the throttle gate, then persist channel delivery state.\n *\n * `notificationsClient` is the tx-resolved table client from `notify()` —\n * passed in instead of read from `this.notificationsClient` so 
every\n * row write inside this method participates in the caller's tx when set.\n * `tx` is forwarded to channels (the email channel needs it to attach\n * `queue.enqueue` to the same tx).\n */\n private async persistAndDispatch(args: {\n type: NotificationType\n recipient: RecipientContext\n payload: Record<string, unknown>\n enabledChannels: NotificationChannelId[]\n rendered: ReturnType<NotificationType['render']>\n notificationsClient: ReturnType<typeof notificationsTable.makeClient>\n tx: PostgresJsDatabase | undefined\n }): Promise<string | null> {\n const {\n type,\n recipient,\n payload,\n enabledChannels,\n rendered,\n notificationsClient,\n tx,\n } = args\n\n const groupKey = type.groupBy ? type.groupBy(payload) : null\n\n // Collapse mode — look for an existing UNREAD row with the same\n // (recipient, type, groupKey). Hit → UPDATE; miss → INSERT.\n let kind: 'created' | 'updated' = 'created'\n let notificationId: string\n\n if (groupKey !== null) {\n const existing = await notificationsClient.findOne({\n recipientUserId: recipient.id,\n type: type.id,\n groupKey,\n readAt: { isNull: true },\n })\n if (existing) {\n const updated = await notificationsClient.update(\n { id: existing.id, recipientUserId: recipient.id },\n {\n payload,\n updatedAt: new Date(),\n },\n )\n if (!updated) {\n // Race: row was archived/deleted between findOne and update.\n // Fall through and INSERT a fresh row — rare but harmless.\n const inserted = await notificationsClient.insert({\n recipientUserId: recipient.id,\n type: type.id,\n payload,\n channels: [],\n groupKey,\n })\n notificationId = inserted.id\n } else {\n notificationId = updated.id\n kind = 'updated'\n }\n } else {\n const inserted = await notificationsClient.insert({\n recipientUserId: recipient.id,\n type: type.id,\n payload,\n channels: [],\n groupKey,\n })\n notificationId = inserted.id\n }\n } else {\n const inserted = await notificationsClient.insert({\n recipientUserId: recipient.id,\n type: type.id,\n 
payload,\n channels: [],\n groupKey: null,\n })\n notificationId = inserted.id\n }\n\n // Unread count read goes through the SAME tx-resolved client so an\n // in-flight tx sees its own just-inserted row in the count, matching\n // the behaviour outside a tx (where each statement auto-commits and\n // the next read sees the previous insert).\n const unreadCount = await notificationsClient.count({\n recipientUserId: recipient.id,\n readAt: { isNull: true },\n archivedAt: { isNull: true },\n })\n\n const deliveryRecords: ChannelDeliveryRecord[] = []\n for (const channelId of enabledChannels) {\n const channelDef = type.channels[channelId]\n const channel = this.channels[channelId]\n if (!channel || !channelDef) {\n // Channel declared by the type but not registered in this process.\n // Log and skip rather than throw — cleaner failure mode for an\n // operator who mis-configured the plugin.\n this.logger?.warn(\n { typeId: type.id, channelId },\n 'notify(): channel declared but not registered, skipping',\n )\n continue\n }\n\n // Email channel without a verified address → skip cleanly.\n // Run BEFORE the throttle gate so a recipient with no verified email\n // doesn't burn their email throttle quota for sends that wouldn't\n // happen anyway. 
Once they verify, the throttle starts from zero.\n if (channelId === 'email' && !recipient.email) {\n deliveryRecords.push({\n channel: channelId,\n state: 'skipped_unverified',\n at: new Date().toISOString(),\n })\n continue\n }\n\n // Throttle gate — over-cap delivers a `throttled` record (so the\n // bell still updates for the in-app channel) but skips the channel's\n // send work for this fire.\n if (channelDef.throttle) {\n const key = ThrottleStore.buildKey(recipient.id, type.id, channelId)\n const allowed = this.throttleStore.tryConsume(key, channelDef.throttle)\n if (!allowed) {\n deliveryRecords.push({\n channel: channelId,\n state: 'throttled',\n at: new Date().toISOString(),\n detail: `throttle ${channelDef.throttle} exceeded`,\n })\n continue\n }\n }\n\n try {\n const record = await channel.send({\n notificationId,\n kind,\n typeId: type.id,\n recipient,\n rendered,\n unreadCount,\n tx,\n })\n deliveryRecords.push(record)\n } catch (err) {\n // Channels are documented as \"MUST NOT throw\" but we defend in\n // depth — a misbehaving channel can't tear down the publisher.\n this.logger?.error(\n { err, typeId: type.id, channelId },\n 'notify(): channel send threw — recording as failed',\n )\n deliveryRecords.push({\n channel: channelId,\n state: 'failed',\n at: new Date().toISOString(),\n detail: err instanceof Error ? err.message : 'unknown error',\n })\n }\n }\n\n // Final state write — `channels` reflects ONLY the most recent notify\n // pass's per-channel state (operators read it as \"what happened last\n // time we tried\"). Two reasons to replace rather than append:\n //\n // 1. Bounded growth. A chatty group (50 ticket replies) would\n // otherwise accrete 50× delivery records on the row.\n // 2. No lost-update race. Two concurrent notifies for the same\n // `(recipient, type, groupKey)` would each read the same\n // `existing.channels` and each write back, dropping the other's\n // records. 
Replacing eliminates the read-then-write window.\n //\n // Per-call delivery history is observable via realtime logs and (for\n // the email channel) queue audit entries.\n //\n // Scope to (id, recipientUserId) as defense-in-depth — even though\n // we just inserted/updated this id, future refactors that move the\n // method out of `notify` shouldn't be able to leak across users.\n if (deliveryRecords.length > 0) {\n await notificationsClient.update(\n { id: notificationId, recipientUserId: recipient.id },\n { channels: deliveryRecords, updatedAt: new Date() },\n )\n }\n\n return notificationId\n }\n\n /**\n * Count of unread (readAt IS NULL) notifications for a user. Drives the\n * realtime payload's `unreadCount` and the bell badge.\n */\n async unreadCount(userId: string): Promise<number> {\n return this.notificationsClient.count({\n recipientUserId: userId,\n readAt: { isNull: true },\n archivedAt: { isNull: true },\n })\n }\n\n /**\n * Result of `markRead` — `mutated` distinguishes \"we updated readAt now\"\n * from \"row was already read, no-op\" so the route can decide whether to\n * fire a realtime event.\n */\n async markRead(\n userId: string,\n notificationId: string,\n ): Promise<{ id: string; type: string; readAt: Date; mutated: boolean } | null> {\n // Scope the WHERE to the recipient — a user cannot mark another user's\n // notification read. Defense-in-depth: the admin route already gates\n // by sessionUserId === recipientUserId.\n //\n // Two-step approach so the call is idempotent in the strict sense\n // (second call doesn't bump `updatedAt` or re-fire realtime):\n // 1. Try UPDATE WHERE readAt IS NULL.\n // 2. 
If 0 rows changed, look up the row to distinguish\n // \"already read\" (return mutated:false) from \"doesn't exist or\n // not yours\" (return null).\n const now = new Date()\n const updated = await this.notificationsClient.update(\n {\n id: notificationId,\n recipientUserId: userId,\n readAt: { isNull: true },\n },\n { readAt: now, updatedAt: now },\n )\n if (updated) {\n // updated.readAt is non-null because we just set it.\n return {\n id: updated.id,\n type: updated.type,\n readAt: updated.readAt ?? now,\n mutated: true,\n }\n }\n // Either already read or not addressable by this user. Disambiguate.\n const existing = await this.notificationsClient.findOne({\n id: notificationId,\n recipientUserId: userId,\n })\n if (!existing || !existing.readAt) return null\n return {\n id: existing.id,\n type: existing.type,\n readAt: existing.readAt,\n mutated: false,\n }\n }\n\n /** Mark all unread notifications for a user as read. Returns the count affected. */\n async markAllRead(userId: string): Promise<number> {\n const now = new Date()\n const rows = await this.notificationsClient.updateMany(\n { recipientUserId: userId, readAt: { isNull: true } },\n { readAt: now, updatedAt: now },\n )\n return rows.length\n }\n\n /**\n * Soft-archive a notification. Returns the archived row or null if not\n * addressable. 
`mutated` is false on a second call (already archived).\n */\n async archive(\n userId: string,\n notificationId: string,\n ): Promise<{ id: string; type: string; mutated: boolean } | null> {\n const now = new Date()\n const updated = await this.notificationsClient.update(\n {\n id: notificationId,\n recipientUserId: userId,\n archivedAt: { isNull: true },\n },\n { archivedAt: now, updatedAt: now },\n )\n if (updated) {\n return { id: updated.id, type: updated.type, mutated: true }\n }\n const existing = await this.notificationsClient.findOne({\n id: notificationId,\n recipientUserId: userId,\n })\n if (!existing || !existing.archivedAt) return null\n return { id: existing.id, type: existing.type, mutated: false }\n }\n}\n\n// ---------------------------------------------------------------------------\n// Process-global active client + free-function notify()\n// ---------------------------------------------------------------------------\n\nconst ACTIVE_CLIENT_KEY = Symbol.for('@murumets-ee/notifications:active-client')\n\ninterface ClientHost {\n [ACTIVE_CLIENT_KEY]?: NotificationClient\n}\n\nexport function setActiveNotificationClient(client: NotificationClient | null): void {\n const host = globalThis as ClientHost\n if (client === null) {\n delete host[ACTIVE_CLIENT_KEY]\n } else {\n host[ACTIVE_CLIENT_KEY] = client\n }\n}\n\nexport function getActiveNotificationClient(): NotificationClient | null {\n return (globalThis as ClientHost)[ACTIVE_CLIENT_KEY] ?? null\n}\n\n/**\n * Free-function publisher — the canonical caller-facing API. 
Resolves the\n * active client wired up by the `notifications()` plugin's init hook.\n *\n * @example\n * ```ts\n * import { notify } from '@murumets-ee/notifications'\n *\n * await notify({\n * type: TicketingMessageCreated,\n * recipients: { userIds: [agentId] },\n * payload: { ticketId, messageId, ticketSubject, authorName },\n * })\n * ```\n */\nexport async function notify<TPayload extends Record<string, unknown>>(\n options: NotifyOptions<TPayload>,\n): Promise<NotifyResult> {\n const client = getActiveNotificationClient()\n if (!client) {\n throw new Error(\n 'notify(): no active NotificationClient. Add notifications() to your plugins ' +\n 'array, or pass a NotificationClient instance directly.',\n )\n }\n return client.notify(options)\n}\n"],"mappings":"+OAyBA,MAAM,EAAS,EAAa,CAAE,KAAM,yBAA0B,CAAC,CAE/D,IAAI,EAAiF,KAErF,SAAS,EAAgB,EAAc,EAAqB,CACtD,aAAe,OAAS,EAAI,OAAS,wBACvC,EAAO,MAAM,CAAE,MAAK,QAAO,CAAE,6CAA6C,CAE1E,EAAO,KAAK,CAAE,MAAK,QAAO,CAAE,0CAA0C,CAI1E,SAAS,EACP,EACA,EACA,EACM,CACN,GAAI,EAAe,CACjB,GAAI,CACF,EAAc,CAAE,QAAO,UAAS,MAAO,CAAE,SAAQ,CAAE,CAAC,OAC7C,EAAK,CACZ,EAAgB,EAAK,EAAM,CAE7B,OAEF,OAAO,8BACJ,KAAM,GAAQ,CACb,EAAgB,EAAI,aACpB,EAAc,CAAE,QAAO,UAAS,MAAO,CAAE,SAAQ,CAAE,CAAC,EACpD,CACD,MAAO,GAAiB,CACvB,EAAgB,EAAK,EAAM,EAC3B,CAGN,SAAgB,EACd,EACA,EACM,CACN,EAAY,EAAoB,QAAS,EAAQ,EAAQ,CAG3D,SAAgB,EACd,EACA,EACM,CACN,EAAY,EAAoB,QAAS,EAAQ,EAAQ,CAG3D,SAAgB,EACd,EACA,EACM,CACN,EAAY,EAAoB,KAAM,EAAQ,EAAQ,CAGxD,SAAgB,EACd,EACA,EACM,CACN,EAAY,EAAoB,SAAU,EAAQ,EAAQ,CCtE5D,MAAa,EAAoC,CAC/C,GAAI,QACJ,MAAM,KAAK,EAAyD,CAClE,IAAM,EAAU,CACd,GAAI,EAAI,eACR,KAAM,EAAI,OACV,YAAa,EAAI,YAClB,CAMD,OALI,EAAI,OAAS,UACf,EAA2B,EAAI,UAAU,GAAI,EAAQ,CAErD,EAA2B,EAAI,UAAU,GAAI,EAAQ,CAEhD,CACL,QAAS,QACT,MAAO,YACP,GAAI,IAAI,MAAM,CAAC,aAAa,CAC7B,EAEJ,CCPY,EAAoE,EAAE,OACjF,EAAE,QAAQ,CACV,EAAE,OAAO,EAAE,QAAQ,CAAE,EAAE,SAAS,CAAC,CAClC,CAOY,EAAkC,EAAe,CAC5D,UAAW,4BACX,MAAO,OACP,MAAO,2BACP,SAAU,OAIV,aAAc,GACd,OAAQ,CACN,MAAO,CACL,KAAM,OACN,MAAO,oCACP,YACE,kGACF,QAAS,EAAE,CACX,OAAQ,EACT,CACF,
CACF,CAAC,CAUF,SAAgB,EACd,EACA,EACyB,CACzB,IAAM,EAAmC,EAAE,CACrC,EAAgB,EAAU,EAAK,KAAO,EAAE,CAK9C,IAAK,IAAM,KAAS,OAAO,QAAQ,EAAK,SAAS,CAAE,CACjD,IAAM,EAAY,EAAM,GAClB,EAAM,EAAM,GACb,IACc,EAAc,IACD,EAAI,UACrB,EAAQ,KAAK,EAAU,CAExC,OAAO,ECrET,MAAM,EAAc,wBAad,EAAc,IAAI,IAExB,SAAgB,EAAc,EAA8B,CAC1D,IAAM,EAAS,EAAY,IAAI,EAAK,CACpC,GAAI,EAAQ,OAAO,EAEnB,IAAM,EAAQ,EAAY,KAAK,EAAK,CACpC,GAAI,CAAC,EACH,MAAU,MACR,0BAA0B,EAAK,uFACsB,EAAY,SAClE,CAIH,GAAM,EAAG,EAAU,EAAM,GAAQ,EACjC,GAAI,CAAC,GAAY,CAAC,GAAQ,CAAC,EACzB,MAAU,MAAM,0BAA0B,EAAK,GAAG,CAEpD,IAAM,EAAQ,OAAO,SAAS,EAAU,GAAG,CACrC,EAAI,OAAO,SAAS,EAAM,GAAG,CACnC,GAAI,GAAS,GAAK,GAAK,EACrB,MAAU,MAAM,0BAA0B,EAAK,sCAAsC,CAGvF,IAAM,EAAS,CAD0B,EAAG,IAAM,EAAG,IAAQ,EAAG,KAC3C,CAAC,GACtB,GAAI,IAAW,IAAA,GACb,MAAU,MAAM,0BAA0B,EAAK,mBAAmB,EAAK,GAAG,CAE5E,IAAM,EAAyB,CAAE,QAAO,SAAU,EAAI,EAAQ,CAE9D,OADA,EAAY,IAAI,EAAM,EAAO,CACtB,EAiBT,IAAa,EAAb,KAA2B,CACzB,OAA0B,IAAI,IAC9B,MAEA,YAAY,EAAsB,KAAK,IAAK,CAC1C,KAAK,MAAQ,EAQf,WAAW,EAAa,EAAuB,CAC7C,IAAM,EAAS,EAAc,EAAK,CAC5B,EAAM,KAAK,OAAO,CAClB,EAAc,EAAM,EAAO,SAE3B,EAAW,KAAK,OAAO,IAAI,EAAI,CAC/B,EAAS,EAAW,EAAS,OAAQ,GAAM,EAAI,EAAY,CAAG,EAAE,CAetE,OAbI,EAAO,QAAU,EAAO,OAGtB,EAAO,SAAW,EACpB,KAAK,OAAO,OAAO,EAAI,CAEvB,KAAK,OAAO,IAAI,EAAK,EAAO,CAEvB,KAGT,EAAO,KAAK,EAAI,CAChB,KAAK,OAAO,IAAI,EAAK,EAAO,CACrB,IAST,OAAO,SAAS,EAAgB,EAAgB,EAAyB,CACvE,MAAO,GAAG,EAAO,GAAG,EAAO,GAAG,IAIhC,OAAc,CACZ,KAAK,OAAO,OAAO,GC7CV,EAAb,KAAgC,CAC9B,GACA,OACA,cACA,aACA,SACA,oBACA,cACA,oBAEA,YAAY,EAAkC,CAC5C,KAAK,GAAK,EAAO,GACjB,KAAK,OAAS,EAAO,OACrB,KAAK,cAAgB,EAAO,eAAiB,KAC7C,KAAK,aAAe,EAAO,cAAA,IAC3B,KAAK,SAAW,EAAO,UAAY,CAAE,MAAO,EAAc,CAC1D,KAAK,oBAAsB,EAAO,sBAAwB,UAAa,EAAE,GACzE,KAAK,cAAgB,EAAO,eAAiB,IAAI,EACjD,KAAK,oBAAsB,EAAmB,WAAW,EAAO,GAAG,CAWrE,2BACE,EACkD,CAClD,OAAO,EAAK,EAAmB,WAAW,EAAG,CAAG,KAAK,oBAUvD,MAAM,OACJ,EACuB,CACvB,GAAM,CAAE,OAAM,aAAY,UAAS,MAAO,EASpC,EAAU,MAAM,EAAoB,EAAY,KAAK,aAAa,CAGlE,EAAe,MAAM,EAAuB,KAAK,GAAI,EAAS,KAAK,cAAc,CACjF,EAAsB,KAAK,2BAA2B,EAAG,CACzD,EAAiB,IAAI,IACrB,EAA4B,EAAE,CAEpC,IAAK,IAAM,KAAU,EAAS,CAC5
B,IAAM,EAAY,EAAa,IAAI,EAAO,CAC1C,GAAI,CAAC,EAAW,CAId,KAAK,QAAQ,KACX,CAAE,SAAQ,OAAQ,EAAK,GAAI,CAC3B,+CACD,CACD,SAKF,IAAI,EAAQ,EAAe,IAAI,EAAO,CACtC,GAAI,CAAC,EAAO,CACV,GAAI,CACF,EAAQ,MAAM,KAAK,oBAAoB,EAAO,OACvC,EAAK,CACZ,KAAK,QAAQ,KACX,CAAE,MAAK,SAAQ,OAAQ,EAAK,GAAI,CAChC,gEACD,CACD,EAAQ,EAAE,CAEZ,EAAe,IAAI,EAAQ,EAAM,CAOnC,IAAM,EAAa,EACb,EAAgB,EAEhB,EAAkB,EAAuB,EAAY,EAAM,CACjE,GAAI,EAAgB,SAAW,EAAG,CAChC,KAAK,QAAQ,MACX,CAAE,SAAQ,OAAQ,EAAK,GAAI,CAC3B,gEACD,CACD,SAGF,IAAM,EAAW,EAAW,OAAO,CACjC,QAAS,EACT,YACA,OAAQ,EAAU,OACnB,CAAC,CAEI,EAAS,MAAM,KAAK,mBAAmB,CAC3C,KAAM,EACN,YACA,QAAS,EACT,kBACA,WACA,sBACA,KACD,CAAC,CAEE,GAAQ,EAAgB,KAAK,EAAO,CAG1C,MAAO,CAAE,kBAAiB,CAa5B,MAAc,mBAAmB,EAQN,CACzB,GAAM,CACJ,OACA,YACA,UACA,kBACA,WACA,sBACA,MACE,EAEE,EAAW,EAAK,QAAU,EAAK,QAAQ,EAAQ,CAAG,KAIpD,EAA8B,UAC9B,EAEJ,GAAI,IAAa,KAAM,CACrB,IAAM,EAAW,MAAM,EAAoB,QAAQ,CACjD,gBAAiB,EAAU,GAC3B,KAAM,EAAK,GACX,WACA,OAAQ,CAAE,OAAQ,GAAM,CACzB,CAAC,CACF,GAAI,EAAU,CACZ,IAAM,EAAU,MAAM,EAAoB,OACxC,CAAE,GAAI,EAAS,GAAI,gBAAiB,EAAU,GAAI,CAClD,CACE,UACA,UAAW,IAAI,KAChB,CACF,CACI,GAYH,EAAiB,EAAQ,GACzB,EAAO,WAHP,GAAiB,MAPM,EAAoB,OAAO,CAChD,gBAAiB,EAAU,GAC3B,KAAM,EAAK,GACX,UACA,SAAU,EAAE,CACZ,WACD,CAAC,EACwB,QAa5B,GAAiB,MAPM,EAAoB,OAAO,CAChD,gBAAiB,EAAU,GAC3B,KAAM,EAAK,GACX,UACA,SAAU,EAAE,CACZ,WACD,CAAC,EACwB,QAU5B,GAAiB,MAPM,EAAoB,OAAO,CAChD,gBAAiB,EAAU,GAC3B,KAAM,EAAK,GACX,UACA,SAAU,EAAE,CACZ,SAAU,KACX,CAAC,EACwB,GAO5B,IAAM,EAAc,MAAM,EAAoB,MAAM,CAClD,gBAAiB,EAAU,GAC3B,OAAQ,CAAE,OAAQ,GAAM,CACxB,WAAY,CAAE,OAAQ,GAAM,CAC7B,CAAC,CAEI,EAA2C,EAAE,CACnD,IAAK,IAAM,KAAa,EAAiB,CACvC,IAAM,EAAa,EAAK,SAAS,GAC3B,EAAU,KAAK,SAAS,GAC9B,GAAI,CAAC,GAAW,CAAC,EAAY,CAI3B,KAAK,QAAQ,KACX,CAAE,OAAQ,EAAK,GAAI,YAAW,CAC9B,0DACD,CACD,SAOF,GAAI,IAAc,SAAW,CAAC,EAAU,MAAO,CAC7C,EAAgB,KAAK,CACnB,QAAS,EACT,MAAO,qBACP,GAAI,IAAI,MAAM,CAAC,aAAa,CAC7B,CAAC,CACF,SAMF,GAAI,EAAW,SAAU,CACvB,IAAM,EAAM,EAAc,SAAS,EAAU,GAAI,EAAK,GAAI,EAAU,CAEpE,GAAI,CADY,KAAK,cAAc,WAAW,EAAK,EAAW,SAClD,CAAE,CACZ,EAAgB,KAAK,CACnB,QAAS,EACT,MAAO,YACP,GAAI,IAAI,MAAM,CAAC,aAAa,CAC5
B,OAAQ,YAAY,EAAW,SAAS,WACzC,CAAC,CACF,UAIJ,GAAI,CACF,IAAM,EAAS,MAAM,EAAQ,KAAK,CAChC,iBACA,OACA,OAAQ,EAAK,GACb,YACA,WACA,cACA,KACD,CAAC,CACF,EAAgB,KAAK,EAAO,OACrB,EAAK,CAGZ,KAAK,QAAQ,MACX,CAAE,MAAK,OAAQ,EAAK,GAAI,YAAW,CACnC,qDACD,CACD,EAAgB,KAAK,CACnB,QAAS,EACT,MAAO,SACP,GAAI,IAAI,MAAM,CAAC,aAAa,CAC5B,OAAQ,aAAe,MAAQ,EAAI,QAAU,gBAC9C,CAAC,EA4BN,OAPI,EAAgB,OAAS,GAC3B,MAAM,EAAoB,OACxB,CAAE,GAAI,EAAgB,gBAAiB,EAAU,GAAI,CACrD,CAAE,SAAU,EAAiB,UAAW,IAAI,KAAQ,CACrD,CAGI,EAOT,MAAM,YAAY,EAAiC,CACjD,OAAO,KAAK,oBAAoB,MAAM,CACpC,gBAAiB,EACjB,OAAQ,CAAE,OAAQ,GAAM,CACxB,WAAY,CAAE,OAAQ,GAAM,CAC7B,CAAC,CAQJ,MAAM,SACJ,EACA,EAC8E,CAW9E,IAAM,EAAM,IAAI,KACV,EAAU,MAAM,KAAK,oBAAoB,OAC7C,CACE,GAAI,EACJ,gBAAiB,EACjB,OAAQ,CAAE,OAAQ,GAAM,CACzB,CACD,CAAE,OAAQ,EAAK,UAAW,EAAK,CAChC,CACD,GAAI,EAEF,MAAO,CACL,GAAI,EAAQ,GACZ,KAAM,EAAQ,KACd,OAAQ,EAAQ,QAAU,EAC1B,QAAS,GACV,CAGH,IAAM,EAAW,MAAM,KAAK,oBAAoB,QAAQ,CACtD,GAAI,EACJ,gBAAiB,EAClB,CAAC,CAEF,MADI,CAAC,GAAY,CAAC,EAAS,OAAe,KACnC,CACL,GAAI,EAAS,GACb,KAAM,EAAS,KACf,OAAQ,EAAS,OACjB,QAAS,GACV,CAIH,MAAM,YAAY,EAAiC,CACjD,IAAM,EAAM,IAAI,KAKhB,OAAO,MAJY,KAAK,oBAAoB,WAC1C,CAAE,gBAAiB,EAAQ,OAAQ,CAAE,OAAQ,GAAM,CAAE,CACrD,CAAE,OAAQ,EAAK,UAAW,EAAK,CAChC,EACW,OAOd,MAAM,QACJ,EACA,EACgE,CAChE,IAAM,EAAM,IAAI,KACV,EAAU,MAAM,KAAK,oBAAoB,OAC7C,CACE,GAAI,EACJ,gBAAiB,EACjB,WAAY,CAAE,OAAQ,GAAM,CAC7B,CACD,CAAE,WAAY,EAAK,UAAW,EAAK,CACpC,CACD,GAAI,EACF,MAAO,CAAE,GAAI,EAAQ,GAAI,KAAM,EAAQ,KAAM,QAAS,GAAM,CAE9D,IAAM,EAAW,MAAM,KAAK,oBAAoB,QAAQ,CACtD,GAAI,EACJ,gBAAiB,EAClB,CAAC,CAEF,MADI,CAAC,GAAY,CAAC,EAAS,WAAmB,KACvC,CAAE,GAAI,EAAS,GAAI,KAAM,EAAS,KAAM,QAAS,GAAO,GAQnE,MAAM,EAAoB,OAAO,IAAI,2CAA2C,CAMhF,SAAgB,EAA4B,EAAyC,CACnF,IAAM,EAAO,WACT,IAAW,KACb,OAAO,EAAK,GAEZ,EAAK,GAAqB,EAI9B,SAAgB,GAAyD,CACvE,OAAQ,WAA0B,IAAsB,KAkB1D,eAAsB,EACpB,EACuB,CACvB,IAAM,EAAS,GAA6B,CAC5C,GAAI,CAAC,EACH,MAAU,MACR,qIAED,CAEH,OAAO,EAAO,OAAO,EAAQ"}
@@ -0,0 +1,39 @@
1
+ import { _ as RenderedEmail, a as NOTIFICATION_TOPICS, c as NotificationRealtimePayload, d as NotifyOptions, f as NotifyResult, g as RenderedContent, h as RenderArgs, i as DEFAULT_RECIPIENT_CAP, l as NotificationTopic, m as Recipients, n as ChannelDeliveryRecord, o as NotificationChannelId, p as RecipientContext, r as ChannelDeliveryState, s as NotificationPreferences, t as ChannelDefinition, u as NotificationType, v as RenderedInApp } from "./types-B8qKgKMj.mjs";
2
+
3
+ //#region src/define.d.ts
4
+ /**
5
+ * Build a `NotificationType<TPayload>` handle.
6
+ *
7
+ * @example
8
+ * ```ts
9
+ * import { defineNotificationType } from '@murumets-ee/notifications'
10
+ *
11
+ * export const TicketingMessageCreated = defineNotificationType({
12
+ * id: 'ticketing.message.created',
13
+ * channels: {
14
+ * inApp: { default: true },
15
+ * email: { default: true, throttle: '5/15m' },
16
+ * },
17
+ * groupBy: ({ ticketId }) => `ticket:${ticketId}`,
18
+ * render: ({ payload }) => ({
19
+ * inApp: { title: `New reply on #${payload.ticketId}`, href: `/admin/ticketing/${payload.ticketId}` },
20
+ * email: { subject: 'New reply', html: '...', text: '...' },
21
+ * }),
22
+ * })
23
+ * ```
24
+ */
25
+ declare function defineNotificationType<TPayload extends Record<string, unknown> = Record<string, unknown>>(opts: NotificationType<TPayload>): NotificationType<TPayload>;
26
+ /**
27
+ * Register a type. Idempotent: re-calling with the same id replaces the
28
+ * entry (so HMR / test re-init works cleanly).
29
+ */
30
+ declare function registerNotificationType(type: NotificationType): void;
31
+ /** Lookup. Returns `undefined` for unknown ids — `notify(...)` throws. */
32
+ declare function getNotificationType(id: string): NotificationType | undefined;
33
+ /** Snapshot of all registered types — used by the preferences UI deriver. */
34
+ declare function getAllNotificationTypes(): NotificationType[];
35
+ /** @internal — for tests. */
36
+ declare function clearNotificationTypeRegistry(): void;
37
+ //#endregion
38
+ export { type ChannelDefinition, type ChannelDeliveryRecord, type ChannelDeliveryState, DEFAULT_RECIPIENT_CAP, NOTIFICATION_TOPICS, type NotificationChannelId, type NotificationPreferences, type NotificationRealtimePayload, type NotificationTopic, type NotificationType, type NotifyOptions, type NotifyResult, type RecipientContext, type Recipients, type RenderArgs, type RenderedContent, type RenderedEmail, type RenderedInApp, clearNotificationTypeRegistry, defineNotificationType, getAllNotificationTypes, getNotificationType, registerNotificationType };
39
+ //# sourceMappingURL=define.d.mts.map
@@ -0,0 +1 @@
1
+ {"version":3,"file":"define.d.mts","names":[],"sources":["../src/define.ts"],"mappings":";;;;;;;;;;;;AA2HA;;;;;AAKA;;;;;AAKA;;iBAhEgB,sBAAA,kBACG,MAAA,oBAA0B,MAAA,kBAAA,CAC3C,IAAA,EAAM,gBAAA,CAAiB,QAAA,IAAY,gBAAA,CAAiB,QAAA;;;AAmEtD;;iBAfgB,wBAAA,CAAyB,IAAA,EAAM,gBAAA;;iBAK/B,mBAAA,CAAoB,EAAA,WAAa,gBAAA;;iBAKjC,uBAAA,CAAA,GAA2B,gBAAA;;iBAK3B,6BAAA,CAAA"}
@@ -0,0 +1,2 @@
1
+ import{n as e,t}from"./types-Dy_AGX6X.mjs";const n=/^[a-z][a-z0-9]*(\.[a-z][a-z0-9]*)+$/;function r(e){if(!n.test(e.id))throw Error(`Invalid notification type id "${e.id}": must be a dot-namespaced lowercase identifier (e.g. "ticketing.message.created"). Pattern: ${n.source}`);if(e.id.length>64)throw Error(`Invalid notification type id "${e.id}": ${e.id.length} chars exceeds the 64-char cap. Long ids usually mean payload data leaked into the namespace — keep payload in the payload object.`);if(!e.channels||Object.keys(e.channels).length===0)throw Error(`defineNotificationType("${e.id}"): channels record must declare at least one supported channel (e.g. inApp, email).`);return Object.freeze({...e})}const i=Symbol.for(`@murumets-ee/notifications:type-registry`);function a(){let e=globalThis;return e[i]||(e[i]=new Map),e[i]}function o(e){a().set(e.id,e)}function s(e){return a().get(e)}function c(){return Array.from(a().values())}function l(){a().clear()}export{t as DEFAULT_RECIPIENT_CAP,e as NOTIFICATION_TOPICS,l as clearNotificationTypeRegistry,r as defineNotificationType,c as getAllNotificationTypes,s as getNotificationType,o as registerNotificationType};
2
+ //# sourceMappingURL=define.mjs.map
@@ -0,0 +1 @@
1
+ {"version":3,"file":"define.mjs","names":[],"sources":["../src/define.ts"],"sourcesContent":["/**\n * `defineNotificationType` — the type-safe builder + global registry.\n *\n * Registry survives Turbopack module duplication via `globalThis +\n * Symbol.for`, mirroring queue's `job-metadata` and `worker-slot`.\n *\n * Plugins ship their types via `Plugin.shared.notifications` (declared in\n * `@murumets-ee/core`). The notifications plugin's `init` walks all\n * plugins and calls `registerNotificationType()` for each — keeping the\n * registry the single source of truth that both `notify(...)` and the\n * auto-derived preferences settings page read from.\n */\n\nimport type { NotificationType } from './types.js'\n\n// Re-export the public type-level surface so consumers importing from the\n// `@murumets-ee/notifications/define` subpath get everything they need to\n// build a typed `NotificationType` without touching the main entry (which\n// carries `import 'server-only'`). This subpath is the canonical entry\n// for plugin type-registration files that load at config-resolve time\n// through jiti / tsx — neither applies the `react-server` export\n// condition that resolves `server-only` to its empty stub.\nexport type {\n ChannelDefinition,\n ChannelDeliveryRecord,\n ChannelDeliveryState,\n NotificationChannelId,\n NotificationPreferences,\n NotificationRealtimePayload,\n NotificationTopic,\n NotificationType,\n NotifyOptions,\n NotifyResult,\n RecipientContext,\n Recipients,\n RenderArgs,\n RenderedContent,\n RenderedEmail,\n RenderedInApp,\n} from './types.js'\nexport { DEFAULT_RECIPIENT_CAP, NOTIFICATION_TOPICS } from './types.js'\n\nconst ID_RE = /^[a-z][a-z0-9]*(\\.[a-z][a-z0-9]*)+$/\n// Bounds the id well below the 128-char `type` column. 
A long id is a\n// likely sign of misuse (encoding payload data into the type id) and would\n// also strain the auto-derived preferences UI.\nconst ID_MAX_LENGTH = 64\n\n/**\n * Build a `NotificationType<TPayload>` handle.\n *\n * @example\n * ```ts\n * import { defineNotificationType } from '@murumets-ee/notifications'\n *\n * export const TicketingMessageCreated = defineNotificationType({\n * id: 'ticketing.message.created',\n * channels: {\n * inApp: { default: true },\n * email: { default: true, throttle: '5/15m' },\n * },\n * groupBy: ({ ticketId }) => `ticket:${ticketId}`,\n * render: ({ payload }) => ({\n * inApp: { title: `New reply on #${payload.ticketId}`, href: `/admin/ticketing/${payload.ticketId}` },\n * email: { subject: 'New reply', html: '...', text: '...' },\n * }),\n * })\n * ```\n */\nexport function defineNotificationType<\n TPayload extends Record<string, unknown> = Record<string, unknown>,\n>(opts: NotificationType<TPayload>): NotificationType<TPayload> {\n if (!ID_RE.test(opts.id)) {\n throw new Error(\n `Invalid notification type id \"${opts.id}\": must be a dot-namespaced ` +\n `lowercase identifier (e.g. \"ticketing.message.created\"). ` +\n `Pattern: ${ID_RE.source}`,\n )\n }\n if (opts.id.length > ID_MAX_LENGTH) {\n throw new Error(\n `Invalid notification type id \"${opts.id}\": ${opts.id.length} chars ` +\n `exceeds the ${ID_MAX_LENGTH}-char cap. Long ids usually mean payload ` +\n `data leaked into the namespace — keep payload in the payload object.`,\n )\n }\n if (!opts.channels || Object.keys(opts.channels).length === 0) {\n throw new Error(\n `defineNotificationType(\"${opts.id}\"): channels record must declare at ` +\n `least one supported channel (e.g. 
inApp, email).`,\n )\n }\n // Frozen so a downstream consumer can't accidentally mutate the type\n // and have the change leak into the registry on next register pass —\n // matches queue's defineJob pattern.\n return Object.freeze({ ...opts })\n}\n\n// ---------------------------------------------------------------------------\n// Process-global registry\n// ---------------------------------------------------------------------------\n\nconst REGISTRY_KEY = Symbol.for('@murumets-ee/notifications:type-registry')\n\ninterface RegistryHost {\n // Stored as the open-shape `NotificationType` so heterogeneous payload\n // types coexist in one map. Each type narrows back at lookup via the\n // caller's own type parameter.\n [REGISTRY_KEY]?: Map<string, NotificationType>\n}\n\nfunction getRegistry(): Map<string, NotificationType> {\n const host = globalThis as RegistryHost\n if (!host[REGISTRY_KEY]) {\n host[REGISTRY_KEY] = new Map()\n }\n return host[REGISTRY_KEY]\n}\n\n/**\n * Register a type. Idempotent: re-calling with the same id replaces the\n * entry (so HMR / test re-init works cleanly).\n */\nexport function registerNotificationType(type: NotificationType): void {\n getRegistry().set(type.id, type)\n}\n\n/** Lookup. Returns `undefined` for unknown ids — `notify(...)` throws. */\nexport function getNotificationType(id: string): NotificationType | undefined {\n return getRegistry().get(id)\n}\n\n/** Snapshot of all registered types — used by the preferences UI deriver. */\nexport function getAllNotificationTypes(): NotificationType[] {\n return Array.from(getRegistry().values())\n}\n\n/** @internal — for tests. 
*/\nexport function clearNotificationTypeRegistry(): void {\n getRegistry().clear()\n}\n"],"mappings":"2CA0CA,MAAM,EAAQ,sCA2Bd,SAAgB,EAEd,EAA8D,CAC9D,GAAI,CAAC,EAAM,KAAK,EAAK,GAAG,CACtB,MAAU,MACR,iCAAiC,EAAK,GAAG,gGAE3B,EAAM,SACrB,CAEH,GAAI,EAAK,GAAG,OAAS,GACnB,MAAU,MACR,iCAAiC,EAAK,GAAG,KAAK,EAAK,GAAG,OAAO,oIAG9D,CAEH,GAAI,CAAC,EAAK,UAAY,OAAO,KAAK,EAAK,SAAS,CAAC,SAAW,EAC1D,MAAU,MACR,2BAA2B,EAAK,GAAG,sFAEpC,CAKH,OAAO,OAAO,OAAO,CAAE,GAAG,EAAM,CAAC,CAOnC,MAAM,EAAe,OAAO,IAAI,2CAA2C,CAS3E,SAAS,GAA6C,CACpD,IAAM,EAAO,WAIb,OAHK,EAAK,KACR,EAAK,GAAgB,IAAI,KAEpB,EAAK,GAOd,SAAgB,EAAyB,EAA8B,CACrE,GAAa,CAAC,IAAI,EAAK,GAAI,EAAK,CAIlC,SAAgB,EAAoB,EAA0C,CAC5E,OAAO,GAAa,CAAC,IAAI,EAAG,CAI9B,SAAgB,GAA8C,CAC5D,OAAO,MAAM,KAAK,GAAa,CAAC,QAAQ,CAAC,CAI3C,SAAgB,GAAsC,CACpD,GAAa,CAAC,OAAO"}
@@ -0,0 +1,2 @@
1
+ import{getNotificationType as e}from"./define.mjs";import{s as t,t as n}from"./recipients-DDN8AJzX.mjs";import{defineJob as r}from"@murumets-ee/queue/client";import{z as i}from"zod";var a=Object.defineProperty,o=((e,t)=>{let n={};for(var r in e)a(n,r,{get:e[r],enumerable:!0});return t||a(n,Symbol.toStringTag,{value:`Module`}),n})({createEmailChannel:()=>l,createSendEmailHandler:()=>u,notificationsSendEmailJob:()=>c,runSendEmailJob:()=>d,sendEmailJobPayloadSchema:()=>s});const s=i.object({notificationId:i.string().uuid(),recipientUserId:i.string().min(1).max(255)}),c=r({name:`notifications:send-email`,description:`Render and send a transactional notification email.`,schema:s,defaultRetries:3});function l(e){return{id:`email`,async send(t){let n=new Date().toISOString();if(!t.recipient.email)return{channel:`email`,state:`skipped_unverified`,at:n};try{return await e.queue.enqueue(c,{notificationId:t.notificationId,recipientUserId:t.recipient.id},t.tx?{tx:t.tx}:void 0),{channel:`email`,state:`queued`,at:n}}catch(r){return e.logger?.error({err:r,notificationId:t.notificationId,typeId:t.typeId},`email channel: failed to enqueue send job`),{channel:`email`,state:`failed`,at:n,detail:`enqueue: ${r instanceof Error?r.message:`unknown error`}`.slice(0,256)}}}}}function u(r){let i=t.makeClient(r.db),a=r.defaultLocale??`en`,o={fetchRow:(e,t)=>i.findOne({id:e,recipientUserId:t}),fetchRecipient:async e=>(await n(r.db,[e],a)).get(e)??null,lookupType:e,sendMail:e=>r.mail.send(e),flipChannelState:(e,t,n)=>f(i,e,t,n),from:r.from,logger:r.logger};return e=>d(o,e)}async function d(e,t){let{notificationId:n,recipientUserId:r}=t.payload,i=await e.fetchRow(n,r);if(!i){e.logger?.warn({notificationId:n,recipientUserId:r},`send-email: notification row not found, skipping (likely archived/deleted)`);return}let a=e.lookupType(i.type);if(!a){e.logger?.warn({notificationId:n,typeId:i.type},`send-email: notification type not registered, skipping`);return}let o=await 
e.fetchRecipient(r);if(!o||!o.email){e.logger?.info({notificationId:n,recipientUserId:r},`send-email: recipient has no verified email at run time, marking skipped`),await e.flipChannelState(i.id,r,{channel:`email`,state:`skipped_unverified`,at:new Date().toISOString()});return}let s=a.render({payload:i.payload,recipient:o,locale:o.locale}).email;if(!s){e.logger?.error({notificationId:n,typeId:i.type},`send-email: type render produced no email content despite email channel enabled`),await e.flipChannelState(i.id,r,{channel:`email`,state:`failed`,at:new Date().toISOString(),detail:`render produced no email content`});return}let c={from:e.from,to:o.email,subject:s.subject,html:s.html,text:s.text};try{await e.sendMail(c)}catch(a){let o=a instanceof Error?a.message:`unknown error`;try{await e.flipChannelState(i.id,r,{channel:`email`,state:`failed`,at:new Date().toISOString(),detail:o.slice(0,256)})}catch(t){e.logger?.error({err:t,notificationId:n,typeId:i.type},`send-email: failed-state flip threw — preserving original mail error`)}throw e.logger?.error({err:a,notificationId:n,typeId:i.type,attempts:t.attempts},`send-email: mail.send failed, will be retried by queue`),a}try{await e.flipChannelState(i.id,r,{channel:`email`,state:`delivered`,at:new Date().toISOString()})}catch(t){e.logger?.error({err:t,notificationId:n,typeId:i.type},`send-email: delivered-state flip threw — mail was sent, row state will recover on next notify() pass`)}}async function f(e,t,n,r){let i=await e.findOne({id:t,recipientUserId:n});if(!i)return;let a=[],o=!1;for(let e of i.channels)e.channel===r.channel?(a.push(r),o=!0):a.push(e);o||a.push(r),await e.update({id:t,recipientUserId:n},{channels:a,updatedAt:new Date})}export{d as a,c as i,u as n,s as o,o as r,l as t};
2
+ //# sourceMappingURL=email-DgfO6gZR.mjs.map
@@ -0,0 +1 @@
1
+ {"version":3,"file":"email-DgfO6gZR.mjs","names":[],"sources":["../src/channels/email.ts"],"sourcesContent":["/**\n * Email channel — enqueues a `notifications:send-email` queue job that\n * performs the actual `mail.send`.\n *\n * Why a queue job and not a synchronous send:\n * - Email I/O is slow (100-500ms typical) and the publisher's request\n * must not wait on it.\n * - Mail providers fail transiently (rate limits, timeouts); the queue's\n * retry policy is the right place to handle that.\n * - The notify pipeline already records `queued` on the notification row\n * via the channel's return record, so the bell drawer can show \"email\n * pending\" until the job lands.\n *\n * Channel state lifecycle on the row:\n * - `queued` — channel.send() enqueued the job.\n * - `delivered` — handler flipped after `mail.send` returned.\n * - `failed` — handler flipped after `mail.send` threw (terminal +\n * non-terminal alike — the operator-facing record always\n * reflects the last attempt; queue retry semantics drive\n * whether another attempt will follow).\n * - `skipped_unverified` — recipient has no verified email at handler-run\n * time. Pre-`notify` skip is in `client.ts`; this branch\n * covers the race where the email got unverified between\n * enqueue and run.\n *\n * The handler is idempotent w.r.t. the notification row: it reads payload +\n * recipient at run time (NOT at enqueue time), so a row that was\n * group-collapse-updated between enqueue and run sends the LATEST content,\n * not a stale snapshot. 
This matches the in-app channel's \"fan-out the\n * latest state\" behaviour.\n */\n\nimport type { Logger } from '@murumets-ee/core'\nimport type { MailMessage, MailProvider, SendResult } from '@murumets-ee/mail'\nimport { defineJob, type JobDefinition } from '@murumets-ee/queue/client'\nimport type { PostgresJsDatabase } from 'drizzle-orm/postgres-js'\nimport { z } from 'zod'\nimport { getNotificationType } from '../define.js'\nimport { notificationsTable } from '../notifications-table.js'\nimport { fetchRecipientContexts } from '../recipients.js'\nimport type { ChannelDeliveryRecord, RenderedEmail } from '../types.js'\nimport type { ChannelSendContext, NotificationChannel } from './types.js'\n\n// ---------------------------------------------------------------------------\n// Shared queue job definition\n// ---------------------------------------------------------------------------\n\n/**\n * Payload schema for the `notifications:send-email` job.\n *\n * Deliberately minimal — only the row id + the recipient. Render content\n * is re-derived from the row's `payload` + the type registry at handler\n * run time, so a group-collapse update between enqueue and run is reflected\n * in the sent email.\n *\n * `recipientUserId` is repeated alongside `notificationId` so the handler's\n * row read can scope to `(id, recipientUserId)` without needing a second\n * lookup — defense-in-depth against a misbehaving plugin queueing a job\n * for a row owned by a different user.\n */\nexport const sendEmailJobPayloadSchema = z.object({\n notificationId: z.string().uuid(),\n // Cap at varchar(255) — matches the auth `user.id` column shape, refuses\n // payload-poisoning attempts at the schema boundary.\n recipientUserId: z.string().min(1).max(255),\n})\n\nexport type SendEmailJobPayload = z.infer<typeof sendEmailJobPayloadSchema>\n\n/**\n * Queue job definition. 
Plugin wiring registers a handler against this\n * definition at init time when the mail provider is available.\n */\nexport const notificationsSendEmailJob: JobDefinition<SendEmailJobPayload> = defineJob({\n // outbox-allow-no-idempotency: a follow-up that derives a key from\n // `(notificationId, channel)` is straightforward now that PR B of\n // PLAN-OUTBOX shipped the contract — but not strictly required: the\n // duplicate-email risk is already bounded by `defaultRetries: 3` plus\n // the dedupe-window alerter, and tx-threading (PLAN-OUTBOX Phase 5 /\n // PR E, this file's `ctx.tx` forward) closes the dominant duplicate\n // source by making the enqueue atomic with the row write.\n name: 'notifications:send-email',\n description: 'Render and send a transactional notification email.',\n schema: sendEmailJobPayloadSchema,\n defaultRetries: 3,\n})\n\n// ---------------------------------------------------------------------------\n// Channel — synchronously enqueues the job\n// ---------------------------------------------------------------------------\n\n/**\n * Minimal subset of `QueueClient.enqueue` the email channel relies on.\n * Defining it as a shape (instead of importing the whole class) lets tests\n * inject a stub without spinning up the queue table.\n *\n * The third `options` argument is structurally-typed against the bits of\n * `EnqueueOptions` this channel actually forwards — currently just `tx`\n * for outbox-style atomic enqueue (PLAN-OUTBOX Phase 5 / PR E). The real\n * `QueueClient.enqueue` accepts a wider `EnqueueOptions`, which is\n * structurally assignable to this narrower shape.\n */\nexport interface QueueEnqueueShape {\n enqueue(\n job: JobDefinition<SendEmailJobPayload>,\n payload: SendEmailJobPayload,\n options?: { tx?: PostgresJsDatabase | undefined },\n ): Promise<string>\n}\n\nexport interface EmailChannelConfig {\n /** Queue client used to enqueue the send job. 
*/\n queue: QueueEnqueueShape\n /** Optional logger; child-logger from the plugin is preferred. */\n logger?: Logger | undefined\n}\n\n/** Match the handler-side cap so neither side writes a multi-KB `detail`. */\nconst DETAIL_MAX = 256\n\n/**\n * Build the `email` channel. Returns a `NotificationChannel` whose `send`\n * enqueues a queue job — never blocks on mail I/O.\n */\nexport function createEmailChannel(config: EmailChannelConfig): NotificationChannel {\n return {\n id: 'email',\n async send(ctx: ChannelSendContext): Promise<ChannelDeliveryRecord> {\n const at = new Date().toISOString()\n\n // The notify pipeline runs the unverified-email check before us. We\n // assert it here as documentation-as-code: any future caller invoking\n // the channel from a path that DOESN'T run the pipeline's skip-empty-\n // email branch first will trip this and skip the send cleanly instead\n // of enqueueing a doomed job.\n if (!ctx.recipient.email) {\n return {\n channel: 'email',\n state: 'skipped_unverified',\n at,\n }\n }\n\n try {\n // Forward the publisher's tx (if any) so the send-email job INSERT\n // participates in the same Drizzle transaction as the notification\n // row write. PR A of PLAN-OUTBOX (`enqueue({ tx })`) is the\n // queue-side primitive; PLAN-OUTBOX Phase 5 / PR E (the consumer\n // retrofit shipping this code) closes the v1 \"ghost notifications\n // on rollback\" gap.\n await config.queue.enqueue(\n notificationsSendEmailJob,\n {\n notificationId: ctx.notificationId,\n recipientUserId: ctx.recipient.id,\n },\n ctx.tx ? 
{ tx: ctx.tx } : undefined,\n )\n return { channel: 'email', state: 'queued', at }\n } catch (err) {\n // Enqueue itself failed (DB write error, schema validation, etc).\n // Log + record `failed` on the row; the publisher's call still\n // succeeds because the in-app channel may have delivered.\n config.logger?.error(\n { err, notificationId: ctx.notificationId, typeId: ctx.typeId },\n 'email channel: failed to enqueue send job',\n )\n const reason = err instanceof Error ? err.message : 'unknown error'\n return {\n channel: 'email',\n state: 'failed',\n at,\n detail: `enqueue: ${reason}`.slice(0, DETAIL_MAX),\n }\n }\n },\n }\n}\n\n// ---------------------------------------------------------------------------\n// Job handler — performs mail.send, flips channel state on the row\n// ---------------------------------------------------------------------------\n\nexport interface SendEmailHandlerConfig {\n db: PostgresJsDatabase\n /** Mail provider — usually `getMailConfig().provider`. */\n mail: MailProvider\n /** From-address. Plugin resolves this from `mail().defaultFrom` or its own opt. */\n from: string\n /** Default locale fallback when the recipient has no per-user locale. */\n defaultLocale?: string\n logger?: Logger | undefined\n}\n\n/**\n * Subset of `JobContext` the send-email handler reads. Declared structurally\n * so we don't depend on a `JobHandler` import that the queue package doesn't\n * re-export from any subpath today. Function-param contravariance lets a\n * `(SendEmailJobContext) => Promise<void>` be assigned to the broader\n * `JobHandler<SendEmailJobPayload>` shape `registerJob` expects.\n */\nexport interface SendEmailJobContext {\n payload: SendEmailJobPayload\n /** Number of attempts so far (incl. the current one). Surfaced in failure logs. */\n attempts: number\n}\n\nexport type SendEmailHandler = (job: SendEmailJobContext) => Promise<void>\n\n/**\n * Build the queue-handler closure. 
Plugin wiring calls\n * `registerJob(notificationsSendEmailJob, createSendEmailHandler({...}))`.\n *\n * The closure is a thin adapter: it builds DB-backed deps and forwards to\n * `runSendEmailJob(deps, ...)`, which is the testable orchestration core.\n */\nexport function createSendEmailHandler(\n config: SendEmailHandlerConfig,\n): SendEmailHandler {\n const client = notificationsTable.makeClient(config.db)\n const defaultLocale = config.defaultLocale ?? 'en'\n\n const deps: SendEmailJobDeps = {\n fetchRow: (id, recipientUserId) => client.findOne({ id, recipientUserId }),\n fetchRecipient: async (userId) => {\n const map = await fetchRecipientContexts(config.db, [userId], defaultLocale)\n return map.get(userId) ?? null\n },\n lookupType: getNotificationType,\n sendMail: (msg) => config.mail.send(msg),\n flipChannelState: (id, recipientUserId, entry) =>\n flipChannelStateOnRow(client, id, recipientUserId, entry),\n from: config.from,\n logger: config.logger,\n }\n\n return (job) => runSendEmailJob(deps, job)\n}\n\n// ---------------------------------------------------------------------------\n// Testable orchestration core\n// ---------------------------------------------------------------------------\n\n/**\n * Subset of `notificationsTable.makeClient(...).findOne` return shape that\n * `runSendEmailJob` reads. Re-declared structurally so tests don't need a\n * real DB to construct one.\n */\nexport interface NotificationRowSnapshot {\n id: string\n type: string\n payload: Record<string, unknown>\n channels: ChannelDeliveryRecord[]\n}\n\n/**\n * Recipient subset returned by `fetchRecipient`. Mirrors `RecipientContext`\n * but spelled out so tests don't need to import the full type for stubs.\n */\nexport interface SendEmailRecipient {\n id: string\n name?: string | undefined\n email?: string | undefined\n locale: string\n}\n\n/**\n * Lower-level deps used by `runSendEmailJob`. 
The shipping\n * `createSendEmailHandler` builds these from a `SendEmailHandlerConfig`;\n * unit tests inject their own stubs to drive each branch without touching\n * the DB or mail provider.\n */\nexport interface SendEmailJobDeps {\n fetchRow(id: string, recipientUserId: string): Promise<NotificationRowSnapshot | null>\n fetchRecipient(userId: string): Promise<SendEmailRecipient | null>\n lookupType(typeId: string): {\n render: (args: {\n payload: Record<string, unknown>\n recipient: SendEmailRecipient\n locale: string\n }) => { email?: RenderedEmail | undefined }\n } | undefined\n sendMail(message: MailMessage): Promise<SendResult>\n flipChannelState(\n id: string,\n recipientUserId: string,\n entry: ChannelDeliveryRecord,\n ): Promise<void>\n from: string\n logger?: Logger | undefined\n}\n\n/**\n * Run the send-email job logic against pluggable deps.\n *\n * Behaviour summary:\n * - Row missing → log + return (archived/deleted between enqueue and run).\n * - Type missing → log + return (deregistered between enqueue and run).\n * - Recipient missing or email unverified at run time → flip channel\n * state to `skipped_unverified`, return cleanly.\n * - Render produced no `email` slice → flip to `failed`, log, return\n * cleanly (no retry — render is deterministic).\n * - mail.send threw → flip to `failed` with detail, RE-THROW so the\n * queue retries until maxRetries.\n * - mail.send returned → flip to `delivered`.\n */\nexport async function runSendEmailJob(\n deps: SendEmailJobDeps,\n job: SendEmailJobContext,\n): Promise<void> {\n const { notificationId, recipientUserId } = job.payload\n\n const row = await deps.fetchRow(notificationId, recipientUserId)\n if (!row) {\n deps.logger?.warn(\n { notificationId, recipientUserId },\n 'send-email: notification row not found, skipping (likely archived/deleted)',\n )\n return\n }\n\n const type = deps.lookupType(row.type)\n if (!type) {\n deps.logger?.warn(\n { notificationId, typeId: row.type },\n 'send-email: 
notification type not registered, skipping',\n )\n return\n }\n\n const recipient = await deps.fetchRecipient(recipientUserId)\n if (!recipient || !recipient.email) {\n deps.logger?.info(\n { notificationId, recipientUserId },\n 'send-email: recipient has no verified email at run time, marking skipped',\n )\n await deps.flipChannelState(row.id, recipientUserId, {\n channel: 'email',\n state: 'skipped_unverified',\n at: new Date().toISOString(),\n })\n return\n }\n\n const rendered = type.render({\n payload: row.payload,\n recipient,\n locale: recipient.locale,\n })\n const email = rendered.email\n if (!email) {\n deps.logger?.error(\n { notificationId, typeId: row.type },\n 'send-email: type render produced no email content despite email channel enabled',\n )\n await deps.flipChannelState(row.id, recipientUserId, {\n channel: 'email',\n state: 'failed',\n at: new Date().toISOString(),\n detail: 'render produced no email content',\n })\n return\n }\n\n const message: MailMessage = {\n from: deps.from,\n to: recipient.email,\n subject: email.subject,\n html: email.html,\n text: email.text,\n }\n try {\n await deps.sendMail(message)\n } catch (err) {\n // Record the failure on the row so the bell/admin UI reflects what\n // happened on this attempt, then re-throw so the queue retries.\n // The flip itself MUST NOT propagate — if it threw and replaced `err`,\n // the queue's `last_error` would lose the actual mail-send root cause.\n const detail = err instanceof Error ? 
err.message : 'unknown error'\n try {\n await deps.flipChannelState(row.id, recipientUserId, {\n channel: 'email',\n state: 'failed',\n at: new Date().toISOString(),\n detail: detail.slice(0, DETAIL_MAX),\n })\n } catch (flipErr) {\n deps.logger?.error(\n { err: flipErr, notificationId, typeId: row.type },\n 'send-email: failed-state flip threw — preserving original mail error',\n )\n }\n deps.logger?.error(\n { err, notificationId, typeId: row.type, attempts: job.attempts },\n 'send-email: mail.send failed, will be retried by queue',\n )\n throw err\n }\n\n // Post-send bookkeeping. mail.send already SUCCEEDED — if the flip throws\n // for any transient reason (DB blip, lock timeout) and we let it propagate,\n // the queue would retry the entire job, sending a SECOND email. Log + drop\n // instead. The next notify() pass to the same row writes a fresh channels\n // array, so the row eventually self-heals; the queue audit log retains the\n // canonical \"delivered\" event regardless of what the row says.\n try {\n await deps.flipChannelState(row.id, recipientUserId, {\n channel: 'email',\n state: 'delivered',\n at: new Date().toISOString(),\n })\n } catch (flipErr) {\n deps.logger?.error(\n { err: flipErr, notificationId, typeId: row.type },\n 'send-email: delivered-state flip threw — mail was sent, row state will recover on next notify() pass',\n )\n }\n}\n\n// ---------------------------------------------------------------------------\n// Internal — read-modify-write the channels[email] entry\n// ---------------------------------------------------------------------------\n\n/**\n * Replace the `email` entry in the row's `channels` array with `entry`.\n *\n * Read-modify-write — there's a documented race with concurrent `notify()`\n * passes (see PLAN-NOTIFICATIONS §3.3 v1 limitation): a notify() pass for\n * the same row resets `channels` to `[..., {email: queued}]` and may stomp\n * a `delivered` written here. 
The notification log + queue audit log are\n * the canonical observability sources; this row field is best-effort.\n *\n * Scoped to (id, recipientUserId) as defense-in-depth.\n */\nasync function flipChannelStateOnRow(\n client: ReturnType<typeof notificationsTable.makeClient>,\n notificationId: string,\n recipientUserId: string,\n entry: ChannelDeliveryRecord,\n): Promise<void> {\n const row = await client.findOne({\n id: notificationId,\n recipientUserId,\n })\n if (!row) return\n\n const next: ChannelDeliveryRecord[] = []\n let replaced = false\n for (const existing of row.channels) {\n if (existing.channel === entry.channel) {\n next.push(entry)\n replaced = true\n } else {\n next.push(existing)\n }\n }\n if (!replaced) next.push(entry)\n\n await client.update(\n { id: notificationId, recipientUserId },\n { channels: next, updatedAt: new Date() },\n )\n}\n"],"mappings":"2dA4DA,MAAa,EAA4B,EAAE,OAAO,CAChD,eAAgB,EAAE,QAAQ,CAAC,MAAM,CAGjC,gBAAiB,EAAE,QAAQ,CAAC,IAAI,EAAE,CAAC,IAAI,IAAI,CAC5C,CAAC,CAQW,EAAgE,EAAU,CAQrF,KAAM,2BACN,YAAa,sDACb,OAAQ,EACR,eAAgB,EACjB,CAAC,CAuCF,SAAgB,EAAmB,EAAiD,CAClF,MAAO,CACL,GAAI,QACJ,MAAM,KAAK,EAAyD,CAClE,IAAM,EAAK,IAAI,MAAM,CAAC,aAAa,CAOnC,GAAI,CAAC,EAAI,UAAU,MACjB,MAAO,CACL,QAAS,QACT,MAAO,qBACP,KACD,CAGH,GAAI,CAeF,OARA,MAAM,EAAO,MAAM,QACjB,EACA,CACE,eAAgB,EAAI,eACpB,gBAAiB,EAAI,UAAU,GAChC,CACD,EAAI,GAAK,CAAE,GAAI,EAAI,GAAI,CAAG,IAAA,GAC3B,CACM,CAAE,QAAS,QAAS,MAAO,SAAU,KAAI,OACzC,EAAK,CASZ,OALA,EAAO,QAAQ,MACb,CAAE,MAAK,eAAgB,EAAI,eAAgB,OAAQ,EAAI,OAAQ,CAC/D,4CACD,CAEM,CACL,QAAS,QACT,MAAO,SACP,KACA,OAAQ,YALK,aAAe,MAAQ,EAAI,QAAU,kBAKrB,MAAM,EAAG,IAAW,CAClD,GAGN,CAwCH,SAAgB,EACd,EACkB,CAClB,IAAM,EAAS,EAAmB,WAAW,EAAO,GAAG,CACjD,EAAgB,EAAO,eAAiB,KAExC,EAAyB,CAC7B,UAAW,EAAI,IAAoB,EAAO,QAAQ,CAAE,KAAI,kBAAiB,CAAC,CAC1E,eAAgB,KAAO,KAEd,MADW,EAAuB,EAAO,GAAI,CAAC,EAAO,CAAE,EAAc,EACjE,IAAI,EAAO,EAAI,KAE5B,WAAY,EACZ,SAAW,GAAQ,EAAO,KAAK,KAAK,EAAI,CACxC,kBAAmB,EAAI,EAAiB,IACtC,EAAsB,EAAQ,EAAI,EAAiB,EAAM,CAC3D,KAAM,EAAO,KACb,OAAQ,EAAO,OAChB,CAED,MAAQ
,IAAQ,EAAgB,EAAM,EAAI,CAsE5C,eAAsB,EACpB,EACA,EACe,CACf,GAAM,CAAE,iBAAgB,mBAAoB,EAAI,QAE1C,EAAM,MAAM,EAAK,SAAS,EAAgB,EAAgB,CAChE,GAAI,CAAC,EAAK,CACR,EAAK,QAAQ,KACX,CAAE,iBAAgB,kBAAiB,CACnC,6EACD,CACD,OAGF,IAAM,EAAO,EAAK,WAAW,EAAI,KAAK,CACtC,GAAI,CAAC,EAAM,CACT,EAAK,QAAQ,KACX,CAAE,iBAAgB,OAAQ,EAAI,KAAM,CACpC,yDACD,CACD,OAGF,IAAM,EAAY,MAAM,EAAK,eAAe,EAAgB,CAC5D,GAAI,CAAC,GAAa,CAAC,EAAU,MAAO,CAClC,EAAK,QAAQ,KACX,CAAE,iBAAgB,kBAAiB,CACnC,2EACD,CACD,MAAM,EAAK,iBAAiB,EAAI,GAAI,EAAiB,CACnD,QAAS,QACT,MAAO,qBACP,GAAI,IAAI,MAAM,CAAC,aAAa,CAC7B,CAAC,CACF,OAQF,IAAM,EALW,EAAK,OAAO,CAC3B,QAAS,EAAI,QACb,YACA,OAAQ,EAAU,OACnB,CACqB,CAAC,MACvB,GAAI,CAAC,EAAO,CACV,EAAK,QAAQ,MACX,CAAE,iBAAgB,OAAQ,EAAI,KAAM,CACpC,kFACD,CACD,MAAM,EAAK,iBAAiB,EAAI,GAAI,EAAiB,CACnD,QAAS,QACT,MAAO,SACP,GAAI,IAAI,MAAM,CAAC,aAAa,CAC5B,OAAQ,mCACT,CAAC,CACF,OAGF,IAAM,EAAuB,CAC3B,KAAM,EAAK,KACX,GAAI,EAAU,MACd,QAAS,EAAM,QACf,KAAM,EAAM,KACZ,KAAM,EAAM,KACb,CACD,GAAI,CACF,MAAM,EAAK,SAAS,EAAQ,OACrB,EAAK,CAKZ,IAAM,EAAS,aAAe,MAAQ,EAAI,QAAU,gBACpD,GAAI,CACF,MAAM,EAAK,iBAAiB,EAAI,GAAI,EAAiB,CACnD,QAAS,QACT,MAAO,SACP,GAAI,IAAI,MAAM,CAAC,aAAa,CAC5B,OAAQ,EAAO,MAAM,EAAG,IAAW,CACpC,CAAC,OACK,EAAS,CAChB,EAAK,QAAQ,MACX,CAAE,IAAK,EAAS,iBAAgB,OAAQ,EAAI,KAAM,CAClD,uEACD,CAMH,MAJA,EAAK,QAAQ,MACX,CAAE,MAAK,iBAAgB,OAAQ,EAAI,KAAM,SAAU,EAAI,SAAU,CACjE,yDACD,CACK,EASR,GAAI,CACF,MAAM,EAAK,iBAAiB,EAAI,GAAI,EAAiB,CACnD,QAAS,QACT,MAAO,YACP,GAAI,IAAI,MAAM,CAAC,aAAa,CAC7B,CAAC,OACK,EAAS,CAChB,EAAK,QAAQ,MACX,CAAE,IAAK,EAAS,iBAAgB,OAAQ,EAAI,KAAM,CAClD,uGACD,EAmBL,eAAe,EACb,EACA,EACA,EACA,EACe,CACf,IAAM,EAAM,MAAM,EAAO,QAAQ,CAC/B,GAAI,EACJ,kBACD,CAAC,CACF,GAAI,CAAC,EAAK,OAEV,IAAM,EAAgC,EAAE,CACpC,EAAW,GACf,IAAK,IAAM,KAAY,EAAI,SACrB,EAAS,UAAY,EAAM,SAC7B,EAAK,KAAK,EAAM,CAChB,EAAW,IAEX,EAAK,KAAK,EAAS,CAGlB,GAAU,EAAK,KAAK,EAAM,CAE/B,MAAM,EAAO,OACX,CAAE,GAAI,EAAgB,kBAAiB,CACvC,CAAE,SAAU,EAAM,UAAW,IAAI,KAAQ,CAC1C"}