@orpc/experimental-publisher 0.0.0-next.0294b1b

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2023 oRPC
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,81 @@
+ <div align="center">
+ <image align="center" src="https://orpc.dev/logo.webp" width=280 alt="oRPC logo" />
+ </div>
+
+ <h1></h1>
+
+ <div align="center">
+ <a href="https://codecov.io/gh/unnoq/orpc">
+ <img alt="codecov" src="https://codecov.io/gh/unnoq/orpc/branch/main/graph/badge.svg">
+ </a>
+ <a href="https://www.npmjs.com/package/@orpc/experimental-publisher">
+ <img alt="weekly downloads" src="https://img.shields.io/npm/dw/%40orpc%2Fexperimental-publisher?logo=npm" />
+ </a>
+ <a href="https://github.com/unnoq/orpc/blob/main/LICENSE">
+ <img alt="MIT License" src="https://img.shields.io/github/license/unnoq/orpc?logo=open-source-initiative" />
+ </a>
+ <a href="https://discord.gg/TXEbwRBvQn">
+ <img alt="Discord" src="https://img.shields.io/discord/1308966753044398161?color=7389D8&label&logo=discord&logoColor=ffffff" />
+ </a>
+ <a href="https://deepwiki.com/unnoq/orpc">
+ <img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki">
+ </a>
+ </div>
+
+ <h3 align="center">Typesafe APIs Made Simple 🪄</h3>
+
+ **oRPC is a powerful combination of RPC and OpenAPI**, making it easy to build APIs that are end-to-end type-safe and adhere to OpenAPI standards.
+
+ ---
+
+ ## Highlights
+
+ - **🔗 End-to-End Type Safety**: Ensure type-safe inputs, outputs, and errors from client to server.
+ - **📘 First-Class OpenAPI**: Built-in support that fully adheres to the OpenAPI standard.
+ - **📝 Contract-First Development**: Optionally define your API contract before implementation.
+ - **🔍 First-Class OpenTelemetry**: Seamlessly integrate with OpenTelemetry for observability.
+ - **⚙️ Framework Integrations**: Seamlessly integrate with TanStack Query (React, Vue, Solid, Svelte, Angular), SWR, Pinia Colada, and more.
+ - **🚀 Server Actions**: Fully compatible with React Server Actions on Next.js, TanStack Start, and other platforms.
+ - **🔠 Standard Schema Support**: Works out of the box with Zod, Valibot, ArkType, and other schema validators.
+ - **🗃️ Native Types**: Supports native types like Date, File, Blob, BigInt, URL, and more.
+ - **⏱️ Lazy Router**: Enhance cold start times with our lazy routing feature.
+ - **📡 SSE & Streaming**: Enjoy full type-safe support for SSE and streaming.
+ - **🌍 Multi-Runtime Support**: Fast and lightweight on Cloudflare, Deno, Bun, Node.js, and beyond.
+ - **🔌 Extendability**: Easily extend functionality with plugins, middleware, and interceptors.
+
+ ## Documentation
+
+ You can find the full documentation [here](https://orpc.dev).
+
+ ## Packages
+
+ - [@orpc/contract](https://www.npmjs.com/package/@orpc/contract): Build your API contract.
+ - [@orpc/server](https://www.npmjs.com/package/@orpc/server): Build your API or implement API contract.
+ - [@orpc/client](https://www.npmjs.com/package/@orpc/client): Consume your API on the client with type-safety.
+ - [@orpc/openapi](https://www.npmjs.com/package/@orpc/openapi): Generate OpenAPI specs and handle OpenAPI requests.
+ - [@orpc/otel](https://www.npmjs.com/package/@orpc/otel): [OpenTelemetry](https://opentelemetry.io/) integration for observability.
+ - [@orpc/nest](https://www.npmjs.com/package/@orpc/nest): Deeply integrate oRPC with [NestJS](https://nestjs.com/).
+ - [@orpc/react](https://www.npmjs.com/package/@orpc/react): Utilities for integrating oRPC with React and React Server Actions.
+ - [@orpc/tanstack-query](https://www.npmjs.com/package/@orpc/tanstack-query): [TanStack Query](https://tanstack.com/query/latest) integration.
+ - [@orpc/experimental-react-swr](https://www.npmjs.com/package/@orpc/experimental-react-swr): [SWR](https://swr.vercel.app/) integration.
+ - [@orpc/vue-colada](https://www.npmjs.com/package/@orpc/vue-colada): Integration with [Pinia Colada](https://pinia-colada.esm.dev/).
+ - [@orpc/hey-api](https://www.npmjs.com/package/@orpc/hey-api): [Hey API](https://heyapi.dev/) integration.
+ - [@orpc/zod](https://www.npmjs.com/package/@orpc/zod): More schemas that [Zod](https://zod.dev/) doesn't support yet.
+ - [@orpc/valibot](https://www.npmjs.com/package/@orpc/valibot): OpenAPI spec generation from [Valibot](https://valibot.dev/).
+ - [@orpc/arktype](https://www.npmjs.com/package/@orpc/arktype): OpenAPI spec generation from [ArkType](https://arktype.io/).
+
+ ## `@orpc/experimental-publisher`
+
+ Event Publisher with multiple adapter support, resume support, and more.
+
+ ## Sponsors
+
+ <p align="center">
+ <a href="https://cdn.jsdelivr.net/gh/unnoq/unnoq/sponsors.svg">
+ <img src='https://cdn.jsdelivr.net/gh/unnoq/unnoq/sponsors.svg'/>
+ </a>
+ </p>
+
+ ## License
+
+ Distributed under the MIT License. See [LICENSE](https://github.com/unnoq/orpc/blob/main/LICENSE) for more information.
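A minimal usage sketch of the publisher described above, using the in-memory adapter. The `orders.created` event name and payload shape are illustrative; the `@orpc/experimental-publisher/memory` subpath matches the `exports` map in package.json below.

```ts
import { MemoryPublisher } from '@orpc/experimental-publisher/memory'

// Event map: event name -> payload type
const publisher = new MemoryPublisher<{ 'orders.created': { id: string } }>()

// Callback-style subscription; resolves to an async unsubscribe function
const unsubscribe = await publisher.subscribe('orders.created', (payload) => {
  console.log('received', payload.id)
})

await publisher.publish('orders.created', { id: 'order-1' }) // listener logs "received order-1"

await unsubscribe()
```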
package/dist/adapters/ioredis.d.mts ADDED
@@ -0,0 +1,71 @@
+ import { StandardRPCJsonSerializerOptions } from '@orpc/client/standard';
+ import Redis from 'ioredis';
+ import { PublisherOptions, Publisher, PublisherSubscribeListenerOptions } from '../index.mjs';
+ import '@orpc/shared';
+
+ interface IORedisPublisherOptions extends PublisherOptions, StandardRPCJsonSerializerOptions {
+ /**
+ * Redis commander instance (used to execute short-lived commands)
+ */
+ commander: Redis;
+ /**
+ * Redis listener instance (used for listening to events)
+ *
+ * @remark
+ * - `lazyConnect: true` option is supported
+ */
+ listener: Redis;
+ /**
+ * How long (in seconds) to retain events for replay.
+ *
+ * @remark
+ * This allows new subscribers to "catch up" on missed events using `lastEventId`.
+ * Note that event cleanup is deferred for performance reasons — meaning some
+ * expired events may still be available for a short period of time, and listeners
+ * might still receive them.
+ *
+ * @default NaN (disabled)
+ */
+ resumeRetentionSeconds?: number;
+ /**
+ * The prefix to use for Redis keys.
+ *
+ * @default orpc:publisher:
+ */
+ prefix?: string;
+ }
+ declare class IORedisPublisher<T extends Record<string, object>> extends Publisher<T> {
+ private readonly commander;
+ private readonly listener;
+ private readonly prefix;
+ private readonly serializer;
+ private readonly retentionSeconds;
+ private readonly subscriptionPromiseMap;
+ private readonly listenersMap;
+ private readonly onErrorsMap;
+ private redisListenerAndOnError;
+ private get isResumeEnabled();
+ /**
+ * The exactness of the `XTRIM` command.
+ *
+ * @internal
+ */
+ xtrimExactness: '~' | '=';
+ /**
+ * Useful for measuring memory usage.
+ *
+ * @internal
+ *
+ */
+ get size(): number;
+ constructor({ commander, listener, resumeRetentionSeconds, prefix, ...options }: IORedisPublisherOptions);
+ private lastCleanupTimeMap;
+ publish<K extends keyof T & string>(event: K, payload: T[K]): Promise<void>;
+ protected subscribeListener<K extends keyof T & string>(event: K, originalListener: (payload: T[K]) => void, { lastEventId, onError }?: PublisherSubscribeListenerOptions): Promise<() => Promise<void>>;
+ private prefixKey;
+ private serializePayload;
+ private deserializePayload;
+ }
+
+ export { IORedisPublisher };
+ export type { IORedisPublisherOptions };
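A construction sketch for these options. Two separate ioredis connections are assumed because a Redis connection in subscriber mode cannot issue regular commands; the connection details use local defaults and the event name is illustrative.

```ts
import Redis from 'ioredis'
import { IORedisPublisher } from '@orpc/experimental-publisher/ioredis'

const publisher = new IORedisPublisher<{ 'chat.message': { text: string } }>({
  commander: new Redis(),                     // short-lived commands (XADD, PUBLISH, ...)
  listener: new Redis({ lazyConnect: true }), // dedicated subscriber connection
  resumeRetentionSeconds: 60 * 5,             // optional: keep 5 minutes of events for lastEventId resume
  prefix: 'orpc:publisher:',                  // default value, shown explicitly
})

await publisher.publish('chat.message', { text: 'hello' })
```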
package/dist/adapters/ioredis.d.ts ADDED
@@ -0,0 +1,71 @@
+ import { StandardRPCJsonSerializerOptions } from '@orpc/client/standard';
+ import Redis from 'ioredis';
+ import { PublisherOptions, Publisher, PublisherSubscribeListenerOptions } from '../index.js';
+ import '@orpc/shared';
+
+ interface IORedisPublisherOptions extends PublisherOptions, StandardRPCJsonSerializerOptions {
+ /**
+ * Redis commander instance (used to execute short-lived commands)
+ */
+ commander: Redis;
+ /**
+ * Redis listener instance (used for listening to events)
+ *
+ * @remark
+ * - `lazyConnect: true` option is supported
+ */
+ listener: Redis;
+ /**
+ * How long (in seconds) to retain events for replay.
+ *
+ * @remark
+ * This allows new subscribers to "catch up" on missed events using `lastEventId`.
+ * Note that event cleanup is deferred for performance reasons — meaning some
+ * expired events may still be available for a short period of time, and listeners
+ * might still receive them.
+ *
+ * @default NaN (disabled)
+ */
+ resumeRetentionSeconds?: number;
+ /**
+ * The prefix to use for Redis keys.
+ *
+ * @default orpc:publisher:
+ */
+ prefix?: string;
+ }
+ declare class IORedisPublisher<T extends Record<string, object>> extends Publisher<T> {
+ private readonly commander;
+ private readonly listener;
+ private readonly prefix;
+ private readonly serializer;
+ private readonly retentionSeconds;
+ private readonly subscriptionPromiseMap;
+ private readonly listenersMap;
+ private readonly onErrorsMap;
+ private redisListenerAndOnError;
+ private get isResumeEnabled();
+ /**
+ * The exactness of the `XTRIM` command.
+ *
+ * @internal
+ */
+ xtrimExactness: '~' | '=';
+ /**
+ * Useful for measuring memory usage.
+ *
+ * @internal
+ *
+ */
+ get size(): number;
+ constructor({ commander, listener, resumeRetentionSeconds, prefix, ...options }: IORedisPublisherOptions);
+ private lastCleanupTimeMap;
+ publish<K extends keyof T & string>(event: K, payload: T[K]): Promise<void>;
+ protected subscribeListener<K extends keyof T & string>(event: K, originalListener: (payload: T[K]) => void, { lastEventId, onError }?: PublisherSubscribeListenerOptions): Promise<() => Promise<void>>;
+ private prefixKey;
+ private serializePayload;
+ private deserializePayload;
+ }
+
+ export { IORedisPublisher };
+ export type { IORedisPublisherOptions };
package/dist/adapters/ioredis.mjs ADDED
@@ -0,0 +1,219 @@
+ import { StandardRPCJsonSerializer } from '@orpc/client/standard';
+ import { fallback, stringifyJSON, once } from '@orpc/shared';
+ import { getEventMeta, withEventMeta } from '@orpc/standard-server';
+ import { P as Publisher } from '../shared/experimental-publisher.BtlOkhPO.mjs';
+
+ class IORedisPublisher extends Publisher {
+ commander;
+ listener;
+ prefix;
+ serializer;
+ retentionSeconds;
+ subscriptionPromiseMap = /* @__PURE__ */ new Map();
+ listenersMap = /* @__PURE__ */ new Map();
+ onErrorsMap = /* @__PURE__ */ new Map();
+ redisListenerAndOnError;
+ get isResumeEnabled() {
+ return Number.isFinite(this.retentionSeconds) && this.retentionSeconds > 0;
+ }
+ /**
+ * The exactness of the `XTRIM` command.
+ *
+ * @internal
+ */
+ xtrimExactness = "~";
+ /**
+ * Useful for measuring memory usage.
+ *
+ * @internal
+ *
+ */
+ get size() {
+ let size = this.redisListenerAndOnError ? 1 : 0;
+ for (const listeners of this.listenersMap) {
+ size += listeners[1].length || 1;
+ }
+ for (const onErrors of this.onErrorsMap) {
+ size += onErrors[1].length || 1;
+ }
+ return size;
+ }
+ constructor({ commander, listener, resumeRetentionSeconds, prefix, ...options }) {
+ super(options);
+ this.commander = commander;
+ this.listener = listener;
+ this.prefix = fallback(prefix, "orpc:publisher:");
+ this.retentionSeconds = resumeRetentionSeconds ?? Number.NaN;
+ this.serializer = new StandardRPCJsonSerializer(options);
+ }
+ lastCleanupTimeMap = /* @__PURE__ */ new Map();
+ async publish(event, payload) {
+ const key = this.prefixKey(event);
+ const serialized = this.serializePayload(payload);
+ let id;
+ if (this.isResumeEnabled) {
+ const now = Date.now();
+ for (const [mapKey, lastCleanupTime] of this.lastCleanupTimeMap) {
+ if (lastCleanupTime + this.retentionSeconds * 1e3 < now) {
+ this.lastCleanupTimeMap.delete(mapKey);
+ }
+ }
+ if (!this.lastCleanupTimeMap.has(key)) {
+ this.lastCleanupTimeMap.set(key, now);
+ const result = await this.commander.multi().xadd(key, "*", "data", stringifyJSON(serialized)).xtrim(key, "MINID", this.xtrimExactness, `${now - this.retentionSeconds * 1e3}-0`).expire(key, this.retentionSeconds * 2).exec();
+ if (result) {
+ for (const [error] of result) {
+ if (error) {
+ throw error;
+ }
+ }
+ }
+ id = result[0][1];
+ } else {
+ const result = await this.commander.xadd(key, "*", "data", stringifyJSON(serialized));
+ id = result;
+ }
+ }
+ await this.commander.publish(key, stringifyJSON({ ...serialized, id }));
+ }
+ async subscribeListener(event, originalListener, { lastEventId, onError } = {}) {
+ const key = this.prefixKey(event);
+ let pendingPayloads = [];
+ const resumePayloadIds = /* @__PURE__ */ new Set();
+ const listener = (payload) => {
+ if (pendingPayloads) {
+ pendingPayloads.push(payload);
+ return;
+ }
+ const payloadId = getEventMeta(payload)?.id;
+ if (payloadId !== void 0 && resumePayloadIds.has(payloadId)) {
+ return;
+ }
+ originalListener(payload);
+ };
+ if (!this.redisListenerAndOnError) {
+ const redisOnError = (error) => {
+ for (const [_, onErrors] of this.onErrorsMap) {
+ for (const onError2 of onErrors) {
+ onError2(error);
+ }
+ }
+ };
+ const redisListener = (channel, message) => {
+ try {
+ const listeners2 = this.listenersMap.get(channel);
+ if (listeners2) {
+ const { id, ...rest } = JSON.parse(message);
+ const payload = this.deserializePayload(id, rest);
+ for (const listener2 of listeners2) {
+ listener2(payload);
+ }
+ }
+ } catch (error) {
+ const onErrors = this.onErrorsMap.get(channel);
+ if (onErrors) {
+ for (const onError2 of onErrors) {
+ onError2(error);
+ }
+ }
+ }
+ };
+ this.redisListenerAndOnError = { listener: redisListener, onError: redisOnError };
+ this.listener.on("message", redisListener);
+ this.listener.on("error", redisOnError);
+ }
+ const subscriptionPromise = this.subscriptionPromiseMap.get(key);
+ if (subscriptionPromise) {
+ await subscriptionPromise;
+ }
+ let listeners = this.listenersMap.get(key);
+ if (!listeners) {
+ try {
+ const promise = this.listener.subscribe(key);
+ this.subscriptionPromiseMap.set(key, promise);
+ await promise;
+ this.listenersMap.set(key, listeners = []);
+ } finally {
+ this.subscriptionPromiseMap.delete(key);
+ if (this.listenersMap.size === 0) {
+ this.listener.off("message", this.redisListenerAndOnError.listener);
+ this.listener.off("error", this.redisListenerAndOnError.onError);
+ this.redisListenerAndOnError = void 0;
+ }
+ }
+ }
+ listeners.push(listener);
+ if (onError) {
+ let onErrors = this.onErrorsMap.get(key);
+ if (!onErrors) {
+ this.onErrorsMap.set(key, onErrors = []);
+ }
+ onErrors.push(onError);
+ }
+ void (async () => {
+ try {
+ if (this.isResumeEnabled && typeof lastEventId === "string") {
+ const results = await this.commander.xread("STREAMS", key, lastEventId);
+ if (results && results[0]) {
+ const [_, items] = results[0];
+ for (const [id, fields] of items) {
+ const serialized = fields[1];
+ const payload = this.deserializePayload(id, JSON.parse(serialized));
+ resumePayloadIds.add(id);
+ originalListener(payload);
+ }
+ }
+ }
+ } catch (error) {
+ onError?.(error);
+ } finally {
+ const pending = pendingPayloads;
+ pendingPayloads = void 0;
+ for (const payload of pending) {
+ listener(payload);
+ }
+ }
+ })();
+ const cleanupListeners = once(() => {
+ listeners.splice(listeners.indexOf(listener), 1);
+ if (onError) {
+ const onErrors = this.onErrorsMap.get(key);
+ if (onErrors) {
+ const index = onErrors.indexOf(onError);
+ if (index !== -1) {
+ onErrors.splice(index, 1);
+ }
+ }
+ }
+ });
+ return async () => {
+ cleanupListeners();
+ if (listeners.length === 0) {
+ this.listenersMap.delete(key);
+ this.onErrorsMap.delete(key);
+ if (this.redisListenerAndOnError && this.listenersMap.size === 0) {
+ this.listener.off("message", this.redisListenerAndOnError.listener);
+ this.listener.off("error", this.redisListenerAndOnError.onError);
+ this.redisListenerAndOnError = void 0;
+ }
+ await this.listener.unsubscribe(key);
+ }
+ };
+ }
+ prefixKey(key) {
+ return `${this.prefix}${key}`;
+ }
+ serializePayload(payload) {
+ const eventMeta = getEventMeta(payload);
+ const [json, meta] = this.serializer.serialize(payload);
+ return { json, meta, eventMeta };
+ }
+ deserializePayload(id, { json, meta, eventMeta }) {
+ return withEventMeta(
+ this.serializer.deserialize(json, meta),
+ id === void 0 ? { ...eventMeta } : { ...eventMeta, id }
+ );
+ }
+ }
+
+ export { IORedisPublisher };
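Payloads are serialized with `StandardRPCJsonSerializer` before hitting Redis, so native values such as `Date` should survive the round trip. A sketch, assuming a hypothetical `chat.message` event carrying a `Date` on a publisher configured as in the earlier example:

```ts
import type { IORedisPublisher } from '@orpc/experimental-publisher/ioredis'

// Hypothetical event carrying a native Date
type Events = { 'chat.message': { text: string, sentAt: Date } }

// Assumed to be constructed as shown in the previous sketch
declare const publisher: IORedisPublisher<Events>

const unsubscribe = await publisher.subscribe('chat.message', (payload) => {
  // sentAt arrives as a Date instance again after RPC JSON (de)serialization
  console.log(payload.sentAt.toISOString(), payload.text)
})

await publisher.publish('chat.message', { text: 'hi', sentAt: new Date() })
await unsubscribe()
```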
package/dist/adapters/memory.d.mts ADDED
@@ -0,0 +1,39 @@
+ import { PublisherOptions, Publisher, PublisherSubscribeListenerOptions } from '../index.mjs';
+ import '@orpc/shared';
+
+ interface MemoryPublisherOptions extends PublisherOptions {
+ /**
+ * How long (in seconds) to retain events for replay.
+ *
+ * @remark
+ * This allows new subscribers to "catch up" on missed events using `lastEventId`.
+ * Note that event cleanup is deferred for performance reasons — meaning some
+ * expired events may still be available for a short period of time, and listeners
+ * might still receive them.
+ *
+ * @default NaN (disabled)
+ */
+ resumeRetentionSeconds?: number;
+ }
+ declare class MemoryPublisher<T extends Record<string, object>> extends Publisher<T> {
+ private readonly eventPublisher;
+ private readonly idGenerator;
+ private readonly retentionSeconds;
+ private readonly eventsMap;
+ /**
+ * Useful for measuring memory usage.
+ *
+ * @internal
+ *
+ */
+ get size(): number;
+ private get isResumeEnabled();
+ constructor({ resumeRetentionSeconds, ...options }?: MemoryPublisherOptions);
+ publish<K extends keyof T & string>(event: K, payload: T[K]): Promise<void>;
+ protected subscribeListener<K extends keyof T & string>(event: K, listener: (payload: T[K]) => void, options?: PublisherSubscribeListenerOptions): Promise<() => Promise<void>>;
+ private lastCleanupTime;
+ private cleanup;
+ }
+
+ export { MemoryPublisher };
+ export type { MemoryPublisherOptions };
package/dist/adapters/memory.d.ts ADDED
@@ -0,0 +1,39 @@
+ import { PublisherOptions, Publisher, PublisherSubscribeListenerOptions } from '../index.js';
+ import '@orpc/shared';
+
+ interface MemoryPublisherOptions extends PublisherOptions {
+ /**
+ * How long (in seconds) to retain events for replay.
+ *
+ * @remark
+ * This allows new subscribers to "catch up" on missed events using `lastEventId`.
+ * Note that event cleanup is deferred for performance reasons — meaning some
+ * expired events may still be available for a short period of time, and listeners
+ * might still receive them.
+ *
+ * @default NaN (disabled)
+ */
+ resumeRetentionSeconds?: number;
+ }
+ declare class MemoryPublisher<T extends Record<string, object>> extends Publisher<T> {
+ private readonly eventPublisher;
+ private readonly idGenerator;
+ private readonly retentionSeconds;
+ private readonly eventsMap;
+ /**
+ * Useful for measuring memory usage.
+ *
+ * @internal
+ *
+ */
+ get size(): number;
+ private get isResumeEnabled();
+ constructor({ resumeRetentionSeconds, ...options }?: MemoryPublisherOptions);
+ publish<K extends keyof T & string>(event: K, payload: T[K]): Promise<void>;
+ protected subscribeListener<K extends keyof T & string>(event: K, listener: (payload: T[K]) => void, options?: PublisherSubscribeListenerOptions): Promise<() => Promise<void>>;
+ private lastCleanupTime;
+ private cleanup;
+ }
+
+ export { MemoryPublisher };
+ export type { MemoryPublisherOptions };
package/dist/adapters/memory.mjs ADDED
@@ -0,0 +1,82 @@
+ import { EventPublisher, SequentialIdGenerator, compareSequentialIds } from '@orpc/shared';
+ import { withEventMeta, getEventMeta } from '@orpc/standard-server';
+ import { P as Publisher } from '../shared/experimental-publisher.BtlOkhPO.mjs';
+
+ class MemoryPublisher extends Publisher {
+ eventPublisher = new EventPublisher();
+ idGenerator = new SequentialIdGenerator();
+ retentionSeconds;
+ eventsMap = /* @__PURE__ */ new Map();
+ /**
+ * Useful for measuring memory usage.
+ *
+ * @internal
+ *
+ */
+ get size() {
+ let size = this.eventPublisher.size;
+ for (const events of this.eventsMap) {
+ size += events[1].length || 1;
+ }
+ return size;
+ }
+ get isResumeEnabled() {
+ return Number.isFinite(this.retentionSeconds) && this.retentionSeconds > 0;
+ }
+ constructor({ resumeRetentionSeconds, ...options } = {}) {
+ super(options);
+ this.retentionSeconds = resumeRetentionSeconds ?? Number.NaN;
+ }
+ async publish(event, payload) {
+ this.cleanup();
+ if (this.isResumeEnabled) {
+ const now = Date.now();
+ const expiresAt = now + this.retentionSeconds * 1e3;
+ let events = this.eventsMap.get(event);
+ if (!events) {
+ this.eventsMap.set(event, events = []);
+ }
+ payload = withEventMeta(payload, { ...getEventMeta(payload), id: this.idGenerator.generate() });
+ events.push({ expiresAt, payload });
+ }
+ this.eventPublisher.publish(event, payload);
+ }
+ async subscribeListener(event, listener, options) {
+ if (this.isResumeEnabled && typeof options?.lastEventId === "string") {
+ const events = this.eventsMap.get(event);
+ if (events) {
+ for (const { payload } of events) {
+ const id = getEventMeta(payload)?.id;
+ if (typeof id === "string" && compareSequentialIds(id, options.lastEventId) > 0) {
+ listener(payload);
+ }
+ }
+ }
+ }
+ const syncUnsub = this.eventPublisher.subscribe(event, listener);
+ return async () => {
+ syncUnsub();
+ };
+ }
+ lastCleanupTime = null;
+ cleanup() {
+ if (!this.isResumeEnabled) {
+ return;
+ }
+ const now = Date.now();
+ if (this.lastCleanupTime !== null && this.lastCleanupTime + this.retentionSeconds * 1e3 > now) {
+ return;
+ }
+ this.lastCleanupTime = now;
+ for (const [event, events] of this.eventsMap) {
+ const validEvents = events.filter((event2) => event2.expiresAt > now);
+ if (validEvents.length > 0) {
+ this.eventsMap.set(event, validEvents);
+ } else {
+ this.eventsMap.delete(event);
+ }
+ }
+ }
+ }
+
+ export { MemoryPublisher };
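To make the retention and replay logic above concrete, here is a sketch of resuming a missed event via `lastEventId`. It assumes `getEventMeta` from `@orpc/standard-server` (a dependency of this package) exposes the generated event id, and the `tick` event name is illustrative.

```ts
import { getEventMeta } from '@orpc/standard-server'
import { MemoryPublisher } from '@orpc/experimental-publisher/memory'

const publisher = new MemoryPublisher<{ tick: { n: number } }>({
  resumeRetentionSeconds: 60, // retain one minute of events for replay
})

let lastEventId: string | undefined

const unsubscribe = await publisher.subscribe('tick', (payload) => {
  lastEventId = getEventMeta(payload)?.id // remember where we got to
})

await publisher.publish('tick', { n: 1 })
await unsubscribe()

// Published while no one is listening, but retained for replay
await publisher.publish('tick', { n: 2 })

for await (const payload of publisher.subscribe('tick', { lastEventId })) {
  console.log(payload.n) // 2 (replayed); newer events would follow
  break
}
```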
package/dist/adapters/upstash-redis.d.mts ADDED
@@ -0,0 +1,59 @@
+ import { StandardRPCJsonSerializerOptions } from '@orpc/client/standard';
+ import { Redis } from '@upstash/redis';
+ import { PublisherOptions, Publisher, PublisherSubscribeListenerOptions } from '../index.mjs';
+ import '@orpc/shared';
+
+ interface UpstashRedisPublisherOptions extends PublisherOptions, StandardRPCJsonSerializerOptions {
+ /**
+ * How long (in seconds) to retain events for replay.
+ *
+ * @remark
+ * This allows new subscribers to "catch up" on missed events using `lastEventId`.
+ * Note that event cleanup is deferred for performance reasons — meaning some
+ * expired events may still be available for a short period of time, and listeners
+ * might still receive them.
+ *
+ * @default NaN (disabled)
+ */
+ resumeRetentionSeconds?: number;
+ /**
+ * The prefix to use for Redis keys.
+ *
+ * @default orpc:publisher:
+ */
+ prefix?: string;
+ }
+ declare class UpstashRedisPublisher<T extends Record<string, object>> extends Publisher<T> {
+ private readonly redis;
+ private readonly prefix;
+ private readonly serializer;
+ private readonly retentionSeconds;
+ private readonly listenersMap;
+ private readonly onErrorsMap;
+ private readonly subscriptionPromiseMap;
+ private readonly subscriptionsMap;
+ private get isResumeEnabled();
+ /**
+ * The exactness of the `XTRIM` command.
+ *
+ * @internal
+ */
+ xtrimExactness: '~' | '=';
+ /**
+ * Useful for measuring memory usage.
+ *
+ * @internal
+ *
+ */
+ get size(): number;
+ constructor(redis: Redis, { resumeRetentionSeconds, prefix, ...options }?: UpstashRedisPublisherOptions);
+ private lastCleanupTimeMap;
+ publish<K extends keyof T & string>(event: K, payload: T[K]): Promise<void>;
+ protected subscribeListener<K extends keyof T & string>(event: K, originalListener: (payload: T[K]) => void, { lastEventId, onError }?: PublisherSubscribeListenerOptions): Promise<() => Promise<void>>;
+ private prefixKey;
+ private serializePayload;
+ private deserializePayload;
+ }
+
+ export { UpstashRedisPublisher };
+ export type { UpstashRedisPublisherOptions };
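A construction sketch for the Upstash adapter; unlike the ioredis adapter, the `Redis` client is the first constructor argument and the options object is second. The credentials and event name are placeholders.

```ts
import { Redis } from '@upstash/redis'
import { UpstashRedisPublisher } from '@orpc/experimental-publisher/upstash-redis'

const redis = new Redis({
  url: 'https://<your-db>.upstash.io', // placeholder credentials
  token: '<your-token>',
})

const publisher = new UpstashRedisPublisher<{ 'jobs.done': { jobId: string } }>(redis, {
  resumeRetentionSeconds: 120, // optional: enable lastEventId replay via Redis streams
})

await publisher.publish('jobs.done', { jobId: 'abc' })
```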
package/dist/adapters/upstash-redis.d.ts ADDED
@@ -0,0 +1,59 @@
+ import { StandardRPCJsonSerializerOptions } from '@orpc/client/standard';
+ import { Redis } from '@upstash/redis';
+ import { PublisherOptions, Publisher, PublisherSubscribeListenerOptions } from '../index.js';
+ import '@orpc/shared';
+
+ interface UpstashRedisPublisherOptions extends PublisherOptions, StandardRPCJsonSerializerOptions {
+ /**
+ * How long (in seconds) to retain events for replay.
+ *
+ * @remark
+ * This allows new subscribers to "catch up" on missed events using `lastEventId`.
+ * Note that event cleanup is deferred for performance reasons — meaning some
+ * expired events may still be available for a short period of time, and listeners
+ * might still receive them.
+ *
+ * @default NaN (disabled)
+ */
+ resumeRetentionSeconds?: number;
+ /**
+ * The prefix to use for Redis keys.
+ *
+ * @default orpc:publisher:
+ */
+ prefix?: string;
+ }
+ declare class UpstashRedisPublisher<T extends Record<string, object>> extends Publisher<T> {
+ private readonly redis;
+ private readonly prefix;
+ private readonly serializer;
+ private readonly retentionSeconds;
+ private readonly listenersMap;
+ private readonly onErrorsMap;
+ private readonly subscriptionPromiseMap;
+ private readonly subscriptionsMap;
+ private get isResumeEnabled();
+ /**
+ * The exactness of the `XTRIM` command.
+ *
+ * @internal
+ */
+ xtrimExactness: '~' | '=';
+ /**
+ * Useful for measuring memory usage.
+ *
+ * @internal
+ *
+ */
+ get size(): number;
+ constructor(redis: Redis, { resumeRetentionSeconds, prefix, ...options }?: UpstashRedisPublisherOptions);
+ private lastCleanupTimeMap;
+ publish<K extends keyof T & string>(event: K, payload: T[K]): Promise<void>;
+ protected subscribeListener<K extends keyof T & string>(event: K, originalListener: (payload: T[K]) => void, { lastEventId, onError }?: PublisherSubscribeListenerOptions): Promise<() => Promise<void>>;
+ private prefixKey;
+ private serializePayload;
+ private deserializePayload;
+ }
+
+ export { UpstashRedisPublisher };
+ export type { UpstashRedisPublisherOptions };
package/dist/adapters/upstash-redis.mjs ADDED
@@ -0,0 +1,209 @@
+ import { StandardRPCJsonSerializer } from '@orpc/client/standard';
+ import { fallback, once } from '@orpc/shared';
+ import { getEventMeta, withEventMeta } from '@orpc/standard-server';
+ import { P as Publisher } from '../shared/experimental-publisher.BtlOkhPO.mjs';
+
+ class UpstashRedisPublisher extends Publisher {
+ constructor(redis, { resumeRetentionSeconds, prefix, ...options } = {}) {
+ super(options);
+ this.redis = redis;
+ this.prefix = fallback(prefix, "orpc:publisher:");
+ this.retentionSeconds = resumeRetentionSeconds ?? Number.NaN;
+ this.serializer = new StandardRPCJsonSerializer(options);
+ }
+ prefix;
+ serializer;
+ retentionSeconds;
+ listenersMap = /* @__PURE__ */ new Map();
+ onErrorsMap = /* @__PURE__ */ new Map();
+ subscriptionPromiseMap = /* @__PURE__ */ new Map();
+ subscriptionsMap = /* @__PURE__ */ new Map();
+ // Upstash subscription objects
+ get isResumeEnabled() {
+ return Number.isFinite(this.retentionSeconds) && this.retentionSeconds > 0;
+ }
+ /**
+ * The exactness of the `XTRIM` command.
+ *
+ * @internal
+ */
+ xtrimExactness = "~";
+ /**
+ * Useful for measuring memory usage.
+ *
+ * @internal
+ *
+ */
+ get size() {
+ let size = 0;
+ for (const listeners of this.listenersMap) {
+ size += listeners[1].length || 1;
+ }
+ for (const onErrors of this.onErrorsMap) {
+ size += onErrors[1].length || 1;
+ }
+ return size;
+ }
+ lastCleanupTimeMap = /* @__PURE__ */ new Map();
+ async publish(event, payload) {
+ const key = this.prefixKey(event);
+ const serialized = this.serializePayload(payload);
+ let id;
+ if (this.isResumeEnabled) {
+ const now = Date.now();
+ for (const [mapKey, lastCleanupTime] of this.lastCleanupTimeMap) {
+ if (lastCleanupTime + this.retentionSeconds * 1e3 < now) {
+ this.lastCleanupTimeMap.delete(mapKey);
+ }
+ }
+ if (!this.lastCleanupTimeMap.has(key)) {
+ this.lastCleanupTimeMap.set(key, now);
+ const results = await this.redis.multi().xadd(key, "*", { data: serialized }).xtrim(key, { strategy: "MINID", exactness: this.xtrimExactness, threshold: `${now - this.retentionSeconds * 1e3}-0` }).expire(key, this.retentionSeconds * 2).exec();
+ id = results[0];
+ } else {
+ const result = await this.redis.xadd(key, "*", { data: serialized });
+ id = result;
+ }
+ }
+ await this.redis.publish(key, { ...serialized, id });
+ }
+ async subscribeListener(event, originalListener, { lastEventId, onError } = {}) {
+ const key = this.prefixKey(event);
+ let pendingPayloads = [];
+ const resumePayloadIds = /* @__PURE__ */ new Set();
+ const listener = (payload) => {
+ if (pendingPayloads) {
+ pendingPayloads.push(payload);
+ return;
+ }
+ const payloadId = getEventMeta(payload)?.id;
+ if (payloadId !== void 0 && resumePayloadIds.has(payloadId)) {
+ return;
+ }
+ originalListener(payload);
+ };
+ const subscriptionPromise = this.subscriptionPromiseMap.get(key);
+ if (subscriptionPromise) {
+ await subscriptionPromise;
+ }
+ let subscription = this.subscriptionsMap.get(key);
+ if (!subscription) {
+ const dispatchErrorForKey = (error) => {
+ const onErrors = this.onErrorsMap.get(key);
+ if (onErrors) {
+ for (const onError2 of onErrors) {
+ onError2(error);
+ }
+ }
+ };
+ subscription = this.redis.subscribe(key);
+ subscription.on("message", (event2) => {
+ try {
+ const listeners2 = this.listenersMap.get(event2.channel);
+ if (listeners2) {
+ const { id, ...rest } = event2.message;
+ const payload = this.deserializePayload(id, rest);
+ for (const listener2 of listeners2) {
+ listener2(payload);
+ }
+ }
+ } catch (error) {
+ dispatchErrorForKey(error);
+ }
+ });
+ let resolvePromise;
+ let rejectPromise;
+ const promise = new Promise((resolve, reject) => {
+ resolvePromise = resolve;
+ rejectPromise = reject;
+ });
+ subscription.on("error", (error) => {
+ rejectPromise(error);
+ dispatchErrorForKey(error);
+ });
+ subscription.on("subscribe", () => {
+ resolvePromise();
+ });
+ try {
+ this.subscriptionPromiseMap.set(key, promise);
+ await promise;
+ this.subscriptionsMap.set(key, subscription);
+ } finally {
+ this.subscriptionPromiseMap.delete(key);
+ }
+ }
+ let listeners = this.listenersMap.get(key);
+ if (!listeners) {
+ this.listenersMap.set(key, listeners = []);
+ }
+ listeners.push(listener);
+ if (onError) {
+ let onErrors = this.onErrorsMap.get(key);
+ if (!onErrors) {
+ this.onErrorsMap.set(key, onErrors = []);
+ }
+ onErrors.push(onError);
+ }
+ void (async () => {
+ try {
+ if (this.isResumeEnabled && typeof lastEventId === "string") {
+ const results = await this.redis.xread(key, lastEventId);
+ if (results && results[0]) {
+ const [_, items] = results[0];
+ for (const [id, fields] of items) {
+ const serialized = fields[1];
+ const payload = this.deserializePayload(id, serialized);
+ resumePayloadIds.add(id);
+ originalListener(payload);
+ }
+ }
+ }
+ } catch (error) {
+ onError?.(error);
+ } finally {
+ const pending = pendingPayloads;
+ pendingPayloads = void 0;
+ for (const payload of pending) {
+ listener(payload);
+ }
+ }
+ })();
+ const cleanupListeners = once(() => {
+ listeners.splice(listeners.indexOf(listener), 1);
+ if (onError) {
+ const onErrors = this.onErrorsMap.get(key);
+ if (onErrors) {
+ onErrors.splice(onErrors.indexOf(onError), 1);
+ }
+ }
+ });
+ return async () => {
+ cleanupListeners();
+ if (listeners.length === 0) {
+ this.listenersMap.delete(key);
+ this.onErrorsMap.delete(key);
+ const subscription2 = this.subscriptionsMap.get(key);
+ if (subscription2) {
+ this.subscriptionsMap.delete(key);
+ await subscription2.unsubscribe();
+ }
+ }
+ };
+ }
+ prefixKey(key) {
+ return `${this.prefix}${key}`;
+ }
+ serializePayload(payload) {
+ const eventMeta = getEventMeta(payload);
+ const [json, meta] = this.serializer.serialize(payload);
+ return { json, meta, eventMeta };
+ }
+ deserializePayload(id, { json, meta, eventMeta }) {
+ return withEventMeta(
+ this.serializer.deserialize(json, meta),
+ id === void 0 ? { ...eventMeta } : { ...eventMeta, id }
+ );
+ }
+ }
+
+ export { UpstashRedisPublisher };
package/dist/index.d.mts ADDED
@@ -0,0 +1,85 @@
+ import { ThrowableError, AsyncIteratorClass } from '@orpc/shared';
+
+ interface PublisherOptions {
+ /**
+ * Maximum number of events to buffer for async iterator subscribers.
+ *
+ * If the buffer exceeds this limit, the oldest event is dropped.
+ * This prevents unbounded memory growth if consumers process events slowly.
+ *
+ * Set to:
+ * - `0`: Disable buffering. Events must be consumed before the next one arrives.
+ * - `1`: Only keep the latest event. Useful for real-time updates where only the most recent value matters.
+ * - `Infinity`: Keep all events. Ensures no data loss, but may lead to high memory usage.
+ *
+ * @default 100
+ */
+ maxBufferedEvents?: number;
+ }
+ interface PublisherSubscribeListenerOptions {
+ /**
+ * Resume from a specific event ID
+ */
+ lastEventId?: string | undefined;
+ /**
+ * Triggered when an error occurs
+ */
+ onError?: (error: ThrowableError) => void;
+ }
+ interface PublisherSubscribeIteratorOptions extends Pick<PublisherSubscribeListenerOptions, 'lastEventId'>, Pick<PublisherOptions, 'maxBufferedEvents'> {
+ /**
+ * Abort signal, automatically unsubscribes on abort
+ */
+ signal?: AbortSignal | undefined | null;
+ }
+ declare abstract class Publisher<T extends Record<string, object>> {
+ private readonly maxBufferedEvents;
+ constructor(options?: PublisherOptions);
+ /**
+ * Publish an event to subscribers
+ */
+ abstract publish<K extends keyof T & string>(event: K, payload: T[K]): Promise<void>;
+ /**
+ * Subscribes to a specific event using a callback function.
+ * Returns an unsubscribe function to remove the listener.
+ *
+ * @remarks
+ * This method should be protected to avoid conflicts with the `subscribe` method
+ */
+ protected abstract subscribeListener<K extends keyof T & string>(event: K, listener: (payload: T[K]) => void, options?: PublisherSubscribeListenerOptions): Promise<() => Promise<void>>;
+ /**
+ * Subscribes to a specific event using a callback function.
+ * Returns an unsubscribe function to remove the listener.
+ *
+ * @example
+ * ```ts
+ * const unsubscribe = publisher.subscribe('event', (payload) => {
+ * console.log(payload)
+ * }, {
+ * lastEventId,
+ * onError: (error) => {
+ * // handle error (consider unsubscribe if error can't be recovered)
+ * }
+ * })
+ *
+ * // Later
+ * unsubscribe()
+ * ```
+ */
+ subscribe<K extends keyof T & string>(event: K, listener: (payload: T[K]) => void, options?: PublisherSubscribeListenerOptions): Promise<() => Promise<void>>;
+ /**
+ * Subscribes to a specific event using an async iterator.
+ * Useful for `for await...of` loops with optional buffering and abort support.
+ *
+ * @example
+ * ```ts
+ * for await (const payload of publisher.subscribe('event', { signal, lastEventId })) {
+ * console.log(payload)
+ * }
+ * ```
+ */
+ subscribe<K extends keyof T & string>(event: K, options?: PublisherSubscribeIteratorOptions): AsyncIteratorClass<T[K], void, void>;
+ }
+
+ export { Publisher };
+ export type { PublisherOptions, PublisherSubscribeIteratorOptions, PublisherSubscribeListenerOptions };
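The iterator overload declared above can be exercised as follows; this sketch uses `MemoryPublisher` as a stand-in concrete adapter and a hypothetical `ping` event.

```ts
import { MemoryPublisher } from '@orpc/experimental-publisher/memory'

const publisher = new MemoryPublisher<{ ping: { at: number } }>()
const controller = new AbortController()

// Iterator-style subscription; aborting the signal ends the loop
void (async () => {
  try {
    for await (const payload of publisher.subscribe('ping', { signal: controller.signal })) {
      console.log('ping at', payload.at)
    }
  } catch {
    // loop exits with the abort reason once controller.abort() is called
  }
})()

await publisher.publish('ping', { at: Date.now() })

// Stop listening
controller.abort()
```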
package/dist/index.d.ts ADDED
@@ -0,0 +1,85 @@
+ import { ThrowableError, AsyncIteratorClass } from '@orpc/shared';
+
+ interface PublisherOptions {
+ /**
+ * Maximum number of events to buffer for async iterator subscribers.
+ *
+ * If the buffer exceeds this limit, the oldest event is dropped.
+ * This prevents unbounded memory growth if consumers process events slowly.
+ *
+ * Set to:
+ * - `0`: Disable buffering. Events must be consumed before the next one arrives.
+ * - `1`: Only keep the latest event. Useful for real-time updates where only the most recent value matters.
+ * - `Infinity`: Keep all events. Ensures no data loss, but may lead to high memory usage.
+ *
+ * @default 100
+ */
+ maxBufferedEvents?: number;
+ }
+ interface PublisherSubscribeListenerOptions {
+ /**
+ * Resume from a specific event ID
+ */
+ lastEventId?: string | undefined;
+ /**
+ * Triggered when an error occurs
+ */
+ onError?: (error: ThrowableError) => void;
+ }
+ interface PublisherSubscribeIteratorOptions extends Pick<PublisherSubscribeListenerOptions, 'lastEventId'>, Pick<PublisherOptions, 'maxBufferedEvents'> {
+ /**
+ * Abort signal, automatically unsubscribes on abort
+ */
+ signal?: AbortSignal | undefined | null;
+ }
+ declare abstract class Publisher<T extends Record<string, object>> {
+ private readonly maxBufferedEvents;
+ constructor(options?: PublisherOptions);
+ /**
+ * Publish an event to subscribers
+ */
+ abstract publish<K extends keyof T & string>(event: K, payload: T[K]): Promise<void>;
+ /**
+ * Subscribes to a specific event using a callback function.
+ * Returns an unsubscribe function to remove the listener.
+ *
+ * @remarks
+ * This method should be protected to avoid conflicts with the `subscribe` method
+ */
+ protected abstract subscribeListener<K extends keyof T & string>(event: K, listener: (payload: T[K]) => void, options?: PublisherSubscribeListenerOptions): Promise<() => Promise<void>>;
+ /**
+ * Subscribes to a specific event using a callback function.
+ * Returns an unsubscribe function to remove the listener.
+ *
+ * @example
+ * ```ts
+ * const unsubscribe = publisher.subscribe('event', (payload) => {
+ * console.log(payload)
+ * }, {
+ * lastEventId,
+ * onError: (error) => {
+ * // handle error (consider unsubscribe if error can't be recovered)
+ * }
+ * })
+ *
+ * // Later
+ * unsubscribe()
+ * ```
+ */
+ subscribe<K extends keyof T & string>(event: K, listener: (payload: T[K]) => void, options?: PublisherSubscribeListenerOptions): Promise<() => Promise<void>>;
+ /**
+ * Subscribes to a specific event using an async iterator.
+ * Useful for `for await...of` loops with optional buffering and abort support.
+ *
+ * @example
+ * ```ts
+ * for await (const payload of publisher.subscribe('event', { signal, lastEventId })) {
+ * console.log(payload)
+ * }
+ * ```
+ */
+ subscribe<K extends keyof T & string>(event: K, options?: PublisherSubscribeIteratorOptions): AsyncIteratorClass<T[K], void, void>;
+ }
+
+ export { Publisher };
+ export type { PublisherOptions, PublisherSubscribeIteratorOptions, PublisherSubscribeListenerOptions };
package/dist/index.mjs ADDED
@@ -0,0 +1,2 @@
+ export { P as Publisher } from './shared/experimental-publisher.BtlOkhPO.mjs';
+ import '@orpc/shared';
package/dist/shared/experimental-publisher.BtlOkhPO.mjs ADDED
@@ -0,0 +1,72 @@
+ import { AsyncIteratorClass } from '@orpc/shared';
+
+ class Publisher {
+ maxBufferedEvents;
+ constructor(options = {}) {
+ this.maxBufferedEvents = options.maxBufferedEvents ?? 100;
+ }
+ subscribe(event, listenerOrOptions, listenerOptions) {
+ if (typeof listenerOrOptions === "function") {
+ return this.subscribeListener(event, listenerOrOptions, listenerOptions);
+ }
+ const signal = listenerOrOptions?.signal;
+ const maxBufferedEvents = listenerOrOptions?.maxBufferedEvents ?? this.maxBufferedEvents;
+ signal?.throwIfAborted();
+ const bufferedEvents = [];
+ const pullResolvers = [];
+ let subscriptionError;
+ const unsubscribePromise = this.subscribe(event, (payload) => {
+ const resolver = pullResolvers.shift();
+ if (resolver) {
+ resolver[0]({ done: false, value: payload });
+ } else {
+ bufferedEvents.push(payload);
+ if (bufferedEvents.length > maxBufferedEvents) {
+ bufferedEvents.shift();
+ }
+ }
+ }, {
+ lastEventId: listenerOrOptions?.lastEventId,
+ onError: (error) => {
+ subscriptionError = { error };
+ pullResolvers.forEach((resolver) => resolver[1](error));
+ signal?.removeEventListener("abort", abortListener);
+ pullResolvers.length = 0;
+ bufferedEvents.length = 0;
+ unsubscribePromise.then((unsubscribe) => unsubscribe()).catch(() => {
+ });
+ }
+ });
+ function abortListener(event2) {
+ pullResolvers.forEach((resolver) => resolver[1](event2.target.reason));
+ pullResolvers.length = 0;
+ bufferedEvents.length = 0;
+ unsubscribePromise.then((unsubscribe) => unsubscribe()).catch(() => {
+ });
+ }
+ signal?.addEventListener("abort", abortListener, { once: true });
+ return new AsyncIteratorClass(async () => {
+ if (subscriptionError) {
+ throw subscriptionError.error;
+ }
+ if (signal?.aborted) {
+ throw signal.reason;
+ }
+ await unsubscribePromise;
+ if (bufferedEvents.length > 0) {
+ return { done: false, value: bufferedEvents.shift() };
+ }
+ return new Promise((resolve, reject) => {
+ pullResolvers.push([resolve, reject]);
+ });
+ }, async () => {
+ pullResolvers.forEach((resolver) => resolver[0]({ done: true, value: void 0 }));
+ signal?.removeEventListener("abort", abortListener);
+ pullResolvers.length = 0;
+ bufferedEvents.length = 0;
+ await unsubscribePromise.then((unsubscribe) => unsubscribe());
+ });
+ }
+ }
+
+ export { Publisher as P };
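The buffering logic above (`bufferedEvents.shift()` once the limit is exceeded) is what `maxBufferedEvents` controls. A sketch of a slow consumer that keeps only the most recent event, again using `MemoryPublisher` as an illustrative concrete adapter and a made-up `price` event.

```ts
import { MemoryPublisher } from '@orpc/experimental-publisher/memory'

const publisher = new MemoryPublisher<{ price: { value: number } }>()

// Only the latest unconsumed event is kept; older ones are dropped
const iterator = publisher.subscribe('price', { maxBufferedEvents: 1 })

await publisher.publish('price', { value: 1 })
await publisher.publish('price', { value: 2 })
await publisher.publish('price', { value: 3 })

const { value } = await iterator.next()
console.log(value) // { value: 3 }: events 1 and 2 were dropped by the buffer limit
```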
package/package.json ADDED
@@ -0,0 +1,67 @@
+ {
+ "name": "@orpc/experimental-publisher",
+ "type": "module",
+ "version": "0.0.0-next.0294b1b",
+ "license": "MIT",
+ "homepage": "https://orpc.dev",
+ "repository": {
+ "type": "git",
+ "url": "git+https://github.com/unnoq/orpc.git",
+ "directory": "packages/publisher"
+ },
+ "keywords": [
+ "unnoq",
+ "orpc"
+ ],
+ "exports": {
+ ".": {
+ "types": "./dist/index.d.mts",
+ "import": "./dist/index.mjs",
+ "default": "./dist/index.mjs"
+ },
+ "./memory": {
+ "types": "./dist/adapters/memory.d.mts",
+ "import": "./dist/adapters/memory.mjs",
+ "default": "./dist/adapters/memory.mjs"
+ },
+ "./ioredis": {
+ "types": "./dist/adapters/ioredis.d.mts",
+ "import": "./dist/adapters/ioredis.mjs",
+ "default": "./dist/adapters/ioredis.mjs"
+ },
+ "./upstash-redis": {
+ "types": "./dist/adapters/upstash-redis.d.mts",
+ "import": "./dist/adapters/upstash-redis.mjs",
+ "default": "./dist/adapters/upstash-redis.mjs"
+ }
+ },
+ "files": [
+ "dist"
+ ],
+ "peerDependencies": {
+ "@upstash/redis": ">=1.35.6",
+ "ioredis": ">=5.8.1"
+ },
+ "peerDependenciesMeta": {
+ "@upstash/redis": {
+ "optional": true
+ },
+ "ioredis": {
+ "optional": true
+ }
+ },
+ "dependencies": {
+ "@orpc/client": "0.0.0-next.0294b1b",
+ "@orpc/shared": "0.0.0-next.0294b1b",
+ "@orpc/standard-server": "0.0.0-next.0294b1b"
+ },
+ "devDependencies": {
+ "@upstash/redis": "^1.35.6",
+ "ioredis": "^5.8.2"
+ },
+ "scripts": {
+ "build": "unbuild",
+ "build:watch": "pnpm run build --watch",
+ "type:check": "tsc -b"
+ }
+ }