@drarzter/kafka-client 0.5.2 → 0.5.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -6,10 +6,53 @@
 
  Type-safe Kafka client for Node.js. Framework-agnostic core with a first-class NestJS adapter. Built on top of [`@confluentinc/kafka-javascript`](https://github.com/confluentinc/confluent-kafka-javascript) (librdkafka).
 
+ ## Table of contents
+
+ - [What is this?](#what-is-this)
+ - [Why?](#why)
+ - [Installation](#installation)
+ - [Standalone usage](#standalone-usage-no-nestjs)
+ - [Quick start (NestJS)](#quick-start-nestjs)
+ - [Usage](#usage)
+   - [1. Define your topic map](#1-define-your-topic-map)
+   - [2. Register the module](#2-register-the-module)
+   - [3. Inject and use](#3-inject-and-use)
+ - [Consuming messages](#consuming-messages)
+   - [Declarative: @SubscribeTo()](#declarative-subscribeto)
+   - [Imperative: startConsumer()](#imperative-startconsumer)
+ - [Multiple consumer groups](#multiple-consumer-groups)
+ - [Partition key](#partition-key)
+ - [Message headers](#message-headers)
+ - [Batch sending](#batch-sending)
+ - [Batch consuming](#batch-consuming)
+ - [Transactions](#transactions)
+ - [Consumer interceptors](#consumer-interceptors)
+ - [Instrumentation](#instrumentation)
+ - [Options reference](#options-reference)
+ - [Error classes](#error-classes)
+ - [Retry topic chain](#retry-topic-chain)
+ - [stopConsumer](#stopconsumer)
+ - [onMessageLost](#onmessagelost)
+ - [Schema validation](#schema-validation)
+ - [Health check](#health-check)
+ - [Testing](#testing)
+ - [Project structure](#project-structure)
+
  ## What is this?
 
  An opinionated, type-safe abstraction over `@confluentinc/kafka-javascript` (librdkafka). Works standalone (Express, Fastify, raw Node) or as a NestJS DynamicModule. Not a full-featured framework — just a clean, typed layer for producing and consuming Kafka messages.
 
+ **This library exists so you don't have to think about:**
+
+ - rebalance edge cases
+ - retry loops and backoff scheduling
+ - dead letter queue wiring
+ - transaction coordinator warmup
+ - graceful shutdown and offset commit pitfalls
+ - silent message loss
+
+ Safe by default. Configurable when you need it. Escape hatches for when you know what you're doing.
+
  ## Why?
 
  - **Typed topics** — you define a map of topic -> message shape, and the compiler won't let you send the wrong data to the wrong topic
@@ -666,6 +709,7 @@ Options for `sendMessage()` — the third argument:
  | `retry.backoffMs` | `1000` | Base delay for exponential backoff in ms |
  | `retry.maxBackoffMs` | `30000` | Maximum delay cap for exponential backoff in ms |
  | `dlq` | `false` | Send to `{topic}.dlq` after all retries exhausted — message carries `x-dlq-*` metadata headers |
+ | `retryTopics` | `false` | Route failed messages through `{topic}.retry` instead of sleeping in-process (see [Retry topic chain](#retry-topic-chain)) |
  | `interceptors` | `[]` | Array of before/after/onError hooks |
  | `batch` | `false` | (decorator only) Use `startBatchConsumer` instead of `startConsumer` |
  | `subscribeRetry.retries` | `5` | Max attempts for `consumer.subscribe()` when topic doesn't exist yet |
@@ -774,6 +818,49 @@ const interceptor: ConsumerInterceptor<MyTopics> = {
  };
  ```
 
+ ## Retry topic chain
+
+ By default, retry is handled in-process: the consumer sleeps between attempts while holding the partition. With `retryTopics: true`, failed messages are routed to a `<topic>.retry` Kafka topic instead. A companion consumer auto-starts on `<topic>.retry` (group `<groupId>-retry`), waits until the scheduled retry time, then calls the same handler.
+
+ Benefits over in-process retry:
+
+ - **Durable** — retry messages survive a consumer restart
+ - **Non-blocking** — the original consumer is free immediately; the retry consumer pauses only the specific partition being delayed, so other partitions continue processing
+
+ ```typescript
+ await kafka.startConsumer(['orders.created'], handler, {
+   retry: { maxRetries: 3, backoffMs: 1000, maxBackoffMs: 30_000 },
+   dlq: true,
+   retryTopics: true, // ← opt in
+ });
+ ```
+
+ Message flow with `maxRetries: 2`:
+
+ ```text
+ orders.created → handler fails → orders.created.retry (attempt 1, delay ~1 s)
+                → handler fails → orders.created.retry (attempt 2, delay ~2 s)
+                → handler fails → orders.created.dlq
+ ```
+
+ The retry topic messages carry scheduling headers (`x-retry-attempt`, `x-retry-after`, `x-retry-original-topic`, `x-retry-max-retries`) that the companion consumer reads automatically — no manual configuration needed.
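+
+ For illustration, a message parked in `orders.created.retry` carries headers along these lines (the header names are the ones listed above; the concrete values are invented for the example):
+
+ ```text
+ x-retry-attempt: 1                        # which retry hop this message is on
+ x-retry-after: 1718000001000              # epoch ms; the retry consumer waits until this time
+ x-retry-max-retries: 3                    # total retries allowed before DLQ / drop
+ x-retry-original-topic: orders.created    # handler and DLQ routing use the original topic
+ ```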
+
+ > **Note:** `retryTopics` requires `retry` to be set — an error is thrown at startup if `retry` is missing. Currently only applies to `startConsumer`; batch consumers (`startBatchConsumer`) use in-process retry regardless.
+
+ ## stopConsumer
+
+ Stop all consumers or a specific group:
+
+ ```typescript
+ // Stop a specific consumer group
+ await kafka.stopConsumer('my-group');
+
+ // Stop all consumers
+ await kafka.stopConsumer();
+ ```
+
+ `stopConsumer(groupId)` disconnects and removes only that group's consumer, leaving other groups running. Useful when you want to pause processing for a specific topic without restarting the whole client.
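+
+ For example, one group could be paused during a maintenance window and brought back afterwards (a sketch; the topic, group name, and `ordersHandler` are placeholders, and resuming by calling `startConsumer` again with the same `groupId` is an assumption, not documented behaviour):
+
+ ```typescript
+ // Pause only the orders workers; every other group keeps consuming.
+ await kafka.stopConsumer('orders-workers');
+
+ // ...later, start the same group again to resume processing.
+ await kafka.startConsumer(['orders.created'], ordersHandler, {
+   groupId: 'orders-workers',
+ });
+ ```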
+
  ## onMessageLost
 
  By default, if a consumer handler throws and `dlq` is not enabled, the message is logged and dropped. Use `onMessageLost` to catch these silent losses:
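+
+ A minimal sketch of wiring it up (the client id, group, broker address, `MyTopics` map, and the `metrics` helper are placeholders; the callback fields match what the consumer pipeline passes):
+
+ ```typescript
+ const kafka = new KafkaClient<MyTopics>('orders-service', 'orders-group', ['localhost:9092'], {
+   onMessageLost: async ({ topic, error, attempt, headers }) => {
+     // e.g. count and alert on the loss instead of letting it pass silently
+     metrics.increment('kafka.message_lost', { topic });
+     console.error(`Message lost on ${topic} after ${attempt} attempt(s)`, error, headers);
+   },
+ });
+ ```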
@@ -164,9 +164,43 @@ async function sendToDlq(topic2, rawMessage, deps, meta) {
      );
    }
  }
+ var RETRY_HEADER_ATTEMPT = "x-retry-attempt";
+ var RETRY_HEADER_AFTER = "x-retry-after";
+ var RETRY_HEADER_MAX_RETRIES = "x-retry-max-retries";
+ var RETRY_HEADER_ORIGINAL_TOPIC = "x-retry-original-topic";
+ async function sendToRetryTopic(originalTopic, rawMessages, attempt, maxRetries, delayMs, originalHeaders, deps) {
+   const retryTopic = `${originalTopic}.retry`;
+   const {
+     [RETRY_HEADER_ATTEMPT]: _a,
+     [RETRY_HEADER_AFTER]: _b,
+     [RETRY_HEADER_MAX_RETRIES]: _c,
+     [RETRY_HEADER_ORIGINAL_TOPIC]: _d,
+     ...userHeaders
+   } = originalHeaders;
+   const headers = {
+     ...userHeaders,
+     [RETRY_HEADER_ATTEMPT]: String(attempt),
+     [RETRY_HEADER_AFTER]: String(Date.now() + delayMs),
+     [RETRY_HEADER_MAX_RETRIES]: String(maxRetries),
+     [RETRY_HEADER_ORIGINAL_TOPIC]: originalTopic
+   };
+   try {
+     for (const raw of rawMessages) {
+       await deps.producer.send({ topic: retryTopic, messages: [{ value: raw, headers }] });
+     }
+     deps.logger.warn(
+       `Message queued in retry topic ${retryTopic} (attempt ${attempt}/${maxRetries})`
+     );
+   } catch (error) {
+     deps.logger.error(
+       `Failed to send message to retry topic ${retryTopic}:`,
+       toError(error).stack
+     );
+   }
+ }
  async function executeWithRetry(fn, ctx, deps) {
-   const { envelope, rawMessages, interceptors, dlq, retry, isBatch } = ctx;
-   const maxAttempts = retry ? retry.maxRetries + 1 : 1;
+   const { envelope, rawMessages, interceptors, dlq, retry, isBatch, retryTopics } = ctx;
+   const maxAttempts = retryTopics ? 1 : retry ? retry.maxRetries + 1 : 1; // retryTopics: the main consumer tries exactly once; the retry consumer takes over
    const backoffMs = retry?.backoffMs ?? 1e3;
    const maxBackoffMs = retry?.maxBackoffMs ?? 3e4;
    const envelopes = Array.isArray(envelope) ? envelope : [envelope];
@@ -225,7 +259,19 @@ async function executeWithRetry(fn, ctx, deps) {
          `Error processing ${isBatch ? "batch" : "message"} from topic ${topic2} (attempt ${attempt}/${maxAttempts}):`,
          err.stack
        );
-       if (isLastAttempt) {
+       if (retryTopics && retry) { // route the failure to <topic>.retry; the retry consumer handles backoff and further attempts
+         const cap = Math.min(backoffMs, maxBackoffMs);
+         const delay = Math.floor(Math.random() * cap);
+         await sendToRetryTopic(
+           topic2,
+           rawMessages,
+           1, // always attempt 1 here; the main consumer never retries in-process
+           retry.maxRetries,
+           delay,
+           envelopes[0]?.headers ?? {},
+           deps
+         );
+       } else if (isLastAttempt) {
          if (dlq) {
            const dlqMeta = {
              error: err,
@@ -403,7 +449,12 @@ var KafkaClient = class {
      this.logger.log("Producer disconnected");
    }
    async startConsumer(topics, handleMessage, options = {}) {
-     const { consumer, schemaMap, gid, dlq, interceptors, retry } = await this.setupConsumer(topics, "eachMessage", options);
+     if (options.retryTopics && !options.retry) {
+       throw new Error(
+         "retryTopics requires retry to be configured \u2014 set retry.maxRetries to enable the retry topic chain"
+       );
+     }
+     const { consumer, schemaMap, topicNames, gid, dlq, interceptors, retry } = await this.setupConsumer(topics, "eachMessage", options);
      const deps = { logger: this.logger, producer: this.producer, instrumentation: this.instrumentation, onMessageLost: this.onMessageLost };
      await consumer.run({
        eachMessage: async ({ topic: topic2, partition, message }) => {
@@ -437,12 +488,23 @@ var KafkaClient = class {
              { correlationId: envelope.correlationId, traceparent: envelope.traceparent },
              () => handleMessage(envelope)
            ),
-           { envelope, rawMessages: [raw], interceptors, dlq, retry },
+           { envelope, rawMessages: [raw], interceptors, dlq, retry, retryTopics: options.retryTopics },
            deps
          );
        }
      });
      this.runningConsumers.set(gid, "eachMessage");
+     if (options.retryTopics && retry) {
+       await this.startRetryTopicConsumers(
+         topicNames,
+         gid,
+         handleMessage,
+         retry,
+         dlq,
+         interceptors,
+         schemaMap
+       );
+     }
    }
    async startBatchConsumer(topics, handleBatch, options = {}) {
      const { consumer, schemaMap, gid, dlq, interceptors, retry } = await this.setupConsumer(topics, "eachBatch", options);
@@ -507,15 +569,28 @@ var KafkaClient = class {
      this.runningConsumers.set(gid, "eachBatch");
    }
    // ── Consumer lifecycle ───────────────────────────────────────────
-   async stopConsumer() {
-     const tasks = [];
-     for (const consumer of this.consumers.values()) {
-       tasks.push(consumer.disconnect());
+   async stopConsumer(groupId) {
+     if (groupId !== void 0) {
+       const consumer = this.consumers.get(groupId);
+       if (!consumer) {
+         this.logger.warn(`stopConsumer: no active consumer for group "${groupId}"`);
+         return;
+       }
+       await consumer.disconnect().catch(() => {
+       });
+       this.consumers.delete(groupId);
+       this.runningConsumers.delete(groupId);
+       this.logger.log(`Consumer disconnected: group "${groupId}"`);
+     } else {
+       const tasks = Array.from(this.consumers.values()).map(
+         (c) => c.disconnect().catch(() => {
+         })
+       );
+       await Promise.allSettled(tasks);
+       this.consumers.clear();
+       this.runningConsumers.clear();
+       this.logger.log("All consumers disconnected");
      }
-     await Promise.allSettled(tasks);
-     this.consumers.clear();
-     this.runningConsumers.clear();
-     this.logger.log("All consumers disconnected");
    }
    /** Check broker connectivity and return status, clientId, and available topics. */
    async checkStatus() {
@@ -548,7 +623,164 @@ var KafkaClient = class {
      this.runningConsumers.clear();
      this.logger.log("All connections closed");
    }
+   // ── Retry topic chain ────────────────────────────────────────────
+   /**
+    * Auto-start companion consumers on `<topic>.retry` for each original topic.
+    * Called by `startConsumer` when `retryTopics: true`.
+    *
+    * Flow per message:
+    *   1. Sleep until `x-retry-after` (scheduled by the main consumer or previous retry hop)
+    *   2. Call the original handler
+    *   3. On failure: if retries remain → re-send to `<originalTopic>.retry` with incremented attempt
+    *      if exhausted → DLQ or onMessageLost
+    */
+   async startRetryTopicConsumers(originalTopics, originalGroupId, handleMessage, retry, dlq, interceptors, schemaMap) {
+     const retryTopicNames = originalTopics.map((t) => `${t}.retry`);
+     const retryGroupId = `${originalGroupId}-retry`;
+     const backoffMs = retry.backoffMs ?? 1e3;
+     const maxBackoffMs = retry.maxBackoffMs ?? 3e4;
+     const deps = {
+       logger: this.logger,
+       producer: this.producer,
+       instrumentation: this.instrumentation,
+       onMessageLost: this.onMessageLost
+     };
+     for (const rt of retryTopicNames) {
+       await this.ensureTopic(rt);
+     }
+     const consumer = this.getOrCreateConsumer(retryGroupId, false, true);
+     await consumer.connect();
+     await subscribeWithRetry(consumer, retryTopicNames, this.logger);
+     await consumer.run({
+       eachMessage: async ({ topic: retryTopic, partition, message }) => {
+         if (!message.value) return;
+         const raw = message.value.toString();
+         const parsed = parseJsonMessage(raw, retryTopic, this.logger);
+         if (parsed === null) return;
+         const headers = decodeHeaders(message.headers);
+         const originalTopic = headers[RETRY_HEADER_ORIGINAL_TOPIC] ?? retryTopic.replace(/\.retry$/, "");
+         const currentAttempt = parseInt(
+           headers[RETRY_HEADER_ATTEMPT] ?? "1",
+           10
+         );
+         const maxRetries = parseInt(
+           headers[RETRY_HEADER_MAX_RETRIES] ?? String(retry.maxRetries),
+           10
+         );
+         const retryAfter = parseInt(
+           headers[RETRY_HEADER_AFTER] ?? "0",
+           10
+         );
+         const remaining = retryAfter - Date.now();
+         if (remaining > 0) {
+           // Pause only this partition for the scheduled delay; other partitions keep processing.
+           consumer.pause([{ topic: retryTopic, partitions: [partition] }]);
+           await sleep(remaining);
+           consumer.resume([{ topic: retryTopic, partitions: [partition] }]);
+         }
+         const validated = await validateWithSchema(
+           parsed,
+           raw,
+           originalTopic,
+           schemaMap,
+           interceptors,
+           dlq,
+           { ...deps, originalHeaders: headers }
+         );
+         if (validated === null) return;
+         const envelope = extractEnvelope(
+           validated,
+           headers,
+           originalTopic,
+           partition,
+           message.offset
+         );
+         try {
+           const cleanups = [];
+           for (const inst of this.instrumentation) {
+             const c = inst.beforeConsume?.(envelope);
+             if (typeof c === "function") cleanups.push(c);
+           }
+           for (const interceptor of interceptors) await interceptor.before?.(envelope);
+           await runWithEnvelopeContext(
+             { correlationId: envelope.correlationId, traceparent: envelope.traceparent },
+             () => handleMessage(envelope)
+           );
+           for (const interceptor of interceptors) await interceptor.after?.(envelope);
+           for (const cleanup of cleanups) cleanup();
+         } catch (error) {
+           const err = toError(error);
+           const nextAttempt = currentAttempt + 1;
+           const exhausted = currentAttempt >= maxRetries;
+           for (const inst of this.instrumentation) inst.onConsumeError?.(envelope, err);
+           const reportedError = exhausted && maxRetries > 1 ? new KafkaRetryExhaustedError(originalTopic, [envelope.payload], maxRetries, { cause: err }) : err;
+           for (const interceptor of interceptors) {
+             await interceptor.onError?.(envelope, reportedError);
+           }
+           this.logger.error(
+             `Retry consumer error for ${originalTopic} (attempt ${currentAttempt}/${maxRetries}):`,
+             err.stack
+           );
+           if (!exhausted) {
+             // currentAttempt is the hop that just failed (1-based); the next delay is capped at backoffMs * 2^currentAttempt.
+             const cap = Math.min(backoffMs * 2 ** currentAttempt, maxBackoffMs);
+             const delay = Math.floor(Math.random() * cap);
+             await sendToRetryTopic(
+               originalTopic,
+               [raw],
+               nextAttempt,
+               maxRetries,
+               delay,
+               headers,
+               deps
+             );
+           } else if (dlq) {
+             await sendToDlq(originalTopic, raw, deps, {
+               error: err,
+               // +1 to account for the main consumer's initial attempt before
+               // routing to the retry topic, making this consistent with the
+               // in-process retry path where attempt counts all tries.
+               attempt: currentAttempt + 1,
+               originalHeaders: headers
+             });
+           } else {
+             await deps.onMessageLost?.({
+               topic: originalTopic,
+               error: err,
+               attempt: currentAttempt,
+               headers
+             });
+           }
+         }
+       }
+     });
+     this.runningConsumers.set(retryGroupId, "eachMessage");
+     // Block until this consumer owns at least one retry partition, so retry messages produced immediately afterwards are not skipped.
+     await this.waitForPartitionAssignment(consumer, retryTopicNames);
+     this.logger.log(
+       `Retry topic consumers started for: ${originalTopics.join(", ")} (group: ${retryGroupId})`
+     );
+   }
    // ── Private helpers ──────────────────────────────────────────────
+   /**
+    * Poll `consumer.assignment()` until the consumer has received at least one
+    * partition for the given topics, then return. Logs a warning and returns
+    * (rather than throwing) on timeout so that a slow broker does not break
+    * the caller — in the worst case a message sent immediately after would be
+    * missed, which is the same behaviour as before this guard was added.
+    */
+   async waitForPartitionAssignment(consumer, topics, timeoutMs = 1e4) {
+     const topicSet = new Set(topics);
+     const deadline = Date.now() + timeoutMs;
+     while (Date.now() < deadline) {
+       try {
+         const assigned = consumer.assignment();
+         if (assigned.some((a) => topicSet.has(a.topic))) return;
+       } catch {
+       }
+       await sleep(200);
+     }
+     this.logger.warn(
+       `Retry consumer did not receive partition assignments for [${topics.join(", ")}] within ${timeoutMs}ms`
+     );
+   }
    getOrCreateConsumer(groupId, fromBeginning, autoCommit) {
      if (!this.consumers.has(groupId)) {
        this.consumers.set(
@@ -712,4 +944,4 @@ export {
    KafkaClient,
    topic
  };
- //# sourceMappingURL=chunk-VGUALBZH.mjs.map
+ //# sourceMappingURL=chunk-3QXTW66R.mjs.map
@@ -0,0 +1 @@
+ {"version":3,"sources":["../src/client/kafka.client.ts","../src/client/envelope.ts","../src/client/errors.ts","../src/client/consumer-pipeline.ts","../src/client/subscribe-retry.ts","../src/client/topic.ts"],"sourcesContent":["import { KafkaJS } from \"@confluentinc/kafka-javascript\";\ntype Kafka = KafkaJS.Kafka;\ntype Producer = KafkaJS.Producer;\ntype Consumer = KafkaJS.Consumer;\ntype Admin = KafkaJS.Admin;\nconst { Kafka: KafkaClass, logLevel: KafkaLogLevel } = KafkaJS;\nimport { TopicDescriptor, SchemaLike } from \"./topic\";\nimport {\n buildEnvelopeHeaders,\n decodeHeaders,\n extractEnvelope,\n runWithEnvelopeContext,\n} from \"./envelope\";\nimport type { EventEnvelope } from \"./envelope\";\nimport {\n toError,\n sleep,\n parseJsonMessage,\n validateWithSchema,\n executeWithRetry,\n sendToDlq,\n sendToRetryTopic,\n RETRY_HEADER_ATTEMPT,\n RETRY_HEADER_AFTER,\n RETRY_HEADER_MAX_RETRIES,\n RETRY_HEADER_ORIGINAL_TOPIC,\n} from \"./consumer-pipeline\";\nimport { KafkaRetryExhaustedError } from \"./errors\";\nimport { subscribeWithRetry } from \"./subscribe-retry\";\nimport type {\n ClientId,\n GroupId,\n SendOptions,\n MessageHeaders,\n BatchMessageItem,\n ConsumerOptions,\n TransactionContext,\n TopicMapConstraint,\n IKafkaClient,\n KafkaClientOptions,\n KafkaInstrumentation,\n KafkaLogger,\n BatchMeta,\n} from \"./types\";\n\n// Re-export all types so existing `import { ... } from './kafka.client'` keeps working\nexport * from \"./types\";\n\n/**\n * Type-safe Kafka client.\n * Wraps @confluentinc/kafka-javascript (librdkafka) with JSON serialization,\n * retries, DLQ, transactions, and interceptors.\n *\n * @typeParam T - Topic-to-message type mapping for compile-time safety.\n */\nexport class KafkaClient<\n T extends TopicMapConstraint<T>,\n> implements IKafkaClient<T> {\n private readonly kafka: Kafka;\n private readonly producer: Producer;\n private txProducer: Producer | undefined;\n private readonly consumers = new Map<string, Consumer>();\n private readonly admin: Admin;\n private readonly logger: KafkaLogger;\n private readonly autoCreateTopicsEnabled: boolean;\n private readonly strictSchemasEnabled: boolean;\n private readonly numPartitions: number;\n private readonly ensuredTopics = new Set<string>();\n private readonly defaultGroupId: string;\n private readonly schemaRegistry = new Map<string, SchemaLike>();\n private readonly runningConsumers = new Map<string, \"eachMessage\" | \"eachBatch\">();\n private readonly instrumentation: KafkaInstrumentation[];\n private readonly onMessageLost: KafkaClientOptions['onMessageLost'];\n\n private isAdminConnected = false;\n public readonly clientId: ClientId;\n\n constructor(\n clientId: ClientId,\n groupId: GroupId,\n brokers: string[],\n options?: KafkaClientOptions,\n ) {\n this.clientId = clientId;\n this.defaultGroupId = groupId;\n this.logger = options?.logger ?? {\n log: (msg) => console.log(`[KafkaClient:${clientId}] ${msg}`),\n warn: (msg, ...args) => console.warn(`[KafkaClient:${clientId}] ${msg}`, ...args),\n error: (msg, ...args) => console.error(`[KafkaClient:${clientId}] ${msg}`, ...args),\n };\n this.autoCreateTopicsEnabled = options?.autoCreateTopics ?? false;\n this.strictSchemasEnabled = options?.strictSchemas ?? true;\n this.numPartitions = options?.numPartitions ?? 1;\n this.instrumentation = options?.instrumentation ?? 
[];\n this.onMessageLost = options?.onMessageLost;\n\n this.kafka = new KafkaClass({\n kafkaJS: {\n clientId: this.clientId,\n brokers,\n logLevel: KafkaLogLevel.ERROR,\n },\n });\n this.producer = this.kafka.producer({\n kafkaJS: {\n acks: -1,\n },\n });\n this.admin = this.kafka.admin();\n }\n\n // ── Send ─────────────────────────────────────────────────────────\n\n /** Send a single typed message. Accepts a topic key or a TopicDescriptor. */\n public async sendMessage<\n D extends TopicDescriptor<string & keyof T, T[string & keyof T]>,\n >(descriptor: D, message: D[\"__type\"], options?: SendOptions): Promise<void>;\n public async sendMessage<K extends keyof T>(\n topic: K,\n message: T[K],\n options?: SendOptions,\n ): Promise<void>;\n public async sendMessage(\n topicOrDesc: any,\n message: any,\n options: SendOptions = {},\n ): Promise<void> {\n const payload = await this.buildSendPayload(topicOrDesc, [\n {\n value: message,\n key: options.key,\n headers: options.headers,\n correlationId: options.correlationId,\n schemaVersion: options.schemaVersion,\n eventId: options.eventId,\n },\n ]);\n await this.ensureTopic(payload.topic);\n await this.producer.send(payload);\n for (const inst of this.instrumentation) {\n inst.afterSend?.(payload.topic);\n }\n }\n\n /** Send multiple typed messages in one call. Accepts a topic key or a TopicDescriptor. */\n public async sendBatch<\n D extends TopicDescriptor<string & keyof T, T[string & keyof T]>,\n >(\n descriptor: D,\n messages: Array<BatchMessageItem<D[\"__type\"]>>,\n ): Promise<void>;\n public async sendBatch<K extends keyof T>(\n topic: K,\n messages: Array<BatchMessageItem<T[K]>>,\n ): Promise<void>;\n public async sendBatch(\n topicOrDesc: any,\n messages: Array<BatchMessageItem<any>>,\n ): Promise<void> {\n const payload = await this.buildSendPayload(topicOrDesc, messages);\n await this.ensureTopic(payload.topic);\n await this.producer.send(payload);\n for (const inst of this.instrumentation) {\n inst.afterSend?.(payload.topic);\n }\n }\n\n /** Execute multiple sends atomically. Commits on success, aborts on error. */\n public async transaction(\n fn: (ctx: TransactionContext<T>) => Promise<void>,\n ): Promise<void> {\n if (!this.txProducer) {\n this.txProducer = this.kafka.producer({\n kafkaJS: {\n acks: -1,\n idempotent: true,\n transactionalId: `${this.clientId}-tx`,\n maxInFlightRequests: 1,\n },\n });\n await this.txProducer.connect();\n }\n const tx = await this.txProducer.transaction();\n try {\n const ctx: TransactionContext<T> = {\n send: async (\n topicOrDesc: any,\n message: any,\n options: SendOptions = {},\n ) => {\n const payload = await this.buildSendPayload(topicOrDesc, [\n {\n value: message,\n key: options.key,\n headers: options.headers,\n correlationId: options.correlationId,\n schemaVersion: options.schemaVersion,\n eventId: options.eventId,\n },\n ]);\n await this.ensureTopic(payload.topic);\n await tx.send(payload);\n },\n sendBatch: async (topicOrDesc: any, messages: BatchMessageItem<any>[]) => {\n const payload = await this.buildSendPayload(topicOrDesc, messages);\n await this.ensureTopic(payload.topic);\n await tx.send(payload);\n },\n };\n await fn(ctx);\n await tx.commit();\n } catch (error) {\n try {\n await tx.abort();\n } catch (abortError) {\n this.logger.error(\n \"Failed to abort transaction:\",\n toError(abortError).message,\n );\n }\n throw error;\n }\n }\n\n // ── Producer lifecycle ───────────────────────────────────────────\n\n /** Connect the idempotent producer. 
Called automatically by `KafkaModule.register()`. */\n public async connectProducer(): Promise<void> {\n await this.producer.connect();\n this.logger.log(\"Producer connected\");\n }\n\n public async disconnectProducer(): Promise<void> {\n await this.producer.disconnect();\n this.logger.log(\"Producer disconnected\");\n }\n\n // ── Consumer: eachMessage ────────────────────────────────────────\n\n /** Subscribe to topics and start consuming messages with the given handler. */\n public async startConsumer<K extends Array<keyof T>>(\n topics: K,\n handleMessage: (envelope: EventEnvelope<T[K[number]]>) => Promise<void>,\n options?: ConsumerOptions<T>,\n ): Promise<void>;\n public async startConsumer<\n D extends TopicDescriptor<string & keyof T, T[string & keyof T]>,\n >(\n topics: D[],\n handleMessage: (envelope: EventEnvelope<D[\"__type\"]>) => Promise<void>,\n options?: ConsumerOptions<T>,\n ): Promise<void>;\n public async startConsumer(\n topics: any[],\n handleMessage: (envelope: EventEnvelope<any>) => Promise<void>,\n options: ConsumerOptions<T> = {},\n ): Promise<void> {\n if (options.retryTopics && !options.retry) {\n throw new Error(\n 'retryTopics requires retry to be configured — set retry.maxRetries to enable the retry topic chain',\n );\n }\n\n const { consumer, schemaMap, topicNames, gid, dlq, interceptors, retry } =\n await this.setupConsumer(topics, \"eachMessage\", options);\n\n const deps = { logger: this.logger, producer: this.producer, instrumentation: this.instrumentation, onMessageLost: this.onMessageLost };\n\n await consumer.run({\n eachMessage: async ({ topic, partition, message }) => {\n if (!message.value) {\n this.logger.warn(`Received empty message from topic ${topic}`);\n return;\n }\n\n const raw = message.value.toString();\n const parsed = parseJsonMessage(raw, topic, this.logger);\n if (parsed === null) return;\n\n const headers = decodeHeaders(message.headers);\n const validated = await validateWithSchema(\n parsed, raw, topic, schemaMap, interceptors, dlq,\n { ...deps, originalHeaders: headers },\n );\n if (validated === null) return;\n\n const envelope = extractEnvelope(\n validated, headers, topic, partition, message.offset,\n );\n\n await executeWithRetry(\n () =>\n runWithEnvelopeContext(\n { correlationId: envelope.correlationId, traceparent: envelope.traceparent },\n () => handleMessage(envelope),\n ),\n { envelope, rawMessages: [raw], interceptors, dlq, retry, retryTopics: options.retryTopics },\n deps,\n );\n },\n });\n\n this.runningConsumers.set(gid, \"eachMessage\");\n\n if (options.retryTopics && retry) {\n await this.startRetryTopicConsumers(\n topicNames, gid, handleMessage, retry, dlq, interceptors, schemaMap,\n );\n }\n }\n\n // ── Consumer: eachBatch ──────────────────────────────────────────\n\n /** Subscribe to topics and consume messages in batches. 
*/\n public async startBatchConsumer<K extends Array<keyof T>>(\n topics: K,\n handleBatch: (\n envelopes: EventEnvelope<T[K[number]]>[],\n meta: BatchMeta,\n ) => Promise<void>,\n options?: ConsumerOptions<T>,\n ): Promise<void>;\n public async startBatchConsumer<\n D extends TopicDescriptor<string & keyof T, T[string & keyof T]>,\n >(\n topics: D[],\n handleBatch: (\n envelopes: EventEnvelope<D[\"__type\"]>[],\n meta: BatchMeta,\n ) => Promise<void>,\n options?: ConsumerOptions<T>,\n ): Promise<void>;\n public async startBatchConsumer(\n topics: any[],\n handleBatch: (\n envelopes: EventEnvelope<any>[],\n meta: BatchMeta,\n ) => Promise<void>,\n options: ConsumerOptions<T> = {},\n ): Promise<void> {\n const { consumer, schemaMap, gid, dlq, interceptors, retry } =\n await this.setupConsumer(topics, \"eachBatch\", options);\n\n const deps = { logger: this.logger, producer: this.producer, instrumentation: this.instrumentation, onMessageLost: this.onMessageLost };\n\n await consumer.run({\n eachBatch: async ({\n batch,\n heartbeat,\n resolveOffset,\n commitOffsetsIfNecessary,\n }) => {\n const envelopes: EventEnvelope<any>[] = [];\n const rawMessages: string[] = [];\n\n for (const message of batch.messages) {\n if (!message.value) {\n this.logger.warn(\n `Received empty message from topic ${batch.topic}`,\n );\n continue;\n }\n\n const raw = message.value.toString();\n const parsed = parseJsonMessage(raw, batch.topic, this.logger);\n if (parsed === null) continue;\n\n const headers = decodeHeaders(message.headers);\n const validated = await validateWithSchema(\n parsed, raw, batch.topic, schemaMap, interceptors, dlq,\n { ...deps, originalHeaders: headers },\n );\n if (validated === null) continue;\n envelopes.push(\n extractEnvelope(validated, headers, batch.topic, batch.partition, message.offset),\n );\n rawMessages.push(raw);\n }\n\n if (envelopes.length === 0) return;\n\n const meta: BatchMeta = {\n partition: batch.partition,\n highWatermark: batch.highWatermark,\n heartbeat,\n resolveOffset,\n commitOffsetsIfNecessary,\n };\n\n await executeWithRetry(\n () => handleBatch(envelopes, meta),\n {\n envelope: envelopes,\n rawMessages: batch.messages\n .filter((m) => m.value)\n .map((m) => m.value!.toString()),\n interceptors,\n dlq,\n retry,\n isBatch: true,\n },\n deps,\n );\n },\n });\n\n this.runningConsumers.set(gid, \"eachBatch\");\n }\n\n // ── Consumer lifecycle ───────────────────────────────────────────\n\n public async stopConsumer(groupId?: string): Promise<void> {\n if (groupId !== undefined) {\n const consumer = this.consumers.get(groupId);\n if (!consumer) {\n this.logger.warn(`stopConsumer: no active consumer for group \"${groupId}\"`);\n return;\n }\n await consumer.disconnect().catch(() => {});\n this.consumers.delete(groupId);\n this.runningConsumers.delete(groupId);\n this.logger.log(`Consumer disconnected: group \"${groupId}\"`);\n } else {\n const tasks = Array.from(this.consumers.values()).map((c) =>\n c.disconnect().catch(() => {}),\n );\n await Promise.allSettled(tasks);\n this.consumers.clear();\n this.runningConsumers.clear();\n this.logger.log(\"All consumers disconnected\");\n }\n }\n\n /** Check broker connectivity and return status, clientId, and available topics. 
*/\n public async checkStatus(): Promise<{ status: 'up'; clientId: string; topics: string[] }> {\n if (!this.isAdminConnected) {\n await this.admin.connect();\n this.isAdminConnected = true;\n }\n const topics = await this.admin.listTopics();\n return { status: 'up', clientId: this.clientId, topics };\n }\n\n public getClientId(): ClientId {\n return this.clientId;\n }\n\n /** Gracefully disconnect producer, all consumers, and admin. */\n public async disconnect(): Promise<void> {\n const tasks: Promise<void>[] = [this.producer.disconnect()];\n if (this.txProducer) {\n tasks.push(this.txProducer.disconnect());\n this.txProducer = undefined;\n }\n for (const consumer of this.consumers.values()) {\n tasks.push(consumer.disconnect());\n }\n if (this.isAdminConnected) {\n tasks.push(this.admin.disconnect());\n this.isAdminConnected = false;\n }\n await Promise.allSettled(tasks);\n this.consumers.clear();\n this.runningConsumers.clear();\n this.logger.log(\"All connections closed\");\n }\n\n // ── Retry topic chain ────────────────────────────────────────────\n\n /**\n * Auto-start companion consumers on `<topic>.retry` for each original topic.\n * Called by `startConsumer` when `retryTopics: true`.\n *\n * Flow per message:\n * 1. Sleep until `x-retry-after` (scheduled by the main consumer or previous retry hop)\n * 2. Call the original handler\n * 3. On failure: if retries remain → re-send to `<originalTopic>.retry` with incremented attempt\n * if exhausted → DLQ or onMessageLost\n */\n private async startRetryTopicConsumers(\n originalTopics: string[],\n originalGroupId: string,\n handleMessage: (envelope: EventEnvelope<any>) => Promise<void>,\n retry: { maxRetries: number; backoffMs?: number; maxBackoffMs?: number },\n dlq: boolean,\n interceptors: any[],\n schemaMap: Map<string, SchemaLike>,\n ): Promise<void> {\n const retryTopicNames = originalTopics.map((t) => `${t}.retry`);\n const retryGroupId = `${originalGroupId}-retry`;\n const backoffMs = retry.backoffMs ?? 1_000;\n const maxBackoffMs = retry.maxBackoffMs ?? 30_000;\n const deps = {\n logger: this.logger,\n producer: this.producer,\n instrumentation: this.instrumentation,\n onMessageLost: this.onMessageLost,\n };\n\n for (const rt of retryTopicNames) {\n await this.ensureTopic(rt);\n }\n\n const consumer = this.getOrCreateConsumer(retryGroupId, false, true);\n await consumer.connect();\n await subscribeWithRetry(consumer, retryTopicNames, this.logger);\n\n await consumer.run({\n eachMessage: async ({ topic: retryTopic, partition, message }) => {\n if (!message.value) return;\n\n const raw = message.value.toString();\n const parsed = parseJsonMessage(raw, retryTopic, this.logger);\n if (parsed === null) return;\n\n const headers = decodeHeaders(message.headers);\n const originalTopic =\n (headers[RETRY_HEADER_ORIGINAL_TOPIC] as string | undefined) ??\n retryTopic.replace(/\\.retry$/, '');\n const currentAttempt = parseInt(\n (headers[RETRY_HEADER_ATTEMPT] as string | undefined) ?? '1',\n 10,\n );\n const maxRetries = parseInt(\n (headers[RETRY_HEADER_MAX_RETRIES] as string | undefined) ??\n String(retry.maxRetries),\n 10,\n );\n const retryAfter = parseInt(\n (headers[RETRY_HEADER_AFTER] as string | undefined) ?? 
'0',\n 10,\n );\n\n // Pause only this partition for the scheduled delay so that other\n // topic-partitions on the same retry consumer continue processing.\n const remaining = retryAfter - Date.now();\n if (remaining > 0) {\n consumer.pause([{ topic: retryTopic, partitions: [partition] }]);\n await sleep(remaining);\n consumer.resume([{ topic: retryTopic, partitions: [partition] }]);\n }\n\n // Validate schema against original topic's schema (if any)\n const validated = await validateWithSchema(\n parsed, raw, originalTopic, schemaMap, interceptors, dlq,\n { ...deps, originalHeaders: headers },\n );\n if (validated === null) return;\n\n // Build envelope with originalTopic so correlationId/traceparent are correct\n const envelope = extractEnvelope(\n validated, headers, originalTopic, partition, message.offset,\n );\n\n try {\n // Instrumentation: beforeConsume\n const cleanups: (() => void)[] = [];\n for (const inst of this.instrumentation) {\n const c = inst.beforeConsume?.(envelope);\n if (typeof c === 'function') cleanups.push(c);\n }\n for (const interceptor of interceptors) await interceptor.before?.(envelope);\n\n await runWithEnvelopeContext(\n { correlationId: envelope.correlationId, traceparent: envelope.traceparent },\n () => handleMessage(envelope),\n );\n\n for (const interceptor of interceptors) await interceptor.after?.(envelope);\n for (const cleanup of cleanups) cleanup();\n } catch (error) {\n const err = toError(error);\n const nextAttempt = currentAttempt + 1;\n const exhausted = currentAttempt >= maxRetries;\n\n for (const inst of this.instrumentation) inst.onConsumeError?.(envelope, err);\n\n const reportedError = exhausted && maxRetries > 1\n ? new KafkaRetryExhaustedError(originalTopic, [envelope.payload], maxRetries, { cause: err })\n : err;\n for (const interceptor of interceptors) {\n await interceptor.onError?.(envelope, reportedError);\n }\n\n this.logger.error(\n `Retry consumer error for ${originalTopic} (attempt ${currentAttempt}/${maxRetries}):`,\n err.stack,\n );\n\n if (!exhausted) {\n // currentAttempt is the hop that just failed (1-based).\n // Before hop N+1 we want backoffMs * 2^N, so exponent = currentAttempt.\n const cap = Math.min(backoffMs * 2 ** currentAttempt, maxBackoffMs);\n const delay = Math.floor(Math.random() * cap);\n await sendToRetryTopic(\n originalTopic, [raw], nextAttempt, maxRetries, delay, headers, deps,\n );\n } else if (dlq) {\n await sendToDlq(originalTopic, raw, deps, {\n error: err,\n // +1 to account for the main consumer's initial attempt before\n // routing to the retry topic, making this consistent with the\n // in-process retry path where attempt counts all tries.\n attempt: currentAttempt + 1,\n originalHeaders: headers,\n });\n } else {\n await deps.onMessageLost?.({\n topic: originalTopic,\n error: err,\n attempt: currentAttempt,\n headers,\n });\n }\n }\n },\n });\n\n this.runningConsumers.set(retryGroupId, 'eachMessage');\n\n // Block until the retry consumer has received at least one partition assignment.\n // consumer.run() starts the group-join protocol asynchronously; without this wait,\n // the caller may send a message to the original topic before the retry consumer\n // has established its \"latest\" offset on the (initially empty) retry partition,\n // causing it to skip any retry messages produced in that window.\n await this.waitForPartitionAssignment(consumer, retryTopicNames);\n\n this.logger.log(\n `Retry topic consumers started for: ${originalTopics.join(', ')} (group: ${retryGroupId})`,\n );\n }\n\n 
// ── Private helpers ──────────────────────────────────────────────\n\n /**\n * Poll `consumer.assignment()` until the consumer has received at least one\n * partition for the given topics, then return. Logs a warning and returns\n * (rather than throwing) on timeout so that a slow broker does not break\n * the caller — in the worst case a message sent immediately after would be\n * missed, which is the same behaviour as before this guard was added.\n */\n private async waitForPartitionAssignment(\n consumer: Consumer,\n topics: string[],\n timeoutMs = 10_000,\n ): Promise<void> {\n const topicSet = new Set(topics);\n const deadline = Date.now() + timeoutMs;\n while (Date.now() < deadline) {\n try {\n const assigned: { topic: string; partition: number }[] = consumer.assignment();\n if (assigned.some((a) => topicSet.has(a.topic))) return;\n } catch {\n // consumer.assignment() throws if not yet in CONNECTED state — keep polling\n }\n await sleep(200);\n }\n this.logger.warn(\n `Retry consumer did not receive partition assignments for [${topics.join(', ')}] within ${timeoutMs}ms`,\n );\n }\n\n private getOrCreateConsumer(\n groupId: string,\n fromBeginning: boolean,\n autoCommit: boolean,\n ): Consumer {\n if (!this.consumers.has(groupId)) {\n this.consumers.set(\n groupId,\n this.kafka.consumer({\n kafkaJS: { groupId, fromBeginning, autoCommit },\n }),\n );\n }\n return this.consumers.get(groupId)!;\n }\n\n private resolveTopicName(topicOrDescriptor: unknown): string {\n if (typeof topicOrDescriptor === \"string\") return topicOrDescriptor;\n if (\n topicOrDescriptor &&\n typeof topicOrDescriptor === \"object\" &&\n \"__topic\" in topicOrDescriptor\n ) {\n return (topicOrDescriptor as TopicDescriptor).__topic;\n }\n return String(topicOrDescriptor);\n }\n\n private async ensureTopic(topic: string): Promise<void> {\n if (!this.autoCreateTopicsEnabled || this.ensuredTopics.has(topic)) return;\n if (!this.isAdminConnected) {\n await this.admin.connect();\n this.isAdminConnected = true;\n }\n await this.admin.createTopics({\n topics: [{ topic, numPartitions: this.numPartitions }],\n });\n this.ensuredTopics.add(topic);\n }\n\n /** Register schema from descriptor into global registry (side-effect). */\n private registerSchema(topicOrDesc: any): void {\n if (topicOrDesc?.__schema) {\n const topic = this.resolveTopicName(topicOrDesc);\n this.schemaRegistry.set(topic, topicOrDesc.__schema);\n }\n }\n\n /** Validate message against schema. Pure — no side-effects on registry. 
*/\n private async validateMessage(topicOrDesc: any, message: any): Promise<any> {\n if (topicOrDesc?.__schema) {\n return await topicOrDesc.__schema.parse(message);\n }\n if (this.strictSchemasEnabled && typeof topicOrDesc === \"string\") {\n const schema = this.schemaRegistry.get(topicOrDesc);\n if (schema) return await schema.parse(message);\n }\n return message;\n }\n\n /**\n * Build a kafkajs-ready send payload.\n * Handles: topic resolution, schema registration, validation, JSON serialization,\n * envelope header generation, and instrumentation hooks.\n */\n private async buildSendPayload(\n topicOrDesc: any,\n messages: Array<BatchMessageItem<any>>,\n ): Promise<{ topic: string; messages: Array<{ value: string; key: string | null; headers: MessageHeaders }> }> {\n this.registerSchema(topicOrDesc);\n const topic = this.resolveTopicName(topicOrDesc);\n const builtMessages = await Promise.all(\n messages.map(async (m) => {\n const envelopeHeaders = buildEnvelopeHeaders({\n correlationId: m.correlationId,\n schemaVersion: m.schemaVersion,\n eventId: m.eventId,\n headers: m.headers,\n });\n\n // Let instrumentation hooks mutate headers (e.g. OTel injects traceparent)\n for (const inst of this.instrumentation) {\n inst.beforeSend?.(topic, envelopeHeaders);\n }\n\n return {\n value: JSON.stringify(await this.validateMessage(topicOrDesc, m.value)),\n key: m.key ?? null,\n headers: envelopeHeaders,\n };\n }),\n );\n return { topic, messages: builtMessages };\n }\n\n /** Shared consumer setup: groupId check, schema map, connect, subscribe. */\n private async setupConsumer(\n topics: any[],\n mode: \"eachMessage\" | \"eachBatch\",\n options: ConsumerOptions<T>,\n ) {\n const {\n groupId: optGroupId,\n fromBeginning = false,\n retry,\n dlq = false,\n interceptors = [],\n schemas: optionSchemas,\n } = options;\n\n const gid = optGroupId || this.defaultGroupId;\n const existingMode = this.runningConsumers.get(gid);\n const oppositeMode = mode === \"eachMessage\" ? \"eachBatch\" : \"eachMessage\";\n if (existingMode === oppositeMode) {\n throw new Error(\n `Cannot use ${mode} on consumer group \"${gid}\" — it is already running with ${oppositeMode}. ` +\n `Use a different groupId for this consumer.`,\n );\n }\n\n const consumer = this.getOrCreateConsumer(gid, fromBeginning, options.autoCommit ?? true);\n const schemaMap = this.buildSchemaMap(topics, optionSchemas);\n\n const topicNames = (topics as any[]).map((t: any) =>\n this.resolveTopicName(t),\n );\n\n // Ensure topics exist before subscribing — librdkafka errors on unknown topics\n for (const t of topicNames) {\n await this.ensureTopic(t);\n }\n if (dlq) {\n for (const t of topicNames) {\n await this.ensureTopic(`${t}.dlq`);\n }\n }\n\n await consumer.connect();\n await subscribeWithRetry(consumer, topicNames, this.logger, options.subscribeRetry);\n\n this.logger.log(\n `${mode === \"eachBatch\" ? 
\"Batch consumer\" : \"Consumer\"} subscribed to topics: ${topicNames.join(\", \")}`,\n );\n\n return { consumer, schemaMap, topicNames, gid, dlq, interceptors, retry };\n }\n\n private buildSchemaMap(\n topics: any[],\n optionSchemas?: Map<string, SchemaLike>,\n ): Map<string, SchemaLike> {\n const schemaMap = new Map<string, SchemaLike>();\n for (const t of topics) {\n if (t?.__schema) {\n const name = this.resolveTopicName(t);\n schemaMap.set(name, t.__schema);\n this.schemaRegistry.set(name, t.__schema);\n }\n }\n if (optionSchemas) {\n for (const [k, v] of optionSchemas) {\n schemaMap.set(k, v);\n this.schemaRegistry.set(k, v);\n }\n }\n return schemaMap;\n }\n}\n","import { AsyncLocalStorage } from \"node:async_hooks\";\nimport { randomUUID } from \"node:crypto\";\nimport type { MessageHeaders } from \"./types\";\n\n// ── Header keys ──────────────────────────────────────────────────────\n\nexport const HEADER_EVENT_ID = \"x-event-id\";\nexport const HEADER_CORRELATION_ID = \"x-correlation-id\";\nexport const HEADER_TIMESTAMP = \"x-timestamp\";\nexport const HEADER_SCHEMA_VERSION = \"x-schema-version\";\nexport const HEADER_TRACEPARENT = \"traceparent\";\n\n// ── EventEnvelope ────────────────────────────────────────────────────\n\n/**\n * Typed wrapper combining a parsed message payload with Kafka metadata\n * and envelope headers.\n *\n * On **send**, the library auto-generates envelope headers\n * (`x-event-id`, `x-correlation-id`, `x-timestamp`, `x-schema-version`).\n *\n * On **consume**, the library extracts those headers and assembles\n * an `EventEnvelope` that is passed to the handler.\n */\nexport interface EventEnvelope<T> {\n /** Deserialized + validated message body. */\n payload: T;\n /** Topic the message was produced to / consumed from. */\n topic: string;\n /** Kafka partition (consume-side only, `-1` on send). */\n partition: number;\n /** Kafka offset (consume-side only, empty string on send). */\n offset: string;\n /** ISO-8601 timestamp set by the producer. */\n timestamp: string;\n /** Unique ID for this event (UUID v4). */\n eventId: string;\n /** Correlation ID — auto-propagated via AsyncLocalStorage. */\n correlationId: string;\n /** Schema version of the payload. */\n schemaVersion: number;\n /** W3C Trace Context `traceparent` header (set by OTel instrumentation). */\n traceparent?: string;\n /** All decoded Kafka headers for extensibility. */\n headers: MessageHeaders;\n}\n\n// ── AsyncLocalStorage context ────────────────────────────────────────\n\ninterface EnvelopeCtx {\n correlationId: string;\n traceparent?: string;\n}\n\nconst envelopeStorage = new AsyncLocalStorage<EnvelopeCtx>();\n\n/** Read the current envelope context (correlationId / traceparent) from ALS. */\nexport function getEnvelopeContext(): EnvelopeCtx | undefined {\n return envelopeStorage.getStore();\n}\n\n/** Execute `fn` inside an envelope context so nested sends inherit correlationId. */\nexport function runWithEnvelopeContext<R>(\n ctx: EnvelopeCtx,\n fn: () => R,\n): R {\n return envelopeStorage.run(ctx, fn);\n}\n\n// ── Header helpers ───────────────────────────────────────────────────\n\n/** Options accepted by `buildEnvelopeHeaders`. 
*/\nexport interface EnvelopeHeaderOptions {\n correlationId?: string;\n schemaVersion?: number;\n eventId?: string;\n headers?: MessageHeaders;\n}\n\n/**\n * Generate envelope headers for the send path.\n *\n * Priority for `correlationId`:\n * explicit option → ALS context → new UUID.\n */\nexport function buildEnvelopeHeaders(\n options: EnvelopeHeaderOptions = {},\n): MessageHeaders {\n const ctx = getEnvelopeContext();\n\n const correlationId =\n options.correlationId ?? ctx?.correlationId ?? randomUUID();\n const eventId = options.eventId ?? randomUUID();\n const timestamp = new Date().toISOString();\n const schemaVersion = String(options.schemaVersion ?? 1);\n\n const envelope: MessageHeaders = {\n [HEADER_EVENT_ID]: eventId,\n [HEADER_CORRELATION_ID]: correlationId,\n [HEADER_TIMESTAMP]: timestamp,\n [HEADER_SCHEMA_VERSION]: schemaVersion,\n };\n\n // Propagate traceparent from ALS if present (OTel may override via instrumentation)\n if (ctx?.traceparent) {\n envelope[HEADER_TRACEPARENT] = ctx.traceparent;\n }\n\n // User-provided headers win on conflict\n return { ...envelope, ...options.headers };\n}\n\n/**\n * Decode kafkajs headers (`Record<string, Buffer | string | undefined>`)\n * into plain `Record<string, string>`.\n */\nexport function decodeHeaders(\n raw: Record<string, Buffer | string | (Buffer | string)[] | undefined> | undefined,\n): MessageHeaders {\n if (!raw) return {};\n const result: MessageHeaders = {};\n for (const [key, value] of Object.entries(raw)) {\n if (value === undefined) continue;\n if (Array.isArray(value)) {\n result[key] = value.map((v) => (Buffer.isBuffer(v) ? v.toString() : v)).join(\",\");\n } else {\n result[key] = Buffer.isBuffer(value) ? value.toString() : value;\n }\n }\n return result;\n}\n\n/**\n * Build an `EventEnvelope` from a consumed kafkajs message.\n * Tolerates missing envelope headers — generates defaults so messages\n * from non-envelope producers still work.\n */\nexport function extractEnvelope<T>(\n payload: T,\n headers: MessageHeaders,\n topic: string,\n partition: number,\n offset: string,\n): EventEnvelope<T> {\n return {\n payload,\n topic,\n partition,\n offset,\n eventId: headers[HEADER_EVENT_ID] ?? randomUUID(),\n correlationId: headers[HEADER_CORRELATION_ID] ?? randomUUID(),\n timestamp: headers[HEADER_TIMESTAMP] ?? new Date().toISOString(),\n schemaVersion: Number(headers[HEADER_SCHEMA_VERSION] ?? 1),\n traceparent: headers[HEADER_TRACEPARENT],\n headers,\n };\n}\n","/** Error thrown when a consumer message handler fails. */\nexport class KafkaProcessingError extends Error {\n declare readonly cause?: Error;\n\n constructor(\n message: string,\n public readonly topic: string,\n public readonly originalMessage: unknown,\n options?: { cause?: Error },\n ) {\n super(message, options);\n this.name = \"KafkaProcessingError\";\n if (options?.cause) this.cause = options.cause;\n }\n}\n\n/** Error thrown when schema validation fails on send or consume. */\nexport class KafkaValidationError extends Error {\n declare readonly cause?: Error;\n\n constructor(\n public readonly topic: string,\n public readonly originalMessage: unknown,\n options?: { cause?: Error },\n ) {\n super(`Schema validation failed for topic \"${topic}\"`, options);\n this.name = \"KafkaValidationError\";\n if (options?.cause) this.cause = options.cause;\n }\n}\n\n/** Error thrown when all retry attempts are exhausted for a message. 
*/\nexport class KafkaRetryExhaustedError extends KafkaProcessingError {\n constructor(\n topic: string,\n originalMessage: unknown,\n public readonly attempts: number,\n options?: { cause?: Error },\n ) {\n super(\n `Message processing failed after ${attempts} attempts on topic \"${topic}\"`,\n topic,\n originalMessage,\n options,\n );\n this.name = \"KafkaRetryExhaustedError\";\n }\n}\n","import type { KafkaJS } from \"@confluentinc/kafka-javascript\";\ntype Producer = KafkaJS.Producer;\nimport type { EventEnvelope } from \"./envelope\";\nimport { extractEnvelope } from \"./envelope\";\nimport { KafkaRetryExhaustedError, KafkaValidationError } from \"./errors\";\nimport type { SchemaLike } from \"./topic\";\nimport type {\n ConsumerInterceptor,\n KafkaInstrumentation,\n KafkaLogger,\n MessageHeaders,\n MessageLostContext,\n RetryOptions,\n TopicMapConstraint,\n} from \"./types\";\n\n\n// ── Helpers ──────────────────────────────────────────────────────────\n\nexport function toError(error: unknown): Error {\n return error instanceof Error ? error : new Error(String(error));\n}\n\nexport function sleep(ms: number): Promise<void> {\n return new Promise((resolve) => setTimeout(resolve, ms));\n}\n\n// ── JSON parsing ────────────────────────────────────────────────────\n\n/** Parse raw message as JSON. Returns null on failure (logs error). */\nexport function parseJsonMessage(\n raw: string,\n topic: string,\n logger: KafkaLogger,\n): any | null {\n try {\n return JSON.parse(raw);\n } catch (error) {\n logger.error(\n `Failed to parse message from topic ${topic}:`,\n toError(error).stack,\n );\n return null;\n }\n}\n\n// ── Schema validation ───────────────────────────────────────────────\n\n/**\n * Validate a parsed message against the schema map.\n * On failure: logs error, sends to DLQ if enabled, calls interceptor.onError.\n * Returns validated message or null.\n */\nexport async function validateWithSchema<T extends TopicMapConstraint<T>>(\n message: any,\n raw: string,\n topic: string,\n schemaMap: Map<string, SchemaLike>,\n interceptors: ConsumerInterceptor<T>[],\n dlq: boolean,\n deps: {\n logger: KafkaLogger;\n producer: Producer;\n onMessageLost?: (ctx: MessageLostContext) => void | Promise<void>;\n originalHeaders?: MessageHeaders;\n },\n): Promise<any | null> {\n const schema = schemaMap.get(topic);\n if (!schema) return message;\n\n try {\n return await schema.parse(message);\n } catch (error) {\n const err = toError(error);\n const validationError = new KafkaValidationError(topic, message, {\n cause: err,\n });\n deps.logger.error(\n `Schema validation failed for topic ${topic}:`,\n err.message,\n );\n if (dlq) {\n await sendToDlq(topic, raw, deps, {\n error: validationError,\n attempt: 0,\n originalHeaders: deps.originalHeaders,\n });\n } else {\n await deps.onMessageLost?.({ topic, error: validationError, attempt: 0, headers: deps.originalHeaders ?? {} });\n }\n // Validation errors don't have an envelope yet — call onError with a minimal envelope\n const errorEnvelope = extractEnvelope(message, deps.originalHeaders ?? {}, topic, -1, \"\");\n for (const interceptor of interceptors) {\n await interceptor.onError?.(errorEnvelope, validationError);\n }\n return null;\n }\n}\n\n// ── DLQ ─────────────────────────────────────────────────────────────\n\nexport interface DlqMetadata {\n error: Error;\n attempt: number;\n /** Original Kafka message headers — forwarded to DLQ to preserve correlationId, traceparent, etc. 
*/\n originalHeaders?: MessageHeaders;\n}\n\nexport async function sendToDlq(\n topic: string,\n rawMessage: string,\n deps: { logger: KafkaLogger; producer: Producer },\n meta?: DlqMetadata,\n): Promise<void> {\n const dlqTopic = `${topic}.dlq`;\n const headers: MessageHeaders = {\n ...(meta?.originalHeaders ?? {}),\n 'x-dlq-original-topic': topic,\n 'x-dlq-failed-at': new Date().toISOString(),\n 'x-dlq-error-message': meta?.error.message ?? 'unknown',\n 'x-dlq-error-stack': meta?.error.stack?.slice(0, 2000) ?? '',\n 'x-dlq-attempt-count': String(meta?.attempt ?? 0),\n };\n try {\n await deps.producer.send({\n topic: dlqTopic,\n messages: [{ value: rawMessage, headers }],\n });\n deps.logger.warn(`Message sent to DLQ: ${dlqTopic}`);\n } catch (error) {\n deps.logger.error(\n `Failed to send message to DLQ ${dlqTopic}:`,\n toError(error).stack,\n );\n }\n}\n\n// ── Retry topic routing ─────────────────────────────────────────────\n\n/** Headers stamped on messages sent to a `<topic>.retry` topic. */\nexport const RETRY_HEADER_ATTEMPT = 'x-retry-attempt';\nexport const RETRY_HEADER_AFTER = 'x-retry-after';\nexport const RETRY_HEADER_MAX_RETRIES = 'x-retry-max-retries';\nexport const RETRY_HEADER_ORIGINAL_TOPIC = 'x-retry-original-topic';\n\n/**\n * Send raw messages to the retry topic `<originalTopic>.retry`.\n * Stamps scheduling headers so the retry consumer knows when and how many times to retry.\n */\nexport async function sendToRetryTopic(\n originalTopic: string,\n rawMessages: string[],\n attempt: number,\n maxRetries: number,\n delayMs: number,\n originalHeaders: MessageHeaders,\n deps: { logger: KafkaLogger; producer: Producer },\n): Promise<void> {\n const retryTopic = `${originalTopic}.retry`;\n // Strip any stale retry headers from a previous hop so they don't leak through.\n const {\n [RETRY_HEADER_ATTEMPT]: _a,\n [RETRY_HEADER_AFTER]: _b,\n [RETRY_HEADER_MAX_RETRIES]: _c,\n [RETRY_HEADER_ORIGINAL_TOPIC]: _d,\n ...userHeaders\n } = originalHeaders;\n const headers: MessageHeaders = {\n ...userHeaders,\n [RETRY_HEADER_ATTEMPT]: String(attempt),\n [RETRY_HEADER_AFTER]: String(Date.now() + delayMs),\n [RETRY_HEADER_MAX_RETRIES]: String(maxRetries),\n [RETRY_HEADER_ORIGINAL_TOPIC]: originalTopic,\n };\n try {\n for (const raw of rawMessages) {\n await deps.producer.send({ topic: retryTopic, messages: [{ value: raw, headers }] });\n }\n deps.logger.warn(\n `Message queued in retry topic ${retryTopic} (attempt ${attempt}/${maxRetries})`,\n );\n } catch (error) {\n deps.logger.error(\n `Failed to send message to retry topic ${retryTopic}:`,\n toError(error).stack,\n );\n }\n}\n\n// ── Retry pipeline ──────────────────────────────────────────────────\n\nexport interface ExecuteWithRetryContext<T extends TopicMapConstraint<T>> {\n envelope: EventEnvelope<any> | EventEnvelope<any>[];\n rawMessages: string[];\n interceptors: ConsumerInterceptor<T>[];\n dlq: boolean;\n retry?: RetryOptions;\n isBatch?: boolean;\n /**\n * When `true`, failed messages are routed to `<topic>.retry` instead of being\n * retried in-process. 
All backoff and subsequent attempts are handled by the\n * companion retry consumer started by `startRetryTopicConsumers`.\n */\n retryTopics?: boolean;\n}\n\n/**\n * Execute a handler with retry, interceptors, instrumentation, and DLQ support.\n * Used by both single-message and batch consumers.\n */\nexport async function executeWithRetry<T extends TopicMapConstraint<T>>(\n fn: () => Promise<void>,\n ctx: ExecuteWithRetryContext<T>,\n deps: {\n logger: KafkaLogger;\n producer: Producer;\n instrumentation: KafkaInstrumentation[];\n onMessageLost?: (ctx: MessageLostContext) => void | Promise<void>;\n },\n): Promise<void> {\n const { envelope, rawMessages, interceptors, dlq, retry, isBatch, retryTopics } = ctx;\n // With retryTopics mode the main consumer tries exactly once — retry consumer takes over.\n const maxAttempts = retryTopics ? 1 : retry ? retry.maxRetries + 1 : 1;\n const backoffMs = retry?.backoffMs ?? 1000;\n const maxBackoffMs = retry?.maxBackoffMs ?? 30_000;\n const envelopes = Array.isArray(envelope) ? envelope : [envelope];\n const topic = envelopes[0]?.topic ?? \"unknown\";\n\n for (let attempt = 1; attempt <= maxAttempts; attempt++) {\n // Collect instrumentation cleanup functions\n const cleanups: (() => void)[] = [];\n\n try {\n // Instrumentation: beforeConsume\n for (const env of envelopes) {\n for (const inst of deps.instrumentation) {\n const cleanup = inst.beforeConsume?.(env);\n if (typeof cleanup === \"function\") cleanups.push(cleanup);\n }\n }\n\n // Consumer interceptors: before\n for (const env of envelopes) {\n for (const interceptor of interceptors) {\n await interceptor.before?.(env);\n }\n }\n\n await fn();\n\n // Consumer interceptors: after\n for (const env of envelopes) {\n for (const interceptor of interceptors) {\n await interceptor.after?.(env);\n }\n }\n\n // Instrumentation: cleanup (end spans etc.)\n for (const cleanup of cleanups) cleanup();\n\n return;\n } catch (error) {\n const err = toError(error);\n const isLastAttempt = attempt === maxAttempts;\n\n // Instrumentation: onConsumeError\n for (const env of envelopes) {\n for (const inst of deps.instrumentation) {\n inst.onConsumeError?.(env, err);\n }\n }\n // Instrumentation: cleanup even on error\n for (const cleanup of cleanups) cleanup();\n\n if (isLastAttempt && maxAttempts > 1) {\n const exhaustedError = new KafkaRetryExhaustedError(\n topic,\n envelopes.map((e) => e.payload),\n maxAttempts,\n { cause: err },\n );\n for (const env of envelopes) {\n for (const interceptor of interceptors) {\n await interceptor.onError?.(env, exhaustedError);\n }\n }\n } else {\n for (const env of envelopes) {\n for (const interceptor of interceptors) {\n await interceptor.onError?.(env, err);\n }\n }\n }\n\n deps.logger.error(\n `Error processing ${isBatch ? \"batch\" : \"message\"} from topic ${topic} (attempt ${attempt}/${maxAttempts}):`,\n err.stack,\n );\n\n if (retryTopics && retry) {\n // Route to retry topic — retry consumer handles backoff and further attempts.\n // Always use attempt 1 here (main consumer never retries in-process).\n const cap = Math.min(backoffMs, maxBackoffMs);\n const delay = Math.floor(Math.random() * cap);\n await sendToRetryTopic(\n topic, rawMessages, 1, retry.maxRetries, delay,\n envelopes[0]?.headers ?? 
{}, deps,\n );\n } else if (isLastAttempt) {\n if (dlq) {\n const dlqMeta: DlqMetadata = {\n error: err,\n attempt,\n originalHeaders: envelopes[0]?.headers,\n };\n for (const raw of rawMessages) {\n await sendToDlq(topic, raw, deps, dlqMeta);\n }\n } else {\n await deps.onMessageLost?.({\n topic,\n error: err,\n attempt,\n headers: envelopes[0]?.headers ?? {},\n });\n }\n } else {\n // Exponential backoff with full jitter to avoid thundering herd\n const cap = Math.min(backoffMs * 2 ** (attempt - 1), maxBackoffMs);\n await sleep(Math.random() * cap);\n }\n }\n }\n}\n","import type { KafkaJS } from \"@confluentinc/kafka-javascript\";\nimport type { KafkaLogger, SubscribeRetryOptions } from \"./types\";\nimport { toError, sleep } from \"./consumer-pipeline\";\n\nexport async function subscribeWithRetry(\n consumer: KafkaJS.Consumer,\n topics: string[],\n logger: KafkaLogger,\n retryOpts?: SubscribeRetryOptions,\n): Promise<void> {\n const maxAttempts = retryOpts?.retries ?? 5;\n const backoffMs = retryOpts?.backoffMs ?? 5000;\n\n for (let attempt = 1; attempt <= maxAttempts; attempt++) {\n try {\n await consumer.subscribe({ topics });\n return;\n } catch (error) {\n if (attempt === maxAttempts) throw error;\n const msg = toError(error).message;\n logger.warn(\n `Failed to subscribe to [${topics.join(\", \")}] (attempt ${attempt}/${maxAttempts}): ${msg}. Retrying in ${backoffMs}ms...`,\n );\n await sleep(backoffMs);\n }\n }\n}\n","/**\n * Any validation library with a `.parse()` method.\n * Works with Zod, Valibot, ArkType, or any custom validator.\n *\n * @example\n * ```ts\n * import { z } from 'zod';\n * const schema: SchemaLike<{ id: string }> = z.object({ id: z.string() });\n * ```\n */\nexport interface SchemaLike<T = any> {\n parse(data: unknown): T | Promise<T>;\n}\n\n/** Infer the output type from a SchemaLike. */\nexport type InferSchema<S extends SchemaLike> =\n S extends SchemaLike<infer T> ? T : never;\n\n/**\n * A typed topic descriptor that pairs a topic name with its message type.\n * Created via the `topic()` factory function.\n *\n * @typeParam N - The literal topic name string.\n * @typeParam M - The message payload type for this topic.\n */\nexport interface TopicDescriptor<\n N extends string = string,\n M extends Record<string, any> = Record<string, any>,\n> {\n readonly __topic: N;\n /** @internal Phantom type — never has a real value at runtime. */\n readonly __type: M;\n /** Runtime schema validator. Present only when created via `topic().schema()`. */\n readonly __schema?: SchemaLike<M>;\n}\n\n/**\n * Define a typed topic descriptor.\n *\n * @example\n * ```ts\n * // Without schema — type provided explicitly:\n * const OrderCreated = topic('order.created')<{ orderId: string; amount: number }>();\n *\n * // With schema — type inferred from schema:\n * const OrderCreated = topic('order.created').schema(z.object({\n * orderId: z.string(),\n * amount: z.number(),\n * }));\n *\n * // Use with KafkaClient:\n * await kafka.sendMessage(OrderCreated, { orderId: '123', amount: 100 });\n *\n * // Use with @SubscribeTo:\n * @SubscribeTo(OrderCreated)\n * async handleOrder(msg) { ... 
}\n * ```\n */\nexport function topic<N extends string>(name: N) {\n const fn = <M extends Record<string, any>>(): TopicDescriptor<N, M> => ({\n __topic: name,\n __type: undefined as unknown as M,\n });\n\n fn.schema = <S extends SchemaLike<Record<string, any>>>(\n schema: S,\n ): TopicDescriptor<N, InferSchema<S>> => ({\n __topic: name,\n __type: undefined as unknown as InferSchema<S>,\n __schema: schema as unknown as SchemaLike<InferSchema<S>>,\n });\n\n return fn;\n}\n\n/**\n * Build a topic-message map type from a union of TopicDescriptors.\n *\n * @example\n * ```ts\n * const OrderCreated = topic('order.created')<{ orderId: string }>();\n * const OrderCompleted = topic('order.completed')<{ completedAt: string }>();\n *\n * type MyTopics = TopicsFrom<typeof OrderCreated | typeof OrderCompleted>;\n * // { 'order.created': { orderId: string }; 'order.completed': { completedAt: string } }\n * ```\n */\nexport type TopicsFrom<D extends TopicDescriptor<any, any>> = {\n [K in D as K[\"__topic\"]]: K[\"__type\"];\n};\n"],"mappings":";AAAA,SAAS,eAAe;;;ACAxB,SAAS,yBAAyB;AAClC,SAAS,kBAAkB;AAKpB,IAAM,kBAAkB;AACxB,IAAM,wBAAwB;AAC9B,IAAM,mBAAmB;AACzB,IAAM,wBAAwB;AAC9B,IAAM,qBAAqB;AA4ClC,IAAM,kBAAkB,IAAI,kBAA+B;AAGpD,SAAS,qBAA8C;AAC5D,SAAO,gBAAgB,SAAS;AAClC;AAGO,SAAS,uBACd,KACA,IACG;AACH,SAAO,gBAAgB,IAAI,KAAK,EAAE;AACpC;AAkBO,SAAS,qBACd,UAAiC,CAAC,GAClB;AAChB,QAAM,MAAM,mBAAmB;AAE/B,QAAM,gBACJ,QAAQ,iBAAiB,KAAK,iBAAiB,WAAW;AAC5D,QAAM,UAAU,QAAQ,WAAW,WAAW;AAC9C,QAAM,aAAY,oBAAI,KAAK,GAAE,YAAY;AACzC,QAAM,gBAAgB,OAAO,QAAQ,iBAAiB,CAAC;AAEvD,QAAM,WAA2B;AAAA,IAC/B,CAAC,eAAe,GAAG;AAAA,IACnB,CAAC,qBAAqB,GAAG;AAAA,IACzB,CAAC,gBAAgB,GAAG;AAAA,IACpB,CAAC,qBAAqB,GAAG;AAAA,EAC3B;AAGA,MAAI,KAAK,aAAa;AACpB,aAAS,kBAAkB,IAAI,IAAI;AAAA,EACrC;AAGA,SAAO,EAAE,GAAG,UAAU,GAAG,QAAQ,QAAQ;AAC3C;AAMO,SAAS,cACd,KACgB;AAChB,MAAI,CAAC,IAAK,QAAO,CAAC;AAClB,QAAM,SAAyB,CAAC;AAChC,aAAW,CAAC,KAAK,KAAK,KAAK,OAAO,QAAQ,GAAG,GAAG;AAC9C,QAAI,UAAU,OAAW;AACzB,QAAI,MAAM,QAAQ,KAAK,GAAG;AACxB,aAAO,GAAG,IAAI,MAAM,IAAI,CAAC,MAAO,OAAO,SAAS,CAAC,IAAI,EAAE,SAAS,IAAI,CAAE,EAAE,KAAK,GAAG;AAAA,IAClF,OAAO;AACL,aAAO,GAAG,IAAI,OAAO,SAAS,KAAK,IAAI,MAAM,SAAS,IAAI;AAAA,IAC5D;AAAA,EACF;AACA,SAAO;AACT;AAOO,SAAS,gBACd,SACA,SACAA,QACA,WACA,QACkB;AAClB,SAAO;AAAA,IACL;AAAA,IACA,OAAAA;AAAA,IACA;AAAA,IACA;AAAA,IACA,SAAS,QAAQ,eAAe,KAAK,WAAW;AAAA,IAChD,eAAe,QAAQ,qBAAqB,KAAK,WAAW;AAAA,IAC5D,WAAW,QAAQ,gBAAgB,MAAK,oBAAI,KAAK,GAAE,YAAY;AAAA,IAC/D,eAAe,OAAO,QAAQ,qBAAqB,KAAK,CAAC;AAAA,IACzD,aAAa,QAAQ,kBAAkB;AAAA,IACvC;AAAA,EACF;AACF;;;AC3JO,IAAM,uBAAN,cAAmC,MAAM;AAAA,EAG9C,YACE,SACgBC,QACA,iBAChB,SACA;AACA,UAAM,SAAS,OAAO;AAJN,iBAAAA;AACA;AAIhB,SAAK,OAAO;AACZ,QAAI,SAAS,MAAO,MAAK,QAAQ,QAAQ;AAAA,EAC3C;AACF;AAGO,IAAM,uBAAN,cAAmC,MAAM;AAAA,EAG9C,YACkBA,QACA,iBAChB,SACA;AACA,UAAM,uCAAuCA,MAAK,KAAK,OAAO;AAJ9C,iBAAAA;AACA;AAIhB,SAAK,OAAO;AACZ,QAAI,SAAS,MAAO,MAAK,QAAQ,QAAQ;AAAA,EAC3C;AACF;AAGO,IAAM,2BAAN,cAAuC,qBAAqB;AAAA,EACjE,YACEA,QACA,iBACgB,UAChB,SACA;AACA;AAAA,MACE,mCAAmC,QAAQ,uBAAuBA,MAAK;AAAA,MACvEA;AAAA,MACA;AAAA,MACA;AAAA,IACF;AARgB;AAShB,SAAK,OAAO;AAAA,EACd;AACF;;;AC5BO,SAAS,QAAQ,OAAuB;AAC7C,SAAO,iBAAiB,QAAQ,QAAQ,IAAI,MAAM,OAAO,KAAK,CAAC;AACjE;AAEO,SAAS,MAAM,IAA2B;AAC/C,SAAO,IAAI,QAAQ,CAAC,YAAY,WAAW,SAAS,EAAE,CAAC;AACzD;AAKO,SAAS,iBACd,KACAC,QACA,QACY;AACZ,MAAI;AACF,WAAO,KAAK,MAAM,GAAG;AAAA,EACvB,SAAS,OAAO;AACd,WAAO;AAAA,MACL,sCAAsCA,MAAK;AAAA,MAC3C,QAAQ,KAAK,EAAE;AAAA,IACjB;AACA,WAAO;AAAA,EACT;AACF;AASA,eAAsB,mBACpB,SACA,KACAA,QACA,WACA,cACA,KACA,MAMqB;AACrB,QAAM,SAAS,UAAU,IAAIA,MAAK;AAClC,MAAI,CAAC,OAAQ,QAAO;AAEpB,MAAI;AACF,WAAO,MAAM,OAAO,MAAM,OAAO;AAAA,EACnC,SAAS,OAAO;AACd
,UAAM,MAAM,QAAQ,KAAK;AACzB,UAAM,kBAAkB,IAAI,qBAAqBA,QAAO,SAAS;AAAA,MAC/D,OAAO;AAAA,IACT,CAAC;AACD,SAAK,OAAO;AAAA,MACV,sCAAsCA,MAAK;AAAA,MAC3C,IAAI;AAAA,IACN;AACA,QAAI,KAAK;AACP,YAAM,UAAUA,QAAO,KAAK,MAAM;AAAA,QAChC,OAAO;AAAA,QACP,SAAS;AAAA,QACT,iBAAiB,KAAK;AAAA,MACxB,CAAC;AAAA,IACH,OAAO;AACL,YAAM,KAAK,gBAAgB,EAAE,OAAAA,QAAO,OAAO,iBAAiB,SAAS,GAAG,SAAS,KAAK,mBAAmB,CAAC,EAAE,CAAC;AAAA,IAC/G;AAEA,UAAM,gBAAgB,gBAAgB,SAAS,KAAK,mBAAmB,CAAC,GAAGA,QAAO,IAAI,EAAE;AACxF,eAAW,eAAe,cAAc;AACtC,YAAM,YAAY,UAAU,eAAe,eAAe;AAAA,IAC5D;AACA,WAAO;AAAA,EACT;AACF;AAWA,eAAsB,UACpBA,QACA,YACA,MACA,MACe;AACf,QAAM,WAAW,GAAGA,MAAK;AACzB,QAAM,UAA0B;AAAA,IAC9B,GAAI,MAAM,mBAAmB,CAAC;AAAA,IAC9B,wBAAwBA;AAAA,IACxB,oBAAmB,oBAAI,KAAK,GAAE,YAAY;AAAA,IAC1C,uBAAuB,MAAM,MAAM,WAAW;AAAA,IAC9C,qBAAqB,MAAM,MAAM,OAAO,MAAM,GAAG,GAAI,KAAK;AAAA,IAC1D,uBAAuB,OAAO,MAAM,WAAW,CAAC;AAAA,EAClD;AACA,MAAI;AACF,UAAM,KAAK,SAAS,KAAK;AAAA,MACvB,OAAO;AAAA,MACP,UAAU,CAAC,EAAE,OAAO,YAAY,QAAQ,CAAC;AAAA,IAC3C,CAAC;AACD,SAAK,OAAO,KAAK,wBAAwB,QAAQ,EAAE;AAAA,EACrD,SAAS,OAAO;AACd,SAAK,OAAO;AAAA,MACV,iCAAiC,QAAQ;AAAA,MACzC,QAAQ,KAAK,EAAE;AAAA,IACjB;AAAA,EACF;AACF;AAKO,IAAM,uBAAuB;AAC7B,IAAM,qBAAqB;AAC3B,IAAM,2BAA2B;AACjC,IAAM,8BAA8B;AAM3C,eAAsB,iBACpB,eACA,aACA,SACA,YACA,SACA,iBACA,MACe;AACf,QAAM,aAAa,GAAG,aAAa;AAEnC,QAAM;AAAA,IACJ,CAAC,oBAAoB,GAAG;AAAA,IACxB,CAAC,kBAAkB,GAAG;AAAA,IACtB,CAAC,wBAAwB,GAAG;AAAA,IAC5B,CAAC,2BAA2B,GAAG;AAAA,IAC/B,GAAG;AAAA,EACL,IAAI;AACJ,QAAM,UAA0B;AAAA,IAC9B,GAAG;AAAA,IACH,CAAC,oBAAoB,GAAG,OAAO,OAAO;AAAA,IACtC,CAAC,kBAAkB,GAAG,OAAO,KAAK,IAAI,IAAI,OAAO;AAAA,IACjD,CAAC,wBAAwB,GAAG,OAAO,UAAU;AAAA,IAC7C,CAAC,2BAA2B,GAAG;AAAA,EACjC;AACA,MAAI;AACF,eAAW,OAAO,aAAa;AAC7B,YAAM,KAAK,SAAS,KAAK,EAAE,OAAO,YAAY,UAAU,CAAC,EAAE,OAAO,KAAK,QAAQ,CAAC,EAAE,CAAC;AAAA,IACrF;AACA,SAAK,OAAO;AAAA,MACV,iCAAiC,UAAU,aAAa,OAAO,IAAI,UAAU;AAAA,IAC/E;AAAA,EACF,SAAS,OAAO;AACd,SAAK,OAAO;AAAA,MACV,yCAAyC,UAAU;AAAA,MACnD,QAAQ,KAAK,EAAE;AAAA,IACjB;AAAA,EACF;AACF;AAuBA,eAAsB,iBACpB,IACA,KACA,MAMe;AACf,QAAM,EAAE,UAAU,aAAa,cAAc,KAAK,OAAO,SAAS,YAAY,IAAI;AAElF,QAAM,cAAc,cAAc,IAAI,QAAQ,MAAM,aAAa,IAAI;AACrE,QAAM,YAAY,OAAO,aAAa;AACtC,QAAM,eAAe,OAAO,gBAAgB;AAC5C,QAAM,YAAY,MAAM,QAAQ,QAAQ,IAAI,WAAW,CAAC,QAAQ;AAChE,QAAMA,SAAQ,UAAU,CAAC,GAAG,SAAS;AAErC,WAAS,UAAU,GAAG,WAAW,aAAa,WAAW;AAEvD,UAAM,WAA2B,CAAC;AAElC,QAAI;AAEF,iBAAW,OAAO,WAAW;AAC3B,mBAAW,QAAQ,KAAK,iBAAiB;AACvC,gBAAM,UAAU,KAAK,gBAAgB,GAAG;AACxC,cAAI,OAAO,YAAY,WAAY,UAAS,KAAK,OAAO;AAAA,QAC1D;AAAA,MACF;AAGA,iBAAW,OAAO,WAAW;AAC3B,mBAAW,eAAe,cAAc;AACtC,gBAAM,YAAY,SAAS,GAAG;AAAA,QAChC;AAAA,MACF;AAEA,YAAM,GAAG;AAGT,iBAAW,OAAO,WAAW;AAC3B,mBAAW,eAAe,cAAc;AACtC,gBAAM,YAAY,QAAQ,GAAG;AAAA,QAC/B;AAAA,MACF;AAGA,iBAAW,WAAW,SAAU,SAAQ;AAExC;AAAA,IACF,SAAS,OAAO;AACd,YAAM,MAAM,QAAQ,KAAK;AACzB,YAAM,gBAAgB,YAAY;AAGlC,iBAAW,OAAO,WAAW;AAC3B,mBAAW,QAAQ,KAAK,iBAAiB;AACvC,eAAK,iBAAiB,KAAK,GAAG;AAAA,QAChC;AAAA,MACF;AAEA,iBAAW,WAAW,SAAU,SAAQ;AAExC,UAAI,iBAAiB,cAAc,GAAG;AACpC,cAAM,iBAAiB,IAAI;AAAA,UACzBA;AAAA,UACA,UAAU,IAAI,CAAC,MAAM,EAAE,OAAO;AAAA,UAC9B;AAAA,UACA,EAAE,OAAO,IAAI;AAAA,QACf;AACA,mBAAW,OAAO,WAAW;AAC3B,qBAAW,eAAe,cAAc;AACtC,kBAAM,YAAY,UAAU,KAAK,cAAc;AAAA,UACjD;AAAA,QACF;AAAA,MACF,OAAO;AACL,mBAAW,OAAO,WAAW;AAC3B,qBAAW,eAAe,cAAc;AACtC,kBAAM,YAAY,UAAU,KAAK,GAAG;AAAA,UACtC;AAAA,QACF;AAAA,MACF;AAEA,WAAK,OAAO;AAAA,QACV,oBAAoB,UAAU,UAAU,SAAS,eAAeA,MAAK,aAAa,OAAO,IAAI,WAAW;AAAA,QACxG,IAAI;AAAA,MACN;AAEA,UAAI,eAAe,OAAO;AAGxB,cAAM,MAAM,KAAK,IAAI,WAAW,YAAY;AAC5C,cAAM,QAAQ,KAAK,MAAM,KAAK,OAAO,IAAI,GAAG;AAC5C,cAAM;AAAA,UACJA;AAAA,UAAO;AAAA,UAAa;AAAA,UAAG,MAAM;AAAA,UAAY;AAAA,UACzC,UAAU,CAAC,GAAG,WAAW,CAAC;AAAA,UAAG;AAAA,QAC/B;AAAA,MACF,WAAW,eAAe;AACxB,
YAAI,KAAK;AACP,gBAAM,UAAuB;AAAA,YAC3B,OAAO;AAAA,YACP;AAAA,YACA,iBAAiB,UAAU,CAAC,GAAG;AAAA,UACjC;AACA,qBAAW,OAAO,aAAa;AAC7B,kBAAM,UAAUA,QAAO,KAAK,MAAM,OAAO;AAAA,UAC3C;AAAA,QACF,OAAO;AACL,gBAAM,KAAK,gBAAgB;AAAA,YACzB,OAAAA;AAAA,YACA,OAAO;AAAA,YACP;AAAA,YACA,SAAS,UAAU,CAAC,GAAG,WAAW,CAAC;AAAA,UACrC,CAAC;AAAA,QACH;AAAA,MACF,OAAO;AAEL,cAAM,MAAM,KAAK,IAAI,YAAY,MAAM,UAAU,IAAI,YAAY;AACjE,cAAM,MAAM,KAAK,OAAO,IAAI,GAAG;AAAA,MACjC;AAAA,IACF;AAAA,EACF;AACF;;;ACzUA,eAAsB,mBACpB,UACA,QACA,QACA,WACe;AACf,QAAM,cAAc,WAAW,WAAW;AAC1C,QAAM,YAAY,WAAW,aAAa;AAE1C,WAAS,UAAU,GAAG,WAAW,aAAa,WAAW;AACvD,QAAI;AACF,YAAM,SAAS,UAAU,EAAE,OAAO,CAAC;AACnC;AAAA,IACF,SAAS,OAAO;AACd,UAAI,YAAY,YAAa,OAAM;AACnC,YAAM,MAAM,QAAQ,KAAK,EAAE;AAC3B,aAAO;AAAA,QACL,2BAA2B,OAAO,KAAK,IAAI,CAAC,cAAc,OAAO,IAAI,WAAW,MAAM,GAAG,iBAAiB,SAAS;AAAA,MACrH;AACA,YAAM,MAAM,SAAS;AAAA,IACvB;AAAA,EACF;AACF;;;AJrBA,IAAM,EAAE,OAAO,YAAY,UAAU,cAAc,IAAI;AAkDhD,IAAM,cAAN,MAEsB;AAAA,EACV;AAAA,EACA;AAAA,EACT;AAAA,EACS,YAAY,oBAAI,IAAsB;AAAA,EACtC;AAAA,EACA;AAAA,EACA;AAAA,EACA;AAAA,EACA;AAAA,EACA,gBAAgB,oBAAI,IAAY;AAAA,EAChC;AAAA,EACA,iBAAiB,oBAAI,IAAwB;AAAA,EAC7C,mBAAmB,oBAAI,IAAyC;AAAA,EAChE;AAAA,EACA;AAAA,EAET,mBAAmB;AAAA,EACX;AAAA,EAEhB,YACE,UACA,SACA,SACA,SACA;AACA,SAAK,WAAW;AAChB,SAAK,iBAAiB;AACtB,SAAK,SAAS,SAAS,UAAU;AAAA,MAC/B,KAAK,CAAC,QAAQ,QAAQ,IAAI,gBAAgB,QAAQ,KAAK,GAAG,EAAE;AAAA,MAC5D,MAAM,CAAC,QAAQ,SAAS,QAAQ,KAAK,gBAAgB,QAAQ,KAAK,GAAG,IAAI,GAAG,IAAI;AAAA,MAChF,OAAO,CAAC,QAAQ,SAAS,QAAQ,MAAM,gBAAgB,QAAQ,KAAK,GAAG,IAAI,GAAG,IAAI;AAAA,IACpF;AACA,SAAK,0BAA0B,SAAS,oBAAoB;AAC5D,SAAK,uBAAuB,SAAS,iBAAiB;AACtD,SAAK,gBAAgB,SAAS,iBAAiB;AAC/C,SAAK,kBAAkB,SAAS,mBAAmB,CAAC;AACpD,SAAK,gBAAgB,SAAS;AAE9B,SAAK,QAAQ,IAAI,WAAW;AAAA,MAC1B,SAAS;AAAA,QACP,UAAU,KAAK;AAAA,QACf;AAAA,QACA,UAAU,cAAc;AAAA,MAC1B;AAAA,IACF,CAAC;AACD,SAAK,WAAW,KAAK,MAAM,SAAS;AAAA,MAClC,SAAS;AAAA,QACP,MAAM;AAAA,MACR;AAAA,IACF,CAAC;AACD,SAAK,QAAQ,KAAK,MAAM,MAAM;AAAA,EAChC;AAAA,EAaA,MAAa,YACX,aACA,SACA,UAAuB,CAAC,GACT;AACf,UAAM,UAAU,MAAM,KAAK,iBAAiB,aAAa;AAAA,MACvD;AAAA,QACE,OAAO;AAAA,QACP,KAAK,QAAQ;AAAA,QACb,SAAS,QAAQ;AAAA,QACjB,eAAe,QAAQ;AAAA,QACvB,eAAe,QAAQ;AAAA,QACvB,SAAS,QAAQ;AAAA,MACnB;AAAA,IACF,CAAC;AACD,UAAM,KAAK,YAAY,QAAQ,KAAK;AACpC,UAAM,KAAK,SAAS,KAAK,OAAO;AAChC,eAAW,QAAQ,KAAK,iBAAiB;AACvC,WAAK,YAAY,QAAQ,KAAK;AAAA,IAChC;AAAA,EACF;AAAA,EAaA,MAAa,UACX,aACA,UACe;AACf,UAAM,UAAU,MAAM,KAAK,iBAAiB,aAAa,QAAQ;AACjE,UAAM,KAAK,YAAY,QAAQ,KAAK;AACpC,UAAM,KAAK,SAAS,KAAK,OAAO;AAChC,eAAW,QAAQ,KAAK,iBAAiB;AACvC,WAAK,YAAY,QAAQ,KAAK;AAAA,IAChC;AAAA,EACF;AAAA;AAAA,EAGA,MAAa,YACX,IACe;AACf,QAAI,CAAC,KAAK,YAAY;AACpB,WAAK,aAAa,KAAK,MAAM,SAAS;AAAA,QACpC,SAAS;AAAA,UACP,MAAM;AAAA,UACN,YAAY;AAAA,UACZ,iBAAiB,GAAG,KAAK,QAAQ;AAAA,UACjC,qBAAqB;AAAA,QACvB;AAAA,MACF,CAAC;AACD,YAAM,KAAK,WAAW,QAAQ;AAAA,IAChC;AACA,UAAM,KAAK,MAAM,KAAK,WAAW,YAAY;AAC7C,QAAI;AACF,YAAM,MAA6B;AAAA,QACjC,MAAM,OACJ,aACA,SACA,UAAuB,CAAC,MACrB;AACH,gBAAM,UAAU,MAAM,KAAK,iBAAiB,aAAa;AAAA,YACvD;AAAA,cACE,OAAO;AAAA,cACP,KAAK,QAAQ;AAAA,cACb,SAAS,QAAQ;AAAA,cACjB,eAAe,QAAQ;AAAA,cACvB,eAAe,QAAQ;AAAA,cACvB,SAAS,QAAQ;AAAA,YACnB;AAAA,UACF,CAAC;AACD,gBAAM,KAAK,YAAY,QAAQ,KAAK;AACpC,gBAAM,GAAG,KAAK,OAAO;AAAA,QACvB;AAAA,QACA,WAAW,OAAO,aAAkB,aAAsC;AACxE,gBAAM,UAAU,MAAM,KAAK,iBAAiB,aAAa,QAAQ;AACjE,gBAAM,KAAK,YAAY,QAAQ,KAAK;AACpC,gBAAM,GAAG,KAAK,OAAO;AAAA,QACvB;AAAA,MACF;AACA,YAAM,GAAG,GAAG;AACZ,YAAM,GAAG,OAAO;AAAA,IAClB,SAAS,OAAO;AACd,UAAI;AACF,cAAM,GAAG,MAAM;AAAA,MACjB,SAAS,YAAY;AACnB,aAAK,OAAO;AAAA,UACV;AAAA,UACA,QAAQ,UAAU,EAAE;AAAA,QACtB;AAAA,MACF;AACA,YAAM;AAAA,IACR;AAAA,EACF;AAAA;AAAA;AAAA,EAKA,MAAa,kBAAiC;AAC5C,UAAM,KAAK,SAAS,QAAQ;AAC5B,SAAK,OAAO,IAAI,oBAAoB;AAAA,EACtC;
AAAA,EAEA,MAAa,qBAAoC;AAC/C,UAAM,KAAK,SAAS,WAAW;AAC/B,SAAK,OAAO,IAAI,uBAAuB;AAAA,EACzC;AAAA,EAiBA,MAAa,cACX,QACA,eACA,UAA8B,CAAC,GAChB;AACf,QAAI,QAAQ,eAAe,CAAC,QAAQ,OAAO;AACzC,YAAM,IAAI;AAAA,QACR;AAAA,MACF;AAAA,IACF;AAEA,UAAM,EAAE,UAAU,WAAW,YAAY,KAAK,KAAK,cAAc,MAAM,IACrE,MAAM,KAAK,cAAc,QAAQ,eAAe,OAAO;AAEzD,UAAM,OAAO,EAAE,QAAQ,KAAK,QAAQ,UAAU,KAAK,UAAU,iBAAiB,KAAK,iBAAiB,eAAe,KAAK,cAAc;AAEtI,UAAM,SAAS,IAAI;AAAA,MACjB,aAAa,OAAO,EAAE,OAAAC,QAAO,WAAW,QAAQ,MAAM;AACpD,YAAI,CAAC,QAAQ,OAAO;AAClB,eAAK,OAAO,KAAK,qCAAqCA,MAAK,EAAE;AAC7D;AAAA,QACF;AAEA,cAAM,MAAM,QAAQ,MAAM,SAAS;AACnC,cAAM,SAAS,iBAAiB,KAAKA,QAAO,KAAK,MAAM;AACvD,YAAI,WAAW,KAAM;AAErB,cAAM,UAAU,cAAc,QAAQ,OAAO;AAC7C,cAAM,YAAY,MAAM;AAAA,UACtB;AAAA,UAAQ;AAAA,UAAKA;AAAA,UAAO;AAAA,UAAW;AAAA,UAAc;AAAA,UAC7C,EAAE,GAAG,MAAM,iBAAiB,QAAQ;AAAA,QACtC;AACA,YAAI,cAAc,KAAM;AAExB,cAAM,WAAW;AAAA,UACf;AAAA,UAAW;AAAA,UAASA;AAAA,UAAO;AAAA,UAAW,QAAQ;AAAA,QAChD;AAEA,cAAM;AAAA,UACJ,MACE;AAAA,YACE,EAAE,eAAe,SAAS,eAAe,aAAa,SAAS,YAAY;AAAA,YAC3E,MAAM,cAAc,QAAQ;AAAA,UAC9B;AAAA,UACF,EAAE,UAAU,aAAa,CAAC,GAAG,GAAG,cAAc,KAAK,OAAO,aAAa,QAAQ,YAAY;AAAA,UAC3F;AAAA,QACF;AAAA,MACF;AAAA,IACF,CAAC;AAED,SAAK,iBAAiB,IAAI,KAAK,aAAa;AAE5C,QAAI,QAAQ,eAAe,OAAO;AAChC,YAAM,KAAK;AAAA,QACT;AAAA,QAAY;AAAA,QAAK;AAAA,QAAe;AAAA,QAAO;AAAA,QAAK;AAAA,QAAc;AAAA,MAC5D;AAAA,IACF;AAAA,EACF;AAAA,EAuBA,MAAa,mBACX,QACA,aAIA,UAA8B,CAAC,GAChB;AACf,UAAM,EAAE,UAAU,WAAW,KAAK,KAAK,cAAc,MAAM,IACzD,MAAM,KAAK,cAAc,QAAQ,aAAa,OAAO;AAEvD,UAAM,OAAO,EAAE,QAAQ,KAAK,QAAQ,UAAU,KAAK,UAAU,iBAAiB,KAAK,iBAAiB,eAAe,KAAK,cAAc;AAEtI,UAAM,SAAS,IAAI;AAAA,MACjB,WAAW,OAAO;AAAA,QAChB;AAAA,QACA;AAAA,QACA;AAAA,QACA;AAAA,MACF,MAAM;AACJ,cAAM,YAAkC,CAAC;AACzC,cAAM,cAAwB,CAAC;AAE/B,mBAAW,WAAW,MAAM,UAAU;AACpC,cAAI,CAAC,QAAQ,OAAO;AAClB,iBAAK,OAAO;AAAA,cACV,qCAAqC,MAAM,KAAK;AAAA,YAClD;AACA;AAAA,UACF;AAEA,gBAAM,MAAM,QAAQ,MAAM,SAAS;AACnC,gBAAM,SAAS,iBAAiB,KAAK,MAAM,OAAO,KAAK,MAAM;AAC7D,cAAI,WAAW,KAAM;AAErB,gBAAM,UAAU,cAAc,QAAQ,OAAO;AAC7C,gBAAM,YAAY,MAAM;AAAA,YACtB;AAAA,YAAQ;AAAA,YAAK,MAAM;AAAA,YAAO;AAAA,YAAW;AAAA,YAAc;AAAA,YACnD,EAAE,GAAG,MAAM,iBAAiB,QAAQ;AAAA,UACtC;AACA,cAAI,cAAc,KAAM;AACxB,oBAAU;AAAA,YACR,gBAAgB,WAAW,SAAS,MAAM,OAAO,MAAM,WAAW,QAAQ,MAAM;AAAA,UAClF;AACA,sBAAY,KAAK,GAAG;AAAA,QACtB;AAEA,YAAI,UAAU,WAAW,EAAG;AAE5B,cAAM,OAAkB;AAAA,UACtB,WAAW,MAAM;AAAA,UACjB,eAAe,MAAM;AAAA,UACrB;AAAA,UACA;AAAA,UACA;AAAA,QACF;AAEA,cAAM;AAAA,UACJ,MAAM,YAAY,WAAW,IAAI;AAAA,UACjC;AAAA,YACE,UAAU;AAAA,YACV,aAAa,MAAM,SAChB,OAAO,CAAC,MAAM,EAAE,KAAK,EACrB,IAAI,CAAC,MAAM,EAAE,MAAO,SAAS,CAAC;AAAA,YACjC;AAAA,YACA;AAAA,YACA;AAAA,YACA,SAAS;AAAA,UACX;AAAA,UACA;AAAA,QACF;AAAA,MACF;AAAA,IACF,CAAC;AAED,SAAK,iBAAiB,IAAI,KAAK,WAAW;AAAA,EAC5C;AAAA;AAAA,EAIA,MAAa,aAAa,SAAiC;AACzD,QAAI,YAAY,QAAW;AACzB,YAAM,WAAW,KAAK,UAAU,IAAI,OAAO;AAC3C,UAAI,CAAC,UAAU;AACb,aAAK,OAAO,KAAK,+CAA+C,OAAO,GAAG;AAC1E;AAAA,MACF;AACA,YAAM,SAAS,WAAW,EAAE,MAAM,MAAM;AAAA,MAAC,CAAC;AAC1C,WAAK,UAAU,OAAO,OAAO;AAC7B,WAAK,iBAAiB,OAAO,OAAO;AACpC,WAAK,OAAO,IAAI,iCAAiC,OAAO,GAAG;AAAA,IAC7D,OAAO;AACL,YAAM,QAAQ,MAAM,KAAK,KAAK,UAAU,OAAO,CAAC,EAAE;AAAA,QAAI,CAAC,MACrD,EAAE,WAAW,EAAE,MAAM,MAAM;AAAA,QAAC,CAAC;AAAA,MAC/B;AACA,YAAM,QAAQ,WAAW,KAAK;AAC9B,WAAK,UAAU,MAAM;AACrB,WAAK,iBAAiB,MAAM;AAC5B,WAAK,OAAO,IAAI,4BAA4B;AAAA,IAC9C;AAAA,EACF;AAAA;AAAA,EAGA,MAAa,cAA6E;AACxF,QAAI,CAAC,KAAK,kBAAkB;AAC1B,YAAM,KAAK,MAAM,QAAQ;AACzB,WAAK,mBAAmB;AAAA,IAC1B;AACA,UAAM,SAAS,MAAM,KAAK,MAAM,WAAW;AAC3C,WAAO,EAAE,QAAQ,MAAM,UAAU,KAAK,UAAU,OAAO;AAAA,EACzD;AAAA,EAEO,cAAwB;AAC7B,WAAO,KAAK;AAAA,EACd;AAAA;AAAA,EAGA,MAAa,aAA4B;AACvC,UAAM,QAAyB,CAAC,KAAK,SAAS,WAAW,CAAC;AAC1D,QAAI,KAAK,YAAY;AACnB,YAAM,KAAK,KAAK,WAAW,WAAW,CAAC;AACvC,WAAK,aAAa;AAA
A,IACpB;AACA,eAAW,YAAY,KAAK,UAAU,OAAO,GAAG;AAC9C,YAAM,KAAK,SAAS,WAAW,CAAC;AAAA,IAClC;AACA,QAAI,KAAK,kBAAkB;AACzB,YAAM,KAAK,KAAK,MAAM,WAAW,CAAC;AAClC,WAAK,mBAAmB;AAAA,IAC1B;AACA,UAAM,QAAQ,WAAW,KAAK;AAC9B,SAAK,UAAU,MAAM;AACrB,SAAK,iBAAiB,MAAM;AAC5B,SAAK,OAAO,IAAI,wBAAwB;AAAA,EAC1C;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAcA,MAAc,yBACZ,gBACA,iBACA,eACA,OACA,KACA,cACA,WACe;AACf,UAAM,kBAAkB,eAAe,IAAI,CAAC,MAAM,GAAG,CAAC,QAAQ;AAC9D,UAAM,eAAe,GAAG,eAAe;AACvC,UAAM,YAAY,MAAM,aAAa;AACrC,UAAM,eAAe,MAAM,gBAAgB;AAC3C,UAAM,OAAO;AAAA,MACX,QAAQ,KAAK;AAAA,MACb,UAAU,KAAK;AAAA,MACf,iBAAiB,KAAK;AAAA,MACtB,eAAe,KAAK;AAAA,IACtB;AAEA,eAAW,MAAM,iBAAiB;AAChC,YAAM,KAAK,YAAY,EAAE;AAAA,IAC3B;AAEA,UAAM,WAAW,KAAK,oBAAoB,cAAc,OAAO,IAAI;AACnE,UAAM,SAAS,QAAQ;AACvB,UAAM,mBAAmB,UAAU,iBAAiB,KAAK,MAAM;AAE/D,UAAM,SAAS,IAAI;AAAA,MACjB,aAAa,OAAO,EAAE,OAAO,YAAY,WAAW,QAAQ,MAAM;AAChE,YAAI,CAAC,QAAQ,MAAO;AAEpB,cAAM,MAAM,QAAQ,MAAM,SAAS;AACnC,cAAM,SAAS,iBAAiB,KAAK,YAAY,KAAK,MAAM;AAC5D,YAAI,WAAW,KAAM;AAErB,cAAM,UAAU,cAAc,QAAQ,OAAO;AAC7C,cAAM,gBACH,QAAQ,2BAA2B,KACpC,WAAW,QAAQ,YAAY,EAAE;AACnC,cAAM,iBAAiB;AAAA,UACpB,QAAQ,oBAAoB,KAA4B;AAAA,UACzD;AAAA,QACF;AACA,cAAM,aAAa;AAAA,UAChB,QAAQ,wBAAwB,KAC/B,OAAO,MAAM,UAAU;AAAA,UACzB;AAAA,QACF;AACA,cAAM,aAAa;AAAA,UAChB,QAAQ,kBAAkB,KAA4B;AAAA,UACvD;AAAA,QACF;AAIA,cAAM,YAAY,aAAa,KAAK,IAAI;AACxC,YAAI,YAAY,GAAG;AACjB,mBAAS,MAAM,CAAC,EAAE,OAAO,YAAY,YAAY,CAAC,SAAS,EAAE,CAAC,CAAC;AAC/D,gBAAM,MAAM,SAAS;AACrB,mBAAS,OAAO,CAAC,EAAE,OAAO,YAAY,YAAY,CAAC,SAAS,EAAE,CAAC,CAAC;AAAA,QAClE;AAGA,cAAM,YAAY,MAAM;AAAA,UACtB;AAAA,UAAQ;AAAA,UAAK;AAAA,UAAe;AAAA,UAAW;AAAA,UAAc;AAAA,UACrD,EAAE,GAAG,MAAM,iBAAiB,QAAQ;AAAA,QACtC;AACA,YAAI,cAAc,KAAM;AAGxB,cAAM,WAAW;AAAA,UACf;AAAA,UAAW;AAAA,UAAS;AAAA,UAAe;AAAA,UAAW,QAAQ;AAAA,QACxD;AAEA,YAAI;AAEF,gBAAM,WAA2B,CAAC;AAClC,qBAAW,QAAQ,KAAK,iBAAiB;AACvC,kBAAM,IAAI,KAAK,gBAAgB,QAAQ;AACvC,gBAAI,OAAO,MAAM,WAAY,UAAS,KAAK,CAAC;AAAA,UAC9C;AACA,qBAAW,eAAe,aAAc,OAAM,YAAY,SAAS,QAAQ;AAE3E,gBAAM;AAAA,YACJ,EAAE,eAAe,SAAS,eAAe,aAAa,SAAS,YAAY;AAAA,YAC3E,MAAM,cAAc,QAAQ;AAAA,UAC9B;AAEA,qBAAW,eAAe,aAAc,OAAM,YAAY,QAAQ,QAAQ;AAC1E,qBAAW,WAAW,SAAU,SAAQ;AAAA,QAC1C,SAAS,OAAO;AACd,gBAAM,MAAM,QAAQ,KAAK;AACzB,gBAAM,cAAc,iBAAiB;AACrC,gBAAM,YAAY,kBAAkB;AAEpC,qBAAW,QAAQ,KAAK,gBAAiB,MAAK,iBAAiB,UAAU,GAAG;AAE5E,gBAAM,gBAAgB,aAAa,aAAa,IAC5C,IAAI,yBAAyB,eAAe,CAAC,SAAS,OAAO,GAAG,YAAY,EAAE,OAAO,IAAI,CAAC,IAC1F;AACJ,qBAAW,eAAe,cAAc;AACtC,kBAAM,YAAY,UAAU,UAAU,aAAa;AAAA,UACrD;AAEA,eAAK,OAAO;AAAA,YACV,4BAA4B,aAAa,aAAa,cAAc,IAAI,UAAU;AAAA,YAClF,IAAI;AAAA,UACN;AAEA,cAAI,CAAC,WAAW;AAGd,kBAAM,MAAM,KAAK,IAAI,YAAY,KAAK,gBAAgB,YAAY;AAClE,kBAAM,QAAQ,KAAK,MAAM,KAAK,OAAO,IAAI,GAAG;AAC5C,kBAAM;AAAA,cACJ;AAAA,cAAe,CAAC,GAAG;AAAA,cAAG;AAAA,cAAa;AAAA,cAAY;AAAA,cAAO;AAAA,cAAS;AAAA,YACjE;AAAA,UACF,WAAW,KAAK;AACd,kBAAM,UAAU,eAAe,KAAK,MAAM;AAAA,cACxC,OAAO;AAAA;AAAA;AAAA;AAAA,cAIP,SAAS,iBAAiB;AAAA,cAC1B,iBAAiB;AAAA,YACnB,CAAC;AAAA,UACH,OAAO;AACL,kBAAM,KAAK,gBAAgB;AAAA,cACzB,OAAO;AAAA,cACP,OAAO;AAAA,cACP,SAAS;AAAA,cACT;AAAA,YACF,CAAC;AAAA,UACH;AAAA,QACF;AAAA,MACF;AAAA,IACF,CAAC;AAED,SAAK,iBAAiB,IAAI,cAAc,aAAa;AAOrD,UAAM,KAAK,2BAA2B,UAAU,eAAe;AAE/D,SAAK,OAAO;AAAA,MACV,sCAAsC,eAAe,KAAK,IAAI,CAAC,YAAY,YAAY;AAAA,IACzF;AAAA,EACF;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAWA,MAAc,2BACZ,UACA,QACA,YAAY,KACG;AACf,UAAM,WAAW,IAAI,IAAI,MAAM;AAC/B,UAAM,WAAW,KAAK,IAAI,IAAI;AAC9B,WAAO,KAAK,IAAI,IAAI,UAAU;AAC5B,UAAI;AACF,cAAM,WAAmD,SAAS,WAAW;AAC7E,YAAI,SAAS,KAAK,CAAC,MAAM,SAAS,IAAI,EAAE,KAAK,CAAC,EAAG;AAAA,MACnD,QAAQ;AAAA,MAER;AACA,YAAM,MAAM,GAAG;AAAA,IACjB;AACA,SAAK,OAAO;AAAA,MACV,6DAA6D,OAAO,KAAK,IAAI,CAAC,YAAY,SAAS;AAAA,IACrG;AAAA,EACF
;AAAA,EAEQ,oBACN,SACA,eACA,YACU;AACV,QAAI,CAAC,KAAK,UAAU,IAAI,OAAO,GAAG;AAChC,WAAK,UAAU;AAAA,QACb;AAAA,QACA,KAAK,MAAM,SAAS;AAAA,UAClB,SAAS,EAAE,SAAS,eAAe,WAAW;AAAA,QAChD,CAAC;AAAA,MACH;AAAA,IACF;AACA,WAAO,KAAK,UAAU,IAAI,OAAO;AAAA,EACnC;AAAA,EAEQ,iBAAiB,mBAAoC;AAC3D,QAAI,OAAO,sBAAsB,SAAU,QAAO;AAClD,QACE,qBACA,OAAO,sBAAsB,YAC7B,aAAa,mBACb;AACA,aAAQ,kBAAsC;AAAA,IAChD;AACA,WAAO,OAAO,iBAAiB;AAAA,EACjC;AAAA,EAEA,MAAc,YAAYA,QAA8B;AACtD,QAAI,CAAC,KAAK,2BAA2B,KAAK,cAAc,IAAIA,MAAK,EAAG;AACpE,QAAI,CAAC,KAAK,kBAAkB;AAC1B,YAAM,KAAK,MAAM,QAAQ;AACzB,WAAK,mBAAmB;AAAA,IAC1B;AACA,UAAM,KAAK,MAAM,aAAa;AAAA,MAC5B,QAAQ,CAAC,EAAE,OAAAA,QAAO,eAAe,KAAK,cAAc,CAAC;AAAA,IACvD,CAAC;AACD,SAAK,cAAc,IAAIA,MAAK;AAAA,EAC9B;AAAA;AAAA,EAGQ,eAAe,aAAwB;AAC7C,QAAI,aAAa,UAAU;AACzB,YAAMA,SAAQ,KAAK,iBAAiB,WAAW;AAC/C,WAAK,eAAe,IAAIA,QAAO,YAAY,QAAQ;AAAA,IACrD;AAAA,EACF;AAAA;AAAA,EAGA,MAAc,gBAAgB,aAAkB,SAA4B;AAC1E,QAAI,aAAa,UAAU;AACzB,aAAO,MAAM,YAAY,SAAS,MAAM,OAAO;AAAA,IACjD;AACA,QAAI,KAAK,wBAAwB,OAAO,gBAAgB,UAAU;AAChE,YAAM,SAAS,KAAK,eAAe,IAAI,WAAW;AAClD,UAAI,OAAQ,QAAO,MAAM,OAAO,MAAM,OAAO;AAAA,IAC/C;AACA,WAAO;AAAA,EACT;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAOA,MAAc,iBACZ,aACA,UAC6G;AAC7G,SAAK,eAAe,WAAW;AAC/B,UAAMA,SAAQ,KAAK,iBAAiB,WAAW;AAC/C,UAAM,gBAAgB,MAAM,QAAQ;AAAA,MAClC,SAAS,IAAI,OAAO,MAAM;AACxB,cAAM,kBAAkB,qBAAqB;AAAA,UAC3C,eAAe,EAAE;AAAA,UACjB,eAAe,EAAE;AAAA,UACjB,SAAS,EAAE;AAAA,UACX,SAAS,EAAE;AAAA,QACb,CAAC;AAGD,mBAAW,QAAQ,KAAK,iBAAiB;AACvC,eAAK,aAAaA,QAAO,eAAe;AAAA,QAC1C;AAEA,eAAO;AAAA,UACL,OAAO,KAAK,UAAU,MAAM,KAAK,gBAAgB,aAAa,EAAE,KAAK,CAAC;AAAA,UACtE,KAAK,EAAE,OAAO;AAAA,UACd,SAAS;AAAA,QACX;AAAA,MACF,CAAC;AAAA,IACH;AACA,WAAO,EAAE,OAAAA,QAAO,UAAU,cAAc;AAAA,EAC1C;AAAA;AAAA,EAGA,MAAc,cACZ,QACA,MACA,SACA;AACA,UAAM;AAAA,MACJ,SAAS;AAAA,MACT,gBAAgB;AAAA,MAChB;AAAA,MACA,MAAM;AAAA,MACN,eAAe,CAAC;AAAA,MAChB,SAAS;AAAA,IACX,IAAI;AAEJ,UAAM,MAAM,cAAc,KAAK;AAC/B,UAAM,eAAe,KAAK,iBAAiB,IAAI,GAAG;AAClD,UAAM,eAAe,SAAS,gBAAgB,cAAc;AAC5D,QAAI,iBAAiB,cAAc;AACjC,YAAM,IAAI;AAAA,QACR,cAAc,IAAI,uBAAuB,GAAG,uCAAkC,YAAY;AAAA,MAE5F;AAAA,IACF;AAEA,UAAM,WAAW,KAAK,oBAAoB,KAAK,eAAe,QAAQ,cAAc,IAAI;AACxF,UAAM,YAAY,KAAK,eAAe,QAAQ,aAAa;AAE3D,UAAM,aAAc,OAAiB;AAAA,MAAI,CAAC,MACxC,KAAK,iBAAiB,CAAC;AAAA,IACzB;AAGA,eAAW,KAAK,YAAY;AAC1B,YAAM,KAAK,YAAY,CAAC;AAAA,IAC1B;AACA,QAAI,KAAK;AACP,iBAAW,KAAK,YAAY;AAC1B,cAAM,KAAK,YAAY,GAAG,CAAC,MAAM;AAAA,MACnC;AAAA,IACF;AAEA,UAAM,SAAS,QAAQ;AACvB,UAAM,mBAAmB,UAAU,YAAY,KAAK,QAAQ,QAAQ,cAAc;AAElF,SAAK,OAAO;AAAA,MACV,GAAG,SAAS,cAAc,mBAAmB,UAAU,0BAA0B,WAAW,KAAK,IAAI,CAAC;AAAA,IACxG;AAEA,WAAO,EAAE,UAAU,WAAW,YAAY,KAAK,KAAK,cAAc,MAAM;AAAA,EAC1E;AAAA,EAEQ,eACN,QACA,eACyB;AACzB,UAAM,YAAY,oBAAI,IAAwB;AAC9C,eAAW,KAAK,QAAQ;AACtB,UAAI,GAAG,UAAU;AACf,cAAM,OAAO,KAAK,iBAAiB,CAAC;AACpC,kBAAU,IAAI,MAAM,EAAE,QAAQ;AAC9B,aAAK,eAAe,IAAI,MAAM,EAAE,QAAQ;AAAA,MAC1C;AAAA,IACF;AACA,QAAI,eAAe;AACjB,iBAAW,CAAC,GAAG,CAAC,KAAK,eAAe;AAClC,kBAAU,IAAI,GAAG,CAAC;AAClB,aAAK,eAAe,IAAI,GAAG,CAAC;AAAA,MAC9B;AAAA,IACF;AACA,WAAO;AAAA,EACT;AACF;;;AKnwBO,SAAS,MAAwB,MAAS;AAC/C,QAAM,KAAK,OAA6D;AAAA,IACtE,SAAS;AAAA,IACT,QAAQ;AAAA,EACV;AAEA,KAAG,SAAS,CACV,YACwC;AAAA,IACxC,SAAS;AAAA,IACT,QAAQ;AAAA,IACR,UAAU;AAAA,EACZ;AAEA,SAAO;AACT;","names":["topic","topic","topic","topic"]}
package/dist/core.d.mts CHANGED
@@ -1,5 +1,5 @@
- import { T as TopicMapConstraint, I as IKafkaClient, C as ClientId, G as GroupId, k as KafkaClientOptions, b as TopicDescriptor, n as SendOptions, B as BatchMessageItem, r as TransactionContext, e as EventEnvelope, a as ConsumerOptions, c as BatchMeta } from './envelope-C66_h8r_.mjs';
- export { d as ConsumerInterceptor, E as EnvelopeHeaderOptions, H as HEADER_CORRELATION_ID, f as HEADER_EVENT_ID, g as HEADER_SCHEMA_VERSION, h as HEADER_TIMESTAMP, i as HEADER_TRACEPARENT, j as InferSchema, K as KafkaInstrumentation, l as KafkaLogger, M as MessageHeaders, m as MessageLostContext, R as RetryOptions, S as SchemaLike, o as SubscribeRetryOptions, p as TTopicMessageMap, q as TopicsFrom, s as buildEnvelopeHeaders, t as decodeHeaders, u as extractEnvelope, v as getEnvelopeContext, w as runWithEnvelopeContext, x as topic } from './envelope-C66_h8r_.mjs';
+ import { T as TopicMapConstraint, I as IKafkaClient, C as ClientId, G as GroupId, k as KafkaClientOptions, b as TopicDescriptor, n as SendOptions, B as BatchMessageItem, r as TransactionContext, e as EventEnvelope, a as ConsumerOptions, c as BatchMeta } from './envelope-BR8d1m8c.mjs';
+ export { d as ConsumerInterceptor, E as EnvelopeHeaderOptions, H as HEADER_CORRELATION_ID, f as HEADER_EVENT_ID, g as HEADER_SCHEMA_VERSION, h as HEADER_TIMESTAMP, i as HEADER_TRACEPARENT, j as InferSchema, K as KafkaInstrumentation, l as KafkaLogger, M as MessageHeaders, m as MessageLostContext, R as RetryOptions, S as SchemaLike, o as SubscribeRetryOptions, p as TTopicMessageMap, q as TopicsFrom, s as buildEnvelopeHeaders, t as decodeHeaders, u as extractEnvelope, v as getEnvelopeContext, w as runWithEnvelopeContext, x as topic } from './envelope-BR8d1m8c.mjs';
 
  /**
  * Type-safe Kafka client.
@@ -44,7 +44,7 @@ declare class KafkaClient<T extends TopicMapConstraint<T>> implements IKafkaClie
  /** Subscribe to topics and consume messages in batches. */
  startBatchConsumer<K extends Array<keyof T>>(topics: K, handleBatch: (envelopes: EventEnvelope<T[K[number]]>[], meta: BatchMeta) => Promise<void>, options?: ConsumerOptions<T>): Promise<void>;
  startBatchConsumer<D extends TopicDescriptor<string & keyof T, T[string & keyof T]>>(topics: D[], handleBatch: (envelopes: EventEnvelope<D["__type"]>[], meta: BatchMeta) => Promise<void>, options?: ConsumerOptions<T>): Promise<void>;
- stopConsumer(): Promise<void>;
+ stopConsumer(groupId?: string): Promise<void>;
  /** Check broker connectivity and return status, clientId, and available topics. */
  checkStatus(): Promise<{
  status: 'up';
@@ -54,6 +54,25 @@ declare class KafkaClient<T extends TopicMapConstraint<T>> implements IKafkaClie
  getClientId(): ClientId;
  /** Gracefully disconnect producer, all consumers, and admin. */
  disconnect(): Promise<void>;
+ /**
+ * Auto-start companion consumers on `<topic>.retry` for each original topic.
+ * Called by `startConsumer` when `retryTopics: true`.
+ *
+ * Flow per message:
+ * 1. Sleep until `x-retry-after` (scheduled by the main consumer or previous retry hop)
+ * 2. Call the original handler
+ * 3. On failure: if retries remain → re-send to `<originalTopic>.retry` with incremented attempt
+ * if exhausted → DLQ or onMessageLost
+ */
+ private startRetryTopicConsumers;
+ /**
+ * Poll `consumer.assignment()` until the consumer has received at least one
+ * partition for the given topics, then return. Logs a warning and returns
+ * (rather than throwing) on timeout so that a slow broker does not break
+ * the caller — in the worst case a message sent immediately after would be
+ * missed, which is the same behaviour as before this guard was added.
+ */
+ private waitForPartitionAssignment;
  private getOrCreateConsumer;
  private resolveTopicName;
  private ensureTopic;
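The `stopConsumer(groupId?: string)` signature added above lets callers stop a single consumer group instead of the whole client. A minimal usage sketch, assuming the package root re-exports `KafkaClient`, `topic`, and `TopicsFrom`, and assuming the no-argument call keeps its previous stop-everything behaviour; the topic name and group id are placeholders:

```ts
import { KafkaClient, topic, type TopicsFrom } from "@drarzter/kafka-client";

// Hypothetical topic map — name and payload shape are placeholders.
const OrderCreated = topic("order.created")<{ orderId: string }>();
type Topics = TopicsFrom<typeof OrderCreated>;

// Assume an already-connected client whose consumers were started elsewhere.
declare const kafka: KafkaClient<Topics>;

// Stop only the consumer registered for this group id; other groups keep running.
await kafka.stopConsumer("orders-group");

// No argument: assumed to stop every consumer this client started,
// matching the old zero-argument signature.
await kafka.stopConsumer();
```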