@drarzter/kafka-client 0.5.4 → 0.5.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -32,7 +32,11 @@ Type-safe Kafka client for Node.js. Framework-agnostic core with a first-class N
  - [Error classes](#error-classes)
  - [Retry topic chain](#retry-topic-chain)
  - [stopConsumer](#stopconsumer)
+ - [Consumer handles](#consumer-handles)
  - [onMessageLost](#onmessagelost)
+ - [onRebalance](#onrebalance)
+ - [Consumer lag](#consumer-lag)
+ - [Handler timeout warning](#handler-timeout-warning)
  - [Schema validation](#schema-validation)
  - [Health check](#health-check)
  - [Testing](#testing)
@@ -711,6 +715,7 @@ Options for `sendMessage()` — the third argument:
  | `dlq` | `false` | Send to `{topic}.dlq` after all retries exhausted — message carries `x-dlq-*` metadata headers |
  | `retryTopics` | `false` | Route failed messages through `{topic}.retry` instead of sleeping in-process (see [Retry topic chain](#retry-topic-chain)) |
  | `interceptors` | `[]` | Array of before/after/onError hooks |
+ | `handlerTimeoutMs` | — | Log a warning if the handler hasn't resolved within this window (ms) — does not cancel the handler |
  | `batch` | `false` | (decorator only) Use `startBatchConsumer` instead of `startConsumer` |
  | `subscribeRetry.retries` | `5` | Max attempts for `consumer.subscribe()` when topic doesn't exist yet |
  | `subscribeRetry.backoffMs` | `5000` | Delay between subscribe retry attempts (ms) |
@@ -731,6 +736,7 @@ Passed to `KafkaModule.register()` or returned from `registerAsync()` factory:
  | `strictSchemas` | `true` | Validate string topic keys against schemas registered via TopicDescriptor |
  | `instrumentation` | `[]` | Client-wide instrumentation hooks (e.g. OTel). Applied to both send and consume paths |
  | `onMessageLost` | — | Called when a message is silently dropped without DLQ — use to alert, log to external systems, or trigger fallback logic |
+ | `onRebalance` | — | Called on every partition assign/revoke event across all consumers created by this client |

  **Module-scoped** (default) — import `KafkaModule` in each module that needs it:

@@ -861,6 +867,21 @@ await kafka.stopConsumer();

  `stopConsumer(groupId)` disconnects and removes only that group's consumer, leaving other groups running. Useful when you want to pause processing for a specific topic without restarting the whole client.

+ ## Consumer handles
+
+ `startConsumer()` and `startBatchConsumer()` return a `ConsumerHandle` instead of `void`. Use it to stop a specific consumer without needing to remember the group ID:
+
+ ```typescript
+ const handle = await kafka.startConsumer(['orders'], handler);
+
+ console.log(handle.groupId); // e.g. "my-group"
+
+ // Later — stop only this consumer, producer stays connected
+ await handle.stop();
+ ```
+
+ `handle.stop()` is equivalent to `kafka.stopConsumer(handle.groupId)`. Useful in lifecycle methods or when you need to conditionally stop one consumer while others keep running.
+
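Handles compose well with graceful shutdown. A minimal sketch, assuming only the `ConsumerHandle` shape documented above; the `track` and `stopAll` helpers are hypothetical, not part of the package:

```typescript
// Assumed from the docs above: the shape returned by startConsumer().
type ConsumerHandle = { groupId: string; stop: () => Promise<void> };

// Hypothetical registry — real handles would come from kafka.startConsumer(...).
const handles: ConsumerHandle[] = [];

function track(handle: ConsumerHandle): ConsumerHandle {
  handles.push(handle);
  return handle;
}

// Stop every tracked consumer; Promise.allSettled tolerates individual failures.
async function stopAll(): Promise<string[]> {
  const stopped: string[] = [];
  await Promise.allSettled(
    handles.map(async (h) => {
      await h.stop();
      stopped.push(h.groupId);
    }),
  );
  return stopped;
}
```

Call `stopAll()` from a `SIGTERM` handler or a NestJS `onModuleDestroy` hook; the producer and untracked consumers are unaffected.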
  ## onMessageLost

  By default, if a consumer handler throws and `dlq` is not enabled, the message is logged and dropped. Use `onMessageLost` to catch these silent losses:
@@ -886,6 +907,59 @@ const kafka = new KafkaClient('my-app', 'my-group', ['localhost:9092'], {

  It does NOT fire when `dlq: true` — in that case the message is preserved in `{topic}.dlq`.

+ ## onRebalance
+
+ React to partition rebalance events without patching the consumer. Useful for flushing in-flight state before partitions are revoked, or for logging/metrics:
+
+ ```typescript
+ const kafka = new KafkaClient('my-app', 'my-group', ['localhost:9092'], {
+   onRebalance: (type, partitions) => {
+     // type — 'assign' | 'revoke'
+     // partitions — Array<{ topic: string; partition: number }>
+     console.log(`Rebalance ${type}:`, partitions);
+   },
+ });
+ ```
+
+ - `'assign'` fires when this instance receives new partitions (e.g. on startup or when another consumer leaves the group).
+ - `'revoke'` fires when partitions are taken away (e.g. another consumer joins).
+
+ The callback is applied to **every** consumer created by this client. If you need per-consumer rebalance handling, use separate `KafkaClient` instances.
+
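A typical use of the callback is to flush buffered per-partition state on `'revoke'`, before ownership moves to another consumer. A sketch assuming only the callback signature documented above; `buffers` and `flushed` stand in for your own state:

```typescript
type RebalanceType = 'assign' | 'revoke';
type PartitionRef = { topic: string; partition: number };

// Pending work keyed by "topic:partition"; `flushed` collects what we hand off.
const buffers = new Map<string, string[]>();
const flushed: string[] = [];

const onRebalance = (type: RebalanceType, partitions: PartitionRef[]): void => {
  if (type !== 'revoke') return; // nothing to flush on assign
  for (const { topic, partition } of partitions) {
    const key = `${topic}:${partition}`;
    const pending = buffers.get(key);
    if (pending?.length) {
      flushed.push(...pending); // real code would persist these durably
      buffers.delete(key);
    }
  }
};
```

Pass a callback like this as the `onRebalance` option; partitions not named in the revoke event keep their buffers.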
+ ## Consumer lag
+
+ Query consumer group lag per partition via the admin API — no external tooling needed:
+
+ ```typescript
+ const lag = await kafka.getConsumerLag();
+ // [{ topic: 'orders', partition: 0, lag: 12 }, ...]
+
+ // Or for a different group:
+ const lag2 = await kafka.getConsumerLag('another-group');
+ ```
+
+ Lag is computed as `brokerHighWatermark − lastCommittedOffset`. A partition with a committed offset of `-1` (nothing ever committed) reports full lag equal to the high watermark.
+
+ Returns an empty array when the group has no committed offsets at all.
+
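The per-partition formula can be mirrored in your own monitoring code as a pure function (a sketch; `computeLag` is not exported by the package):

```typescript
// lag = high watermark − committed offset; -1 means nothing was ever committed,
// so the whole partition counts as lag.
function computeLag(committedOffset: number, highWatermark: number): number {
  if (committedOffset === -1) return highWatermark;
  return Math.max(0, highWatermark - committedOffset);
}
```

The `Math.max` clamp guards against a committed offset momentarily ahead of a stale high-watermark read, so lag never goes negative.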
+ ## Handler timeout warning
+
+ Catch stuck handlers before they silently starve a partition. Set `handlerTimeoutMs` on `startConsumer` or `startBatchConsumer`:
+
+ ```typescript
+ await kafka.startConsumer(['orders'], handler, {
+   handlerTimeoutMs: 5_000, // warn if handler hasn't resolved after 5 s
+ });
+ ```
+
+ If the handler hasn't resolved within the window, a `warn` is logged:
+
+ ```text
+ [KafkaClient:my-app] Handler for topic "orders" has not resolved after 5000ms — possible stuck handler
+ ```
+
+ The handler is **not** cancelled — the warning is diagnostic only. Combine with `retry` to automatically give up after a fixed number of slow attempts.
+
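Under the hood this is just a timer racing the handler's promise; nothing interrupts the handler. A standalone sketch of the same idea (`withTimeoutWarning` is illustrative, not the package's internal API):

```typescript
// Run fn() and report via `warn` if it is still pending after timeoutMs.
// The timer is cleared once the promise settles, so fast handlers stay silent.
async function withTimeoutWarning<T>(
  fn: () => Promise<T>,
  timeoutMs: number,
  warn: (msg: string) => void,
): Promise<T> {
  const timer = setTimeout(
    () => warn(`handler has not resolved after ${timeoutMs}ms`),
    timeoutMs,
  );
  try {
    return await fn(); // the handler's result (or rejection) passes through untouched
  } finally {
    clearTimeout(timer);
  }
}
```

Because the handler keeps running after the warning, a genuinely hung handler still blocks its partition; the warning only tells you where to look.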
  ## Schema validation

  Add runtime message validation using any library with a `.parse()` method — Zod, Valibot, ArkType, or a custom validator. No extra dependency required.
@@ -132,9 +132,20 @@ async function validateWithSchema(message, raw, topic2, schemaMap, interceptors,
        originalHeaders: deps.originalHeaders
      });
    } else {
-     await deps.onMessageLost?.({ topic: topic2, error: validationError, attempt: 0, headers: deps.originalHeaders ?? {} });
+     await deps.onMessageLost?.({
+       topic: topic2,
+       error: validationError,
+       attempt: 0,
+       headers: deps.originalHeaders ?? {}
+     });
    }
-   const errorEnvelope = extractEnvelope(message, deps.originalHeaders ?? {}, topic2, -1, "");
+   const errorEnvelope = extractEnvelope(
+     message,
+     deps.originalHeaders ?? {},
+     topic2,
+     -1,
+     ""
+   );
    for (const interceptor of interceptors) {
      await interceptor.onError?.(errorEnvelope, validationError);
    }
@@ -186,7 +197,10 @@ async function sendToRetryTopic(originalTopic, rawMessages, attempt, maxRetries,
    };
    try {
      for (const raw of rawMessages) {
-       await deps.producer.send({ topic: retryTopic, messages: [{ value: raw, headers }] });
+       await deps.producer.send({
+         topic: retryTopic,
+         messages: [{ value: raw, headers }]
+       });
      }
      deps.logger.warn(
        `Message queued in retry topic ${retryTopic} (attempt ${attempt}/${maxRetries})`
@@ -199,7 +213,15 @@ async function sendToRetryTopic(originalTopic, rawMessages, attempt, maxRetries,
    }
  }
  async function executeWithRetry(fn, ctx, deps) {
-   const { envelope, rawMessages, interceptors, dlq, retry, isBatch, retryTopics } = ctx;
+   const {
+     envelope,
+     rawMessages,
+     interceptors,
+     dlq,
+     retry,
+     isBatch,
+     retryTopics
+   } = ctx;
    const maxAttempts = retryTopics ? 1 : retry ? retry.maxRetries + 1 : 1;
    const backoffMs = retry?.backoffMs ?? 1e3;
    const maxBackoffMs = retry?.maxBackoffMs ?? 3e4;
@@ -334,6 +356,7 @@ var KafkaClient = class {
    runningConsumers = /* @__PURE__ */ new Map();
    instrumentation;
    onMessageLost;
+   onRebalance;
    isAdminConnected = false;
    clientId;
    constructor(clientId, groupId, brokers, options) {
@@ -349,6 +372,7 @@ var KafkaClient = class {
      this.numPartitions = options?.numPartitions ?? 1;
      this.instrumentation = options?.instrumentation ?? [];
      this.onMessageLost = options?.onMessageLost;
+     this.onRebalance = options?.onRebalance;
      this.kafka = new KafkaClass({
        kafkaJS: {
          clientId: this.clientId,
@@ -455,7 +479,13 @@ var KafkaClient = class {
        );
      }
      const { consumer, schemaMap, topicNames, gid, dlq, interceptors, retry } = await this.setupConsumer(topics, "eachMessage", options);
-     const deps = { logger: this.logger, producer: this.producer, instrumentation: this.instrumentation, onMessageLost: this.onMessageLost };
+     const deps = {
+       logger: this.logger,
+       producer: this.producer,
+       instrumentation: this.instrumentation,
+       onMessageLost: this.onMessageLost
+     };
+     const timeoutMs = options.handlerTimeoutMs;
      await consumer.run({
        eachMessage: async ({ topic: topic2, partition, message }) => {
          if (!message.value) {
@@ -484,11 +514,24 @@ var KafkaClient = class {
            message.offset
          );
          await executeWithRetry(
-           () => runWithEnvelopeContext(
-             { correlationId: envelope.correlationId, traceparent: envelope.traceparent },
-             () => handleMessage(envelope)
-           ),
-           { envelope, rawMessages: [raw], interceptors, dlq, retry, retryTopics: options.retryTopics },
+           () => {
+             const fn = () => runWithEnvelopeContext(
+               {
+                 correlationId: envelope.correlationId,
+                 traceparent: envelope.traceparent
+               },
+               () => handleMessage(envelope)
+             );
+             return timeoutMs ? this.wrapWithTimeoutWarning(fn, timeoutMs, topic2) : fn();
+           },
+           {
+             envelope,
+             rawMessages: [raw],
+             interceptors,
+             dlq,
+             retry,
+             retryTopics: options.retryTopics
+           },
            deps
          );
        }
@@ -505,10 +548,17 @@ var KafkaClient = class {
          schemaMap
        );
      }
+     return { groupId: gid, stop: () => this.stopConsumer(gid) };
    }
    async startBatchConsumer(topics, handleBatch, options = {}) {
      const { consumer, schemaMap, gid, dlq, interceptors, retry } = await this.setupConsumer(topics, "eachBatch", options);
-     const deps = { logger: this.logger, producer: this.producer, instrumentation: this.instrumentation, onMessageLost: this.onMessageLost };
+     const deps = {
+       logger: this.logger,
+       producer: this.producer,
+       instrumentation: this.instrumentation,
+       onMessageLost: this.onMessageLost
+     };
+     const timeoutMs = options.handlerTimeoutMs;
      await consumer.run({
        eachBatch: async ({
          batch,
@@ -540,7 +590,13 @@ var KafkaClient = class {
            );
            if (validated === null) continue;
            envelopes.push(
-             extractEnvelope(validated, headers, batch.topic, batch.partition, message.offset)
+             extractEnvelope(
+               validated,
+               headers,
+               batch.topic,
+               batch.partition,
+               message.offset
+             )
            );
            rawMessages.push(raw);
          }
@@ -553,7 +609,10 @@ var KafkaClient = class {
            commitOffsetsIfNecessary
          };
          await executeWithRetry(
-           () => handleBatch(envelopes, meta),
+           () => {
+             const fn = () => handleBatch(envelopes, meta);
+             return timeoutMs ? this.wrapWithTimeoutWarning(fn, timeoutMs, batch.topic) : fn();
+           },
            {
              envelope: envelopes,
              rawMessages: batch.messages.filter((m) => m.value).map((m) => m.value.toString()),
@@ -567,13 +626,16 @@ var KafkaClient = class {
        }
      });
      this.runningConsumers.set(gid, "eachBatch");
+     return { groupId: gid, stop: () => this.stopConsumer(gid) };
    }
    // ── Consumer lifecycle ───────────────────────────────────────────
    async stopConsumer(groupId) {
      if (groupId !== void 0) {
        const consumer = this.consumers.get(groupId);
        if (!consumer) {
-         this.logger.warn(`stopConsumer: no active consumer for group "${groupId}"`);
+         this.logger.warn(
+           `stopConsumer: no active consumer for group "${groupId}"`
+         );
          return;
        }
        await consumer.disconnect().catch(() => {
@@ -592,6 +654,32 @@ var KafkaClient = class {
        this.logger.log("All consumers disconnected");
      }
    }
+   /**
+    * Query consumer group lag per partition.
+    * Lag = broker high-watermark − last committed offset.
+    * A committed offset of -1 (nothing committed yet) counts as full lag.
+    */
+   async getConsumerLag(groupId) {
+     const gid = groupId ?? this.defaultGroupId;
+     if (!this.isAdminConnected) {
+       await this.admin.connect();
+       this.isAdminConnected = true;
+     }
+     const committedByTopic = await this.admin.fetchOffsets({ groupId: gid });
+     const result = [];
+     for (const { topic: topic2, partitions } of committedByTopic) {
+       const brokerOffsets = await this.admin.fetchTopicOffsets(topic2);
+       for (const { partition, offset } of partitions) {
+         const broker = brokerOffsets.find((o) => o.partition === partition);
+         if (!broker) continue;
+         const committed = parseInt(offset, 10);
+         const high = parseInt(broker.high, 10);
+         const lag = committed === -1 ? high : Math.max(0, high - committed);
+         result.push({ topic: topic2, partition, lag });
+       }
+     }
+     return result;
+   }
    /** Check broker connectivity and return status, clientId, and available topics. */
    async checkStatus() {
      if (!this.isAdminConnected) {
@@ -700,19 +788,30 @@ var KafkaClient = class {
          const c = inst.beforeConsume?.(envelope);
          if (typeof c === "function") cleanups.push(c);
        }
-       for (const interceptor of interceptors) await interceptor.before?.(envelope);
+       for (const interceptor of interceptors)
+         await interceptor.before?.(envelope);
        await runWithEnvelopeContext(
-         { correlationId: envelope.correlationId, traceparent: envelope.traceparent },
+         {
+           correlationId: envelope.correlationId,
+           traceparent: envelope.traceparent
+         },
          () => handleMessage(envelope)
        );
-       for (const interceptor of interceptors) await interceptor.after?.(envelope);
+       for (const interceptor of interceptors)
+         await interceptor.after?.(envelope);
        for (const cleanup of cleanups) cleanup();
      } catch (error) {
        const err = toError(error);
        const nextAttempt = currentAttempt + 1;
        const exhausted = currentAttempt >= maxRetries;
-       for (const inst of this.instrumentation) inst.onConsumeError?.(envelope, err);
-       const reportedError = exhausted && maxRetries > 1 ? new KafkaRetryExhaustedError(originalTopic, [envelope.payload], maxRetries, { cause: err }) : err;
+       for (const inst of this.instrumentation)
+         inst.onConsumeError?.(envelope, err);
+       const reportedError = exhausted && maxRetries > 1 ? new KafkaRetryExhaustedError(
+         originalTopic,
+         [envelope.payload],
+         maxRetries,
+         { cause: err }
+       ) : err;
        for (const interceptor of interceptors) {
          await interceptor.onError?.(envelope, reportedError);
        }
@@ -783,15 +882,48 @@ var KafkaClient = class {
    }
    getOrCreateConsumer(groupId, fromBeginning, autoCommit) {
      if (!this.consumers.has(groupId)) {
-       this.consumers.set(
-         groupId,
-         this.kafka.consumer({
-           kafkaJS: { groupId, fromBeginning, autoCommit }
-         })
-       );
+       const config = {
+         kafkaJS: { groupId, fromBeginning, autoCommit }
+       };
+       if (this.onRebalance) {
+         const onRebalance = this.onRebalance;
+         config["rebalance_cb"] = (err, assignment) => {
+           const type = err.code === -175 ? "assign" : "revoke";
+           try {
+             onRebalance(
+               type,
+               assignment.map((p) => ({
+                 topic: p.topic,
+                 partition: p.partition
+               }))
+             );
+           } catch (e) {
+             this.logger.warn(
+               `onRebalance callback threw: ${e.message}`
+             );
+           }
+         };
+       }
+       this.consumers.set(groupId, this.kafka.consumer(config));
      }
      return this.consumers.get(groupId);
    }
+   /**
+    * Start a timer that logs a warning if `fn` hasn't resolved within `timeoutMs`.
+    * The handler itself is not cancelled — the warning is diagnostic only.
+    */
+   wrapWithTimeoutWarning(fn, timeoutMs, topic2) {
+     let timer;
+     const promise = fn().finally(() => {
+       if (timer !== void 0) clearTimeout(timer);
+     });
+     timer = setTimeout(() => {
+       this.logger.warn(
+         `Handler for topic "${topic2}" has not resolved after ${timeoutMs}ms \u2014 possible stuck handler`
+       );
+     }, timeoutMs);
+     return promise;
+   }
    resolveTopicName(topicOrDescriptor) {
      if (typeof topicOrDescriptor === "string") return topicOrDescriptor;
      if (topicOrDescriptor && typeof topicOrDescriptor === "object" && "__topic" in topicOrDescriptor) {
@@ -848,7 +980,9 @@ var KafkaClient = class {
        inst.beforeSend?.(topic2, envelopeHeaders);
      }
      return {
-       value: JSON.stringify(await this.validateMessage(topicOrDesc, m.value)),
+       value: JSON.stringify(
+         await this.validateMessage(topicOrDesc, m.value)
+       ),
        key: m.key ?? null,
        headers: envelopeHeaders
      };
@@ -874,7 +1008,11 @@ var KafkaClient = class {
        `Cannot use ${mode} on consumer group "${gid}" \u2014 it is already running with ${oppositeMode}. Use a different groupId for this consumer.`
      );
    }
-   const consumer = this.getOrCreateConsumer(gid, fromBeginning, options.autoCommit ?? true);
+   const consumer = this.getOrCreateConsumer(
+     gid,
+     fromBeginning,
+     options.autoCommit ?? true
+   );
    const schemaMap = this.buildSchemaMap(topics, optionSchemas);
    const topicNames = topics.map(
      (t) => this.resolveTopicName(t)
@@ -888,7 +1026,12 @@ var KafkaClient = class {
      }
    }
    await consumer.connect();
-   await subscribeWithRetry(consumer, topicNames, this.logger, options.subscribeRetry);
+   await subscribeWithRetry(
+     consumer,
+     topicNames,
+     this.logger,
+     options.subscribeRetry
+   );
    this.logger.log(
      `${mode === "eachBatch" ? "Batch consumer" : "Consumer"} subscribed to topics: ${topicNames.join(", ")}`
    );
@@ -944,4 +1087,4 @@ export {
    KafkaClient,
    topic
  };
- //# sourceMappingURL=chunk-3QXTW66R.mjs.map
+ //# sourceMappingURL=chunk-Z3O5GTS7.mjs.map
*/\n public async disconnect(): Promise<void> {\n const tasks: Promise<void>[] = [this.producer.disconnect()];\n if (this.txProducer) {\n tasks.push(this.txProducer.disconnect());\n this.txProducer = undefined;\n }\n for (const consumer of this.consumers.values()) {\n tasks.push(consumer.disconnect());\n }\n if (this.isAdminConnected) {\n tasks.push(this.admin.disconnect());\n this.isAdminConnected = false;\n }\n await Promise.allSettled(tasks);\n this.consumers.clear();\n this.runningConsumers.clear();\n this.logger.log(\"All connections closed\");\n }\n\n // ── Retry topic chain ────────────────────────────────────────────\n\n /**\n * Auto-start companion consumers on `<topic>.retry` for each original topic.\n * Called by `startConsumer` when `retryTopics: true`.\n *\n * Flow per message:\n * 1. Sleep until `x-retry-after` (scheduled by the main consumer or previous retry hop)\n * 2. Call the original handler\n * 3. On failure: if retries remain → re-send to `<originalTopic>.retry` with incremented attempt\n * if exhausted → DLQ or onMessageLost\n */\n private async startRetryTopicConsumers(\n originalTopics: string[],\n originalGroupId: string,\n handleMessage: (envelope: EventEnvelope<any>) => Promise<void>,\n retry: { maxRetries: number; backoffMs?: number; maxBackoffMs?: number },\n dlq: boolean,\n interceptors: any[],\n schemaMap: Map<string, SchemaLike>,\n ): Promise<void> {\n const retryTopicNames = originalTopics.map((t) => `${t}.retry`);\n const retryGroupId = `${originalGroupId}-retry`;\n const backoffMs = retry.backoffMs ?? 1_000;\n const maxBackoffMs = retry.maxBackoffMs ?? 
30_000;\n const deps = {\n logger: this.logger,\n producer: this.producer,\n instrumentation: this.instrumentation,\n onMessageLost: this.onMessageLost,\n };\n\n for (const rt of retryTopicNames) {\n await this.ensureTopic(rt);\n }\n\n const consumer = this.getOrCreateConsumer(retryGroupId, false, true);\n await consumer.connect();\n await subscribeWithRetry(consumer, retryTopicNames, this.logger);\n\n await consumer.run({\n eachMessage: async ({ topic: retryTopic, partition, message }) => {\n if (!message.value) return;\n\n const raw = message.value.toString();\n const parsed = parseJsonMessage(raw, retryTopic, this.logger);\n if (parsed === null) return;\n\n const headers = decodeHeaders(message.headers);\n const originalTopic =\n (headers[RETRY_HEADER_ORIGINAL_TOPIC] as string | undefined) ??\n retryTopic.replace(/\\.retry$/, \"\");\n const currentAttempt = parseInt(\n (headers[RETRY_HEADER_ATTEMPT] as string | undefined) ?? \"1\",\n 10,\n );\n const maxRetries = parseInt(\n (headers[RETRY_HEADER_MAX_RETRIES] as string | undefined) ??\n String(retry.maxRetries),\n 10,\n );\n const retryAfter = parseInt(\n (headers[RETRY_HEADER_AFTER] as string | undefined) ?? 
\"0\",\n 10,\n );\n\n // Pause only this partition for the scheduled delay so that other\n // topic-partitions on the same retry consumer continue processing.\n const remaining = retryAfter - Date.now();\n if (remaining > 0) {\n consumer.pause([{ topic: retryTopic, partitions: [partition] }]);\n await sleep(remaining);\n consumer.resume([{ topic: retryTopic, partitions: [partition] }]);\n }\n\n // Validate schema against original topic's schema (if any)\n const validated = await validateWithSchema(\n parsed,\n raw,\n originalTopic,\n schemaMap,\n interceptors,\n dlq,\n { ...deps, originalHeaders: headers },\n );\n if (validated === null) return;\n\n // Build envelope with originalTopic so correlationId/traceparent are correct\n const envelope = extractEnvelope(\n validated,\n headers,\n originalTopic,\n partition,\n message.offset,\n );\n\n try {\n // Instrumentation: beforeConsume\n const cleanups: (() => void)[] = [];\n for (const inst of this.instrumentation) {\n const c = inst.beforeConsume?.(envelope);\n if (typeof c === \"function\") cleanups.push(c);\n }\n for (const interceptor of interceptors)\n await interceptor.before?.(envelope);\n\n await runWithEnvelopeContext(\n {\n correlationId: envelope.correlationId,\n traceparent: envelope.traceparent,\n },\n () => handleMessage(envelope),\n );\n\n for (const interceptor of interceptors)\n await interceptor.after?.(envelope);\n for (const cleanup of cleanups) cleanup();\n } catch (error) {\n const err = toError(error);\n const nextAttempt = currentAttempt + 1;\n const exhausted = currentAttempt >= maxRetries;\n\n for (const inst of this.instrumentation)\n inst.onConsumeError?.(envelope, err);\n\n const reportedError =\n exhausted && maxRetries > 1\n ? 
new KafkaRetryExhaustedError(\n originalTopic,\n [envelope.payload],\n maxRetries,\n { cause: err },\n )\n : err;\n for (const interceptor of interceptors) {\n await interceptor.onError?.(envelope, reportedError);\n }\n\n this.logger.error(\n `Retry consumer error for ${originalTopic} (attempt ${currentAttempt}/${maxRetries}):`,\n err.stack,\n );\n\n if (!exhausted) {\n // currentAttempt is the hop that just failed (1-based).\n // Before hop N+1 we want backoffMs * 2^N, so exponent = currentAttempt.\n const cap = Math.min(backoffMs * 2 ** currentAttempt, maxBackoffMs);\n const delay = Math.floor(Math.random() * cap);\n await sendToRetryTopic(\n originalTopic,\n [raw],\n nextAttempt,\n maxRetries,\n delay,\n headers,\n deps,\n );\n } else if (dlq) {\n await sendToDlq(originalTopic, raw, deps, {\n error: err,\n // +1 to account for the main consumer's initial attempt before\n // routing to the retry topic, making this consistent with the\n // in-process retry path where attempt counts all tries.\n attempt: currentAttempt + 1,\n originalHeaders: headers,\n });\n } else {\n await deps.onMessageLost?.({\n topic: originalTopic,\n error: err,\n attempt: currentAttempt,\n headers,\n });\n }\n }\n },\n });\n\n this.runningConsumers.set(retryGroupId, \"eachMessage\");\n\n // Block until the retry consumer has received at least one partition assignment.\n // consumer.run() starts the group-join protocol asynchronously; without this wait,\n // the caller may send a message to the original topic before the retry consumer\n // has established its \"latest\" offset on the (initially empty) retry partition,\n // causing it to skip any retry messages produced in that window.\n await this.waitForPartitionAssignment(consumer, retryTopicNames);\n\n this.logger.log(\n `Retry topic consumers started for: ${originalTopics.join(\", \")} (group: ${retryGroupId})`,\n );\n }\n\n // ── Private helpers ──────────────────────────────────────────────\n\n /**\n * Poll `consumer.assignment()` 
until the consumer has received at least one\n * partition for the given topics, then return. Logs a warning and returns\n * (rather than throwing) on timeout so that a slow broker does not break\n * the caller — in the worst case a message sent immediately after would be\n * missed, which is the same behaviour as before this guard was added.\n */\n private async waitForPartitionAssignment(\n consumer: Consumer,\n topics: string[],\n timeoutMs = 10_000,\n ): Promise<void> {\n const topicSet = new Set(topics);\n const deadline = Date.now() + timeoutMs;\n while (Date.now() < deadline) {\n try {\n const assigned: { topic: string; partition: number }[] =\n consumer.assignment();\n if (assigned.some((a) => topicSet.has(a.topic))) return;\n } catch {\n // consumer.assignment() throws if not yet in CONNECTED state — keep polling\n }\n await sleep(200);\n }\n this.logger.warn(\n `Retry consumer did not receive partition assignments for [${topics.join(\", \")}] within ${timeoutMs}ms`,\n );\n }\n\n private getOrCreateConsumer(\n groupId: string,\n fromBeginning: boolean,\n autoCommit: boolean,\n ): Consumer {\n if (!this.consumers.has(groupId)) {\n const config: Parameters<typeof this.kafka.consumer>[0] = {\n kafkaJS: { groupId, fromBeginning, autoCommit },\n };\n\n if (this.onRebalance) {\n const onRebalance = this.onRebalance;\n // rebalance_cb is called by librdkafka on every partition assign/revoke.\n // err.code -175 = ERR__ASSIGN_PARTITIONS, -174 = ERR__REVOKE_PARTITIONS.\n // The library handles the actual assign/unassign in its finally block regardless\n // of what this callback does, so we only need it for the side-effect notification.\n (config as any)[\"rebalance_cb\"] = (err: any, assignment: any[]) => {\n const type = err.code === -175 ? 
\"assign\" : \"revoke\";\n try {\n onRebalance(\n type,\n assignment.map((p) => ({\n topic: p.topic,\n partition: p.partition,\n })),\n );\n } catch (e) {\n this.logger.warn(\n `onRebalance callback threw: ${(e as Error).message}`,\n );\n }\n };\n }\n\n this.consumers.set(groupId, this.kafka.consumer(config));\n }\n return this.consumers.get(groupId)!;\n }\n\n /**\n * Start a timer that logs a warning if `fn` hasn't resolved within `timeoutMs`.\n * The handler itself is not cancelled — the warning is diagnostic only.\n */\n private wrapWithTimeoutWarning<R>(\n fn: () => Promise<R>,\n timeoutMs: number,\n topic: string,\n ): Promise<R> {\n let timer: ReturnType<typeof setTimeout> | undefined;\n const promise = fn().finally(() => {\n if (timer !== undefined) clearTimeout(timer);\n });\n timer = setTimeout(() => {\n this.logger.warn(\n `Handler for topic \"${topic}\" has not resolved after ${timeoutMs}ms — possible stuck handler`,\n );\n }, timeoutMs);\n return promise;\n }\n\n private resolveTopicName(topicOrDescriptor: unknown): string {\n if (typeof topicOrDescriptor === \"string\") return topicOrDescriptor;\n if (\n topicOrDescriptor &&\n typeof topicOrDescriptor === \"object\" &&\n \"__topic\" in topicOrDescriptor\n ) {\n return (topicOrDescriptor as TopicDescriptor).__topic;\n }\n return String(topicOrDescriptor);\n }\n\n private async ensureTopic(topic: string): Promise<void> {\n if (!this.autoCreateTopicsEnabled || this.ensuredTopics.has(topic)) return;\n if (!this.isAdminConnected) {\n await this.admin.connect();\n this.isAdminConnected = true;\n }\n await this.admin.createTopics({\n topics: [{ topic, numPartitions: this.numPartitions }],\n });\n this.ensuredTopics.add(topic);\n }\n\n /** Register schema from descriptor into global registry (side-effect). 
*/\n private registerSchema(topicOrDesc: any): void {\n if (topicOrDesc?.__schema) {\n const topic = this.resolveTopicName(topicOrDesc);\n this.schemaRegistry.set(topic, topicOrDesc.__schema);\n }\n }\n\n /** Validate message against schema. Pure — no side-effects on registry. */\n private async validateMessage(topicOrDesc: any, message: any): Promise<any> {\n if (topicOrDesc?.__schema) {\n return await topicOrDesc.__schema.parse(message);\n }\n if (this.strictSchemasEnabled && typeof topicOrDesc === \"string\") {\n const schema = this.schemaRegistry.get(topicOrDesc);\n if (schema) return await schema.parse(message);\n }\n return message;\n }\n\n /**\n * Build a kafkajs-ready send payload.\n * Handles: topic resolution, schema registration, validation, JSON serialization,\n * envelope header generation, and instrumentation hooks.\n */\n private async buildSendPayload(\n topicOrDesc: any,\n messages: Array<BatchMessageItem<any>>,\n ): Promise<{\n topic: string;\n messages: Array<{\n value: string;\n key: string | null;\n headers: MessageHeaders;\n }>;\n }> {\n this.registerSchema(topicOrDesc);\n const topic = this.resolveTopicName(topicOrDesc);\n const builtMessages = await Promise.all(\n messages.map(async (m) => {\n const envelopeHeaders = buildEnvelopeHeaders({\n correlationId: m.correlationId,\n schemaVersion: m.schemaVersion,\n eventId: m.eventId,\n headers: m.headers,\n });\n\n // Let instrumentation hooks mutate headers (e.g. OTel injects traceparent)\n for (const inst of this.instrumentation) {\n inst.beforeSend?.(topic, envelopeHeaders);\n }\n\n return {\n value: JSON.stringify(\n await this.validateMessage(topicOrDesc, m.value),\n ),\n key: m.key ?? null,\n headers: envelopeHeaders,\n };\n }),\n );\n return { topic, messages: builtMessages };\n }\n\n /** Shared consumer setup: groupId check, schema map, connect, subscribe. 
*/\n private async setupConsumer(\n topics: any[],\n mode: \"eachMessage\" | \"eachBatch\",\n options: ConsumerOptions<T>,\n ) {\n const {\n groupId: optGroupId,\n fromBeginning = false,\n retry,\n dlq = false,\n interceptors = [],\n schemas: optionSchemas,\n } = options;\n\n const gid = optGroupId || this.defaultGroupId;\n const existingMode = this.runningConsumers.get(gid);\n const oppositeMode = mode === \"eachMessage\" ? \"eachBatch\" : \"eachMessage\";\n if (existingMode === oppositeMode) {\n throw new Error(\n `Cannot use ${mode} on consumer group \"${gid}\" — it is already running with ${oppositeMode}. ` +\n `Use a different groupId for this consumer.`,\n );\n }\n\n const consumer = this.getOrCreateConsumer(\n gid,\n fromBeginning,\n options.autoCommit ?? true,\n );\n const schemaMap = this.buildSchemaMap(topics, optionSchemas);\n\n const topicNames = (topics as any[]).map((t: any) =>\n this.resolveTopicName(t),\n );\n\n // Ensure topics exist before subscribing — librdkafka errors on unknown topics\n for (const t of topicNames) {\n await this.ensureTopic(t);\n }\n if (dlq) {\n for (const t of topicNames) {\n await this.ensureTopic(`${t}.dlq`);\n }\n }\n\n await consumer.connect();\n await subscribeWithRetry(\n consumer,\n topicNames,\n this.logger,\n options.subscribeRetry,\n );\n\n this.logger.log(\n `${mode === \"eachBatch\" ? 
\"Batch consumer\" : \"Consumer\"} subscribed to topics: ${topicNames.join(\", \")}`,\n );\n\n return { consumer, schemaMap, topicNames, gid, dlq, interceptors, retry };\n }\n\n private buildSchemaMap(\n topics: any[],\n optionSchemas?: Map<string, SchemaLike>,\n ): Map<string, SchemaLike> {\n const schemaMap = new Map<string, SchemaLike>();\n for (const t of topics) {\n if (t?.__schema) {\n const name = this.resolveTopicName(t);\n schemaMap.set(name, t.__schema);\n this.schemaRegistry.set(name, t.__schema);\n }\n }\n if (optionSchemas) {\n for (const [k, v] of optionSchemas) {\n schemaMap.set(k, v);\n this.schemaRegistry.set(k, v);\n }\n }\n return schemaMap;\n }\n}\n","import { AsyncLocalStorage } from \"node:async_hooks\";\nimport { randomUUID } from \"node:crypto\";\nimport type { MessageHeaders } from \"./types\";\n\n// ── Header keys ──────────────────────────────────────────────────────\n\nexport const HEADER_EVENT_ID = \"x-event-id\";\nexport const HEADER_CORRELATION_ID = \"x-correlation-id\";\nexport const HEADER_TIMESTAMP = \"x-timestamp\";\nexport const HEADER_SCHEMA_VERSION = \"x-schema-version\";\nexport const HEADER_TRACEPARENT = \"traceparent\";\n\n// ── EventEnvelope ────────────────────────────────────────────────────\n\n/**\n * Typed wrapper combining a parsed message payload with Kafka metadata\n * and envelope headers.\n *\n * On **send**, the library auto-generates envelope headers\n * (`x-event-id`, `x-correlation-id`, `x-timestamp`, `x-schema-version`).\n *\n * On **consume**, the library extracts those headers and assembles\n * an `EventEnvelope` that is passed to the handler.\n */\nexport interface EventEnvelope<T> {\n /** Deserialized + validated message body. */\n payload: T;\n /** Topic the message was produced to / consumed from. */\n topic: string;\n /** Kafka partition (consume-side only, `-1` on send). */\n partition: number;\n /** Kafka offset (consume-side only, empty string on send). 
*/\n offset: string;\n /** ISO-8601 timestamp set by the producer. */\n timestamp: string;\n /** Unique ID for this event (UUID v4). */\n eventId: string;\n /** Correlation ID — auto-propagated via AsyncLocalStorage. */\n correlationId: string;\n /** Schema version of the payload. */\n schemaVersion: number;\n /** W3C Trace Context `traceparent` header (set by OTel instrumentation). */\n traceparent?: string;\n /** All decoded Kafka headers for extensibility. */\n headers: MessageHeaders;\n}\n\n// ── AsyncLocalStorage context ────────────────────────────────────────\n\ninterface EnvelopeCtx {\n correlationId: string;\n traceparent?: string;\n}\n\nconst envelopeStorage = new AsyncLocalStorage<EnvelopeCtx>();\n\n/** Read the current envelope context (correlationId / traceparent) from ALS. */\nexport function getEnvelopeContext(): EnvelopeCtx | undefined {\n return envelopeStorage.getStore();\n}\n\n/** Execute `fn` inside an envelope context so nested sends inherit correlationId. */\nexport function runWithEnvelopeContext<R>(ctx: EnvelopeCtx, fn: () => R): R {\n return envelopeStorage.run(ctx, fn);\n}\n\n// ── Header helpers ───────────────────────────────────────────────────\n\n/** Options accepted by `buildEnvelopeHeaders`. */\nexport interface EnvelopeHeaderOptions {\n correlationId?: string;\n schemaVersion?: number;\n eventId?: string;\n headers?: MessageHeaders;\n}\n\n/**\n * Generate envelope headers for the send path.\n *\n * Priority for `correlationId`:\n * explicit option → ALS context → new UUID.\n */\nexport function buildEnvelopeHeaders(\n options: EnvelopeHeaderOptions = {},\n): MessageHeaders {\n const ctx = getEnvelopeContext();\n\n const correlationId =\n options.correlationId ?? ctx?.correlationId ?? randomUUID();\n const eventId = options.eventId ?? randomUUID();\n const timestamp = new Date().toISOString();\n const schemaVersion = String(options.schemaVersion ?? 
1);\n\n const envelope: MessageHeaders = {\n [HEADER_EVENT_ID]: eventId,\n [HEADER_CORRELATION_ID]: correlationId,\n [HEADER_TIMESTAMP]: timestamp,\n [HEADER_SCHEMA_VERSION]: schemaVersion,\n };\n\n // Propagate traceparent from ALS if present (OTel may override via instrumentation)\n if (ctx?.traceparent) {\n envelope[HEADER_TRACEPARENT] = ctx.traceparent;\n }\n\n // User-provided headers win on conflict\n return { ...envelope, ...options.headers };\n}\n\n/**\n * Decode kafkajs headers (`Record<string, Buffer | string | undefined>`)\n * into plain `Record<string, string>`.\n */\nexport function decodeHeaders(\n raw:\n | Record<string, Buffer | string | (Buffer | string)[] | undefined>\n | undefined,\n): MessageHeaders {\n if (!raw) return {};\n const result: MessageHeaders = {};\n for (const [key, value] of Object.entries(raw)) {\n if (value === undefined) continue;\n if (Array.isArray(value)) {\n result[key] = value\n .map((v) => (Buffer.isBuffer(v) ? v.toString() : v))\n .join(\",\");\n } else {\n result[key] = Buffer.isBuffer(value) ? value.toString() : value;\n }\n }\n return result;\n}\n\n/**\n * Build an `EventEnvelope` from a consumed kafkajs message.\n * Tolerates missing envelope headers — generates defaults so messages\n * from non-envelope producers still work.\n */\nexport function extractEnvelope<T>(\n payload: T,\n headers: MessageHeaders,\n topic: string,\n partition: number,\n offset: string,\n): EventEnvelope<T> {\n return {\n payload,\n topic,\n partition,\n offset,\n eventId: headers[HEADER_EVENT_ID] ?? randomUUID(),\n correlationId: headers[HEADER_CORRELATION_ID] ?? randomUUID(),\n timestamp: headers[HEADER_TIMESTAMP] ?? new Date().toISOString(),\n schemaVersion: Number(headers[HEADER_SCHEMA_VERSION] ?? 1),\n traceparent: headers[HEADER_TRACEPARENT],\n headers,\n };\n}\n","/** Error thrown when a consumer message handler fails. 
*/\nexport class KafkaProcessingError extends Error {\n declare readonly cause?: Error;\n\n constructor(\n message: string,\n public readonly topic: string,\n public readonly originalMessage: unknown,\n options?: { cause?: Error },\n ) {\n super(message, options);\n this.name = \"KafkaProcessingError\";\n if (options?.cause) this.cause = options.cause;\n }\n}\n\n/** Error thrown when schema validation fails on send or consume. */\nexport class KafkaValidationError extends Error {\n declare readonly cause?: Error;\n\n constructor(\n public readonly topic: string,\n public readonly originalMessage: unknown,\n options?: { cause?: Error },\n ) {\n super(`Schema validation failed for topic \"${topic}\"`, options);\n this.name = \"KafkaValidationError\";\n if (options?.cause) this.cause = options.cause;\n }\n}\n\n/** Error thrown when all retry attempts are exhausted for a message. */\nexport class KafkaRetryExhaustedError extends KafkaProcessingError {\n constructor(\n topic: string,\n originalMessage: unknown,\n public readonly attempts: number,\n options?: { cause?: Error },\n ) {\n super(\n `Message processing failed after ${attempts} attempts on topic \"${topic}\"`,\n topic,\n originalMessage,\n options,\n );\n this.name = \"KafkaRetryExhaustedError\";\n }\n}\n","import type { KafkaJS } from \"@confluentinc/kafka-javascript\";\ntype Producer = KafkaJS.Producer;\nimport type { EventEnvelope } from \"./envelope\";\nimport { extractEnvelope } from \"./envelope\";\nimport { KafkaRetryExhaustedError, KafkaValidationError } from \"./errors\";\nimport type { SchemaLike } from \"./topic\";\nimport type {\n ConsumerInterceptor,\n KafkaInstrumentation,\n KafkaLogger,\n MessageHeaders,\n MessageLostContext,\n RetryOptions,\n TopicMapConstraint,\n} from \"./types\";\n\n// ── Helpers ──────────────────────────────────────────────────────────\n\nexport function toError(error: unknown): Error {\n return error instanceof Error ? 
error : new Error(String(error));\n}\n\nexport function sleep(ms: number): Promise<void> {\n return new Promise((resolve) => setTimeout(resolve, ms));\n}\n\n// ── JSON parsing ────────────────────────────────────────────────────\n\n/** Parse raw message as JSON. Returns null on failure (logs error). */\nexport function parseJsonMessage(\n raw: string,\n topic: string,\n logger: KafkaLogger,\n): any | null {\n try {\n return JSON.parse(raw);\n } catch (error) {\n logger.error(\n `Failed to parse message from topic ${topic}:`,\n toError(error).stack,\n );\n return null;\n }\n}\n\n// ── Schema validation ───────────────────────────────────────────────\n\n/**\n * Validate a parsed message against the schema map.\n * On failure: logs error, sends to DLQ if enabled, calls interceptor.onError.\n * Returns validated message or null.\n */\nexport async function validateWithSchema<T extends TopicMapConstraint<T>>(\n message: any,\n raw: string,\n topic: string,\n schemaMap: Map<string, SchemaLike>,\n interceptors: ConsumerInterceptor<T>[],\n dlq: boolean,\n deps: {\n logger: KafkaLogger;\n producer: Producer;\n onMessageLost?: (ctx: MessageLostContext) => void | Promise<void>;\n originalHeaders?: MessageHeaders;\n },\n): Promise<any | null> {\n const schema = schemaMap.get(topic);\n if (!schema) return message;\n\n try {\n return await schema.parse(message);\n } catch (error) {\n const err = toError(error);\n const validationError = new KafkaValidationError(topic, message, {\n cause: err,\n });\n deps.logger.error(\n `Schema validation failed for topic ${topic}:`,\n err.message,\n );\n if (dlq) {\n await sendToDlq(topic, raw, deps, {\n error: validationError,\n attempt: 0,\n originalHeaders: deps.originalHeaders,\n });\n } else {\n await deps.onMessageLost?.({\n topic,\n error: validationError,\n attempt: 0,\n headers: deps.originalHeaders ?? 
{},\n });\n }\n // Validation errors don't have an envelope yet — call onError with a minimal envelope\n const errorEnvelope = extractEnvelope(\n message,\n deps.originalHeaders ?? {},\n topic,\n -1,\n \"\",\n );\n for (const interceptor of interceptors) {\n await interceptor.onError?.(errorEnvelope, validationError);\n }\n return null;\n }\n}\n\n// ── DLQ ─────────────────────────────────────────────────────────────\n\nexport interface DlqMetadata {\n error: Error;\n attempt: number;\n /** Original Kafka message headers — forwarded to DLQ to preserve correlationId, traceparent, etc. */\n originalHeaders?: MessageHeaders;\n}\n\nexport async function sendToDlq(\n topic: string,\n rawMessage: string,\n deps: { logger: KafkaLogger; producer: Producer },\n meta?: DlqMetadata,\n): Promise<void> {\n const dlqTopic = `${topic}.dlq`;\n const headers: MessageHeaders = {\n ...(meta?.originalHeaders ?? {}),\n \"x-dlq-original-topic\": topic,\n \"x-dlq-failed-at\": new Date().toISOString(),\n \"x-dlq-error-message\": meta?.error.message ?? \"unknown\",\n \"x-dlq-error-stack\": meta?.error.stack?.slice(0, 2000) ?? \"\",\n \"x-dlq-attempt-count\": String(meta?.attempt ?? 0),\n };\n try {\n await deps.producer.send({\n topic: dlqTopic,\n messages: [{ value: rawMessage, headers }],\n });\n deps.logger.warn(`Message sent to DLQ: ${dlqTopic}`);\n } catch (error) {\n deps.logger.error(\n `Failed to send message to DLQ ${dlqTopic}:`,\n toError(error).stack,\n );\n }\n}\n\n// ── Retry topic routing ─────────────────────────────────────────────\n\n/** Headers stamped on messages sent to a `<topic>.retry` topic. 
*/\nexport const RETRY_HEADER_ATTEMPT = \"x-retry-attempt\";\nexport const RETRY_HEADER_AFTER = \"x-retry-after\";\nexport const RETRY_HEADER_MAX_RETRIES = \"x-retry-max-retries\";\nexport const RETRY_HEADER_ORIGINAL_TOPIC = \"x-retry-original-topic\";\n\n/**\n * Send raw messages to the retry topic `<originalTopic>.retry`.\n * Stamps scheduling headers so the retry consumer knows when and how many times to retry.\n */\nexport async function sendToRetryTopic(\n originalTopic: string,\n rawMessages: string[],\n attempt: number,\n maxRetries: number,\n delayMs: number,\n originalHeaders: MessageHeaders,\n deps: { logger: KafkaLogger; producer: Producer },\n): Promise<void> {\n const retryTopic = `${originalTopic}.retry`;\n // Strip any stale retry headers from a previous hop so they don't leak through.\n const {\n [RETRY_HEADER_ATTEMPT]: _a,\n [RETRY_HEADER_AFTER]: _b,\n [RETRY_HEADER_MAX_RETRIES]: _c,\n [RETRY_HEADER_ORIGINAL_TOPIC]: _d,\n ...userHeaders\n } = originalHeaders;\n const headers: MessageHeaders = {\n ...userHeaders,\n [RETRY_HEADER_ATTEMPT]: String(attempt),\n [RETRY_HEADER_AFTER]: String(Date.now() + delayMs),\n [RETRY_HEADER_MAX_RETRIES]: String(maxRetries),\n [RETRY_HEADER_ORIGINAL_TOPIC]: originalTopic,\n };\n try {\n for (const raw of rawMessages) {\n await deps.producer.send({\n topic: retryTopic,\n messages: [{ value: raw, headers }],\n });\n }\n deps.logger.warn(\n `Message queued in retry topic ${retryTopic} (attempt ${attempt}/${maxRetries})`,\n );\n } catch (error) {\n deps.logger.error(\n `Failed to send message to retry topic ${retryTopic}:`,\n toError(error).stack,\n );\n }\n}\n\n// ── Retry pipeline ──────────────────────────────────────────────────\n\nexport interface ExecuteWithRetryContext<T extends TopicMapConstraint<T>> {\n envelope: EventEnvelope<any> | EventEnvelope<any>[];\n rawMessages: string[];\n interceptors: ConsumerInterceptor<T>[];\n dlq: boolean;\n retry?: RetryOptions;\n isBatch?: boolean;\n /**\n * When `true`, failed 
messages are routed to `<topic>.retry` instead of being\n * retried in-process. All backoff and subsequent attempts are handled by the\n * companion retry consumer started by `startRetryTopicConsumers`.\n */\n retryTopics?: boolean;\n}\n\n/**\n * Execute a handler with retry, interceptors, instrumentation, and DLQ support.\n * Used by both single-message and batch consumers.\n */\nexport async function executeWithRetry<T extends TopicMapConstraint<T>>(\n fn: () => Promise<void>,\n ctx: ExecuteWithRetryContext<T>,\n deps: {\n logger: KafkaLogger;\n producer: Producer;\n instrumentation: KafkaInstrumentation[];\n onMessageLost?: (ctx: MessageLostContext) => void | Promise<void>;\n },\n): Promise<void> {\n const {\n envelope,\n rawMessages,\n interceptors,\n dlq,\n retry,\n isBatch,\n retryTopics,\n } = ctx;\n // With retryTopics mode the main consumer tries exactly once — retry consumer takes over.\n const maxAttempts = retryTopics ? 1 : retry ? retry.maxRetries + 1 : 1;\n const backoffMs = retry?.backoffMs ?? 1000;\n const maxBackoffMs = retry?.maxBackoffMs ?? 30_000;\n const envelopes = Array.isArray(envelope) ? envelope : [envelope];\n const topic = envelopes[0]?.topic ?? 
\"unknown\";\n\n for (let attempt = 1; attempt <= maxAttempts; attempt++) {\n // Collect instrumentation cleanup functions\n const cleanups: (() => void)[] = [];\n\n try {\n // Instrumentation: beforeConsume\n for (const env of envelopes) {\n for (const inst of deps.instrumentation) {\n const cleanup = inst.beforeConsume?.(env);\n if (typeof cleanup === \"function\") cleanups.push(cleanup);\n }\n }\n\n // Consumer interceptors: before\n for (const env of envelopes) {\n for (const interceptor of interceptors) {\n await interceptor.before?.(env);\n }\n }\n\n await fn();\n\n // Consumer interceptors: after\n for (const env of envelopes) {\n for (const interceptor of interceptors) {\n await interceptor.after?.(env);\n }\n }\n\n // Instrumentation: cleanup (end spans etc.)\n for (const cleanup of cleanups) cleanup();\n\n return;\n } catch (error) {\n const err = toError(error);\n const isLastAttempt = attempt === maxAttempts;\n\n // Instrumentation: onConsumeError\n for (const env of envelopes) {\n for (const inst of deps.instrumentation) {\n inst.onConsumeError?.(env, err);\n }\n }\n // Instrumentation: cleanup even on error\n for (const cleanup of cleanups) cleanup();\n\n if (isLastAttempt && maxAttempts > 1) {\n const exhaustedError = new KafkaRetryExhaustedError(\n topic,\n envelopes.map((e) => e.payload),\n maxAttempts,\n { cause: err },\n );\n for (const env of envelopes) {\n for (const interceptor of interceptors) {\n await interceptor.onError?.(env, exhaustedError);\n }\n }\n } else {\n for (const env of envelopes) {\n for (const interceptor of interceptors) {\n await interceptor.onError?.(env, err);\n }\n }\n }\n\n deps.logger.error(\n `Error processing ${isBatch ? 
\"batch\" : \"message\"} from topic ${topic} (attempt ${attempt}/${maxAttempts}):`,\n err.stack,\n );\n\n if (retryTopics && retry) {\n // Route to retry topic — retry consumer handles backoff and further attempts.\n // Always use attempt 1 here (main consumer never retries in-process).\n const cap = Math.min(backoffMs, maxBackoffMs);\n const delay = Math.floor(Math.random() * cap);\n await sendToRetryTopic(\n topic,\n rawMessages,\n 1,\n retry.maxRetries,\n delay,\n envelopes[0]?.headers ?? {},\n deps,\n );\n } else if (isLastAttempt) {\n if (dlq) {\n const dlqMeta: DlqMetadata = {\n error: err,\n attempt,\n originalHeaders: envelopes[0]?.headers,\n };\n for (const raw of rawMessages) {\n await sendToDlq(topic, raw, deps, dlqMeta);\n }\n } else {\n await deps.onMessageLost?.({\n topic,\n error: err,\n attempt,\n headers: envelopes[0]?.headers ?? {},\n });\n }\n } else {\n // Exponential backoff with full jitter to avoid thundering herd\n const cap = Math.min(backoffMs * 2 ** (attempt - 1), maxBackoffMs);\n await sleep(Math.random() * cap);\n }\n }\n }\n}\n","import type { KafkaJS } from \"@confluentinc/kafka-javascript\";\nimport type { KafkaLogger, SubscribeRetryOptions } from \"./types\";\nimport { toError, sleep } from \"./consumer-pipeline\";\n\nexport async function subscribeWithRetry(\n consumer: KafkaJS.Consumer,\n topics: string[],\n logger: KafkaLogger,\n retryOpts?: SubscribeRetryOptions,\n): Promise<void> {\n const maxAttempts = retryOpts?.retries ?? 5;\n const backoffMs = retryOpts?.backoffMs ?? 5000;\n\n for (let attempt = 1; attempt <= maxAttempts; attempt++) {\n try {\n await consumer.subscribe({ topics });\n return;\n } catch (error) {\n if (attempt === maxAttempts) throw error;\n const msg = toError(error).message;\n logger.warn(\n `Failed to subscribe to [${topics.join(\", \")}] (attempt ${attempt}/${maxAttempts}): ${msg}. 
Retrying in ${backoffMs}ms...`,\n );\n await sleep(backoffMs);\n }\n }\n}\n","/**\n * Any validation library with a `.parse()` method.\n * Works with Zod, Valibot, ArkType, or any custom validator.\n *\n * @example\n * ```ts\n * import { z } from 'zod';\n * const schema: SchemaLike<{ id: string }> = z.object({ id: z.string() });\n * ```\n */\nexport interface SchemaLike<T = any> {\n parse(data: unknown): T | Promise<T>;\n}\n\n/** Infer the output type from a SchemaLike. */\nexport type InferSchema<S extends SchemaLike> =\n S extends SchemaLike<infer T> ? T : never;\n\n/**\n * A typed topic descriptor that pairs a topic name with its message type.\n * Created via the `topic()` factory function.\n *\n * @typeParam N - The literal topic name string.\n * @typeParam M - The message payload type for this topic.\n */\nexport interface TopicDescriptor<\n N extends string = string,\n M extends Record<string, any> = Record<string, any>,\n> {\n readonly __topic: N;\n /** @internal Phantom type — never has a real value at runtime. */\n readonly __type: M;\n /** Runtime schema validator. Present only when created via `topic().schema()`. */\n readonly __schema?: SchemaLike<M>;\n}\n\n/**\n * Define a typed topic descriptor.\n *\n * @example\n * ```ts\n * // Without schema — type provided explicitly:\n * const OrderCreated = topic('order.created')<{ orderId: string; amount: number }>();\n *\n * // With schema — type inferred from schema:\n * const OrderCreated = topic('order.created').schema(z.object({\n * orderId: z.string(),\n * amount: z.number(),\n * }));\n *\n * // Use with KafkaClient:\n * await kafka.sendMessage(OrderCreated, { orderId: '123', amount: 100 });\n *\n * // Use with @SubscribeTo:\n * @SubscribeTo(OrderCreated)\n * async handleOrder(msg) { ... 
}\n * ```\n */\nexport function topic<N extends string>(name: N) {\n const fn = <M extends Record<string, any>>(): TopicDescriptor<N, M> => ({\n __topic: name,\n __type: undefined as unknown as M,\n });\n\n fn.schema = <S extends SchemaLike<Record<string, any>>>(\n schema: S,\n ): TopicDescriptor<N, InferSchema<S>> => ({\n __topic: name,\n __type: undefined as unknown as InferSchema<S>,\n __schema: schema as unknown as SchemaLike<InferSchema<S>>,\n });\n\n return fn;\n}\n\n/**\n * Build a topic-message map type from a union of TopicDescriptors.\n *\n * @example\n * ```ts\n * const OrderCreated = topic('order.created')<{ orderId: string }>();\n * const OrderCompleted = topic('order.completed')<{ completedAt: string }>();\n *\n * type MyTopics = TopicsFrom<typeof OrderCreated | typeof OrderCompleted>;\n * // { 'order.created': { orderId: string }; 'order.completed': { completedAt: string } }\n * ```\n */\nexport type TopicsFrom<D extends TopicDescriptor<any, any>> = {\n [K in D as K[\"__topic\"]]: 
K[\"__type\"];\n};\n"],"mappings":";AAAA,SAAS,eAAe;;;ACAxB,SAAS,yBAAyB;AAClC,SAAS,kBAAkB;AAKpB,IAAM,kBAAkB;AACxB,IAAM,wBAAwB;AAC9B,IAAM,mBAAmB;AACzB,IAAM,wBAAwB;AAC9B,IAAM,qBAAqB;AA4ClC,IAAM,kBAAkB,IAAI,kBAA+B;AAGpD,SAAS,qBAA8C;AAC5D,SAAO,gBAAgB,SAAS;AAClC;AAGO,SAAS,uBAA0B,KAAkB,IAAgB;AAC1E,SAAO,gBAAgB,IAAI,KAAK,EAAE;AACpC;AAkBO,SAAS,qBACd,UAAiC,CAAC,GAClB;AAChB,QAAM,MAAM,mBAAmB;AAE/B,QAAM,gBACJ,QAAQ,iBAAiB,KAAK,iBAAiB,WAAW;AAC5D,QAAM,UAAU,QAAQ,WAAW,WAAW;AAC9C,QAAM,aAAY,oBAAI,KAAK,GAAE,YAAY;AACzC,QAAM,gBAAgB,OAAO,QAAQ,iBAAiB,CAAC;AAEvD,QAAM,WAA2B;AAAA,IAC/B,CAAC,eAAe,GAAG;AAAA,IACnB,CAAC,qBAAqB,GAAG;AAAA,IACzB,CAAC,gBAAgB,GAAG;AAAA,IACpB,CAAC,qBAAqB,GAAG;AAAA,EAC3B;AAGA,MAAI,KAAK,aAAa;AACpB,aAAS,kBAAkB,IAAI,IAAI;AAAA,EACrC;AAGA,SAAO,EAAE,GAAG,UAAU,GAAG,QAAQ,QAAQ;AAC3C;AAMO,SAAS,cACd,KAGgB;AAChB,MAAI,CAAC,IAAK,QAAO,CAAC;AAClB,QAAM,SAAyB,CAAC;AAChC,aAAW,CAAC,KAAK,KAAK,KAAK,OAAO,QAAQ,GAAG,GAAG;AAC9C,QAAI,UAAU,OAAW;AACzB,QAAI,MAAM,QAAQ,KAAK,GAAG;AACxB,aAAO,GAAG,IAAI,MACX,IAAI,CAAC,MAAO,OAAO,SAAS,CAAC,IAAI,EAAE,SAAS,IAAI,CAAE,EAClD,KAAK,GAAG;AAAA,IACb,OAAO;AACL,aAAO,GAAG,IAAI,OAAO,SAAS,KAAK,IAAI,MAAM,SAAS,IAAI;AAAA,IAC5D;AAAA,EACF;AACA,SAAO;AACT;AAOO,SAAS,gBACd,SACA,SACAA,QACA,WACA,QACkB;AAClB,SAAO;AAAA,IACL;AAAA,IACA,OAAAA;AAAA,IACA;AAAA,IACA;AAAA,IACA,SAAS,QAAQ,eAAe,KAAK,WAAW;AAAA,IAChD,eAAe,QAAQ,qBAAqB,KAAK,WAAW;AAAA,IAC5D,WAAW,QAAQ,gBAAgB,MAAK,oBAAI,KAAK,GAAE,YAAY;AAAA,IAC/D,eAAe,OAAO,QAAQ,qBAAqB,KAAK,CAAC;AAAA,IACzD,aAAa,QAAQ,kBAAkB;AAAA,IACvC;AAAA,EACF;AACF;;;AC5JO,IAAM,uBAAN,cAAmC,MAAM;AAAA,EAG9C,YACE,SACgBC,QACA,iBAChB,SACA;AACA,UAAM,SAAS,OAAO;AAJN,iBAAAA;AACA;AAIhB,SAAK,OAAO;AACZ,QAAI,SAAS,MAAO,MAAK,QAAQ,QAAQ;AAAA,EAC3C;AACF;AAGO,IAAM,uBAAN,cAAmC,MAAM;AAAA,EAG9C,YACkBA,QACA,iBAChB,SACA;AACA,UAAM,uCAAuCA,MAAK,KAAK,OAAO;AAJ9C,iBAAAA;AACA;AAIhB,SAAK,OAAO;AACZ,QAAI,SAAS,MAAO,MAAK,QAAQ,QAAQ;AAAA,EAC3C;AACF;AAGO,IAAM,2BAAN,cAAuC,qBAAqB;AAAA,EACjE,YACEA,QACA,iBACgB,UAChB,SACA;AACA;AAAA,MACE,mCAAmC,QAAQ,uBAAuBA,MAAK;AAAA,MACvEA;AAAA,MACA;AAAA,MACA;AAAA,IACF;AARgB;AAShB,
SAAK,OAAO;AAAA,EACd;AACF;;;AC7BO,SAAS,QAAQ,OAAuB;AAC7C,SAAO,iBAAiB,QAAQ,QAAQ,IAAI,MAAM,OAAO,KAAK,CAAC;AACjE;AAEO,SAAS,MAAM,IAA2B;AAC/C,SAAO,IAAI,QAAQ,CAAC,YAAY,WAAW,SAAS,EAAE,CAAC;AACzD;AAKO,SAAS,iBACd,KACAC,QACA,QACY;AACZ,MAAI;AACF,WAAO,KAAK,MAAM,GAAG;AAAA,EACvB,SAAS,OAAO;AACd,WAAO;AAAA,MACL,sCAAsCA,MAAK;AAAA,MAC3C,QAAQ,KAAK,EAAE;AAAA,IACjB;AACA,WAAO;AAAA,EACT;AACF;AASA,eAAsB,mBACpB,SACA,KACAA,QACA,WACA,cACA,KACA,MAMqB;AACrB,QAAM,SAAS,UAAU,IAAIA,MAAK;AAClC,MAAI,CAAC,OAAQ,QAAO;AAEpB,MAAI;AACF,WAAO,MAAM,OAAO,MAAM,OAAO;AAAA,EACnC,SAAS,OAAO;AACd,UAAM,MAAM,QAAQ,KAAK;AACzB,UAAM,kBAAkB,IAAI,qBAAqBA,QAAO,SAAS;AAAA,MAC/D,OAAO;AAAA,IACT,CAAC;AACD,SAAK,OAAO;AAAA,MACV,sCAAsCA,MAAK;AAAA,MAC3C,IAAI;AAAA,IACN;AACA,QAAI,KAAK;AACP,YAAM,UAAUA,QAAO,KAAK,MAAM;AAAA,QAChC,OAAO;AAAA,QACP,SAAS;AAAA,QACT,iBAAiB,KAAK;AAAA,MACxB,CAAC;AAAA,IACH,OAAO;AACL,YAAM,KAAK,gBAAgB;AAAA,QACzB,OAAAA;AAAA,QACA,OAAO;AAAA,QACP,SAAS;AAAA,QACT,SAAS,KAAK,mBAAmB,CAAC;AAAA,MACpC,CAAC;AAAA,IACH;AAEA,UAAM,gBAAgB;AAAA,MACpB;AAAA,MACA,KAAK,mBAAmB,CAAC;AAAA,MACzBA;AAAA,MACA;AAAA,MACA;AAAA,IACF;AACA,eAAW,eAAe,cAAc;AACtC,YAAM,YAAY,UAAU,eAAe,eAAe;AAAA,IAC5D;AACA,WAAO;AAAA,EACT;AACF;AAWA,eAAsB,UACpBA,QACA,YACA,MACA,MACe;AACf,QAAM,WAAW,GAAGA,MAAK;AACzB,QAAM,UAA0B;AAAA,IAC9B,GAAI,MAAM,mBAAmB,CAAC;AAAA,IAC9B,wBAAwBA;AAAA,IACxB,oBAAmB,oBAAI,KAAK,GAAE,YAAY;AAAA,IAC1C,uBAAuB,MAAM,MAAM,WAAW;AAAA,IAC9C,qBAAqB,MAAM,MAAM,OAAO,MAAM,GAAG,GAAI,KAAK;AAAA,IAC1D,uBAAuB,OAAO,MAAM,WAAW,CAAC;AAAA,EAClD;AACA,MAAI;AACF,UAAM,KAAK,SAAS,KAAK;AAAA,MACvB,OAAO;AAAA,MACP,UAAU,CAAC,EAAE,OAAO,YAAY,QAAQ,CAAC;AAAA,IAC3C,CAAC;AACD,SAAK,OAAO,KAAK,wBAAwB,QAAQ,EAAE;AAAA,EACrD,SAAS,OAAO;AACd,SAAK,OAAO;AAAA,MACV,iCAAiC,QAAQ;AAAA,MACzC,QAAQ,KAAK,EAAE;AAAA,IACjB;AAAA,EACF;AACF;AAKO,IAAM,uBAAuB;AAC7B,IAAM,qBAAqB;AAC3B,IAAM,2BAA2B;AACjC,IAAM,8BAA8B;AAM3C,eAAsB,iBACpB,eACA,aACA,SACA,YACA,SACA,iBACA,MACe;AACf,QAAM,aAAa,GAAG,aAAa;AAEnC,QAAM;AAAA,IACJ,CAAC,oBAAoB,GAAG;AAAA,IACxB,CAAC,kBAAkB,GAAG;AAAA,IACtB,CAAC,wBAAwB,GAAG;AAAA,IAC5B,CAAC,2BAA2B,GAAG;AAAA,IAC/B,GAAG
;AAAA,EACL,IAAI;AACJ,QAAM,UAA0B;AAAA,IAC9B,GAAG;AAAA,IACH,CAAC,oBAAoB,GAAG,OAAO,OAAO;AAAA,IACtC,CAAC,kBAAkB,GAAG,OAAO,KAAK,IAAI,IAAI,OAAO;AAAA,IACjD,CAAC,wBAAwB,GAAG,OAAO,UAAU;AAAA,IAC7C,CAAC,2BAA2B,GAAG;AAAA,EACjC;AACA,MAAI;AACF,eAAW,OAAO,aAAa;AAC7B,YAAM,KAAK,SAAS,KAAK;AAAA,QACvB,OAAO;AAAA,QACP,UAAU,CAAC,EAAE,OAAO,KAAK,QAAQ,CAAC;AAAA,MACpC,CAAC;AAAA,IACH;AACA,SAAK,OAAO;AAAA,MACV,iCAAiC,UAAU,aAAa,OAAO,IAAI,UAAU;AAAA,IAC/E;AAAA,EACF,SAAS,OAAO;AACd,SAAK,OAAO;AAAA,MACV,yCAAyC,UAAU;AAAA,MACnD,QAAQ,KAAK,EAAE;AAAA,IACjB;AAAA,EACF;AACF;AAuBA,eAAsB,iBACpB,IACA,KACA,MAMe;AACf,QAAM;AAAA,IACJ;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,EACF,IAAI;AAEJ,QAAM,cAAc,cAAc,IAAI,QAAQ,MAAM,aAAa,IAAI;AACrE,QAAM,YAAY,OAAO,aAAa;AACtC,QAAM,eAAe,OAAO,gBAAgB;AAC5C,QAAM,YAAY,MAAM,QAAQ,QAAQ,IAAI,WAAW,CAAC,QAAQ;AAChE,QAAMA,SAAQ,UAAU,CAAC,GAAG,SAAS;AAErC,WAAS,UAAU,GAAG,WAAW,aAAa,WAAW;AAEvD,UAAM,WAA2B,CAAC;AAElC,QAAI;AAEF,iBAAW,OAAO,WAAW;AAC3B,mBAAW,QAAQ,KAAK,iBAAiB;AACvC,gBAAM,UAAU,KAAK,gBAAgB,GAAG;AACxC,cAAI,OAAO,YAAY,WAAY,UAAS,KAAK,OAAO;AAAA,QAC1D;AAAA,MACF;AAGA,iBAAW,OAAO,WAAW;AAC3B,mBAAW,eAAe,cAAc;AACtC,gBAAM,YAAY,SAAS,GAAG;AAAA,QAChC;AAAA,MACF;AAEA,YAAM,GAAG;AAGT,iBAAW,OAAO,WAAW;AAC3B,mBAAW,eAAe,cAAc;AACtC,gBAAM,YAAY,QAAQ,GAAG;AAAA,QAC/B;AAAA,MACF;AAGA,iBAAW,WAAW,SAAU,SAAQ;AAExC;AAAA,IACF,SAAS,OAAO;AACd,YAAM,MAAM,QAAQ,KAAK;AACzB,YAAM,gBAAgB,YAAY;AAGlC,iBAAW,OAAO,WAAW;AAC3B,mBAAW,QAAQ,KAAK,iBAAiB;AACvC,eAAK,iBAAiB,KAAK,GAAG;AAAA,QAChC;AAAA,MACF;AAEA,iBAAW,WAAW,SAAU,SAAQ;AAExC,UAAI,iBAAiB,cAAc,GAAG;AACpC,cAAM,iBAAiB,IAAI;AAAA,UACzBA;AAAA,UACA,UAAU,IAAI,CAAC,MAAM,EAAE,OAAO;AAAA,UAC9B;AAAA,UACA,EAAE,OAAO,IAAI;AAAA,QACf;AACA,mBAAW,OAAO,WAAW;AAC3B,qBAAW,eAAe,cAAc;AACtC,kBAAM,YAAY,UAAU,KAAK,cAAc;AAAA,UACjD;AAAA,QACF;AAAA,MACF,OAAO;AACL,mBAAW,OAAO,WAAW;AAC3B,qBAAW,eAAe,cAAc;AACtC,kBAAM,YAAY,UAAU,KAAK,GAAG;AAAA,UACtC;AAAA,QACF;AAAA,MACF;AAEA,WAAK,OAAO;AAAA,QACV,oBAAoB,UAAU,UAAU,SAAS,eAAeA,MAAK,aAAa,OAAO,IAAI,WAAW;AAAA,QACxG,IAAI;AAAA,MACN;AAEA,UAAI,eAAe,OAAO;AAGxB,cAAM,MAAM,KAAK,
IAAI,WAAW,YAAY;AAC5C,cAAM,QAAQ,KAAK,MAAM,KAAK,OAAO,IAAI,GAAG;AAC5C,cAAM;AAAA,UACJA;AAAA,UACA;AAAA,UACA;AAAA,UACA,MAAM;AAAA,UACN;AAAA,UACA,UAAU,CAAC,GAAG,WAAW,CAAC;AAAA,UAC1B;AAAA,QACF;AAAA,MACF,WAAW,eAAe;AACxB,YAAI,KAAK;AACP,gBAAM,UAAuB;AAAA,YAC3B,OAAO;AAAA,YACP;AAAA,YACA,iBAAiB,UAAU,CAAC,GAAG;AAAA,UACjC;AACA,qBAAW,OAAO,aAAa;AAC7B,kBAAM,UAAUA,QAAO,KAAK,MAAM,OAAO;AAAA,UAC3C;AAAA,QACF,OAAO;AACL,gBAAM,KAAK,gBAAgB;AAAA,YACzB,OAAAA;AAAA,YACA,OAAO;AAAA,YACP;AAAA,YACA,SAAS,UAAU,CAAC,GAAG,WAAW,CAAC;AAAA,UACrC,CAAC;AAAA,QACH;AAAA,MACF,OAAO;AAEL,cAAM,MAAM,KAAK,IAAI,YAAY,MAAM,UAAU,IAAI,YAAY;AACjE,cAAM,MAAM,KAAK,OAAO,IAAI,GAAG;AAAA,MACjC;AAAA,IACF;AAAA,EACF;AACF;;;ACnWA,eAAsB,mBACpB,UACA,QACA,QACA,WACe;AACf,QAAM,cAAc,WAAW,WAAW;AAC1C,QAAM,YAAY,WAAW,aAAa;AAE1C,WAAS,UAAU,GAAG,WAAW,aAAa,WAAW;AACvD,QAAI;AACF,YAAM,SAAS,UAAU,EAAE,OAAO,CAAC;AACnC;AAAA,IACF,SAAS,OAAO;AACd,UAAI,YAAY,YAAa,OAAM;AACnC,YAAM,MAAM,QAAQ,KAAK,EAAE;AAC3B,aAAO;AAAA,QACL,2BAA2B,OAAO,KAAK,IAAI,CAAC,cAAc,OAAO,IAAI,WAAW,MAAM,GAAG,iBAAiB,SAAS;AAAA,MACrH;AACA,YAAM,MAAM,SAAS;AAAA,IACvB;AAAA,EACF;AACF;;;AJrBA,IAAM,EAAE,OAAO,YAAY,UAAU,cAAc,IAAI;AAmDhD,IAAM,cAAN,MAEsB;AAAA,EACV;AAAA,EACA;AAAA,EACT;AAAA,EACS,YAAY,oBAAI,IAAsB;AAAA,EACtC;AAAA,EACA;AAAA,EACA;AAAA,EACA;AAAA,EACA;AAAA,EACA,gBAAgB,oBAAI,IAAY;AAAA,EAChC;AAAA,EACA,iBAAiB,oBAAI,IAAwB;AAAA,EAC7C,mBAAmB,oBAAI,IAGtC;AAAA,EACe;AAAA,EACA;AAAA,EACA;AAAA,EAET,mBAAmB;AAAA,EACX;AAAA,EAEhB,YACE,UACA,SACA,SACA,SACA;AACA,SAAK,WAAW;AAChB,SAAK,iBAAiB;AACtB,SAAK,SAAS,SAAS,UAAU;AAAA,MAC/B,KAAK,CAAC,QAAQ,QAAQ,IAAI,gBAAgB,QAAQ,KAAK,GAAG,EAAE;AAAA,MAC5D,MAAM,CAAC,QAAQ,SACb,QAAQ,KAAK,gBAAgB,QAAQ,KAAK,GAAG,IAAI,GAAG,IAAI;AAAA,MAC1D,OAAO,CAAC,QAAQ,SACd,QAAQ,MAAM,gBAAgB,QAAQ,KAAK,GAAG,IAAI,GAAG,IAAI;AAAA,IAC7D;AACA,SAAK,0BAA0B,SAAS,oBAAoB;AAC5D,SAAK,uBAAuB,SAAS,iBAAiB;AACtD,SAAK,gBAAgB,SAAS,iBAAiB;AAC/C,SAAK,kBAAkB,SAAS,mBAAmB,CAAC;AACpD,SAAK,gBAAgB,SAAS;AAC9B,SAAK,cAAc,SAAS;AAE5B,SAAK,QAAQ,IAAI,WAAW;AAAA,MAC1B,SAAS;AAAA,QACP,UAAU,KAAK;AAAA,QACf;AAAA,QACA,UAAU,cAAc;AAAA,MAC1B;AAAA,IACF,CAA
C;AACD,SAAK,WAAW,KAAK,MAAM,SAAS;AAAA,MAClC,SAAS;AAAA,QACP,MAAM;AAAA,MACR;AAAA,IACF,CAAC;AACD,SAAK,QAAQ,KAAK,MAAM,MAAM;AAAA,EAChC;AAAA,EAaA,MAAa,YACX,aACA,SACA,UAAuB,CAAC,GACT;AACf,UAAM,UAAU,MAAM,KAAK,iBAAiB,aAAa;AAAA,MACvD;AAAA,QACE,OAAO;AAAA,QACP,KAAK,QAAQ;AAAA,QACb,SAAS,QAAQ;AAAA,QACjB,eAAe,QAAQ;AAAA,QACvB,eAAe,QAAQ;AAAA,QACvB,SAAS,QAAQ;AAAA,MACnB;AAAA,IACF,CAAC;AACD,UAAM,KAAK,YAAY,QAAQ,KAAK;AACpC,UAAM,KAAK,SAAS,KAAK,OAAO;AAChC,eAAW,QAAQ,KAAK,iBAAiB;AACvC,WAAK,YAAY,QAAQ,KAAK;AAAA,IAChC;AAAA,EACF;AAAA,EAaA,MAAa,UACX,aACA,UACe;AACf,UAAM,UAAU,MAAM,KAAK,iBAAiB,aAAa,QAAQ;AACjE,UAAM,KAAK,YAAY,QAAQ,KAAK;AACpC,UAAM,KAAK,SAAS,KAAK,OAAO;AAChC,eAAW,QAAQ,KAAK,iBAAiB;AACvC,WAAK,YAAY,QAAQ,KAAK;AAAA,IAChC;AAAA,EACF;AAAA;AAAA,EAGA,MAAa,YACX,IACe;AACf,QAAI,CAAC,KAAK,YAAY;AACpB,WAAK,aAAa,KAAK,MAAM,SAAS;AAAA,QACpC,SAAS;AAAA,UACP,MAAM;AAAA,UACN,YAAY;AAAA,UACZ,iBAAiB,GAAG,KAAK,QAAQ;AAAA,UACjC,qBAAqB;AAAA,QACvB;AAAA,MACF,CAAC;AACD,YAAM,KAAK,WAAW,QAAQ;AAAA,IAChC;AACA,UAAM,KAAK,MAAM,KAAK,WAAW,YAAY;AAC7C,QAAI;AACF,YAAM,MAA6B;AAAA,QACjC,MAAM,OACJ,aACA,SACA,UAAuB,CAAC,MACrB;AACH,gBAAM,UAAU,MAAM,KAAK,iBAAiB,aAAa;AAAA,YACvD;AAAA,cACE,OAAO;AAAA,cACP,KAAK,QAAQ;AAAA,cACb,SAAS,QAAQ;AAAA,cACjB,eAAe,QAAQ;AAAA,cACvB,eAAe,QAAQ;AAAA,cACvB,SAAS,QAAQ;AAAA,YACnB;AAAA,UACF,CAAC;AACD,gBAAM,KAAK,YAAY,QAAQ,KAAK;AACpC,gBAAM,GAAG,KAAK,OAAO;AAAA,QACvB;AAAA,QACA,WAAW,OACT,aACA,aACG;AACH,gBAAM,UAAU,MAAM,KAAK,iBAAiB,aAAa,QAAQ;AACjE,gBAAM,KAAK,YAAY,QAAQ,KAAK;AACpC,gBAAM,GAAG,KAAK,OAAO;AAAA,QACvB;AAAA,MACF;AACA,YAAM,GAAG,GAAG;AACZ,YAAM,GAAG,OAAO;AAAA,IAClB,SAAS,OAAO;AACd,UAAI;AACF,cAAM,GAAG,MAAM;AAAA,MACjB,SAAS,YAAY;AACnB,aAAK,OAAO;AAAA,UACV;AAAA,UACA,QAAQ,UAAU,EAAE;AAAA,QACtB;AAAA,MACF;AACA,YAAM;AAAA,IACR;AAAA,EACF;AAAA;AAAA;AAAA,EAKA,MAAa,kBAAiC;AAC5C,UAAM,KAAK,SAAS,QAAQ;AAC5B,SAAK,OAAO,IAAI,oBAAoB;AAAA,EACtC;AAAA,EAEA,MAAa,qBAAoC;AAC/C,UAAM,KAAK,SAAS,WAAW;AAC/B,SAAK,OAAO,IAAI,uBAAuB;AAAA,EACzC;AAAA,EAiBA,MAAa,cACX,QACA,eACA,UAA8B,CAAC,GACN;AACzB,QAAI,QAAQ,eAAe,CAAC,QAAQ,OAAO;AACzC,YAAM,IAAI;AAAA,QACR;AAAA,MACF;AA
AA,IACF;AAEA,UAAM,EAAE,UAAU,WAAW,YAAY,KAAK,KAAK,cAAc,MAAM,IACrE,MAAM,KAAK,cAAc,QAAQ,eAAe,OAAO;AAEzD,UAAM,OAAO;AAAA,MACX,QAAQ,KAAK;AAAA,MACb,UAAU,KAAK;AAAA,MACf,iBAAiB,KAAK;AAAA,MACtB,eAAe,KAAK;AAAA,IACtB;AACA,UAAM,YAAY,QAAQ;AAE1B,UAAM,SAAS,IAAI;AAAA,MACjB,aAAa,OAAO,EAAE,OAAAC,QAAO,WAAW,QAAQ,MAAM;AACpD,YAAI,CAAC,QAAQ,OAAO;AAClB,eAAK,OAAO,KAAK,qCAAqCA,MAAK,EAAE;AAC7D;AAAA,QACF;AAEA,cAAM,MAAM,QAAQ,MAAM,SAAS;AACnC,cAAM,SAAS,iBAAiB,KAAKA,QAAO,KAAK,MAAM;AACvD,YAAI,WAAW,KAAM;AAErB,cAAM,UAAU,cAAc,QAAQ,OAAO;AAC7C,cAAM,YAAY,MAAM;AAAA,UACtB;AAAA,UACA;AAAA,UACAA;AAAA,UACA;AAAA,UACA;AAAA,UACA;AAAA,UACA,EAAE,GAAG,MAAM,iBAAiB,QAAQ;AAAA,QACtC;AACA,YAAI,cAAc,KAAM;AAExB,cAAM,WAAW;AAAA,UACf;AAAA,UACA;AAAA,UACAA;AAAA,UACA;AAAA,UACA,QAAQ;AAAA,QACV;AAEA,cAAM;AAAA,UACJ,MAAM;AACJ,kBAAM,KAAK,MACT;AAAA,cACE;AAAA,gBACE,eAAe,SAAS;AAAA,gBACxB,aAAa,SAAS;AAAA,cACxB;AAAA,cACA,MAAM,cAAc,QAAQ;AAAA,YAC9B;AACF,mBAAO,YACH,KAAK,uBAAuB,IAAI,WAAWA,MAAK,IAChD,GAAG;AAAA,UACT;AAAA,UACA;AAAA,YACE;AAAA,YACA,aAAa,CAAC,GAAG;AAAA,YACjB;AAAA,YACA;AAAA,YACA;AAAA,YACA,aAAa,QAAQ;AAAA,UACvB;AAAA,UACA;AAAA,QACF;AAAA,MACF;AAAA,IACF,CAAC;AAED,SAAK,iBAAiB,IAAI,KAAK,aAAa;AAE5C,QAAI,QAAQ,eAAe,OAAO;AAChC,YAAM,KAAK;AAAA,QACT;AAAA,QACA;AAAA,QACA;AAAA,QACA;AAAA,QACA;AAAA,QACA;AAAA,QACA;AAAA,MACF;AAAA,IACF;AAEA,WAAO,EAAE,SAAS,KAAK,MAAM,MAAM,KAAK,aAAa,GAAG,EAAE;AAAA,EAC5D;AAAA,EAuBA,MAAa,mBACX,QACA,aAIA,UAA8B,CAAC,GACN;AACzB,UAAM,EAAE,UAAU,WAAW,KAAK,KAAK,cAAc,MAAM,IACzD,MAAM,KAAK,cAAc,QAAQ,aAAa,OAAO;AAEvD,UAAM,OAAO;AAAA,MACX,QAAQ,KAAK;AAAA,MACb,UAAU,KAAK;AAAA,MACf,iBAAiB,KAAK;AAAA,MACtB,eAAe,KAAK;AAAA,IACtB;AACA,UAAM,YAAY,QAAQ;AAE1B,UAAM,SAAS,IAAI;AAAA,MACjB,WAAW,OAAO;AAAA,QAChB;AAAA,QACA;AAAA,QACA;AAAA,QACA;AAAA,MACF,MAAM;AACJ,cAAM,YAAkC,CAAC;AACzC,cAAM,cAAwB,CAAC;AAE/B,mBAAW,WAAW,MAAM,UAAU;AACpC,cAAI,CAAC,QAAQ,OAAO;AAClB,iBAAK,OAAO;AAAA,cACV,qCAAqC,MAAM,KAAK;AAAA,YAClD;AACA;AAAA,UACF;AAEA,gBAAM,MAAM,QAAQ,MAAM,SAAS;AACnC,gBAAM,SAAS,iBAAiB,KAAK,MAAM,OAAO,KAAK,MAAM;AAC7D,cAAI,WAAW,KAAM;AAErB,gBAAM,UAAU,cAAc,QAAQ,OAAO;AAC7C,gBAAM,Y
AAY,MAAM;AAAA,YACtB;AAAA,YACA;AAAA,YACA,MAAM;AAAA,YACN;AAAA,YACA;AAAA,YACA;AAAA,YACA,EAAE,GAAG,MAAM,iBAAiB,QAAQ;AAAA,UACtC;AACA,cAAI,cAAc,KAAM;AACxB,oBAAU;AAAA,YACR;AAAA,cACE;AAAA,cACA;AAAA,cACA,MAAM;AAAA,cACN,MAAM;AAAA,cACN,QAAQ;AAAA,YACV;AAAA,UACF;AACA,sBAAY,KAAK,GAAG;AAAA,QACtB;AAEA,YAAI,UAAU,WAAW,EAAG;AAE5B,cAAM,OAAkB;AAAA,UACtB,WAAW,MAAM;AAAA,UACjB,eAAe,MAAM;AAAA,UACrB;AAAA,UACA;AAAA,UACA;AAAA,QACF;AAEA,cAAM;AAAA,UACJ,MAAM;AACJ,kBAAM,KAAK,MAAM,YAAY,WAAW,IAAI;AAC5C,mBAAO,YACH,KAAK,uBAAuB,IAAI,WAAW,MAAM,KAAK,IACtD,GAAG;AAAA,UACT;AAAA,UACA;AAAA,YACE,UAAU;AAAA,YACV,aAAa,MAAM,SAChB,OAAO,CAAC,MAAM,EAAE,KAAK,EACrB,IAAI,CAAC,MAAM,EAAE,MAAO,SAAS,CAAC;AAAA,YACjC;AAAA,YACA;AAAA,YACA;AAAA,YACA,SAAS;AAAA,UACX;AAAA,UACA;AAAA,QACF;AAAA,MACF;AAAA,IACF,CAAC;AAED,SAAK,iBAAiB,IAAI,KAAK,WAAW;AAE1C,WAAO,EAAE,SAAS,KAAK,MAAM,MAAM,KAAK,aAAa,GAAG,EAAE;AAAA,EAC5D;AAAA;AAAA,EAIA,MAAa,aAAa,SAAiC;AACzD,QAAI,YAAY,QAAW;AACzB,YAAM,WAAW,KAAK,UAAU,IAAI,OAAO;AAC3C,UAAI,CAAC,UAAU;AACb,aAAK,OAAO;AAAA,UACV,+CAA+C,OAAO;AAAA,QACxD;AACA;AAAA,MACF;AACA,YAAM,SAAS,WAAW,EAAE,MAAM,MAAM;AAAA,MAAC,CAAC;AAC1C,WAAK,UAAU,OAAO,OAAO;AAC7B,WAAK,iBAAiB,OAAO,OAAO;AACpC,WAAK,OAAO,IAAI,iCAAiC,OAAO,GAAG;AAAA,IAC7D,OAAO;AACL,YAAM,QAAQ,MAAM,KAAK,KAAK,UAAU,OAAO,CAAC,EAAE;AAAA,QAAI,CAAC,MACrD,EAAE,WAAW,EAAE,MAAM,MAAM;AAAA,QAAC,CAAC;AAAA,MAC/B;AACA,YAAM,QAAQ,WAAW,KAAK;AAC9B,WAAK,UAAU,MAAM;AACrB,WAAK,iBAAiB,MAAM;AAC5B,WAAK,OAAO,IAAI,4BAA4B;AAAA,IAC9C;AAAA,EACF;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAOA,MAAa,eACX,SACmE;AACnE,UAAM,MAAM,WAAW,KAAK;AAC5B,QAAI,CAAC,KAAK,kBAAkB;AAC1B,YAAM,KAAK,MAAM,QAAQ;AACzB,WAAK,mBAAmB;AAAA,IAC1B;AAEA,UAAM,mBAAmB,MAAM,KAAK,MAAM,aAAa,EAAE,SAAS,IAAI,CAAC;AACvE,UAAM,SAAmE,CAAC;AAE1E,eAAW,EAAE,OAAAA,QAAO,WAAW,KAAK,kBAAkB;AACpD,YAAM,gBAAgB,MAAM,KAAK,MAAM,kBAAkBA,MAAK;AAE9D,iBAAW,EAAE,WAAW,OAAO,KAAK,YAAY;AAC9C,cAAM,SAAS,cAAc,KAAK,CAAC,MAAM,EAAE,cAAc,SAAS;AAClE,YAAI,CAAC,OAAQ;AAEb,cAAM,YAAY,SAAS,QAAQ,EAAE;AACrC,cAAM,OAAO,SAAS,OAAO,MAAM,EAAE;AAErC,cAAM,MAAM,cAAc,KAAK,OAAO,KAAK,IAAI,GAAG,OAAO,SAAS;AAClE,eAAO,KAAK
,EAAE,OAAAA,QAAO,WAAW,IAAI,CAAC;AAAA,MACvC;AAAA,IACF;AAEA,WAAO;AAAA,EACT;AAAA;AAAA,EAGA,MAAa,cAIV;AACD,QAAI,CAAC,KAAK,kBAAkB;AAC1B,YAAM,KAAK,MAAM,QAAQ;AACzB,WAAK,mBAAmB;AAAA,IAC1B;AACA,UAAM,SAAS,MAAM,KAAK,MAAM,WAAW;AAC3C,WAAO,EAAE,QAAQ,MAAM,UAAU,KAAK,UAAU,OAAO;AAAA,EACzD;AAAA,EAEO,cAAwB;AAC7B,WAAO,KAAK;AAAA,EACd;AAAA;AAAA,EAGA,MAAa,aAA4B;AACvC,UAAM,QAAyB,CAAC,KAAK,SAAS,WAAW,CAAC;AAC1D,QAAI,KAAK,YAAY;AACnB,YAAM,KAAK,KAAK,WAAW,WAAW,CAAC;AACvC,WAAK,aAAa;AAAA,IACpB;AACA,eAAW,YAAY,KAAK,UAAU,OAAO,GAAG;AAC9C,YAAM,KAAK,SAAS,WAAW,CAAC;AAAA,IAClC;AACA,QAAI,KAAK,kBAAkB;AACzB,YAAM,KAAK,KAAK,MAAM,WAAW,CAAC;AAClC,WAAK,mBAAmB;AAAA,IAC1B;AACA,UAAM,QAAQ,WAAW,KAAK;AAC9B,SAAK,UAAU,MAAM;AACrB,SAAK,iBAAiB,MAAM;AAC5B,SAAK,OAAO,IAAI,wBAAwB;AAAA,EAC1C;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAcA,MAAc,yBACZ,gBACA,iBACA,eACA,OACA,KACA,cACA,WACe;AACf,UAAM,kBAAkB,eAAe,IAAI,CAAC,MAAM,GAAG,CAAC,QAAQ;AAC9D,UAAM,eAAe,GAAG,eAAe;AACvC,UAAM,YAAY,MAAM,aAAa;AACrC,UAAM,eAAe,MAAM,gBAAgB;AAC3C,UAAM,OAAO;AAAA,MACX,QAAQ,KAAK;AAAA,MACb,UAAU,KAAK;AAAA,MACf,iBAAiB,KAAK;AAAA,MACtB,eAAe,KAAK;AAAA,IACtB;AAEA,eAAW,MAAM,iBAAiB;AAChC,YAAM,KAAK,YAAY,EAAE;AAAA,IAC3B;AAEA,UAAM,WAAW,KAAK,oBAAoB,cAAc,OAAO,IAAI;AACnE,UAAM,SAAS,QAAQ;AACvB,UAAM,mBAAmB,UAAU,iBAAiB,KAAK,MAAM;AAE/D,UAAM,SAAS,IAAI;AAAA,MACjB,aAAa,OAAO,EAAE,OAAO,YAAY,WAAW,QAAQ,MAAM;AAChE,YAAI,CAAC,QAAQ,MAAO;AAEpB,cAAM,MAAM,QAAQ,MAAM,SAAS;AACnC,cAAM,SAAS,iBAAiB,KAAK,YAAY,KAAK,MAAM;AAC5D,YAAI,WAAW,KAAM;AAErB,cAAM,UAAU,cAAc,QAAQ,OAAO;AAC7C,cAAM,gBACH,QAAQ,2BAA2B,KACpC,WAAW,QAAQ,YAAY,EAAE;AACnC,cAAM,iBAAiB;AAAA,UACpB,QAAQ,oBAAoB,KAA4B;AAAA,UACzD;AAAA,QACF;AACA,cAAM,aAAa;AAAA,UAChB,QAAQ,wBAAwB,KAC/B,OAAO,MAAM,UAAU;AAAA,UACzB;AAAA,QACF;AACA,cAAM,aAAa;AAAA,UAChB,QAAQ,kBAAkB,KAA4B;AAAA,UACvD;AAAA,QACF;AAIA,cAAM,YAAY,aAAa,KAAK,IAAI;AACxC,YAAI,YAAY,GAAG;AACjB,mBAAS,MAAM,CAAC,EAAE,OAAO,YAAY,YAAY,CAAC,SAAS,EAAE,CAAC,CAAC;AAC/D,gBAAM,MAAM,SAAS;AACrB,mBAAS,OAAO,CAAC,EAAE,OAAO,YAAY,YAAY,CAAC,SAAS,EAAE,CAAC,CAAC;AAAA,QAClE;AAGA,cAAM,YAAY,MAAM;AAAA,UACtB;AAAA
,UACA;AAAA,UACA;AAAA,UACA;AAAA,UACA;AAAA,UACA;AAAA,UACA,EAAE,GAAG,MAAM,iBAAiB,QAAQ;AAAA,QACtC;AACA,YAAI,cAAc,KAAM;AAGxB,cAAM,WAAW;AAAA,UACf;AAAA,UACA;AAAA,UACA;AAAA,UACA;AAAA,UACA,QAAQ;AAAA,QACV;AAEA,YAAI;AAEF,gBAAM,WAA2B,CAAC;AAClC,qBAAW,QAAQ,KAAK,iBAAiB;AACvC,kBAAM,IAAI,KAAK,gBAAgB,QAAQ;AACvC,gBAAI,OAAO,MAAM,WAAY,UAAS,KAAK,CAAC;AAAA,UAC9C;AACA,qBAAW,eAAe;AACxB,kBAAM,YAAY,SAAS,QAAQ;AAErC,gBAAM;AAAA,YACJ;AAAA,cACE,eAAe,SAAS;AAAA,cACxB,aAAa,SAAS;AAAA,YACxB;AAAA,YACA,MAAM,cAAc,QAAQ;AAAA,UAC9B;AAEA,qBAAW,eAAe;AACxB,kBAAM,YAAY,QAAQ,QAAQ;AACpC,qBAAW,WAAW,SAAU,SAAQ;AAAA,QAC1C,SAAS,OAAO;AACd,gBAAM,MAAM,QAAQ,KAAK;AACzB,gBAAM,cAAc,iBAAiB;AACrC,gBAAM,YAAY,kBAAkB;AAEpC,qBAAW,QAAQ,KAAK;AACtB,iBAAK,iBAAiB,UAAU,GAAG;AAErC,gBAAM,gBACJ,aAAa,aAAa,IACtB,IAAI;AAAA,YACF;AAAA,YACA,CAAC,SAAS,OAAO;AAAA,YACjB;AAAA,YACA,EAAE,OAAO,IAAI;AAAA,UACf,IACA;AACN,qBAAW,eAAe,cAAc;AACtC,kBAAM,YAAY,UAAU,UAAU,aAAa;AAAA,UACrD;AAEA,eAAK,OAAO;AAAA,YACV,4BAA4B,aAAa,aAAa,cAAc,IAAI,UAAU;AAAA,YAClF,IAAI;AAAA,UACN;AAEA,cAAI,CAAC,WAAW;AAGd,kBAAM,MAAM,KAAK,IAAI,YAAY,KAAK,gBAAgB,YAAY;AAClE,kBAAM,QAAQ,KAAK,MAAM,KAAK,OAAO,IAAI,GAAG;AAC5C,kBAAM;AAAA,cACJ;AAAA,cACA,CAAC,GAAG;AAAA,cACJ;AAAA,cACA;AAAA,cACA;AAAA,cACA;AAAA,cACA;AAAA,YACF;AAAA,UACF,WAAW,KAAK;AACd,kBAAM,UAAU,eAAe,KAAK,MAAM;AAAA,cACxC,OAAO;AAAA;AAAA;AAAA;AAAA,cAIP,SAAS,iBAAiB;AAAA,cAC1B,iBAAiB;AAAA,YACnB,CAAC;AAAA,UACH,OAAO;AACL,kBAAM,KAAK,gBAAgB;AAAA,cACzB,OAAO;AAAA,cACP,OAAO;AAAA,cACP,SAAS;AAAA,cACT;AAAA,YACF,CAAC;AAAA,UACH;AAAA,QACF;AAAA,MACF;AAAA,IACF,CAAC;AAED,SAAK,iBAAiB,IAAI,cAAc,aAAa;AAOrD,UAAM,KAAK,2BAA2B,UAAU,eAAe;AAE/D,SAAK,OAAO;AAAA,MACV,sCAAsC,eAAe,KAAK,IAAI,CAAC,YAAY,YAAY;AAAA,IACzF;AAAA,EACF;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAWA,MAAc,2BACZ,UACA,QACA,YAAY,KACG;AACf,UAAM,WAAW,IAAI,IAAI,MAAM;AAC/B,UAAM,WAAW,KAAK,IAAI,IAAI;AAC9B,WAAO,KAAK,IAAI,IAAI,UAAU;AAC5B,UAAI;AACF,cAAM,WACJ,SAAS,WAAW;AACtB,YAAI,SAAS,KAAK,CAAC,MAAM,SAAS,IAAI,EAAE,KAAK,CAAC,EAAG;AAAA,MACnD,QAAQ;AAAA,MAER;AACA,YAAM,MAAM,GAAG;AAAA,IACjB;AACA,SAAK,OAAO;AAAA,MACV,6DAA6D
,OAAO,KAAK,IAAI,CAAC,YAAY,SAAS;AAAA,IACrG;AAAA,EACF;AAAA,EAEQ,oBACN,SACA,eACA,YACU;AACV,QAAI,CAAC,KAAK,UAAU,IAAI,OAAO,GAAG;AAChC,YAAM,SAAoD;AAAA,QACxD,SAAS,EAAE,SAAS,eAAe,WAAW;AAAA,MAChD;AAEA,UAAI,KAAK,aAAa;AACpB,cAAM,cAAc,KAAK;AAKzB,QAAC,OAAe,cAAc,IAAI,CAAC,KAAU,eAAsB;AACjE,gBAAM,OAAO,IAAI,SAAS,OAAO,WAAW;AAC5C,cAAI;AACF;AAAA,cACE;AAAA,cACA,WAAW,IAAI,CAAC,OAAO;AAAA,gBACrB,OAAO,EAAE;AAAA,gBACT,WAAW,EAAE;AAAA,cACf,EAAE;AAAA,YACJ;AAAA,UACF,SAAS,GAAG;AACV,iBAAK,OAAO;AAAA,cACV,+BAAgC,EAAY,OAAO;AAAA,YACrD;AAAA,UACF;AAAA,QACF;AAAA,MACF;AAEA,WAAK,UAAU,IAAI,SAAS,KAAK,MAAM,SAAS,MAAM,CAAC;AAAA,IACzD;AACA,WAAO,KAAK,UAAU,IAAI,OAAO;AAAA,EACnC;AAAA;AAAA;AAAA;AAAA;AAAA,EAMQ,uBACN,IACA,WACAA,QACY;AACZ,QAAI;AACJ,UAAM,UAAU,GAAG,EAAE,QAAQ,MAAM;AACjC,UAAI,UAAU,OAAW,cAAa,KAAK;AAAA,IAC7C,CAAC;AACD,YAAQ,WAAW,MAAM;AACvB,WAAK,OAAO;AAAA,QACV,sBAAsBA,MAAK,4BAA4B,SAAS;AAAA,MAClE;AAAA,IACF,GAAG,SAAS;AACZ,WAAO;AAAA,EACT;AAAA,EAEQ,iBAAiB,mBAAoC;AAC3D,QAAI,OAAO,sBAAsB,SAAU,QAAO;AAClD,QACE,qBACA,OAAO,sBAAsB,YAC7B,aAAa,mBACb;AACA,aAAQ,kBAAsC;AAAA,IAChD;AACA,WAAO,OAAO,iBAAiB;AAAA,EACjC;AAAA,EAEA,MAAc,YAAYA,QAA8B;AACtD,QAAI,CAAC,KAAK,2BAA2B,KAAK,cAAc,IAAIA,MAAK,EAAG;AACpE,QAAI,CAAC,KAAK,kBAAkB;AAC1B,YAAM,KAAK,MAAM,QAAQ;AACzB,WAAK,mBAAmB;AAAA,IAC1B;AACA,UAAM,KAAK,MAAM,aAAa;AAAA,MAC5B,QAAQ,CAAC,EAAE,OAAAA,QAAO,eAAe,KAAK,cAAc,CAAC;AAAA,IACvD,CAAC;AACD,SAAK,cAAc,IAAIA,MAAK;AAAA,EAC9B;AAAA;AAAA,EAGQ,eAAe,aAAwB;AAC7C,QAAI,aAAa,UAAU;AACzB,YAAMA,SAAQ,KAAK,iBAAiB,WAAW;AAC/C,WAAK,eAAe,IAAIA,QAAO,YAAY,QAAQ;AAAA,IACrD;AAAA,EACF;AAAA;AAAA,EAGA,MAAc,gBAAgB,aAAkB,SAA4B;AAC1E,QAAI,aAAa,UAAU;AACzB,aAAO,MAAM,YAAY,SAAS,MAAM,OAAO;AAAA,IACjD;AACA,QAAI,KAAK,wBAAwB,OAAO,gBAAgB,UAAU;AAChE,YAAM,SAAS,KAAK,eAAe,IAAI,WAAW;AAClD,UAAI,OAAQ,QAAO,MAAM,OAAO,MAAM,OAAO;AAAA,IAC/C;AACA,WAAO;AAAA,EACT;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAOA,MAAc,iBACZ,aACA,UAQC;AACD,SAAK,eAAe,WAAW;AAC/B,UAAMA,SAAQ,KAAK,iBAAiB,WAAW;AAC/C,UAAM,gBAAgB,MAAM,QAAQ;AAAA,MAClC,SAAS,IAAI,OAAO,MAAM;AACxB,cAAM,kBAAkB,qBAAqB;AAAA,UAC3C,eAAe,EAAE;AAAA,UACjB,eAAe,EAAE;AAAA,
UACjB,SAAS,EAAE;AAAA,UACX,SAAS,EAAE;AAAA,QACb,CAAC;AAGD,mBAAW,QAAQ,KAAK,iBAAiB;AACvC,eAAK,aAAaA,QAAO,eAAe;AAAA,QAC1C;AAEA,eAAO;AAAA,UACL,OAAO,KAAK;AAAA,YACV,MAAM,KAAK,gBAAgB,aAAa,EAAE,KAAK;AAAA,UACjD;AAAA,UACA,KAAK,EAAE,OAAO;AAAA,UACd,SAAS;AAAA,QACX;AAAA,MACF,CAAC;AAAA,IACH;AACA,WAAO,EAAE,OAAAA,QAAO,UAAU,cAAc;AAAA,EAC1C;AAAA;AAAA,EAGA,MAAc,cACZ,QACA,MACA,SACA;AACA,UAAM;AAAA,MACJ,SAAS;AAAA,MACT,gBAAgB;AAAA,MAChB;AAAA,MACA,MAAM;AAAA,MACN,eAAe,CAAC;AAAA,MAChB,SAAS;AAAA,IACX,IAAI;AAEJ,UAAM,MAAM,cAAc,KAAK;AAC/B,UAAM,eAAe,KAAK,iBAAiB,IAAI,GAAG;AAClD,UAAM,eAAe,SAAS,gBAAgB,cAAc;AAC5D,QAAI,iBAAiB,cAAc;AACjC,YAAM,IAAI;AAAA,QACR,cAAc,IAAI,uBAAuB,GAAG,uCAAkC,YAAY;AAAA,MAE5F;AAAA,IACF;AAEA,UAAM,WAAW,KAAK;AAAA,MACpB;AAAA,MACA;AAAA,MACA,QAAQ,cAAc;AAAA,IACxB;AACA,UAAM,YAAY,KAAK,eAAe,QAAQ,aAAa;AAE3D,UAAM,aAAc,OAAiB;AAAA,MAAI,CAAC,MACxC,KAAK,iBAAiB,CAAC;AAAA,IACzB;AAGA,eAAW,KAAK,YAAY;AAC1B,YAAM,KAAK,YAAY,CAAC;AAAA,IAC1B;AACA,QAAI,KAAK;AACP,iBAAW,KAAK,YAAY;AAC1B,cAAM,KAAK,YAAY,GAAG,CAAC,MAAM;AAAA,MACnC;AAAA,IACF;AAEA,UAAM,SAAS,QAAQ;AACvB,UAAM;AAAA,MACJ;AAAA,MACA;AAAA,MACA,KAAK;AAAA,MACL,QAAQ;AAAA,IACV;AAEA,SAAK,OAAO;AAAA,MACV,GAAG,SAAS,cAAc,mBAAmB,UAAU,0BAA0B,WAAW,KAAK,IAAI,CAAC;AAAA,IACxG;AAEA,WAAO,EAAE,UAAU,WAAW,YAAY,KAAK,KAAK,cAAc,MAAM;AAAA,EAC1E;AAAA,EAEQ,eACN,QACA,eACyB;AACzB,UAAM,YAAY,oBAAI,IAAwB;AAC9C,eAAW,KAAK,QAAQ;AACtB,UAAI,GAAG,UAAU;AACf,cAAM,OAAO,KAAK,iBAAiB,CAAC;AACpC,kBAAU,IAAI,MAAM,EAAE,QAAQ;AAC9B,aAAK,eAAe,IAAI,MAAM,EAAE,QAAQ;AAAA,MAC1C;AAAA,IACF;AACA,QAAI,eAAe;AACjB,iBAAW,CAAC,GAAG,CAAC,KAAK,eAAe;AAClC,kBAAU,IAAI,GAAG,CAAC;AAClB,aAAK,eAAe,IAAI,GAAG,CAAC;AAAA,MAC9B;AAAA,IACF;AACA,WAAO;AAAA,EACT;AACF;;;AK/8BO,SAAS,MAAwB,MAAS;AAC/C,QAAM,KAAK,OAA6D;AAAA,IACtE,SAAS;AAAA,IACT,QAAQ;AAAA,EACV;AAEA,KAAG,SAAS,CACV,YACwC;AAAA,IACxC,SAAS;AAAA,IACT,QAAQ;AAAA,IACR,UAAU;AAAA,EACZ;AAEA,SAAO;AACT;","names":["topic","topic","topic","topic"]}