@drarzter/kafka-client 0.5.7 → 0.6.4

package/README.md CHANGED
@@ -32,12 +32,14 @@ Type-safe Kafka client for Node.js. Framework-agnostic core with a first-class N
  - [Error classes](#error-classes)
  - [Retry topic chain](#retry-topic-chain)
  - [stopConsumer](#stopconsumer)
+ - [Graceful shutdown](#graceful-shutdown)
  - [Consumer handles](#consumer-handles)
  - [onMessageLost](#onmessagelost)
  - [onRebalance](#onrebalance)
  - [Consumer lag](#consumer-lag)
  - [Handler timeout warning](#handler-timeout-warning)
  - [Schema validation](#schema-validation)
+ - [Context-aware validators](#context-aware-validators-schemaparsecontext)
  - [Health check](#health-check)
  - [Testing](#testing)
  - [Project structure](#project-structure)
@@ -439,11 +441,14 @@ await kafka.startConsumer(['orders'], auditHandler, { groupId: 'orders-audit' })
  async auditOrders(envelope) { ... }
  ```
 
- **Important:** You cannot mix `eachMessage` and `eachBatch` consumers on the same `groupId`. The library throws a clear error if you try:
+ **Important:** You cannot mix `eachMessage` and `eachBatch` consumers on the same `groupId`, and you cannot call `startConsumer` (or `startBatchConsumer`) twice on the same `groupId` without stopping it first. The library throws a clear error in both cases:
 
  ```text
  Cannot use eachBatch on consumer group "my-group" — it is already running with eachMessage.
  Use a different groupId for this consumer.
+
+ startConsumer("my-group") called twice — this group is already consuming.
+ Call stopConsumer("my-group") first or pass a different groupId.
  ```
 
  ### Named clients
@@ -581,6 +586,8 @@ await this.kafka.startBatchConsumer(
  );
  ```
 
+ > **Note:** If your handler calls `resolveOffset()` or `commitOffsetsIfNecessary()` without setting `autoCommit: false`, a `debug` message is logged at consumer-start time — mixing autoCommit with manual offset control causes offset conflicts. Set `autoCommit: false` to suppress the message and take full control of offset management.
+
  With `@SubscribeTo()`:
 
  ```typescript
@@ -592,6 +599,20 @@ async handleOrders(envelopes: EventEnvelope<OrdersTopicMap['order.created']>[],
 
  Schema validation runs per-message — invalid messages are skipped (DLQ'd if enabled), valid ones are passed to the handler. Retry applies to the whole batch.
 
+ `retryTopics: true` is also supported on `startBatchConsumer`. On handler failure, each envelope in the batch is routed individually to `<topic>.retry.1`; the companion retry consumers call the batch handler one message at a time with a stub `BatchMeta` (no-op `heartbeat`/`resolveOffset`/`commitOffsetsIfNecessary`):
+
+ ```typescript
+ await kafka.startBatchConsumer(
+   ['orders.created'],
+   async (envelopes, meta) => { /* same handler */ },
+   {
+     retry: { maxRetries: 3, backoffMs: 1000 },
+     dlq: true,
+     retryTopics: true, // ← now supported for batch consumers too
+   },
+ );
+ ```
+
  `BatchMeta` exposes:
 
  | Property/Method | Description |
@@ -671,21 +692,55 @@ const kafka = new KafkaClient('my-app', 'my-group', brokers, {
  });
  ```
 
- `otelInstrumentation()` injects `traceparent` on send, extracts it on consume, and creates `CONSUMER` spans automatically. Requires `@opentelemetry/api` as a peer dependency.
+ `otelInstrumentation()` injects `traceparent` on send, extracts it on consume, and creates `CONSUMER` spans automatically. The span is set as the **active OTel context** for the handler's duration via `context.with()` — so `trace.getActiveSpan()` works inside your handler and any child spans are automatically parented to the consume span. Requires `@opentelemetry/api` as a peer dependency.
+
+ ### Custom instrumentation
 
- Custom instrumentation:
+ `beforeConsume` can return a `BeforeConsumeResult` — either the legacy `() => void` cleanup function, or an object with `cleanup` and/or `wrap`:
 
  ```typescript
- import { KafkaInstrumentation } from '@drarzter/kafka-client';
+ import { KafkaInstrumentation, BeforeConsumeResult } from '@drarzter/kafka-client';
 
- const metrics: KafkaInstrumentation = {
+ const myInstrumentation: KafkaInstrumentation = {
    beforeSend(topic, headers) { /* inject headers, start timer */ },
    afterSend(topic) { /* record send latency */ },
-   beforeConsume(envelope) { /* start span */ return () => { /* end span */ }; },
+
+   beforeConsume(envelope): BeforeConsumeResult {
+     const span = startMySpan(envelope.topic);
+     return {
+       // cleanup() is called after the handler completes (success or error)
+       cleanup() { span.end(); },
+       // wrap(fn) runs the handler inside the desired async context —
+       // call fn() wherever you need it in the context scope
+       wrap(fn) { return runWithSpanActive(span, fn); },
+     };
+   },
+
    onConsumeError(envelope, error) { /* record error metric */ },
  };
  ```
 
+ The legacy `() => void` form is still fully supported — return a function directly if you only need cleanup:
+
+ ```typescript
+ beforeConsume(envelope) {
+   const timer = startTimer();
+   return () => timer.end(); // cleanup only, no context wrapping
+ },
+ ```
+
+ `BeforeConsumeResult` is a union:
+
+ ```typescript
+ type BeforeConsumeResult =
+   | (() => void) // legacy: cleanup only
+   | {
+       cleanup?(): void; // called after handler (success or error)
+       wrap?(fn: () => Promise<void>): Promise<void>; // wraps handler execution
+     };
+ ```
+
+ When multiple instrumentations each provide a `wrap`, they compose in declaration order — the first instrumentation's `wrap` is the outermost.
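The composition order can be sketched as plain function nesting — a minimal, self-contained model (not the library's internal code) showing that the first declared `wrap` ends up outermost:

```typescript
type Wrap = (fn: () => Promise<void>) => Promise<void>;

// Compose wraps so that wraps[0] is the outermost layer around the handler.
function composeWraps(wraps: Wrap[], handler: () => Promise<void>): () => Promise<void> {
  return wraps.reduceRight<() => Promise<void>>(
    (inner, wrap) => () => wrap(inner),
    handler,
  );
}

// Two instrumentations that record when they enter and exit:
const order: string[] = [];
const first: Wrap = async (fn) => { order.push('first:enter'); await fn(); order.push('first:exit'); };
const second: Wrap = async (fn) => { order.push('second:enter'); await fn(); order.push('second:exit'); };

const run = composeWraps([first, second], async () => { order.push('handler'); });
```

Awaiting `run()` yields the order `first:enter, second:enter, handler, second:exit, first:exit` — the declaration-order nesting described above.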
+
 
  ## Options reference
 
  ### Send options
@@ -713,7 +768,7 @@ Options for `sendMessage()` — the third argument:
  | `retry.backoffMs` | `1000` | Base delay for exponential backoff in ms |
  | `retry.maxBackoffMs` | `30000` | Maximum delay cap for exponential backoff in ms |
  | `dlq` | `false` | Send to `{topic}.dlq` after all retries exhausted — message carries `x-dlq-*` metadata headers |
- | `retryTopics` | `false` | Route failed messages through per-level topics (`{topic}.retry.1`, `{topic}.retry.2`, …) instead of sleeping in-process; at-least-once semantics; requires `retry` (see [Retry topic chain](#retry-topic-chain)) |
+ | `retryTopics` | `false` | Route failed messages through per-level topics (`{topic}.retry.1`, `{topic}.retry.2`, …) instead of sleeping in-process; exactly-once routing semantics within the retry chain; requires `retry` (see [Retry topic chain](#retry-topic-chain)) |
  | `interceptors` | `[]` | Array of before/after/onError hooks |
  | `retryTopicAssignmentTimeoutMs` | `10000` | Timeout (ms) to wait for each retry level consumer to receive partition assignments after connecting; increase for slow brokers |
  | `handlerTimeoutMs` | — | Log a warning if the handler hasn't resolved within this window (ms) — does not cancel the handler |
@@ -827,11 +882,29 @@ const interceptor: ConsumerInterceptor<MyTopics> = {
 
  ## Retry topic chain
 
+ > **tl;dr — recommended production setup:**
+ >
+ > ```typescript
+ > await kafka.startConsumer(['orders.created'], handler, {
+ >   retry: { maxRetries: 3, backoffMs: 1_000, maxBackoffMs: 30_000 },
+ >   dlq: true,         // ← messages never silently disappear
+ >   retryTopics: true, // ← retries survive restarts; routing is exactly-once
+ > });
+ > ```
+ >
+ > Just `retry` + `dlq: true` is already safe for most workloads — failed messages land in `{topic}.dlq` after all retries and are never silently dropped. Add `retryTopics: true` for crash-durable retries and exactly-once routing guarantees within the retry chain.
+ >
+ > | Configuration | What happens to a message that always fails | Process crash mid-retry |
+ > | --- | --- | --- |
+ > | `retry` only | Dropped — `onMessageLost` fires | Lost if crash between attempts |
+ > | `retry` + `dlq` | Lands in `{topic}.dlq` after all attempts | DLQ write may duplicate (rare) |
+ > | `retry` + `dlq` + `retryTopics` | Lands in `{topic}.dlq` after all attempts | Retries survive restarts; routing is exactly-once |
+
  By default, retry is handled in-process: the consumer sleeps between attempts while holding the partition. With `retryTopics: true`, failed messages are routed through a chain of Kafka topics instead — one topic per retry level. A companion consumer auto-starts per level, waits for the scheduled delay using partition pause/resume, then calls the same handler.
 
  Benefits over in-process retry:
 
- - **Durable** — retry messages survive a consumer restart (at-least-once semantics)
+ - **Durable** — retry messages survive a consumer restart; routing between levels and to DLQ is exactly-once via Kafka transactions
  - **Non-blocking** — the original consumer is free immediately; each level consumer only pauses its specific partition during the delay window, so other partitions continue processing
  - **Isolated** — each retry level has its own consumer group, so a slow level 3 consumer never blocks a level 1 consumer
 
@@ -863,9 +936,11 @@ Each level consumer uses `consumer.pause → sleep(remaining) → consumer.resum
 
  The retry topic messages carry scheduling headers (`x-retry-attempt`, `x-retry-after`, `x-retry-original-topic`, `x-retry-max-retries`) that each level consumer reads automatically — no manual configuration needed.
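The scheduling arithmetic implied here can be sketched in a few lines — an illustrative model (not the library's code) of how a per-level delay follows the exponential backoff settings, and how a level consumer would sleep only for whatever remains of the `x-retry-after` deadline:

```typescript
// Assumed mapping (illustrative): level 1 waits backoffMs, level 2 waits
// 2×backoffMs, level 3 waits 4×backoffMs, …, capped at maxBackoffMs.
function levelDelayMs(level: number, backoffMs: number, maxBackoffMs: number): number {
  return Math.min(backoffMs * 2 ** (level - 1), maxBackoffMs);
}

// x-retry-after carries an absolute deadline; if it has already passed
// (e.g. after a slow rebalance), the message is processed immediately.
function remainingPauseMs(retryAfterEpochMs: number, nowMs: number): number {
  return Math.max(0, retryAfterEpochMs - nowMs);
}
```

With the defaults (`backoffMs: 1000`, `maxBackoffMs: 30000`), levels 1 through 6 would wait 1 s, 2 s, 4 s, 8 s, 16 s, and then the 30 s cap.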
 
- > **Delivery guarantee:** retry messages are at-least-once. A duplicate can occur in the rare case where a process crashes after routing to the next level but before committing the offset — the message appears twice in the next level topic. Design handlers to be idempotent if duplicates are unacceptable.
+ > **Delivery guarantee:** routing within the retry chain (retry.N → retry.N+1 and retry.N → DLQ) is **exactly-once** — each routing step is wrapped in a Kafka transaction via `sendOffsetsToTransaction`, so the produce and the consumer offset commit happen atomically. A crash at any point rolls back the transaction: the message is redelivered and the routing is retried, with no duplicate in the next level. If the EOS transaction itself fails (broker unavailable), the offset is not committed and the message stays safely in the retry topic until the broker recovers.
  >
- > **Note:** `retryTopics` requires `retry` to be set — an error is thrown at startup if `retry` is missing. Currently only applies to `startConsumer`; batch consumers (`startBatchConsumer`) use in-process retry regardless.
+ > The remaining at-least-once window is at the **main consumer → retry.1** boundary: the main consumer uses `autoCommit: true` by default, so if it crashes after routing to `retry.1` but before autoCommit fires, the message may appear twice in `retry.1`. This is the standard Kafka at-least-once trade-off for any consumer using autoCommit. Design handlers to be idempotent if this edge case is unacceptable.
+ >
+ > **Startup validation:** `retryTopics` requires `retry` to be set — an error is thrown at startup if `retry` is missing. When `autoCreateTopics: false`, all `{topic}.retry.N` topics are validated to exist at startup and a clear error lists any missing ones. With `autoCreateTopics: true` the check is skipped — topics are created automatically by the `ensureTopic` path. Supported by both `startConsumer` and `startBatchConsumer`.
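For that at-least-once edge case, an idempotency guard can be as simple as deduplicating on a stable message identity before processing. A hedged sketch (the in-memory `Set` and the helper name are illustrative — production code would back this with Redis or a database):

```typescript
// Hypothetical helper: skip messages whose identity was already processed.
// Identity could be topic+partition+offset or any stable business key.
function makeIdempotentHandler<T>(
  handle: (msg: T) => Promise<void>,
  identityOf: (msg: T) => string,
) {
  const seen = new Set<string>(); // illustrative; use a persistent store in production
  return async (msg: T): Promise<'processed' | 'skipped'> => {
    const id = identityOf(msg);
    if (seen.has(id)) return 'skipped'; // duplicate delivery — do nothing
    await handle(msg);
    seen.add(id); // mark only after success, so failures are still retried
    return 'processed';
  };
}
```

Wrapping your handler this way makes a duplicate delivery in `retry.1` a harmless no-op.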
 
  `stopConsumer(groupId)` automatically stops all companion retry level consumers started for that group.
 
@@ -883,6 +958,36 @@ await kafka.stopConsumer();
 
  `stopConsumer(groupId)` disconnects and removes only that group's consumer, leaving other groups running. Useful when you want to pause processing for a specific topic without restarting the whole client.
 
+ ## Graceful shutdown
+
+ `disconnect()` now drains in-flight handlers before tearing down connections — no messages are silently cut off mid-processing.
+
+ **NestJS** apps get this automatically: `onModuleDestroy` calls `disconnect()`, which waits for all running `eachMessage` / `eachBatch` callbacks to settle first. Enable NestJS shutdown hooks in your bootstrap:
+
+ ```typescript
+ // main.ts
+ const app = await NestFactory.create(AppModule);
+ app.enableShutdownHooks(); // lets NestJS call onModuleDestroy on SIGTERM
+ await app.listen(3000);
+ ```
+
+ **Standalone** apps call `enableGracefulShutdown()` to register SIGTERM / SIGINT handlers:
+
+ ```typescript
+ const kafka = new KafkaClient('my-app', 'my-group', brokers);
+ await kafka.connectProducer();
+
+ kafka.enableGracefulShutdown();
+ // or with custom signals and timeout:
+ kafka.enableGracefulShutdown(['SIGTERM', 'SIGINT'], 60_000);
+ ```
+
+ `disconnect()` accepts an optional `drainTimeoutMs` (default `30_000` ms). If handlers haven't settled within the window, a warning is logged and the client disconnects anyway:
+
+ ```typescript
+ await kafka.disconnect(10_000); // wait up to 10 s, then force disconnect
+ ```
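The drain-then-force behavior can be modeled in a few lines — a hedged sketch (not the library's implementation) that races the in-flight handler promises against the timeout:

```typescript
// Illustrative model of drain-with-timeout: wait for every in-flight handler
// promise to settle, but give up after timeoutMs and report a forced shutdown.
async function drainWithTimeout(
  inFlight: Promise<unknown>[],
  timeoutMs: number,
): Promise<'drained' | 'forced'> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<'forced'>((resolve) => {
    timer = setTimeout(() => resolve('forced'), timeoutMs);
  });
  // allSettled: one rejected handler must not abort the drain of the others
  const settled = Promise.allSettled(inFlight).then(() => 'drained' as const);
  const result = await Promise.race([settled, timeout]);
  if (timer !== undefined) clearTimeout(timer); // don't keep the process alive
  return result;
}
```

On `'forced'` the caller would log the warning and tear down connections anyway, matching the documented `drainTimeoutMs` behavior.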
+
  ## Consumer handles
 
  `startConsumer()` and `startBatchConsumer()` return a `ConsumerHandle` instead of `void`. Use it to stop a specific consumer without needing to remember the group ID:
@@ -916,12 +1021,13 @@ const kafka = new KafkaClient('my-app', 'my-group', ['localhost:9092'], {
  });
  ```
 
- `onMessageLost` fires in two cases:
+ `onMessageLost` fires in three cases:
 
  1. **Handler error** — handler threw after all retries and `dlq: false`
  2. **Validation error** — schema rejected the message and `dlq: false` (attempt is `0`)
+ 3. **DLQ send failure** — `dlq: true` but `producer.send()` to `{topic}.dlq` itself threw (broker down, topic missing); the error passed to `onMessageLost` is the send error, not the original handler error
 
- It does NOT fire when `dlq: true` — in that case the message is preserved in `{topic}.dlq`.
+ In the normal case (`dlq: true`, DLQ send succeeds), `onMessageLost` does NOT fire — the message is preserved in `{topic}.dlq`.
 
  ## onRebalance
 
@@ -1064,6 +1170,38 @@ const asyncValidator: SchemaLike<{ id: string }> = {
  const MyTopic = topic('my.topic').schema(customValidator);
  ```
 
+ ### Context-aware validators (`SchemaParseContext`)
+
+ `parse()` receives an optional second argument `ctx: SchemaParseContext` on both the consume and send paths. Use it for schema-registry lookups, version-aware migration, or header-driven parsing:
+
+ ```typescript
+ import { SchemaLike, SchemaParseContext } from '@drarzter/kafka-client';
+
+ const versionedValidator: SchemaLike<MyPayload> = {
+   parse(data: unknown, ctx?: SchemaParseContext) {
+     const version = ctx?.version ?? 1;
+     // version comes from the x-schema-version header (send: schemaVersion option)
+     if (version >= 2) return migrateV1toV2(data);
+     return validateV1(data);
+   },
+ };
+
+ // On consume: ctx = { topic: 'orders.created', headers: { ... }, version: 2 }
+ // On send:    ctx = { topic: 'orders.created', headers: { ... }, version: schemaVersion ?? 1 }
+ ```
+
+ `SchemaParseContext` shape:
+
+ ```typescript
+ interface SchemaParseContext {
+   topic: string;           // topic the message was produced to / consumed from
+   headers: MessageHeaders; // decoded headers (envelope headers included)
+   version: number;         // x-schema-version header value, defaults to 1
+ }
+ ```
+
+ Existing validators (Zod, Valibot, ArkType, custom) that only use the first argument continue to work unchanged — the second argument is silently ignored.
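A tiny runnable model of that compatibility story — a context-aware `parse` alongside a legacy one-argument validator, both called the way the client would call them. The types and payload shapes here are illustrative stand-ins, not the package's exports:

```typescript
// Illustrative stand-ins for the package's types (not imported from it).
type Ctx = { topic: string; headers: Record<string, string>; version: number };
type Validator<T> = { parse(data: unknown, ctx?: Ctx): T };

type Payload = { id: string; amount: number };

// Context-aware: pretend v1 payloads used `total`, and v2 renamed it to `amount`.
const versioned: Validator<Payload> = {
  parse(data, ctx) {
    const d = data as Record<string, unknown>;
    if ((ctx?.version ?? 1) >= 2) return { id: String(d.id), amount: Number(d.amount) };
    return { id: String(d.id), amount: Number(d.total) }; // migrate the v1 shape
  },
};

// Legacy one-argument validator: any extra ctx argument is simply ignored.
const legacy: Validator<Payload> = {
  parse: (data) => data as Payload,
};

const ctx: Ctx = { topic: 'orders.created', headers: { 'x-schema-version': '2' }, version: 2 };
```

Calling `versioned.parse(data, ctx)` applies the v2 shape, calling it without `ctx` falls back to v1 migration, and `legacy.parse` works unchanged whether or not a context is passed.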
+
  ## Health check
 
  Monitor Kafka connectivity with the built-in health indicator: