@drarzter/kafka-client 0.5.7 → 0.6.3

package/README.md CHANGED
@@ -32,12 +32,14 @@ Type-safe Kafka client for Node.js. Framework-agnostic core with a first-class N
32
32
  - [Error classes](#error-classes)
33
33
  - [Retry topic chain](#retry-topic-chain)
34
34
  - [stopConsumer](#stopconsumer)
35
+ - [Graceful shutdown](#graceful-shutdown)
35
36
  - [Consumer handles](#consumer-handles)
36
37
  - [onMessageLost](#onmessagelost)
37
38
  - [onRebalance](#onrebalance)
38
39
  - [Consumer lag](#consumer-lag)
39
40
  - [Handler timeout warning](#handler-timeout-warning)
40
41
  - [Schema validation](#schema-validation)
42
+ - [Context-aware validators](#context-aware-validators-schemaparsecontext)
41
43
  - [Health check](#health-check)
42
44
  - [Testing](#testing)
43
45
  - [Project structure](#project-structure)
@@ -581,6 +583,8 @@ await this.kafka.startBatchConsumer(
581
583
  );
582
584
  ```
583
585
 
586
+ > **Note:** If your handler calls `resolveOffset()` or `commitOffsetsIfNecessary()` without setting `autoCommit: false`, a `warn` is logged at consumer-start time — mixing autoCommit with manual offset control causes offset conflicts. Set `autoCommit: false` to suppress the warning and take full control of offset management.
587
+
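For example, to take full manual control of offsets in a batch handler (a minimal sketch: the `process()` call and the assumption that each envelope exposes its Kafka offset are illustrative; the `BatchMeta` table below lists the offset methods):

```typescript
await kafka.startBatchConsumer(
  ['orders.created'],
  async (envelopes, meta) => {
    for (const envelope of envelopes) {
      await process(envelope);              // hypothetical domain logic
      meta.resolveOffset(envelope.offset);  // assumes the envelope carries its Kafka offset
      await meta.heartbeat();
    }
    await meta.commitOffsetsIfNecessary();  // commit everything resolved so far
  },
  { autoCommit: false }, // manual offset control, so no autoCommit warning at startup
);
```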
584
588
  With `@SubscribeTo()`:
585
589
 
586
590
  ```typescript
@@ -592,6 +596,20 @@ async handleOrders(envelopes: EventEnvelope<OrdersTopicMap['order.created']>[],
592
596
 
593
597
  Schema validation runs per-message — invalid messages are skipped (DLQ'd if enabled), valid ones are passed to the handler. Retry applies to the whole batch.
594
598
 
599
 + `retryTopics: true` is also supported on `startBatchConsumer`. On handler failure, each envelope in the batch is routed individually to `{topic}.retry.1`; the companion retry consumers call the batch handler one message at a time with a stub `BatchMeta` (no-op `heartbeat`/`resolveOffset`/`commitOffsetsIfNecessary`):
600
+
601
+ ```typescript
602
+ await kafka.startBatchConsumer(
603
+ ['orders.created'],
604
+ async (envelopes, meta) => { /* same handler */ },
605
+ {
606
+ retry: { maxRetries: 3, backoffMs: 1000 },
607
+ dlq: true,
608
+ retryTopics: true, // ← now supported for batch consumers too
609
+ },
610
+ );
611
+ ```
612
+
595
613
  `BatchMeta` exposes:
596
614
 
597
615
  | Property/Method | Description |
@@ -671,21 +689,55 @@ const kafka = new KafkaClient('my-app', 'my-group', brokers, {
671
689
  });
672
690
  ```
673
691
 
674
- `otelInstrumentation()` injects `traceparent` on send, extracts it on consume, and creates `CONSUMER` spans automatically. Requires `@opentelemetry/api` as a peer dependency.
692
+ `otelInstrumentation()` injects `traceparent` on send, extracts it on consume, and creates `CONSUMER` spans automatically. The span is set as the **active OTel context** for the handler's duration via `context.with()` — so `trace.getActiveSpan()` works inside your handler and any child spans are automatically parented to the consume span. Requires `@opentelemetry/api` as a peer dependency.
693
+
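Because the consume span is the active context, child spans created inside a handler need no manual context plumbing. A minimal sketch (the handler signature, `envelope.payload`, and `saveOrder()` are assumptions for illustration):

```typescript
import { trace } from '@opentelemetry/api';

await kafka.startConsumer(['orders.created'], async (envelope) => {
  // the CONSUMER span created by otelInstrumentation() is the active span here
  trace.getActiveSpan()?.setAttribute('messaging.kafka.topic', envelope.topic);

  // child spans are automatically parented to the consume span
  const tracer = trace.getTracer('orders-service');
  await tracer.startActiveSpan('persist-order', async (span) => {
    try {
      await saveOrder(envelope.payload); // hypothetical domain call
    } finally {
      span.end();
    }
  });
});
```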
694
+ ### Custom instrumentation
675
695
 
676
- Custom instrumentation:
696
+ `beforeConsume` can return a `BeforeConsumeResult` — either the legacy `() => void` cleanup function, or an object with `cleanup` and/or `wrap`:
677
697
 
678
698
  ```typescript
679
- import { KafkaInstrumentation } from '@drarzter/kafka-client';
699
+ import { KafkaInstrumentation, BeforeConsumeResult } from '@drarzter/kafka-client';
680
700
 
681
- const metrics: KafkaInstrumentation = {
701
+ const myInstrumentation: KafkaInstrumentation = {
682
702
  beforeSend(topic, headers) { /* inject headers, start timer */ },
683
703
  afterSend(topic) { /* record send latency */ },
684
- beforeConsume(envelope) { /* start span */ return () => { /* end span */ }; },
704
+
705
+ beforeConsume(envelope): BeforeConsumeResult {
706
+ const span = startMySpan(envelope.topic);
707
+ return {
708
+ // cleanup() is called after the handler completes (success or error)
709
+ cleanup() { span.end(); },
710
+ // wrap(fn) runs the handler inside the desired async context
711
+ // call fn() wherever you need it in the context scope
712
+ wrap(fn) { return runWithSpanActive(span, fn); },
713
+ };
714
+ },
715
+
685
716
  onConsumeError(envelope, error) { /* record error metric */ },
686
717
  };
687
718
  ```
688
719
 
720
+ The legacy `() => void` form is still fully supported — return a function directly if you only need cleanup:
721
+
722
+ ```typescript
723
+ beforeConsume(envelope) {
724
+ const timer = startTimer();
725
+ return () => timer.end(); // cleanup only, no context wrapping
726
+ },
727
+ ```
728
+
729
+ `BeforeConsumeResult` is a union:
730
+
731
+ ```typescript
732
+ type BeforeConsumeResult =
733
+ | (() => void) // legacy: cleanup only
734
+ | { cleanup?(): void; // called after handler (success or error)
735
+ wrap?(fn: () => Promise<void>): Promise<void>; // wraps handler execution
736
+ };
737
+ ```
738
+
739
+ When multiple instrumentations each provide a `wrap`, they compose in declaration order — the first instrumentation's `wrap` is the outermost.
740
+
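For instance, with two instrumentations declared in this order, both returning a `wrap` from `beforeConsume`, the effective nesting is first(second(handler)). Illustrative sketch only; the stand-in objects below are not library internals:

```typescript
// Stand-ins for what each instrumentation's beforeConsume() returned:
const first = { wrap: async (fn: () => Promise<void>) => { console.log('first: enter'); await fn(); console.log('first: exit'); } };
const second = { wrap: async (fn: () => Promise<void>) => { console.log('second: enter'); await fn(); console.log('second: exit'); } };

// Composition in declaration order: first is outermost, the handler runs innermost.
await first.wrap(() => second.wrap(async () => { /* handler */ }));
// logs: first: enter, second: enter, (handler), second: exit, first: exit
```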
689
741
  ## Options reference
690
742
 
691
743
  ### Send options
@@ -713,7 +765,7 @@ Options for `sendMessage()` — the third argument:
713
765
  | `retry.backoffMs` | `1000` | Base delay for exponential backoff in ms |
714
766
  | `retry.maxBackoffMs` | `30000` | Maximum delay cap for exponential backoff in ms |
715
767
  | `dlq` | `false` | Send to `{topic}.dlq` after all retries exhausted — message carries `x-dlq-*` metadata headers |
716
- | `retryTopics` | `false` | Route failed messages through per-level topics (`{topic}.retry.1`, `{topic}.retry.2`, …) instead of sleeping in-process; at-least-once semantics; requires `retry` (see [Retry topic chain](#retry-topic-chain)) |
768
+ | `retryTopics` | `false` | Route failed messages through per-level topics (`{topic}.retry.1`, `{topic}.retry.2`, …) instead of sleeping in-process; exactly-once routing semantics within the retry chain; requires `retry` (see [Retry topic chain](#retry-topic-chain)) |
717
769
  | `interceptors` | `[]` | Array of before/after/onError hooks |
718
770
  | `retryTopicAssignmentTimeoutMs` | `10000` | Timeout (ms) to wait for each retry level consumer to receive partition assignments after connecting; increase for slow brokers |
719
771
  | `handlerTimeoutMs` | — | Log a warning if the handler hasn't resolved within this window (ms) — does not cancel the handler |
@@ -827,11 +879,29 @@ const interceptor: ConsumerInterceptor<MyTopics> = {
827
879
 
828
880
  ## Retry topic chain
829
881
 
882
+ > **tl;dr — recommended production setup:**
883
+ >
884
+ > ```typescript
885
+ > await kafka.startConsumer(['orders.created'], handler, {
886
+ > retry: { maxRetries: 3, backoffMs: 1_000, maxBackoffMs: 30_000 },
887
+ > dlq: true, // ← messages never silently disappear
888
+ > retryTopics: true, // ← retries survive restarts; routing is exactly-once
889
+ > });
890
+ > ```
891
+ >
892
+ > Just `retry` + `dlq: true` is already safe for most workloads — failed messages land in `{topic}.dlq` after all retries and are never silently dropped. Add `retryTopics: true` for crash-durable retries and exactly-once routing guarantees within the retry chain.
893
+ >
894
+ > | Configuration | What happens to a message that always fails | Process crash mid-retry |
895
+ > | --- | --- | --- |
896
+ > | `retry` only | Dropped — `onMessageLost` fires | Lost if crash between attempts |
897
+ > | `retry` + `dlq` | Lands in `{topic}.dlq` after all attempts | DLQ write may duplicate (rare) |
898
+ > | `retry` + `dlq` + `retryTopics` | Lands in `{topic}.dlq` after all attempts | Retries survive restarts; routing is exactly-once |
899
+
830
900
  By default, retry is handled in-process: the consumer sleeps between attempts while holding the partition. With `retryTopics: true`, failed messages are routed through a chain of Kafka topics instead — one topic per retry level. A companion consumer auto-starts per level, waits for the scheduled delay using partition pause/resume, then calls the same handler.
831
901
 
832
902
  Benefits over in-process retry:
833
903
 
834
- - **Durable** — retry messages survive a consumer restart (at-least-once semantics)
904
+ - **Durable** — retry messages survive a consumer restart; routing between levels and to DLQ is exactly-once via Kafka transactions
835
905
  - **Non-blocking** — the original consumer is free immediately; each level consumer only pauses its specific partition during the delay window, so other partitions continue processing
836
906
  - **Isolated** — each retry level has its own consumer group, so a slow level 3 consumer never blocks a level 1 consumer
837
907
 
@@ -863,9 +933,11 @@ Each level consumer uses `consumer.pause → sleep(remaining) → consumer.resum
863
933
 
864
934
  The retry topic messages carry scheduling headers (`x-retry-attempt`, `x-retry-after`, `x-retry-original-topic`, `x-retry-max-retries`) that each level consumer reads automatically — no manual configuration needed.
865
935
 
866
 - > **Delivery guarantee:** retry messages are at-least-once. A duplicate can occur in the rare case where a process crashes after routing to the next level but before committing the offset; the message then appears twice in the next level topic. Design handlers to be idempotent if duplicates are unacceptable.
936
+ > **Delivery guarantee:** routing within the retry chain (retry.N → retry.N+1 and retry.N → DLQ) is **exactly-once** — each routing step is wrapped in a Kafka transaction via `sendOffsetsToTransaction`, so the produce and the consumer offset commit happen atomically. A crash at any point rolls back the transaction: the message is redelivered and the routing is retried, with no duplicate in the next level. If the EOS transaction itself fails (broker unavailable), the offset is not committed and the message stays safely in the retry topic until the broker recovers.
867
937
  >
868
 - > **Note:** `retryTopics` requires `retry` to be set; an error is thrown at startup if `retry` is missing. Currently only applies to `startConsumer`; batch consumers (`startBatchConsumer`) use in-process retry regardless.
938
+ > The remaining at-least-once window is at the **main consumer → retry.1** boundary: the main consumer uses `autoCommit: true` by default, so if it crashes after routing to `retry.1` but before autoCommit fires, the message may appear twice in `retry.1`. This is the standard Kafka at-least-once trade-off for any consumer using autoCommit. Design handlers to be idempotent if this edge case is unacceptable.
939
+ >
940
+ > **Startup validation:** `retryTopics` requires `retry` to be set — an error is thrown at startup if `retry` is missing. When `autoCreateTopics: false`, all `{topic}.retry.N` topics are validated to exist at startup and a clear error lists any missing ones. With `autoCreateTopics: true` the check is skipped — topics are created automatically by the `ensureTopic` path. Supported by both `startConsumer` and `startBatchConsumer`.
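Conceptually, the exactly-once routing step described in the note above is a transactional produce plus offset commit. The sketch below uses the kafkajs transaction API, assuming kafkajs is the underlying client (its name for the offset-commit step is `sendOffsets`); it illustrates the mechanism and is not the library's actual code:

```typescript
import { Message, Producer } from 'kafkajs';

// Sketch of one routing step (retry.N → retry.N+1, or → DLQ). The producer is assumed to be
// transactional (created with a transactionalId). Names and structure are illustrative.
async function routeExactlyOnce(
  producer: Producer,
  consumerGroupId: string,
  sourceTopic: string,
  targetTopic: string,
  partition: number,
  messageOffset: string,
  message: Message,
): Promise<void> {
  const transaction = await producer.transaction();
  try {
    await transaction.send({ topic: targetTopic, messages: [message] });
    // Commit the source offset inside the same transaction, so produce + commit are atomic.
    // The committed offset is "next to read", i.e. the routed message's offset + 1.
    await transaction.sendOffsets({
      consumerGroupId,
      topics: [{ topic: sourceTopic, partitions: [{ partition, offset: (Number(messageOffset) + 1).toString() }] }],
    });
    await transaction.commit();
  } catch (err) {
    await transaction.abort(); // rollback: the message stays in the retry topic and is redelivered
    throw err;
  }
}
```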
869
941
 
870
942
  `stopConsumer(groupId)` automatically stops all companion retry level consumers started for that group.
871
943
 
@@ -883,6 +955,36 @@ await kafka.stopConsumer();
883
955
 
884
956
  `stopConsumer(groupId)` disconnects and removes only that group's consumer, leaving other groups running. Useful when you want to pause processing for a specific topic without restarting the whole client.
885
957
 
958
+ ## Graceful shutdown
959
+
960
+ `disconnect()` now drains in-flight handlers before tearing down connections — no messages are silently cut off mid-processing.
961
+
962
+ **NestJS** apps get this automatically: `onModuleDestroy` calls `disconnect()`, which waits for all running `eachMessage` / `eachBatch` callbacks to settle first. Enable NestJS shutdown hooks in your bootstrap:
963
+
964
+ ```typescript
965
+ // main.ts
966
+ const app = await NestFactory.create(AppModule);
967
+ app.enableShutdownHooks(); // lets NestJS call onModuleDestroy on SIGTERM
968
+ await app.listen(3000);
969
+ ```
970
+
971
+ **Standalone** apps call `enableGracefulShutdown()` to register SIGTERM / SIGINT handlers:
972
+
973
+ ```typescript
974
+ const kafka = new KafkaClient('my-app', 'my-group', brokers);
975
+ await kafka.connectProducer();
976
+
977
+ kafka.enableGracefulShutdown();
978
+ // or with custom signals and timeout:
979
+ kafka.enableGracefulShutdown(['SIGTERM', 'SIGINT'], 60_000);
980
+ ```
981
+
982
+ `disconnect()` accepts an optional `drainTimeoutMs` (default `30_000` ms). If handlers haven't settled within the window, a warning is logged and the client disconnects anyway:
983
+
984
+ ```typescript
985
+ await kafka.disconnect(10_000); // wait up to 10 s, then force disconnect
986
+ ```
987
+
886
988
  ## Consumer handles
887
989
 
888
990
  `startConsumer()` and `startBatchConsumer()` return a `ConsumerHandle` instead of `void`. Use it to stop a specific consumer without needing to remember the group ID:
@@ -916,12 +1018,13 @@ const kafka = new KafkaClient('my-app', 'my-group', ['localhost:9092'], {
916
1018
  });
917
1019
  ```
918
1020
 
919
- `onMessageLost` fires in two cases:
1021
+ `onMessageLost` fires in three cases:
920
1022
 
921
1023
  1. **Handler error** — handler threw after all retries and `dlq: false`
922
1024
  2. **Validation error** — schema rejected the message and `dlq: false` (attempt is `0`)
1025
+ 3. **DLQ send failure** — `dlq: true` but `producer.send()` to `{topic}.dlq` itself threw (broker down, topic missing); the error passed to `onMessageLost` is the send error, not the original handler error
923
1026
 
924
 - It does NOT fire when `dlq: true`; in that case the message is preserved in `{topic}.dlq`.
1027
 + In the normal case (`dlq: true`, DLQ send succeeds), `onMessageLost` does NOT fire; the message is preserved in `{topic}.dlq`.
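A typical setup wires `onMessageLost` to alerting, since a fired callback means the message will not be redelivered. A minimal sketch; the callback's exact signature is not shown in this excerpt, so the `(envelope, error)` parameters, `logger`, and `lostCounter` below are assumptions:

```typescript
const kafka = new KafkaClient('my-app', 'my-group', ['localhost:9092'], {
  onMessageLost(envelope, error) {
    // last line of defense: persist or alert, because the consumer will not see this message again
    logger.error(`message lost on ${envelope.topic}`, error);
    lostCounter.inc({ topic: envelope.topic }); // hypothetical metric
  },
});
```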
925
1028
 
926
1029
  ## onRebalance
927
1030
 
@@ -1064,6 +1167,38 @@ const asyncValidator: SchemaLike<{ id: string }> = {
1064
1167
  const MyTopic = topic('my.topic').schema(customValidator);
1065
1168
  ```
1066
1169
 
1170
+ ### Context-aware validators (`SchemaParseContext`)
1171
+
1172
+ `parse()` receives an optional second argument `ctx: SchemaParseContext` on both the consume and send paths. Use it for schema-registry lookups, version-aware migration, or header-driven parsing:
1173
+
1174
+ ```typescript
1175
+ import { SchemaLike, SchemaParseContext } from '@drarzter/kafka-client';
1176
+
1177
+ const versionedValidator: SchemaLike<MyPayload> = {
1178
+ parse(data: unknown, ctx?: SchemaParseContext) {
1179
+ const version = ctx?.version ?? 1;
1180
+ // version comes from the x-schema-version header (send: schemaVersion option)
1181
+ if (version >= 2) return migrateV1toV2(data);
1182
+ return validateV1(data);
1183
+ },
1184
+ };
1185
+
1186
+ // On consume: ctx = { topic: 'orders.created', headers: { ... }, version: 2 }
1187
+ // On send: ctx = { topic: 'orders.created', headers: { ... }, version: schemaVersion ?? 1 }
1188
+ ```
1189
+
1190
+ `SchemaParseContext` shape:
1191
+
1192
+ ```typescript
1193
+ interface SchemaParseContext {
1194
+ topic: string; // topic the message was produced to / consumed from
1195
+ headers: MessageHeaders; // decoded headers (envelope headers included)
1196
+ version: number; // x-schema-version header value, defaults to 1
1197
+ }
1198
+ ```
1199
+
1200
+ Existing validators (Zod, Valibot, ArkType, custom) that only use the first argument continue to work unchanged — the second argument is silently ignored.
1201
+
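For example, a plain Zod schema keeps working untouched, since its `parse(data)` never looks at the second argument (the topic name and fields below are illustrative, and the `topic()` builder is assumed to be exported from the package root like the other helpers):

```typescript
import { z } from 'zod';
import { topic } from '@drarzter/kafka-client';

const OrderSchema = z.object({ id: z.string(), total: z.number() });
// The ctx argument passed by the client is simply ignored by Zod's parse():
const OrdersTopic = topic('orders.created').schema(OrderSchema);
```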
1067
1202
  ## Health check
1068
1203
 
1069
1204
  Monitor Kafka connectivity with the built-in health indicator: