@drarzter/kafka-client 0.5.2 → 0.5.5
- package/README.md +161 -0
- package/dist/{chunk-VGUALBZH.mjs → chunk-Z3O5GTS7.mjs} +408 -33
- package/dist/chunk-Z3O5GTS7.mjs.map +1 -0
- package/dist/core.d.mts +44 -9
- package/dist/core.d.ts +44 -9
- package/dist/core.js +407 -32
- package/dist/core.js.map +1 -1
- package/dist/core.mjs +1 -1
- package/dist/{envelope-C66_h8r_.d.mts → envelope-BpyKN_WL.d.mts} +60 -7
- package/dist/{envelope-C66_h8r_.d.ts → envelope-BpyKN_WL.d.ts} +60 -7
- package/dist/index.d.mts +10 -7
- package/dist/index.d.ts +10 -7
- package/dist/index.js +407 -32
- package/dist/index.js.map +1 -1
- package/dist/index.mjs +1 -1
- package/dist/index.mjs.map +1 -1
- package/dist/otel.d.mts +1 -1
- package/dist/otel.d.ts +1 -1
- package/dist/testing.d.mts +1 -1
- package/dist/testing.d.ts +1 -1
- package/dist/testing.js +27 -14
- package/dist/testing.js.map +1 -1
- package/dist/testing.mjs +27 -14
- package/dist/testing.mjs.map +1 -1
- package/package.json +1 -1
- package/dist/chunk-VGUALBZH.mjs.map +0 -1
package/README.md
CHANGED
@@ -6,10 +6,57 @@
 
 Type-safe Kafka client for Node.js. Framework-agnostic core with a first-class NestJS adapter. Built on top of [`@confluentinc/kafka-javascript`](https://github.com/confluentinc/confluent-kafka-javascript) (librdkafka).
 
+## Table of contents
+
+- [What is this?](#what-is-this)
+- [Why?](#why)
+- [Installation](#installation)
+- [Standalone usage](#standalone-usage-no-nestjs)
+- [Quick start (NestJS)](#quick-start-nestjs)
+- [Usage](#usage)
+- [1. Define your topic map](#1-define-your-topic-map)
+- [2. Register the module](#2-register-the-module)
+- [3. Inject and use](#3-inject-and-use)
+- [Consuming messages](#consuming-messages)
+- [Declarative: @SubscribeTo()](#declarative-subscribeto)
+- [Imperative: startConsumer()](#imperative-startconsumer)
+- [Multiple consumer groups](#multiple-consumer-groups)
+- [Partition key](#partition-key)
+- [Message headers](#message-headers)
+- [Batch sending](#batch-sending)
+- [Batch consuming](#batch-consuming)
+- [Transactions](#transactions)
+- [Consumer interceptors](#consumer-interceptors)
+- [Instrumentation](#instrumentation)
+- [Options reference](#options-reference)
+- [Error classes](#error-classes)
+- [Retry topic chain](#retry-topic-chain)
+- [stopConsumer](#stopconsumer)
+- [Consumer handles](#consumer-handles)
+- [onMessageLost](#onmessagelost)
+- [onRebalance](#onrebalance)
+- [Consumer lag](#consumer-lag)
+- [Handler timeout warning](#handler-timeout-warning)
+- [Schema validation](#schema-validation)
+- [Health check](#health-check)
+- [Testing](#testing)
+- [Project structure](#project-structure)
+
 ## What is this?
 
 An opinionated, type-safe abstraction over `@confluentinc/kafka-javascript` (librdkafka). Works standalone (Express, Fastify, raw Node) or as a NestJS DynamicModule. Not a full-featured framework — just a clean, typed layer for producing and consuming Kafka messages.
 
+**This library exists so you don't have to think about:**
+
+- rebalance edge cases
+- retry loops and backoff scheduling
+- dead letter queue wiring
+- transaction coordinator warmup
+- graceful shutdown and offset commit pitfalls
+- silent message loss
+
+Safe by default. Configurable when you need it. Escape hatches for when you know what you're doing.
+
 ## Why?
 
 - **Typed topics** — you define a map of topic -> message shape, and the compiler won't let you send the wrong data to the wrong topic
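The typed-topics guarantee above can be sketched in a few lines of plain TypeScript. This is an illustration of the idea only — `MyTopics` and `sendTyped` are stand-in names, not this library's actual API:

```typescript
// A topic map ties each topic name to its message shape. A generic send
// function keyed on the map then rejects mismatched payloads at compile time.
type MyTopics = {
  'orders.created': { orderId: string; total: number };
  'users.signed-up': { userId: string };
};

function sendTyped<K extends keyof MyTopics>(topic: K, message: MyTopics[K]): string {
  // A real client would serialize and produce; here we just echo the call.
  return `${String(topic)}:${JSON.stringify(message)}`;
}

const ok = sendTyped('orders.created', { orderId: 'o-1', total: 9.99 }); // compiles
// sendTyped('orders.created', { userId: 'u-1' }); // ← would be a compile error
```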
@@ -666,7 +713,9 @@ Options for `sendMessage()` — the third argument:
 | `retry.backoffMs` | `1000` | Base delay for exponential backoff in ms |
 | `retry.maxBackoffMs` | `30000` | Maximum delay cap for exponential backoff in ms |
 | `dlq` | `false` | Send to `{topic}.dlq` after all retries are exhausted — message carries `x-dlq-*` metadata headers |
+| `retryTopics` | `false` | Route failed messages through `{topic}.retry` instead of sleeping in-process (see [Retry topic chain](#retry-topic-chain)) |
 | `interceptors` | `[]` | Array of before/after/onError hooks |
+| `handlerTimeoutMs` | — | Log a warning if the handler hasn't resolved within this window (ms) — does not cancel the handler |
 | `batch` | `false` | (decorator only) Use `startBatchConsumer` instead of `startConsumer` |
 | `subscribeRetry.retries` | `5` | Max attempts for `consumer.subscribe()` when the topic doesn't exist yet |
 | `subscribeRetry.backoffMs` | `5000` | Delay between subscribe retry attempts (ms) |
@@ -687,6 +736,7 @@ Passed to `KafkaModule.register()` or returned from `registerAsync()` factory:
 | `strictSchemas` | `true` | Validate string topic keys against schemas registered via TopicDescriptor |
 | `instrumentation` | `[]` | Client-wide instrumentation hooks (e.g. OTel). Applied to both send and consume paths |
 | `onMessageLost` | — | Called when a message is silently dropped without DLQ — use to alert, log to external systems, or trigger fallback logic |
+| `onRebalance` | — | Called on every partition assign/revoke event across all consumers created by this client |
 
 **Module-scoped** (default) — import `KafkaModule` in each module that needs it:
 
@@ -774,6 +824,64 @@ const interceptor: ConsumerInterceptor<MyTopics> = {
 };
 ```
 
+## Retry topic chain
+
+By default, retry is handled in-process: the consumer sleeps between attempts while holding the partition. With `retryTopics: true`, failed messages are routed to a `<topic>.retry` Kafka topic instead. A companion consumer auto-starts on `<topic>.retry` (group `<groupId>-retry`), waits until the scheduled retry time, then calls the same handler.
+
+Benefits over in-process retry:
+
+- **Durable** — retry messages survive a consumer restart
+- **Non-blocking** — the original consumer is free immediately; the retry consumer pauses only the specific partition being delayed, so other partitions continue processing
+
+```typescript
+await kafka.startConsumer(['orders.created'], handler, {
+  retry: { maxRetries: 3, backoffMs: 1000, maxBackoffMs: 30_000 },
+  dlq: true,
+  retryTopics: true, // ← opt in
+});
+```
+
+Message flow with `maxRetries: 2`:
+
+```text
+orders.created → handler fails → orders.created.retry (attempt 1, delay ~1 s)
+               → handler fails → orders.created.retry (attempt 2, delay ~2 s)
+               → handler fails → orders.created.dlq
+```
+
+Retry-topic messages carry scheduling headers (`x-retry-attempt`, `x-retry-after`, `x-retry-original-topic`, `x-retry-max-retries`) that the companion consumer reads automatically — no manual configuration needed.
+
+> **Note:** `retryTopics` requires `retry` to be set — an error is thrown at startup if `retry` is missing. It currently applies only to `startConsumer`; batch consumers (`startBatchConsumer`) use in-process retry regardless.
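The ~1 s / ~2 s delays in the flow above match the usual exponential-backoff schedule built from `backoffMs` and `maxBackoffMs`. A sketch of that schedule (an assumption about the formula, not necessarily the library's exact code):

```typescript
// delay(n) = min(backoffMs * 2^(n - 1), maxBackoffMs) for retry attempt n.
function retryDelayMs(attempt: number, backoffMs = 1000, maxBackoffMs = 30_000): number {
  return Math.min(backoffMs * 2 ** (attempt - 1), maxBackoffMs);
}

retryDelayMs(1); // 1000 — attempt 1, ~1 s
retryDelayMs(2); // 2000 — attempt 2, ~2 s
retryDelayMs(10); // 30000 — capped at maxBackoffMs
```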
+
+## stopConsumer
+
+Stop all consumers or a specific group:
+
+```typescript
+// Stop a specific consumer group
+await kafka.stopConsumer('my-group');
+
+// Stop all consumers
+await kafka.stopConsumer();
+```
+
+`stopConsumer(groupId)` disconnects and removes only that group's consumer, leaving other groups running. Useful when you want to pause processing for a specific topic without restarting the whole client.
+
+## Consumer handles
+
+`startConsumer()` and `startBatchConsumer()` return a `ConsumerHandle` instead of `void`. Use it to stop a specific consumer without needing to remember the group ID:
+
+```typescript
+const handle = await kafka.startConsumer(['orders'], handler);
+
+console.log(handle.groupId); // e.g. "my-group"
+
+// Later — stop only this consumer; the producer stays connected
+await handle.stop();
+```
+
+`handle.stop()` is equivalent to `kafka.stopConsumer(handle.groupId)`. Useful in lifecycle methods or when you need to conditionally stop one consumer while others keep running.
+
 ## onMessageLost
 
 By default, if a consumer handler throws and `dlq` is not enabled, the message is logged and dropped. Use `onMessageLost` to catch these silent losses:
@@ -799,6 +907,59 @@ const kafka = new KafkaClient('my-app', 'my-group', ['localhost:9092'], {
 
 It does NOT fire when `dlq: true` — in that case the message is preserved in `{topic}.dlq`.
 
+## onRebalance
+
+React to partition rebalance events without patching the consumer. Useful for flushing in-flight state before partitions are revoked, or for logging/metrics:
+
+```typescript
+const kafka = new KafkaClient('my-app', 'my-group', ['localhost:9092'], {
+  onRebalance: (type, partitions) => {
+    // type — 'assign' | 'revoke'
+    // partitions — Array<{ topic: string; partition: number }>
+    console.log(`Rebalance ${type}:`, partitions);
+  },
+});
+```
+
+- `'assign'` fires when this instance receives new partitions (e.g. on startup, or when another consumer leaves the group).
+- `'revoke'` fires when partitions are taken away (e.g. when another consumer joins).
+
+The callback is applied to **every** consumer created by this client. If you need per-consumer rebalance handling, use separate `KafkaClient` instances.
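As a concrete use of the `'revoke'` event, an application can flush per-partition buffers before ownership moves to another instance. A sketch of that pattern — the `buffers` map and flush step are illustrative application code, not library API:

```typescript
// Locally buffered, not-yet-persisted state keyed by "topic:partition".
const buffers = new Map<string, unknown[]>();

// Intended to be called from the onRebalance callback shown above.
function flushOnRevoke(
  type: 'assign' | 'revoke',
  partitions: Array<{ topic: string; partition: number }>,
): number {
  if (type !== 'revoke') return 0;
  let flushed = 0;
  for (const p of partitions) {
    const key = `${p.topic}:${p.partition}`;
    flushed += (buffers.get(key) ?? []).length; // a real app would persist these
    buffers.delete(key);
  }
  return flushed;
}
```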
+
+## Consumer lag
+
+Query consumer group lag per partition via the admin API — no external tooling needed:
+
+```typescript
+const lag = await kafka.getConsumerLag();
+// [{ topic: 'orders', partition: 0, lag: 12 }, ...]
+
+// Or for a different group:
+const lag2 = await kafka.getConsumerLag('another-group');
+```
+
+Lag is computed as `brokerHighWatermark − lastCommittedOffset`. A partition with a committed offset of `-1` (nothing ever committed) reports full lag equal to the high watermark.
+
+Returns an empty array when the group has no committed offsets at all.
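The lag rule described above reduces to a one-line formula per partition. An illustrative reimplementation, not the library's code:

```typescript
// lag = highWatermark - committedOffset; a committed offset of -1 means
// nothing was ever committed, so the full high watermark counts as lag.
function partitionLag(highWatermark: number, committedOffset: number): number {
  return committedOffset === -1 ? highWatermark : highWatermark - committedOffset;
}

partitionLag(100, 88); // 12 — matches the example above
partitionLag(100, -1); // 100 — full lag
```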
+
+## Handler timeout warning
+
+Catch stuck handlers before they silently starve a partition. Set `handlerTimeoutMs` on `startConsumer` or `startBatchConsumer`:
+
+```typescript
+await kafka.startConsumer(['orders'], handler, {
+  handlerTimeoutMs: 5_000, // warn if the handler hasn't resolved after 5 s
+});
+```
+
+If the handler hasn't resolved within the window, a `warn` is logged:
+
+```text
+[KafkaClient:my-app] Handler for topic "orders" has not resolved after 5000ms — possible stuck handler
+```
+
+The handler is **not** cancelled — the warning is diagnostic only. Combine with `retry` to automatically give up after a fixed number of slow attempts.
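A non-cancelling timeout warning like this is typically a timer raced against the handler promise. A self-contained sketch of that pattern — an assumption about the mechanism, not the library's actual internals:

```typescript
// Start a timer before invoking the handler; if the handler resolves first,
// the timer is cleared and no warning fires. The handler is never aborted.
async function runWithWarning<T>(
  handler: () => Promise<T>,
  timeoutMs: number,
  warn: (msg: string) => void,
): Promise<T> {
  const timer = setTimeout(
    () => warn(`Handler has not resolved after ${timeoutMs}ms — possible stuck handler`),
    timeoutMs,
  );
  try {
    return await handler(); // still runs to completion even after the warning
  } finally {
    clearTimeout(timer);
  }
}
```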
|
|
802
963
|
## Schema validation
|
|
803
964
|
|
|
804
965
|
Add runtime message validation using any library with a `.parse()` method — Zod, Valibot, ArkType, or a custom validator. No extra dependency required.
|
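Any object exposing a throwing `.parse()` satisfies that contract — no schema library required. A minimal hand-written validator; the `orders.created` payload shape here is illustrative:

```typescript
// Zod-style duck typing: the client only needs `.parse(value)` to either
// return the validated value or throw.
const orderSchema = {
  parse(value: unknown): { orderId: string; total: number } {
    const v = (value ?? {}) as { orderId?: unknown; total?: unknown };
    if (typeof v.orderId !== 'string' || typeof v.total !== 'number') {
      throw new Error('Invalid orders.created payload');
    }
    return { orderId: v.orderId, total: v.total };
  },
};

orderSchema.parse({ orderId: 'o-1', total: 9.99 }); // ok — returns the typed value
// orderSchema.parse({ total: 'oops' });            // throws
```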