@drarzter/kafka-client 0.5.4 → 0.5.6
- package/README.md +74 -0
- package/dist/chunk-6B72MJPU.mjs +1160 -0
- package/dist/chunk-6B72MJPU.mjs.map +1 -0
- package/dist/core.d.mts +29 -38
- package/dist/core.d.ts +29 -38
- package/dist/core.js +649 -436
- package/dist/core.js.map +1 -1
- package/dist/core.mjs +1 -1
- package/dist/{envelope-BR8d1m8c.d.mts → envelope-LeO5e3ob.d.mts} +45 -6
- package/dist/{envelope-BR8d1m8c.d.ts → envelope-LeO5e3ob.d.ts} +45 -6
- package/dist/index.d.mts +17 -13
- package/dist/index.d.ts +17 -13
- package/dist/index.js +688 -485
- package/dist/index.js.map +1 -1
- package/dist/index.mjs +40 -50
- package/dist/index.mjs.map +1 -1
- package/dist/otel.d.mts +1 -1
- package/dist/otel.d.ts +1 -1
- package/dist/otel.js.map +1 -1
- package/dist/otel.mjs.map +1 -1
- package/dist/testing.d.mts +1 -1
- package/dist/testing.d.ts +1 -1
- package/dist/testing.js +29 -16
- package/dist/testing.js.map +1 -1
- package/dist/testing.mjs +29 -16
- package/dist/testing.mjs.map +1 -1
- package/package.json +1 -1
- package/dist/chunk-3QXTW66R.mjs +0 -947
- package/dist/chunk-3QXTW66R.mjs.map +0 -1
package/README.md
CHANGED
@@ -32,7 +32,11 @@ Type-safe Kafka client for Node.js. Framework-agnostic core with a first-class N
 - [Error classes](#error-classes)
 - [Retry topic chain](#retry-topic-chain)
 - [stopConsumer](#stopconsumer)
+- [Consumer handles](#consumer-handles)
 - [onMessageLost](#onmessagelost)
+- [onRebalance](#onrebalance)
+- [Consumer lag](#consumer-lag)
+- [Handler timeout warning](#handler-timeout-warning)
 - [Schema validation](#schema-validation)
 - [Health check](#health-check)
 - [Testing](#testing)
@@ -711,6 +715,7 @@ Options for `sendMessage()` — the third argument:
 | `dlq` | `false` | Send to `{topic}.dlq` after all retries exhausted — message carries `x-dlq-*` metadata headers |
 | `retryTopics` | `false` | Route failed messages through `{topic}.retry` instead of sleeping in-process (see [Retry topic chain](#retry-topic-chain)) |
 | `interceptors` | `[]` | Array of before/after/onError hooks |
+| `handlerTimeoutMs` | — | Log a warning if the handler hasn't resolved within this window (ms) — does not cancel the handler |
 | `batch` | `false` | (decorator only) Use `startBatchConsumer` instead of `startConsumer` |
 | `subscribeRetry.retries` | `5` | Max attempts for `consumer.subscribe()` when topic doesn't exist yet |
 | `subscribeRetry.backoffMs` | `5000` | Delay between subscribe retry attempts (ms) |
@@ -731,6 +736,7 @@ Passed to `KafkaModule.register()` or returned from `registerAsync()` factory:
 | `strictSchemas` | `true` | Validate string topic keys against schemas registered via TopicDescriptor |
 | `instrumentation` | `[]` | Client-wide instrumentation hooks (e.g. OTel). Applied to both send and consume paths |
 | `onMessageLost` | — | Called when a message is silently dropped without DLQ — use to alert, log to external systems, or trigger fallback logic |
+| `onRebalance` | — | Called on every partition assign/revoke event across all consumers created by this client |
 
 **Module-scoped** (default) — import `KafkaModule` in each module that needs it:
 
@@ -861,6 +867,21 @@ await kafka.stopConsumer();
 
 `stopConsumer(groupId)` disconnects and removes only that group's consumer, leaving other groups running. Useful when you want to pause processing for a specific topic without restarting the whole client.
 
+## Consumer handles
+
+`startConsumer()` and `startBatchConsumer()` return a `ConsumerHandle` instead of `void`. Use it to stop a specific consumer without needing to remember the group ID:
+
+```typescript
+const handle = await kafka.startConsumer(['orders'], handler);
+
+console.log(handle.groupId); // e.g. "my-group"
+
+// Later — stop only this consumer, producer stays connected
+await handle.stop();
+```
+
+`handle.stop()` is equivalent to `kafka.stopConsumer(handle.groupId)`. Useful in lifecycle methods or when you need to conditionally stop one consumer while others keep running.
+
 ## onMessageLost
 
 By default, if a consumer handler throws and `dlq` is not enabled, the message is logged and dropped. Use `onMessageLost` to catch these silent losses:
@@ -886,6 +907,59 @@ const kafka = new KafkaClient('my-app', 'my-group', ['localhost:9092'], {
 
 It does NOT fire when `dlq: true` — in that case the message is preserved in `{topic}.dlq`.
 
+## onRebalance
+
+React to partition rebalance events without patching the consumer. Useful for flushing in-flight state before partitions are revoked, or for logging/metrics:
+
+```typescript
+const kafka = new KafkaClient('my-app', 'my-group', ['localhost:9092'], {
+  onRebalance: (type, partitions) => {
+    // type — 'assign' | 'revoke'
+    // partitions — Array<{ topic: string; partition: number }>
+    console.log(`Rebalance ${type}:`, partitions);
+  },
+});
+```
+
+- `'assign'` fires when this instance receives new partitions (e.g. on startup or when another consumer leaves the group).
+- `'revoke'` fires when partitions are taken away (e.g. another consumer joins).
+
+The callback is applied to **every** consumer created by this client. If you need per-consumer rebalance handling, use separate `KafkaClient` instances.
+
+## Consumer lag
+
+Query consumer group lag per partition via the admin API — no external tooling needed:
+
+```typescript
+const lag = await kafka.getConsumerLag();
+// [{ topic: 'orders', partition: 0, lag: 12 }, ...]
+
+// Or for a different group:
+const lag2 = await kafka.getConsumerLag('another-group');
+```
+
+Lag is computed as `brokerHighWatermark − lastCommittedOffset`. A partition with a committed offset of `-1` (nothing ever committed) reports full lag equal to the high watermark.
+
+Returns an empty array when the group has no committed offsets at all.
+
+## Handler timeout warning
+
+Catch stuck handlers before they silently starve a partition. Set `handlerTimeoutMs` on `startConsumer` or `startBatchConsumer`:
+
+```typescript
+await kafka.startConsumer(['orders'], handler, {
+  handlerTimeoutMs: 5_000, // warn if handler hasn't resolved after 5 s
+});
+```
+
+If the handler hasn't resolved within the window, a `warn` is logged:
+
+```text
+[KafkaClient:my-app] Handler for topic "orders" has not resolved after 5000ms — possible stuck handler
+```
+
+The handler is **not** cancelled — the warning is diagnostic only. Combine with `retry` to automatically give up after a fixed number of slow attempts.
+
 ## Schema validation
 
 Add runtime message validation using any library with a `.parse()` method — Zod, Valibot, ArkType, or a custom validator. No extra dependency required.
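The lag rule documented in the Consumer lag hunk above (`brokerHighWatermark − lastCommittedOffset`, with `-1` meaning nothing was ever committed) can be sketched as a standalone function. This is an illustration of the described formula, not the package's internal code; `computeLag` and its input shape are invented for the sketch:

```typescript
interface PartitionOffsets {
  topic: string;
  partition: number;
  highWatermark: number;   // broker-side latest offset
  committedOffset: number; // -1 when the group has never committed on this partition
}

interface PartitionLag {
  topic: string;
  partition: number;
  lag: number;
}

// Sketch of the documented rule: lag = highWatermark - committedOffset,
// and a committed offset of -1 reports full lag equal to the high watermark.
function computeLag(offsets: PartitionOffsets[]): PartitionLag[] {
  return offsets.map(({ topic, partition, highWatermark, committedOffset }) => ({
    topic,
    partition,
    lag: committedOffset === -1 ? highWatermark : highWatermark - committedOffset,
  }));
}
```

Note the `-1` branch is why a freshly created group shows lag equal to the partition's entire retained backlog.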
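The `handlerTimeoutMs` behaviour added in this release (warn when a handler is slow, never cancel it) follows a common watchdog pattern, sketched below under assumptions: `runWithTimeoutWarning` and the `warn` callback are illustrative names, not the package's internals:

```typescript
// Watchdog sketch: start a timer alongside the handler; if the timer fires
// first, emit a warning. The handler itself is never cancelled — the timer
// is purely diagnostic, matching the README's description.
async function runWithTimeoutWarning<T>(
  handler: () => Promise<T>,
  timeoutMs: number,
  warn: (msg: string) => void,
): Promise<T> {
  const timer = setTimeout(() => {
    warn(`Handler has not resolved after ${timeoutMs}ms — possible stuck handler`);
  }, timeoutMs);
  try {
    return await handler(); // keeps running even after the warning fires
  } finally {
    clearTimeout(timer);    // fast handlers never trigger the warning
  }
}
```

Because the timer is cleared in `finally`, a handler that resolves (or rejects) inside the window produces no log line at all.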