@drarzter/kafka-client 0.7.0 → 0.7.2

package/README.md CHANGED
@@ -20,12 +20,15 @@ Type-safe Kafka client for Node.js. Framework-agnostic core with a first-class N
  - [Consuming messages](#consuming-messages)
  - [Declarative: @SubscribeTo()](#declarative-subscribeto)
  - [Imperative: startConsumer()](#imperative-startconsumer)
+ - [Regex topic subscription](#regex-topic-subscription)
  - [Iterator: consume()](#iterator-consume)
  - [Multiple consumer groups](#multiple-consumer-groups)
  - [Partition key](#partition-key)
  - [Message headers](#message-headers)
  - [Batch sending](#batch-sending)
  - [Batch consuming](#batch-consuming)
+ - [Tombstone messages](#tombstone-messages)
+ - [Compression](#compression)
  - [Transactions](#transactions)
  - [Consumer interceptors](#consumer-interceptors)
  - [Instrumentation](#instrumentation)
@@ -36,13 +39,17 @@ Type-safe Kafka client for Node.js. Framework-agnostic core with a first-class N
  - [stopConsumer](#stopconsumer)
  - [Pause and resume](#pause-and-resume)
  - [Circuit breaker](#circuit-breaker)
+ - [getCircuitState](#getcircuitstate)
  - [Reset consumer offsets](#reset-consumer-offsets)
  - [Seek to offset](#seek-to-offset)
+ - [Seek to timestamp](#seek-to-timestamp)
  - [Message TTL](#message-ttl)
  - [DLQ replay](#dlq-replay)
+ - [Admin API](#admin-api)
  - [Graceful shutdown](#graceful-shutdown)
  - [Consumer handles](#consumer-handles)
  - [onMessageLost](#onmessagelost)
+ - [onTtlExpired](#onttlexpired)
  - [onRebalance](#onrebalance)
  - [Consumer lag](#consumer-lag)
  - [Handler timeout warning](#handler-timeout-warning)
@@ -94,6 +101,11 @@ Safe by default. Configurable when you need it. Escape hatches for when you know
  - **Message TTL** — `messageTtlMs` drops or DLQs messages older than a configurable threshold, preventing stale events from poisoning downstream systems after a lag spike
  - **Circuit breaker** — `circuitBreaker` option applies a sliding-window breaker per topic-partition; pauses delivery on repeated DLQ failures and resumes after a configurable recovery window
  - **Seek to offset** — `seekToOffset(groupId, assignments)` seeks individual partitions to explicit offsets for fine-grained replay
+ - **Tombstone messages** — `sendTombstone(topic, key)` sends a null-value record to compact a key out of a log-compacted topic; all instrumentation hooks still fire
+ - **Regex topic subscription** — `startConsumer([/^orders\..+/], handler)` subscribes using a pattern; the broker routes matching topics to the consumer dynamically
+ - **Compression** — per-send `compression` option (`gzip`, `snappy`, `lz4`, `zstd`) in `SendOptions` and `BatchSendOptions`
+ - **Partition assignment strategy** — `partitionAssigner` in `ConsumerOptions` chooses between `cooperative-sticky` (default), `roundrobin`, and `range`
+ - **Admin API** — `listConsumerGroups()`, `describeTopics()`, `deleteRecords()` for group inspection, partition metadata, and message deletion
 
  See the [Roadmap](./ROADMAP.md) for upcoming features and version history.
 
@@ -436,6 +448,27 @@ export class OrdersService implements OnModuleInit {
  }
  ```
 
+ ### Regex topic subscription
+
+ Subscribe to multiple topics matching a pattern — the broker dynamically routes any topic whose name matches the regex to this consumer:
+
+ ```typescript
+ // Subscribe to all topics starting with "orders."
+ await kafka.startConsumer([/^orders\..+/], handler);
+
+ // Mix regexes and literal strings
+ await kafka.startConsumer([/^payments\..+/, 'audit.global'], handler);
+ ```
+
+ Works with `startBatchConsumer` and `@SubscribeTo` too:
+
+ ```typescript
+ @SubscribeTo(/^events\..+/)
+ async handleEvent(envelope: EventEnvelope<any>) { ... }
+ ```
+
+ > **Limitation:** `retryTopics: true` is incompatible with regex subscriptions — the library cannot derive static retry topic names from a pattern. An error is thrown at startup if both are combined.
+
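+ For example, this combination is rejected at startup (a sketch; the exact error text is not documented here):
+
+ ```typescript
+ // Throws: retry topic names cannot be derived from a regex pattern
+ await kafka.startConsumer([/^orders\..+/], handler, { retryTopics: true });
+ ```
+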
  ### Iterator: consume()
 
  Stream messages from a single topic as an `AsyncIterableIterator` — useful for scripts, one-off tasks, or any context where you prefer `for await` over a callback:
@@ -465,6 +498,20 @@ for await (const envelope of kafka.consume('orders', {
 
  `break`, `return`, or any early exit from the loop calls the iterator's `return()` method, which closes the internal queue and calls `handle.stop()` on the background consumer.
 
+ **Backpressure** — use `queueHighWaterMark` to prevent unbounded queue growth when processing is slower than the message rate:
+
+ ```typescript
+ for await (const envelope of kafka.consume('orders', {
+   queueHighWaterMark: 100, // pause partition when queue reaches 100 messages
+ })) {
+   await slowProcessing(envelope.payload); // resumes when queue drains below 50
+ }
+ ```
+
+ The partition is paused when the internal queue reaches `queueHighWaterMark` and automatically resumed when it drains below 50%. Without this option the queue is unbounded.
+
+ **Error propagation** — if the consumer fails to start (e.g. broker unreachable), the error surfaces on the next `next()` / `for await` iteration rather than being silently swallowed.
+
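+ A minimal sketch of observing such a failure (assuming the options argument is optional; `handleOrder` and `logger` are placeholders):
+
+ ```typescript
+ try {
+   for await (const envelope of kafka.consume('orders')) {
+     await handleOrder(envelope.payload);
+   }
+ } catch (err) {
+   // a consumer-start failure (e.g. broker unreachable) surfaces here
+   logger.error('consume loop failed', err);
+ }
+ ```
+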
  ## Multiple consumer groups
 
  ### Per-consumer groupId
@@ -665,6 +712,36 @@ await kafka.startBatchConsumer(
  | `resolveOffset(offset)` | Mark offset as processed (required before `commitOffsetsIfNecessary`) |
  | `commitOffsetsIfNecessary()` | Commit resolved offsets; respects `autoCommit` setting |
 
+ ## Tombstone messages
+
+ Send a null-value Kafka record to compact a specific key out of a log-compacted topic:
+
+ ```typescript
+ // Delete the record with key "user-123" from the log-compacted "users" topic
+ await kafka.sendTombstone('users', 'user-123');
+
+ // With custom headers
+ await kafka.sendTombstone('users', 'user-123', { 'x-reason': 'gdpr-deletion' });
+ ```
+
+ `sendTombstone` skips envelope headers, schema validation, and the Lamport clock — the record value is literally `null`, as required by Kafka's log compaction protocol. Both `beforeSend` and `afterSend` instrumentation hooks still fire so tracing works correctly.
+
+ ## Compression
+
+ Reduce network bandwidth with per-send compression. Supported codecs: `'gzip'`, `'snappy'`, `'lz4'`, `'zstd'`:
+
+ ```typescript
+ // Single message
+ await kafka.sendMessage('events', payload, { compression: 'gzip' });
+
+ // Batch
+ await kafka.sendBatch('events', messages, { compression: 'snappy' });
+ ```
+
+ Compression is applied at the Kafka message-set level — the broker decompresses transparently on the consumer side. `'snappy'` and `'lz4'` offer the best throughput/CPU trade-off for most workloads; `'gzip'` gives the highest compression ratio; `'zstd'` balances both.
+
  ## Transactions
 
  Send multiple messages atomically with exactly-once semantics:
@@ -845,8 +922,9 @@ Options for `sendMessage()` — the third argument:
  | `correlationId` | auto | Override the auto-propagated correlation ID (default: inherited from ALS context or new UUID) |
  | `schemaVersion` | `1` | Schema version for the payload |
  | `eventId` | auto | Override the auto-generated event ID (UUID v4) |
+ | `compression` | — | Compression codec for the message set: `'gzip'`, `'snappy'`, `'lz4'`, `'zstd'`; omit to send uncompressed |
 
- `sendBatch()` accepts the same options per message inside the array items.
+ `sendBatch()` accepts `compression` as a top-level option (not per-message); all other options are per-message inside the array items.
 
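+ For example, a sketch of that split. The per-message item shape shown here (`payload` plus option fields) is illustrative; see [Batch sending](#batch-sending) for the exact shape:
+
+ ```typescript
+ await kafka.sendBatch('events', [
+   { payload: created, key: 'order-1' },                   // per-message options
+   { payload: updated, key: 'order-2', schemaVersion: 2 },
+ ], { compression: 'lz4' }); // top-level: applies to the whole batch
+ ```
+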
  ### Consumer options
 
@@ -871,7 +949,10 @@ Options for `sendMessage()` — the third argument:
  | `circuitBreaker.recoveryMs` | `30_000` | Milliseconds to wait in OPEN state before entering HALF_OPEN |
  | `circuitBreaker.windowSize` | `threshold × 2, min 10` | Sliding window size in messages |
  | `circuitBreaker.halfOpenSuccesses` | `1` | Consecutive successes in HALF_OPEN required to close the circuit |
+ | `queueHighWaterMark` | unbounded | Max messages buffered in the `consume()` iterator queue before the partition is paused; resumes at 50% drain. Only applies to `consume()` |
  | `batch` | `false` | (decorator only) Use `startBatchConsumer` instead of `startConsumer` |
+ | `partitionAssigner` | `'cooperative-sticky'` | Partition assignment strategy: `'cooperative-sticky'` (minimal movement on rebalance, best for horizontal scaling), `'roundrobin'` (even distribution), `'range'` (contiguous partition ranges) |
+ | `onTtlExpired` | — | Per-consumer override of the client-level `onTtlExpired` callback; takes precedence when set. Receives `TtlExpiredContext` — same shape as the client-level hook |
  | `subscribeRetry.retries` | `5` | Max attempts for `consumer.subscribe()` when topic doesn't exist yet |
  | `subscribeRetry.backoffMs` | `5000` | Delay between subscribe retry attempts (ms) |
 
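+ For example, opting one consumer into round-robin assignment (a minimal sketch using the `partitionAssigner` option above):
+
+ ```typescript
+ await kafka.startConsumer(['orders'], handler, {
+   partitionAssigner: 'roundrobin', // default: 'cooperative-sticky'
+ });
+ ```
+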
@@ -892,6 +973,7 @@ Passed to `KafkaModule.register()` or returned from `registerAsync()` factory:
  | `instrumentation` | `[]` | Client-wide instrumentation hooks (e.g. OTel). Applied to both send and consume paths |
  | `transactionalId` | `${clientId}-tx` | Transactional producer ID for `transaction()` calls. Must be unique per producer instance across the cluster — two instances sharing the same ID will be fenced by Kafka. The client logs a warning when the same ID is registered twice within one process |
  | `onMessageLost` | — | Called when a message is silently dropped without DLQ — use to alert, log to external systems, or trigger fallback logic |
+ | `onTtlExpired` | — | Called when a message is dropped due to TTL expiration (`messageTtlMs`) and `dlq` is not enabled; receives `{ topic, ageMs, messageTtlMs, headers }` |
  | `onRebalance` | — | Called on every partition assign/revoke event across all consumers created by this client |
 
  **Module-scoped** (default) — import `KafkaModule` in each module that needs it:
@@ -1200,6 +1282,41 @@ Options:
  | `windowSize` | `threshold × 2, min 10` | Sliding window size in messages |
  | `halfOpenSuccesses` | `1` | Consecutive successes in HALF_OPEN required to close the circuit |
 
+ ### getCircuitState
+
+ Inspect the current circuit breaker state for a partition — useful for health endpoints and dashboards:
+
+ ```typescript
+ const state = kafka.getCircuitState('orders', 0);
+ // undefined — circuit not configured or never tripped
+ // { status: 'closed', failures: 2, windowSize: 10 }
+ // { status: 'open', failures: 5, windowSize: 10 }
+ // { status: 'half-open', failures: 0, windowSize: 0 }
+
+ // With an explicit group ID:
+ const paymentsState = kafka.getCircuitState('orders', 0, 'payments-group');
+ ```
+
+ Returns `undefined` when `circuitBreaker` is not configured for the group or the circuit has never been tripped (state is lazily initialised on the first DLQ event).
+
+ **Instrumentation hooks** — react to state transitions via `KafkaInstrumentation`:
+
+ ```typescript
+ const kafka = new KafkaClient('svc', 'group', brokers, {
+   instrumentation: [{
+     onCircuitOpen(topic, partition) {
+       metrics.increment('circuit_open', { topic, partition });
+     },
+     onCircuitHalfOpen(topic, partition) {
+       logger.log(`Circuit probing ${topic}[${partition}]`);
+     },
+     onCircuitClose(topic, partition) {
+       metrics.increment('circuit_close', { topic, partition });
+     },
+   }],
+ });
+ ```
+
  ## Reset consumer offsets
 
  Seek a consumer group's committed offsets to the beginning or end of a topic:
@@ -1239,6 +1356,29 @@ The first argument is the consumer group ID — pass `undefined` to target the d
 
  **Important:** the consumer for the specified group must be stopped before calling `seekToOffset`. An error is thrown if the group is currently running.
 
+ ## Seek to timestamp
+
+ Seek partitions to the offset nearest to a specific point in time — useful for replaying events that occurred after a known incident or deployment:
+
+ ```typescript
+ const ts = new Date('2024-06-01T12:00:00Z').getTime(); // Unix ms
+
+ await kafka.seekToTimestamp(undefined, [
+   { topic: 'orders', partition: 0, timestamp: ts },
+   { topic: 'orders', partition: 1, timestamp: ts },
+ ]);
+
+ // Multiple topics in one call
+ await kafka.seekToTimestamp('payments-group', [
+   { topic: 'payments', partition: 0, timestamp: ts },
+   { topic: 'refunds', partition: 0, timestamp: ts },
+ ]);
+ ```
+
+ Uses `admin.fetchTopicOffsetsByTime` under the hood. If no offset exists at the requested timestamp (e.g. the partition is empty or the timestamp is in the future), the partition falls back to `-1` (end of topic — new messages only).
+
+ **Important:** the consumer group must be stopped before seeking. Assignments for the same topic are batched into a single `admin.setOffsets` call.
+
  ## Message TTL
 
  Drop or route expired messages using `messageTtlMs` in `ConsumerOptions`:
@@ -1253,7 +1393,7 @@ await kafka.startConsumer(['orders'], handler, {
  The TTL is evaluated against the `x-timestamp` header stamped on every outgoing message by the producer. Messages whose age at consumption time exceeds `messageTtlMs` are:
 
  - **Routed to DLQ** with `x-dlq-reason: ttl-expired` when `dlq: true`
- - **Dropped** (calling `onMessageLost`) otherwise
+ - **Dropped** (calling `onTtlExpired` if configured) otherwise
 
  Typical use: prevent stale events from poisoning downstream systems after a consumer lag spike — e.g. discard order events or push notifications that are no longer actionable.
 
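+ For example, a sketch combining both options so expired messages are preserved rather than silently dropped:
+
+ ```typescript
+ await kafka.startConsumer(['orders'], handler, {
+   messageTtlMs: 60_000, // discard anything older than 60s at consumption time
+   dlq: true,            // expired messages land in the DLQ with x-dlq-reason: ttl-expired
+ });
+ ```
+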
@@ -1290,6 +1430,50 @@ const filtered = await kafka.replayDlq('orders', {
 
  `replayDlq` creates a temporary consumer group that reads the DLQ topic up to the high-watermark at the time of the call — messages published after replay starts are not included. DLQ metadata headers (`x-dlq-original-topic`, `x-dlq-error-message`, `x-dlq-error-stack`, `x-dlq-failed-at`, `x-dlq-attempt-count`) are stripped from the replayed messages; all other headers (e.g. `x-correlation-id`) are preserved.
 
+ ## Admin API
+
+ Inspect consumer groups, topic metadata, and delete records via the built-in admin client — no separate connection needed.
+
+ ### `listConsumerGroups()`
+
+ Returns all consumer groups known to the broker:
+
+ ```typescript
+ const groups = await kafka.listConsumerGroups();
+ // [{ groupId: 'orders-group', state: 'Stable' }, ...]
+ ```
+
+ `state` reflects the consumer group state reported by the broker: `'Stable'`, `'PreparingRebalance'`, `'CompletingRebalance'`, `'Dead'`, or `'Empty'`.
+
+ ### `describeTopics(topics?)`
+
+ Fetch partition metadata (leader, replicas, ISR) for one or more topics. Omit `topics` to describe all topics the client knows about:
+
+ ```typescript
+ const info = await kafka.describeTopics(['orders.created', 'payments.received']);
+ // [
+ //   {
+ //     name: 'orders.created',
+ //     partitions: [{ partition: 0, leader: 1, replicas: [1, 2], isr: [1, 2] }],
+ //   },
+ //   ...
+ // ]
+ ```
+
+ ### `deleteRecords(topic, partitions)`
+
+ Truncate a topic by deleting all records up to (but not including) a given offset:
+
+ ```typescript
+ // Delete records up to offset 500 in partition 0 and up to offset 200 in partition 1
+ await kafka.deleteRecords('orders.created', [
+   { partition: 0, offset: '500' },
+   { partition: 1, offset: '200' },
+ ]);
+ ```
+
+ Pass `offset: '-1'` to delete all records in a partition (truncate completely).
+
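+ For example, truncating partition 0 completely (a sketch building on the call above):
+
+ ```typescript
+ await kafka.deleteRecords('orders.created', [{ partition: 0, offset: '-1' }]);
+ ```
+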
  ## Graceful shutdown
 
  `disconnect()` now drains in-flight handlers before tearing down connections — no messages are silently cut off mid-processing.
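+ For example, a minimal shutdown hook (a sketch; the signal wiring is standard Node.js, not part of the library):
+
+ ```typescript
+ process.once('SIGTERM', async () => {
+   await kafka.disconnect(); // waits for in-flight handlers, then closes connections
+   process.exit(0);
+ });
+ ```
+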
@@ -1361,6 +1545,40 @@ const kafka = new KafkaClient('my-app', 'my-group', ['localhost:9092'], {
 
  In the normal case (`dlq: true`, DLQ send succeeds), `onMessageLost` does NOT fire — the message is preserved in `{topic}.dlq`.
 
+ > **Note:** TTL-expired messages do **not** trigger `onMessageLost`. Use `onTtlExpired` to observe them separately.
+
+ ## onTtlExpired
+
+ Called when a message is dropped because it exceeded `messageTtlMs` and `dlq` is not enabled. Fires instead of `onMessageLost` for expired messages:
+
+ ```typescript
+ import { KafkaClient, TtlExpiredContext } from '@drarzter/kafka-client/core';
+
+ const kafka = new KafkaClient('my-app', 'my-group', ['localhost:9092'], {
+   onTtlExpired: (ctx: TtlExpiredContext) => {
+     // ctx.topic — topic the message came from
+     // ctx.ageMs — actual age of the message at drop time
+     // ctx.messageTtlMs — the configured threshold
+     // ctx.headers — original message headers
+     logger.warn(`Stale message on ${ctx.topic}: ${ctx.ageMs}ms old (limit ${ctx.messageTtlMs}ms)`);
+   },
+ });
+ ```
+
+ When `dlq: true`, expired messages are routed to DLQ instead and `onTtlExpired` is **not** called.
+
+ `onTtlExpired` can also be set per-consumer via `ConsumerOptions.onTtlExpired`. The consumer-level value takes precedence over the client-level one, so you can use different handlers for different topics:
+
+ ```typescript
+ await kafka.startConsumer(['orders'], handler, {
+   messageTtlMs: 30_000,
+   onTtlExpired: (ctx) => {
+     // overrides the client-level onTtlExpired for this consumer only
+     ordersMetrics.increment('ttl_expired', { topic: ctx.topic });
+   },
+ });
+ ```
+
  ## onRebalance
 
  React to partition rebalance events without patching the consumer. Useful for flushing in-flight state before partitions are revoked, or for logging/metrics: