@drarzter/kafka-client 0.7.1 → 0.7.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -20,12 +20,15 @@ Type-safe Kafka client for Node.js. Framework-agnostic core with a first-class N
  - [Consuming messages](#consuming-messages)
  - [Declarative: @SubscribeTo()](#declarative-subscribeto)
  - [Imperative: startConsumer()](#imperative-startconsumer)
+ - [Regex topic subscription](#regex-topic-subscription)
  - [Iterator: consume()](#iterator-consume)
  - [Multiple consumer groups](#multiple-consumer-groups)
  - [Partition key](#partition-key)
  - [Message headers](#message-headers)
  - [Batch sending](#batch-sending)
  - [Batch consuming](#batch-consuming)
+ - [Tombstone messages](#tombstone-messages)
+ - [Compression](#compression)
  - [Transactions](#transactions)
  - [Consumer interceptors](#consumer-interceptors)
  - [Instrumentation](#instrumentation)
@@ -42,6 +45,7 @@ Type-safe Kafka client for Node.js. Framework-agnostic core with a first-class N
  - [Seek to timestamp](#seek-to-timestamp)
  - [Message TTL](#message-ttl)
  - [DLQ replay](#dlq-replay)
+ - [Admin API](#admin-api)
  - [Graceful shutdown](#graceful-shutdown)
  - [Consumer handles](#consumer-handles)
  - [onMessageLost](#onmessagelost)
@@ -97,6 +101,11 @@ Safe by default. Configurable when you need it. Escape hatches for when you know
  - **Message TTL** — `messageTtlMs` drops or DLQs messages older than a configurable threshold, preventing stale events from poisoning downstream systems after a lag spike
  - **Circuit breaker** — `circuitBreaker` option applies a sliding-window breaker per topic-partition; pauses delivery on repeated DLQ failures and resumes after a configurable recovery window
  - **Seek to offset** — `seekToOffset(groupId, assignments)` seeks individual partitions to explicit offsets for fine-grained replay
+ - **Tombstone messages** — `sendTombstone(topic, key)` sends a null-value record to compact a key out of a log-compacted topic; all instrumentation hooks still fire
+ - **Regex topic subscription** — `startConsumer([/^orders\..+/], handler)` subscribes using a pattern; the consumer picks up matching topics dynamically
+ - **Compression** — per-send `compression` option (`gzip`, `snappy`, `lz4`, `zstd`) in `SendOptions` and `BatchSendOptions`
+ - **Partition assignment strategy** — `partitionAssigner` in `ConsumerOptions` chooses between `cooperative-sticky` (default), `roundrobin`, and `range`
+ - **Admin API** — `listConsumerGroups()`, `describeTopics()`, `deleteRecords()` for group inspection, partition metadata, and message deletion

  See the [Roadmap](./ROADMAP.md) for upcoming features and version history.

@@ -439,6 +448,27 @@ export class OrdersService implements OnModuleInit {
  }
  ```

+ ### Regex topic subscription
+
+ Subscribe to multiple topics matching a pattern — any topic whose name matches the regex is picked up by this consumer dynamically:
+
+ ```typescript
+ // Subscribe to all topics starting with "orders."
+ await kafka.startConsumer([/^orders\..+/], handler);
+
+ // Mix regexes and literal strings
+ await kafka.startConsumer([/^payments\..+/, 'audit.global'], handler);
+ ```
+
+ Works with `startBatchConsumer` and `@SubscribeTo` too:
+
+ ```typescript
+ @SubscribeTo(/^events\..+/)
+ async handleEvent(envelope: EventEnvelope<any>) { ... }
+ ```
+
+ > **Limitation:** `retryTopics: true` is incompatible with regex subscriptions — the library cannot derive static retry topic names from a pattern. An error is thrown at startup if both are combined.
+
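The mixed array of literal strings and regexes above boils down to a simple match rule. The sketch below is an illustration only, not the library's internals; `topicMatches` is a hypothetical helper:

```typescript
// Hypothetical helper illustrating how a list of string-or-regex topic
// patterns can be matched against concrete topic names.
type TopicPattern = string | RegExp;

function topicMatches(patterns: TopicPattern[], topic: string): boolean {
  return patterns.some((p) =>
    typeof p === 'string' ? p === topic : p.test(topic),
  );
}

// Mirrors the startConsumer example above
const patterns: TopicPattern[] = [/^payments\..+/, 'audit.global'];
console.log(topicMatches(patterns, 'payments.refunded')); // true
console.log(topicMatches(patterns, 'audit.global'));      // true
console.log(topicMatches(patterns, 'orders.created'));    // false
```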
  ### Iterator: consume()

  Stream messages from a single topic as an `AsyncIterableIterator` — useful for scripts, one-off tasks, or any context where you prefer `for await` over a callback:
@@ -682,6 +712,36 @@ await kafka.startBatchConsumer(
  | `resolveOffset(offset)` | Mark offset as processed (required before `commitOffsetsIfNecessary`) |
  | `commitOffsetsIfNecessary()` | Commit resolved offsets; respects `autoCommit` setting |

+ ## Tombstone messages
+
+ Send a null-value Kafka record to compact a specific key out of a log-compacted topic:
+
+ ```typescript
+ // Delete the record with key "user-123" from the log-compacted "users" topic
+ await kafka.sendTombstone('users', 'user-123');
+
+ // With custom headers
+ await kafka.sendTombstone('users', 'user-123', { 'x-reason': 'gdpr-deletion' });
+ ```
+
+ `sendTombstone` skips envelope headers, schema validation, and the Lamport clock — the record value is literally `null`, as required by Kafka's log compaction protocol. Both `beforeSend` and `afterSend` instrumentation hooks still fire so tracing works correctly.
+
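Assuming only the `sendTombstone(topic, key, headers?)` signature shown above, erasing many keys at once (e.g. a GDPR request covering several users) is a small loop. `eraseKeys` and the stub client are hypothetical, for illustration:

```typescript
// Sketch: bulk-tombstone a list of keys, assuming only the
// sendTombstone(topic, key, headers?) signature documented above.
interface TombstoneSender {
  sendTombstone(
    topic: string,
    key: string,
    headers?: Record<string, string>,
  ): Promise<void>;
}

async function eraseKeys(
  client: TombstoneSender,
  topic: string,
  keys: string[],
): Promise<number> {
  for (const key of keys) {
    await client.sendTombstone(topic, key, { 'x-reason': 'gdpr-deletion' });
  }
  return keys.length;
}

// Demo with a stub client that just records which keys were tombstoned
const sent: string[] = [];
const stub: TombstoneSender = {
  async sendTombstone(_topic, key) { sent.push(key); },
};
void eraseKeys(stub, 'users', ['user-123', 'user-456']).then((count) => {
  console.log(count, sent); // 2 [ 'user-123', 'user-456' ]
});
```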
+ ## Compression
+
+ Reduce network bandwidth with per-send compression. Supported codecs: `'gzip'`, `'snappy'`, `'lz4'`, `'zstd'`:
+
+ ```typescript
+ // Single message
+ await kafka.sendMessage('events', payload, { compression: 'gzip' });
+
+ // Batch
+ await kafka.sendBatch('events', messages, { compression: 'snappy' });
+ ```
+
+ Compression is applied at the Kafka message-set level — consumers decompress transparently. `'snappy'` and `'lz4'` offer the best throughput/CPU trade-off for most workloads; `'gzip'` gives the highest compression ratio; `'zstd'` balances both.
+
  ## Transactions

  Send multiple messages atomically with exactly-once semantics:
@@ -862,8 +922,9 @@ Options for `sendMessage()` — the third argument:
  | `correlationId` | auto | Override the auto-propagated correlation ID (default: inherited from ALS context or new UUID) |
  | `schemaVersion` | `1` | Schema version for the payload |
  | `eventId` | auto | Override the auto-generated event ID (UUID v4) |
+ | `compression` | — | Compression codec for the message set: `'gzip'`, `'snappy'`, `'lz4'`, `'zstd'`; omit to send uncompressed |

- `sendBatch()` accepts the same options per message inside the array items.
+ `sendBatch()` accepts `compression` as a top-level option (not per-message); all other options are per-message inside the array items.

  ### Consumer options

@@ -890,6 +951,8 @@ Options for `sendMessage()` — the third argument:
  | `circuitBreaker.halfOpenSuccesses` | `1` | Consecutive successes in HALF_OPEN required to close the circuit |
  | `queueHighWaterMark` | unbounded | Max messages buffered in the `consume()` iterator queue before the partition is paused; resumes at 50% drain. Only applies to `consume()` |
  | `batch` | `false` | (decorator only) Use `startBatchConsumer` instead of `startConsumer` |
+ | `partitionAssigner` | `'cooperative-sticky'` | Partition assignment strategy: `'cooperative-sticky'` (minimal movement on rebalance, best for horizontal scaling), `'roundrobin'` (even distribution), `'range'` (contiguous partition ranges) |
+ | `onTtlExpired` | — | Per-consumer override of the client-level `onTtlExpired` callback; takes precedence when set. Receives `TtlExpiredContext` — same shape as the client-level hook |
  | `subscribeRetry.retries` | `5` | Max attempts for `consumer.subscribe()` when topic doesn't exist yet |
  | `subscribeRetry.backoffMs` | `5000` | Delay between subscribe retry attempts (ms) |

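The three assignment strategies differ in how partitions map to group members. The sketch below illustrates `'roundrobin'` versus `'range'` placement in a simplified single-topic form — it is not the library's assignor code, and real assignors also handle multiple topics and (for `cooperative-sticky`) prior ownership:

```typescript
// Simplified single-topic illustration of two assignment strategies.
// Not the library's implementation.
function roundRobin(partitions: number[], consumers: string[]): Map<string, number[]> {
  const out = new Map<string, number[]>();
  consumers.forEach((c) => out.set(c, []));
  // Deal partitions out one at a time, cycling through consumers
  partitions.forEach((p, i) => out.get(consumers[i % consumers.length])!.push(p));
  return out;
}

function rangeAssign(partitions: number[], consumers: string[]): Map<string, number[]> {
  const out = new Map<string, number[]>();
  // Give each consumer a contiguous slice of the partition list
  const per = Math.ceil(partitions.length / consumers.length);
  consumers.forEach((c, i) => out.set(c, partitions.slice(i * per, (i + 1) * per)));
  return out;
}

const parts = [0, 1, 2, 3, 4, 5];
console.log(roundRobin(parts, ['a', 'b']));  // a -> [0, 2, 4], b -> [1, 3, 5]
console.log(rangeAssign(parts, ['a', 'b'])); // a -> [0, 1, 2], b -> [3, 4, 5]
```

With the real client you would simply set the documented option, e.g. `startConsumer(['orders'], handler, { partitionAssigner: 'roundrobin' })`.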
@@ -1367,6 +1430,50 @@ const filtered = await kafka.replayDlq('orders', {

  `replayDlq` creates a temporary consumer group that reads the DLQ topic up to the high-watermark at the time of the call — messages published after replay starts are not included. DLQ metadata headers (`x-dlq-original-topic`, `x-dlq-error-message`, `x-dlq-error-stack`, `x-dlq-failed-at`, `x-dlq-attempt-count`) are stripped from the replayed messages; all other headers (e.g. `x-correlation-id`) are preserved.

+ ## Admin API
+
+ Inspect consumer groups, topic metadata, and delete records via the built-in admin client — no separate connection needed.
+
+ ### `listConsumerGroups()`
+
+ Returns all consumer groups known to the broker:
+
+ ```typescript
+ const groups = await kafka.listConsumerGroups();
+ // [{ groupId: 'orders-group', state: 'Stable' }, ...]
+ ```
+
+ `state` reflects the librdkafka group state: `'Stable'`, `'PreparingRebalance'`, `'CompletingRebalance'`, `'Dead'`, or `'Empty'`.
+
+ ### `describeTopics(topics?)`
+
+ Fetch partition metadata (leader, replicas, ISR) for one or more topics. Omit `topics` to describe all topics the client knows about:
+
+ ```typescript
+ const info = await kafka.describeTopics(['orders.created', 'payments.received']);
+ // [
+ //   {
+ //     name: 'orders.created',
+ //     partitions: [{ partition: 0, leader: 1, replicas: [1, 2], isr: [1, 2] }],
+ //   },
+ //   ...
+ // ]
+ ```
+
+ ### `deleteRecords(topic, partitions)`
+
+ Truncate a topic by deleting all records up to (but not including) a given offset:
+
+ ```typescript
+ // Delete all records in partitions 0 and 1 up to the given offsets
+ await kafka.deleteRecords('orders.created', [
+   { partition: 0, offset: '500' },
+   { partition: 1, offset: '200' },
+ ]);
+ ```
+
+ Pass `offset: '-1'` to delete all records in a partition (truncate completely).
+
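The `describeTopics` and `deleteRecords` shapes above compose naturally. A hypothetical helper (`truncateAll` is not part of the library) that builds a truncate-everything argument from `describeTopics`-style partition metadata:

```typescript
// Partition metadata shaped like the describeTopics() result above
interface PartitionInfo {
  partition: number;
  leader: number;
  replicas: number[];
  isr: number[];
}

// Build a deleteRecords() partitions argument that truncates every
// partition; '-1' means "delete all records" per the docs above.
function truncateAll(partitions: PartitionInfo[]): { partition: number; offset: string }[] {
  return partitions.map(({ partition }) => ({ partition, offset: '-1' }));
}

const meta: PartitionInfo[] = [
  { partition: 0, leader: 1, replicas: [1, 2], isr: [1, 2] },
  { partition: 1, leader: 2, replicas: [2, 1], isr: [2] },
];
console.log(truncateAll(meta));
// [ { partition: 0, offset: '-1' }, { partition: 1, offset: '-1' } ]
```

With the real client the result would be passed straight to `deleteRecords`, e.g. `kafka.deleteRecords('orders.created', truncateAll(info[0].partitions))`.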
  ## Graceful shutdown

  `disconnect()` now drains in-flight handlers before tearing down connections — no messages are silently cut off mid-processing.
@@ -1460,6 +1567,18 @@ const kafka = new KafkaClient('my-app', 'my-group', ['localhost:9092'], {

  When `dlq: true`, expired messages are routed to DLQ instead and `onTtlExpired` is **not** called.

+ `onTtlExpired` can also be set per-consumer via `ConsumerOptions.onTtlExpired`. The consumer-level value takes precedence over the client-level one, so you can use different handlers for different topics:
+
+ ```typescript
+ await kafka.startConsumer(['orders'], handler, {
+   messageTtlMs: 30_000,
+   onTtlExpired: (ctx) => {
+     // overrides the client-level onTtlExpired for this consumer only
+     ordersMetrics.increment('ttl_expired', { topic: ctx.topic });
+   },
+ });
+ ```
+
  ## onRebalance

  React to partition rebalance events without patching the consumer. Useful for flushing in-flight state before partitions are revoked, or for logging/metrics:
@@ -1747,13 +1866,27 @@ Both suites run in CI on every push to `main` and on pull requests.

  ```text
  src/
- ├── client/    # Core — KafkaClient, types, envelope, consumer pipeline, topic(), errors (0 framework deps)
- ├── nest/      # NestJS adapter — Module, Explorer, decorators, health
- ├── testing/   # Testing utilities — mock client, testcontainer wrapper
- ├── core.ts    # Standalone entrypoint (@drarzter/kafka-client/core)
- ├── otel.ts    # OpenTelemetry entrypoint (@drarzter/kafka-client/otel)
- ├── testing.ts # Testing entrypoint (@drarzter/kafka-client/testing)
- └── index.ts   # Full entrypoint — core + NestJS adapter
+ ├── client/                # Core — 0 framework deps
+ │   ├── kafka.client/
+ │   │   ├── index.ts       # KafkaClient class
+ │   │   ├── admin/         # AdminOps
+ │   │   ├── producer/      # payload builders, schema registry
+ │   │   ├── consumer/      # consumer ops, handler, retry-topic, DLQ replay, queue, pipeline
+ │   │   └── infra/         # CircuitBreakerManager, InFlightTracker, MetricsManager
+ │   ├── message/           # EventEnvelope, topic(), headers
+ │   ├── __tests__/
+ │   │   ├── helpers.ts
+ │   │   ├── consumer/      # consumer, batch, retry, dedup, TTL, DLQ replay, …
+ │   │   ├── producer/      # producer, transactions, schema, topic
+ │   │   ├── admin/         # admin, consumer lag
+ │   │   └── infra/         # circuit breaker, errors, instrumentation, OTel, metrics
+ │   └── types.ts, errors.ts, …
+ ├── nest/      # NestJS adapter — Module, Explorer, decorators, health
+ ├── testing/   # Testing utilities — mock client, testcontainer wrapper
+ ├── core.ts    # Standalone entrypoint (@drarzter/kafka-client/core)
+ ├── otel.ts    # OpenTelemetry entrypoint (@drarzter/kafka-client/otel)
+ ├── testing.ts # Testing entrypoint (@drarzter/kafka-client/testing)
+ └── index.ts   # Full entrypoint — core + NestJS adapter
  ```

  All exported types and methods have JSDoc comments — your IDE will show inline docs and autocomplete.