@drarzter/kafka-client 0.8.0 → 0.9.2

package/README.md CHANGED
@@ -961,6 +961,8 @@ Options for `sendMessage()` — the third argument:
  | `batch` | `false` | (decorator only) Use `startBatchConsumer` instead of `startConsumer` |
  | `partitionAssigner` | `'cooperative-sticky'` | Partition assignment strategy: `'cooperative-sticky'` (minimal movement on rebalance, best for horizontal scaling), `'roundrobin'` (even distribution), `'range'` (contiguous partition ranges) |
  | `onTtlExpired` | — | Per-consumer override of the client-level `onTtlExpired` callback; takes precedence when set. Receives `TtlExpiredContext` — same shape as the client-level hook |
+ | `onMessageLost` | — | Per-consumer override of the client-level `onMessageLost` callback; takes precedence when set. Use for consumer-specific dead-message alerting or structured logging |
+ | `onRetry` | — | Per-consumer retry callback; fires **in addition to** the built-in metrics hook (does not replace it). Same signature as `KafkaInstrumentation.onRetry` |
  | `subscribeRetry.retries` | `5` | Max attempts for `consumer.subscribe()` when topic doesn't exist yet |
  | `subscribeRetry.backoffMs` | `5000` | Delay between subscribe retry attempts (ms) |
 
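The new `onRetry` row above stresses that the per-consumer callback is additive: the built-in metrics hook still fires. A minimal sketch of that composition, assuming a simplified shape — `fireRetryHooks` and `RetryContext` are illustrative names, not the package's internals:

```typescript
// Illustrative only: how an additive per-consumer onRetry hook can compose
// with a built-in metrics hook, per the table above.
type RetryContext = { topic: string; attempt: number; error: Error };
type RetryHook = (ctx: RetryContext) => void;

function fireRetryHooks(
  builtIn: RetryHook,
  perConsumer: RetryHook | undefined,
  ctx: RetryContext,
): void {
  builtIn(ctx);       // the metrics hook always runs
  perConsumer?.(ctx); // the user hook runs additionally — it never replaces the built-in one
}
```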
@@ -1422,6 +1424,7 @@ Options:
  | `targetTopic` | `x-dlq-original-topic` header | Override the destination topic |
  | `dryRun` | `false` | Count messages without sending |
  | `filter` | — | `(headers) => boolean` — skip messages where the callback returns `false` |
+ | `fromBeginning` | `true` | `true` = full replay every call; `false` = incremental (only new messages since the last call) |
 
  ```typescript
  // Dry run — see how many messages would be replayed
@@ -1434,9 +1437,12 @@ const result = await kafka.replayDlq('orders', { targetTopic: 'orders.v2' });
  const filtered = await kafka.replayDlq('orders', {
    filter: (headers) => headers['x-correlation-id'] === 'corr-123',
  });
+
+ // Incremental replay — only messages added since the last call
+ const incremental = await kafka.replayDlq('orders', { fromBeginning: false });
  ```
 
- `replayDlq` creates a temporary consumer group that reads the DLQ topic up to the high-watermark at the time of the call — messages published after replay starts are not included. DLQ metadata headers (`x-dlq-original-topic`, `x-dlq-error-message`, `x-dlq-error-stack`, `x-dlq-failed-at`, `x-dlq-attempt-count`) are stripped from the replayed messages; all other headers (e.g. `x-correlation-id`) are preserved.
+ `replayDlq` reads the DLQ topic up to the high-watermark at the time of the call — messages published after replay starts are not included. By default (`fromBeginning: true`) an ephemeral group ID is used on each call so all messages are always replayed; the group is deleted from the broker after use. With `fromBeginning: false` a stable group ID persists committed offsets between calls, enabling incremental replay. DLQ metadata headers (`x-dlq-original-topic`, `x-dlq-error-message`, `x-dlq-error-stack`, `x-dlq-failed-at`, `x-dlq-attempt-count`) are stripped from the replayed messages; all other headers (e.g. `x-correlation-id`) are preserved.
 
  ## Read snapshot
 
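The updated `replayDlq` paragraph above hinges on the group-ID strategy: ephemeral per call for full replay, stable for incremental. A sketch of that selection logic under stated assumptions — `replayGroupId` and the `dlq-replay-` prefix are hypothetical names for illustration, not the package's actual implementation:

```typescript
import { randomUUID } from "node:crypto";

// Illustrative sketch of the group-ID strategy described above.
// fromBeginning=true  → fresh ephemeral group each call (deleted after use),
//                       so every message is replayed every time.
// fromBeginning=false → one stable group whose committed offsets persist
//                       between calls, so only new messages are replayed.
function replayGroupId(dlqTopic: string, fromBeginning: boolean): string {
  return fromBeginning
    ? `dlq-replay-${dlqTopic}-${randomUUID()}`
    : `dlq-replay-${dlqTopic}`;
}
```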
@@ -1761,19 +1767,25 @@ await kafka.disconnect(10_000); // wait up to 10 s, then force disconnect
 
  ## Consumer handles
 
- `startConsumer()` and `startBatchConsumer()` return a `ConsumerHandle` instead of `void`. Use it to stop a specific consumer without needing to remember the group ID:
+ `startConsumer()` and `startBatchConsumer()` return a `ConsumerHandle` instead of `void`. Use it to stop a specific consumer or wait for it to be ready:
 
  ```typescript
- const handle = await kafka.startConsumer(['orders'], handler);
+ const handle = await kafka.startConsumer(['orders'], handler, { fromBeginning: false });
 
  console.log(handle.groupId); // e.g. "my-group"
 
+ // Wait until Kafka has assigned at least one partition to this consumer.
+ // Safe to call before sending test messages — eliminates fixed setTimeout delays.
+ await handle.ready();
+
  // Later — stop only this consumer, producer stays connected
  await handle.stop();
  ```
 
  `handle.stop()` is equivalent to `kafka.stopConsumer(handle.groupId)`. Useful in lifecycle methods or when you need to conditionally stop one consumer while others keep running.
 
+ `handle.ready()` resolves once the broker fires the first partition-assignment event. For `fromBeginning: false` consumers it adds a 500 ms settle window so librdkafka can complete the async `latest` offset fetch before you send; for `fromBeginning: true` consumers it resolves immediately on assignment. In unit tests with `FakeTransport`, `subscribe()` fires the assignment synchronously, so `handle.ready()` resolves in the same tick.
+
  ## onMessageLost
 
  By default, if a consumer handler throws and `dlq` is not enabled, the message is logged and dropped. Use `onMessageLost` to catch these silent losses:
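The drop-or-notify behaviour described above can be sketched as a small dispatch wrapper. This is a simplified model for illustration — `dispatch` and `LostContext` are hypothetical names, not the package's internals:

```typescript
// Illustrative only: if the handler throws and no DLQ is configured,
// the message is dropped and onMessageLost — when provided — observes it.
type LostContext = { topic: string; error: Error };

async function dispatch(
  topic: string,
  handler: () => Promise<void>,
  opts: { dlq?: boolean; onMessageLost?: (ctx: LostContext) => void },
): Promise<void> {
  try {
    await handler();
  } catch (err) {
    if (opts.dlq) {
      // with DLQ enabled the message would be produced to the DLQ topic here
      return;
    }
    // no DLQ: the message is dropped — surface it instead of losing it silently
    opts.onMessageLost?.({ topic, error: err as Error });
  }
}
```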