@drarzter/kafka-client 0.5.5 → 0.5.7

package/README.md CHANGED
@@ -112,7 +112,7 @@ For standalone usage (Express, Fastify, raw Node), no extra dependencies needed
  ```typescript
  import { KafkaClient, topic } from '@drarzter/kafka-client/core';

- const OrderCreated = topic('order.created')<{ orderId: string; amount: number }>();
+ const OrderCreated = topic('order.created').type<{ orderId: string; amount: number }>();

  const kafka = new KafkaClient('my-app', 'my-group', ['localhost:9092']);
  await kafka.connectProducer();
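To make the changed call shape concrete, here is a minimal stand-alone sketch of a `.type<T>()` builder. It is hypothetical: the real `topic()` in `@drarzter/kafka-client` certainly does more, but the diff implies this shape for callers.

```typescript
// Hypothetical sketch of the `.type<T>()` builder shape shown in the diff.
// Not the package's actual implementation, only the call pattern.
interface TopicRef<T> {
  readonly name: string;
  // phantom field carrying the message type at compile time only
  readonly __message?: T;
}

function topic(name: string) {
  return {
    // .type<T>() pins the message payload type without a runtime schema
    type<T>(): TopicRef<T> {
      return { name };
    },
  };
}

const OrderCreated = topic('order.created').type<{ orderId: string; amount: number }>();
console.log(OrderCreated.name); // → order.created
```

The builder form replaces the old curried-generic call (`topic('x')<T>()`), which is why existing definitions need the mechanical `.type` insertion shown above.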
@@ -245,13 +245,13 @@ Instead of a centralized topic map, define each topic as a standalone typed object
  ```typescript
  import { topic, TopicsFrom } from '@drarzter/kafka-client';

- export const OrderCreated = topic('order.created')<{
+ export const OrderCreated = topic('order.created').type<{
    orderId: string;
    userId: string;
    amount: number;
  }>();

- export const OrderCompleted = topic('order.completed')<{
+ export const OrderCompleted = topic('order.completed').type<{
    orderId: string;
    completedAt: string;
  }>();
@@ -713,8 +713,9 @@ Options for `sendMessage()` — the third argument:
  | `retry.backoffMs` | `1000` | Base delay for exponential backoff in ms |
  | `retry.maxBackoffMs` | `30000` | Maximum delay cap for exponential backoff in ms |
  | `dlq` | `false` | Send to `{topic}.dlq` after all retries exhausted — message carries `x-dlq-*` metadata headers |
- | `retryTopics` | `false` | Route failed messages through `{topic}.retry` instead of sleeping in-process (see [Retry topic chain](#retry-topic-chain)) |
+ | `retryTopics` | `false` | Route failed messages through per-level topics (`{topic}.retry.1`, `{topic}.retry.2`, …) instead of sleeping in-process; at-least-once semantics; requires `retry` (see [Retry topic chain](#retry-topic-chain)) |
  | `interceptors` | `[]` | Array of before/after/onError hooks |
+ | `retryTopicAssignmentTimeoutMs` | `10000` | Timeout (ms) to wait for each retry level consumer to receive partition assignments after connecting; increase for slow brokers |
  | `handlerTimeoutMs` | — | Log a warning if the handler hasn't resolved within this window (ms) — does not cancel the handler |
  | `batch` | `false` | (decorator only) Use `startBatchConsumer` instead of `startConsumer` |
  | `subscribeRetry.retries` | `5` | Max attempts for `consumer.subscribe()` when topic doesn't exist yet |
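The `retry.backoffMs` / `retry.maxBackoffMs` defaults in this table, together with the ~1 s / ~2 s / ~4 s per-level delays the retry-chain section quotes, suggest per-level names of the form `{topic}.retry.{n}` and a doubling backoff with a cap. The helpers below are inferred from the docs, not the package's actual code (which may add jitter or compute delays differently):

```typescript
// Assumed helpers mirroring the naming and delays described in the docs.
// Per-level retry topic and consumer group names: `{topic}.retry.{n}`,
// `{groupId}-retry.{n}`.
function retryTopicName(topic: string, level: number): string {
  return `${topic}.retry.${level}`;
}
function retryGroupId(groupId: string, level: number): string {
  return `${groupId}-retry.${level}`;
}

// Exponential backoff: base delay doubling per level, capped at the max.
function retryDelayMs(level: number, backoffMs = 1000, maxBackoffMs = 30000): number {
  return Math.min(backoffMs * 2 ** (level - 1), maxBackoffMs);
}

console.log(retryTopicName('orders.created', 2)); // → orders.created.retry.2
console.log(retryGroupId('my-group', 2));         // → my-group-retry.2
console.log([1, 2, 3].map((l) => retryDelayMs(l))); // → [ 1000, 2000, 4000 ]
```

With the defaults, the cap only matters from level 5 onward (1000 · 2⁵ = 32000 > 30000).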
@@ -826,12 +827,13 @@ const interceptor: ConsumerInterceptor<MyTopics> = {

  ## Retry topic chain

- By default, retry is handled in-process: the consumer sleeps between attempts while holding the partition. With `retryTopics: true`, failed messages are routed to a `<topic>.retry` Kafka topic instead. A companion consumer auto-starts on `<topic>.retry` (group `<groupId>-retry`), waits until the scheduled retry time, then calls the same handler.
+ By default, retry is handled in-process: the consumer sleeps between attempts while holding the partition. With `retryTopics: true`, failed messages are routed through a chain of Kafka topics instead — one topic per retry level. A companion consumer auto-starts per level, waits for the scheduled delay using partition pause/resume, then calls the same handler.

  Benefits over in-process retry:

- - **Durable** — retry messages survive a consumer restart
- - **Non-blocking** — the original consumer is free immediately; the retry consumer pauses only the specific partition being delayed, so other partitions continue processing
+ - **Durable** — retry messages survive a consumer restart (at-least-once semantics)
+ - **Non-blocking** — the original consumer is free immediately; each level consumer only pauses its specific partition during the delay window, so other partitions continue processing
+ - **Isolated** — each retry level has its own consumer group, so a slow level 3 consumer never blocks a level 1 consumer

  ```typescript
  await kafka.startConsumer(['orders.created'], handler, {
@@ -841,18 +843,32 @@ await kafka.startConsumer(['orders.created'], handler, {
  });
  ```

- Message flow with `maxRetries: 2`:
+ With `maxRetries: 3`, this creates three dedicated topics and three companion consumers:

  ```text
- orders.created → handler fails → orders.created.retry (attempt 1, delay ~1 s)
- → handler fails → orders.created.retry (attempt 2, delay ~2 s)
- → handler fails → orders.created.dlq
+ orders.created.retry.1 → consumer group: my-group-retry.1 (delay ~1 s)
+ orders.created.retry.2 → consumer group: my-group-retry.2 (delay ~2 s)
+ orders.created.retry.3 → consumer group: my-group-retry.3 (delay ~4 s)
  ```

- The retry topic messages carry scheduling headers (`x-retry-attempt`, `x-retry-after`, `x-retry-original-topic`, `x-retry-max-retries`) that the companion consumer reads automatically — no manual configuration needed.
+ Message flow with `maxRetries: 2` and `dlq: true`:

+ ```text
+ orders.created → handler fails → orders.created.retry.1 (attempt 1, delay ~1 s)
+ orders.created.retry.1 → handler fails → orders.created.retry.2 (attempt 2, delay ~2 s)
+ orders.created.retry.2 → handler fails → orders.created.dlq
+ ```
+
+ Each level consumer uses `consumer.pause → sleep(remaining) → consumer.resume` so the partition offset is never committed before the message is processed. On a process crash during sleep or handler execution, the message is redelivered on restart.
+
+ The retry topic messages carry scheduling headers (`x-retry-attempt`, `x-retry-after`, `x-retry-original-topic`, `x-retry-max-retries`) that each level consumer reads automatically — no manual configuration needed.
+
+ > **Delivery guarantee:** retry messages are at-least-once. A duplicate can occur in the rare case where a process crashes after routing to the next level but before committing the offset — the message appears twice in the next level topic. Design handlers to be idempotent if duplicates are unacceptable.
+ >
  > **Note:** `retryTopics` requires `retry` to be set — an error is thrown at startup if `retry` is missing. Currently only applies to `startConsumer`; batch consumers (`startBatchConsumer`) use in-process retry regardless.

+ `stopConsumer(groupId)` automatically stops all companion retry level consumers started for that group.
+
  ## stopConsumer

  Stop all consumers or a specific group:
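Because the chain is at-least-once, handlers should tolerate duplicate deliveries. A minimal deduplication sketch, where the in-memory `Set` is only a stand-in for the durable store (a DB unique constraint, Redis `SETNX`, etc.) a real deployment would need:

```typescript
// Dedupe sketch for an at-least-once consumer. The in-memory Set stands in
// for durable storage; it does not survive restarts the way a real store would.
const processed = new Set<string>();

function handleOrderCreated(msg: { orderId: string; amount: number }): boolean {
  if (processed.has(msg.orderId)) {
    return false; // duplicate delivery from a retry level: safe no-op
  }
  processed.add(msg.orderId);
  // ...real side effects (charge, write, publish) would go here...
  return true;
}

handleOrderCreated({ orderId: '42', amount: 10 }); // processed
handleOrderCreated({ orderId: '42', amount: 10 }); // duplicate, skipped
console.log(processed.size); // → 1
```

Keying on a business identifier like `orderId` (rather than topic/partition/offset, which change as the message moves through retry levels) is what makes the handler safe across the whole chain.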
@@ -977,18 +993,18 @@ export const OrderCreated = topic('order.created').schema(z.object({
    amount: z.number().positive(),
  }));

- // Without schema — explicit generic (still works)
- export const OrderAudit = topic('order.audit')<{ orderId: string; action: string }>();
+ // Without schema — explicit type via .type<T>()
+ export const OrderAudit = topic('order.audit').type<{ orderId: string; action: string }>();

  export type MyTopics = TopicsFrom<typeof OrderCreated | typeof OrderAudit>;
  ```

  ### How it works

- **On send** — `sendMessage`, `sendBatch`, and `transaction` call `schema.parse(message)` before serializing. Invalid messages throw immediately (the schema library's error, e.g. `ZodError`):
+ **On send** — `sendMessage`, `sendBatch`, and `transaction` call `schema.parse(message)` before serializing. Invalid messages throw immediately as `KafkaValidationError` (the original schema error is available as `cause`):

  ```typescript
- // This throws ZodError — amount must be positive
+ // This throws KafkaValidationError — amount must be positive
  await kafka.sendMessage(OrderCreated, { orderId: '1', userId: '2', amount: -5 });
  ```
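The new error behavior is a wrap-and-rethrow pattern. The class below is a hypothetical stand-in (not the package's actual `KafkaValidationError`) that only mimics the `cause` relationship the changelog describes, with a plain function standing in for `schema.parse`:

```typescript
// Hypothetical sketch of wrap-and-rethrow: a library-level error type that
// exposes the original schema error as `cause`.
class ValidationError extends Error {
  readonly cause: unknown;
  constructor(topicName: string, cause: unknown) {
    super(`Message for topic "${topicName}" failed schema validation`);
    this.name = 'ValidationError';
    this.cause = cause;
  }
}

// Stand-in for schema.parse(): throws the "schema library's" error on bad input.
function parseAmount(msg: { amount: number }): void {
  if (msg.amount <= 0) throw new RangeError('amount must be positive');
}

function validateForSend(topicName: string, msg: { amount: number }): void {
  try {
    parseAmount(msg);
  } catch (err) {
    throw new ValidationError(topicName, err); // original error kept as `cause`
  }
}

try {
  validateForSend('order.created', { amount: -5 });
} catch (err) {
  const e = err as ValidationError;
  console.log(`${e.name}: ${(e.cause as Error).message}`);
  // → ValidationError: amount must be positive
}
```

Callers can therefore catch one stable error type regardless of which schema library backs the topic, and still inspect the underlying `ZodError` (or equivalent) via `cause`.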
 
@@ -1100,7 +1116,7 @@ expect(kafka.sendMessage).toHaveBeenCalledWith(
  );

  // Override return values
- kafka.checkStatus.mockResolvedValueOnce({ topics: ['order.created'] });
+ kafka.checkStatus.mockResolvedValueOnce({ status: 'up', clientId: 'mock-client', topics: ['order.created'] });

  // Mock rejections
  kafka.sendMessage.mockRejectedValueOnce(new Error('broker down'));