@drarzter/kafka-client 0.8.0 → 0.9.3

package/README.md CHANGED
@@ -961,6 +961,8 @@ Options for `sendMessage()` — the third argument:
  | `batch` | `false` | (decorator only) Use `startBatchConsumer` instead of `startConsumer` |
  | `partitionAssigner` | `'cooperative-sticky'` | Partition assignment strategy: `'cooperative-sticky'` (minimal movement on rebalance, best for horizontal scaling), `'roundrobin'` (even distribution), `'range'` (contiguous partition ranges) |
  | `onTtlExpired` | — | Per-consumer override of the client-level `onTtlExpired` callback; takes precedence when set. Receives `TtlExpiredContext` — same shape as the client-level hook |
+ | `onMessageLost` | — | Per-consumer override of the client-level `onMessageLost` callback; takes precedence when set. Use for consumer-specific dead-message alerting or structured logging |
+ | `onRetry` | — | Per-consumer retry callback; fires **in addition to** the built-in metrics hook (does not replace it). Same signature as `KafkaInstrumentation.onRetry` |
  | `subscribeRetry.retries` | `5` | Max attempts for `consumer.subscribe()` when topic doesn't exist yet |
  | `subscribeRetry.backoffMs` | `5000` | Delay between subscribe retry attempts (ms) |
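
The two new rows encode different composition rules: `onMessageLost` replaces the client-level hook when set, while `onRetry` stacks on top of the built-in metrics hook. A minimal sketch of that dispatch logic, using hypothetical helper names rather than the package's internals:

```typescript
// Sketch only: hypothetical helpers, not the library's implementation.
type Hook = (ctx: { topic: string; attempt?: number }) => void;

interface Hooks {
  clientOnMessageLost?: Hook;   // client-level fallback
  consumerOnMessageLost?: Hook; // per-consumer override, takes precedence
  builtInOnRetry: Hook;         // metrics hook, always fires
  consumerOnRetry?: Hook;       // fires in addition, never replaces
}

// onMessageLost: the per-consumer hook wins when both are set.
function resolveOnMessageLost(h: Hooks): Hook | undefined {
  return h.consumerOnMessageLost ?? h.clientOnMessageLost;
}

// onRetry: the built-in metrics hook always runs, then the per-consumer one.
function fireOnRetry(h: Hooks, ctx: { topic: string; attempt: number }): string[] {
  const fired = ['builtin'];
  h.builtInOnRetry(ctx);
  if (h.consumerOnRetry) {
    h.consumerOnRetry(ctx);
    fired.push('consumer');
  }
  return fired;
}
```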
 
@@ -1422,6 +1424,7 @@ Options:
  | `targetTopic` | `x-dlq-original-topic` header | Override the destination topic |
  | `dryRun` | `false` | Count messages without sending |
  | `filter` | — | `(headers) => boolean` — skip messages where the callback returns `false` |
+ | `fromBeginning` | `true` | `true` = full replay every call; `false` = incremental (only new messages since the last call) |
 
  ```typescript
  // Dry run — see how many messages would be replayed
@@ -1434,9 +1437,12 @@ const result = await kafka.replayDlq('orders', { targetTopic: 'orders.v2' });
  const filtered = await kafka.replayDlq('orders', {
    filter: (headers) => headers['x-correlation-id'] === 'corr-123',
  });
+
+ // Incremental replay — only messages added since the last call
+ const incremental = await kafka.replayDlq('orders', { fromBeginning: false });
  ```
 
- `replayDlq` creates a temporary consumer group that reads the DLQ topic up to the high-watermark at the time of the call — messages published after replay starts are not included. DLQ metadata headers (`x-dlq-original-topic`, `x-dlq-error-message`, `x-dlq-error-stack`, `x-dlq-failed-at`, `x-dlq-attempt-count`) are stripped from the replayed messages; all other headers (e.g. `x-correlation-id`) are preserved.
+ `replayDlq` reads the DLQ topic up to the high-watermark at the time of the call — messages published after replay starts are not included. By default (`fromBeginning: true`) an ephemeral group ID is used on each call so all messages are always replayed; the group is deleted from the broker after use. With `fromBeginning: false` a stable group ID persists committed offsets between calls, enabling incremental replay. DLQ metadata headers (`x-dlq-original-topic`, `x-dlq-error-message`, `x-dlq-error-stack`, `x-dlq-failed-at`, `x-dlq-attempt-count`) are stripped from the replayed messages; all other headers (e.g. `x-correlation-id`) are preserved.
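
The group-ID rule in that paragraph can be sketched as follows. The ID format below is an assumption made for illustration; only the ephemeral-versus-stable behavior mirrors the text:

```typescript
// Illustrative only: the real client may format group IDs differently.
let seq = 0; // stand-in for a random unique suffix

function replayGroupId(dlqTopic: string, fromBeginning: boolean): string {
  return fromBeginning
    ? `dlq-replay-${dlqTopic}-${++seq}` // ephemeral: fresh group, full replay, deleted after use
    : `dlq-replay-${dlqTopic}`;         // stable: committed offsets persist, incremental replay
}
```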
 
  ## Read snapshot
 
@@ -1761,19 +1767,25 @@ await kafka.disconnect(10_000); // wait up to 10 s, then force disconnect
 
  ## Consumer handles
 
- `startConsumer()` and `startBatchConsumer()` return a `ConsumerHandle` instead of `void`. Use it to stop a specific consumer without needing to remember the group ID:
+ `startConsumer()` and `startBatchConsumer()` return a `ConsumerHandle` instead of `void`. Use it to stop a specific consumer or wait for it to be ready:
 
  ```typescript
- const handle = await kafka.startConsumer(['orders'], handler);
+ const handle = await kafka.startConsumer(['orders'], handler, { fromBeginning: false });
 
  console.log(handle.groupId); // e.g. "my-group"
 
+ // Wait until Kafka has assigned at least one partition to this consumer.
+ // Safe to call before sending test messages — eliminates fixed setTimeout delays.
+ await handle.ready();
+
  // Later — stop only this consumer, producer stays connected
  await handle.stop();
  ```
 
  `handle.stop()` is equivalent to `kafka.stopConsumer(handle.groupId)`. Useful in lifecycle methods or when you need to conditionally stop one consumer while others keep running.
 
+ `handle.ready()` resolves once the broker fires the first partition-assignment event. For `fromBeginning: false` consumers it adds a 500 ms settle window so librdkafka can complete the async `latest` offset fetch before you send; for `fromBeginning: true` consumers it resolves immediately on assignment. In unit tests with `FakeTransport`, `subscribe()` fires the assignment synchronously, so `handle.ready()` resolves in the same tick.
+
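
The assignment-then-settle behavior described above can be modeled with a small promise. This is a self-contained sketch: the class name, event wiring, and configurable settle delay are illustrative, not the package's implementation.

```typescript
const SETTLE_MS = 500; // assumed default settle window for fromBeginning: false

class ReadySketch {
  private assigned!: () => void;
  private firstAssignment = new Promise<void>((res) => (this.assigned = res));

  // Called by the transport on the first partition-assignment event.
  onAssignment(): void {
    this.assigned();
  }

  async ready(fromBeginning: boolean, settleMs = SETTLE_MS): Promise<void> {
    await this.firstAssignment; // block until at least one partition is assigned
    if (!fromBeginning) {
      // extra window so the async `latest` offset fetch can finish
      await new Promise((res) => setTimeout(res, settleMs));
    }
  }
}
```

With a fake transport that fires the assignment synchronously on subscribe, the `fromBeginning: true` path resolves without any timer, which is why tests need no fixed `setTimeout` delays.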
  ## onMessageLost
 
  By default, if a consumer handler throws and `dlq` is not enabled, the message is logged and dropped. Use `onMessageLost` to catch these silent losses:
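
The README's own `onMessageLost` example falls outside this hunk. As a stand-in, here is a hedged sketch of such a callback; the context fields shown are assumptions, not the package's documented shape:

```typescript
// Sketch: what an onMessageLost callback might look like.
// `topic` and `error` are assumed context fields for illustration.
interface LostContext {
  topic: string;
  error: Error;
}

const lost: Array<{ topic: string; reason: string }> = [];

// Record every dropped message so it can feed alerting or structured logs.
function onMessageLost(ctx: LostContext): void {
  lost.push({ topic: ctx.topic, reason: ctx.error.message });
}
```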
@@ -2117,6 +2129,31 @@ The integration suite spins up a single-node KRaft Kafka container and tests sen
 
  Both suites run in CI on every push to `main` and on pull requests.
 
+ ## File naming conventions
+
+ Hyphens separate the words of a multi-word name; a dot separates the name from its role suffix.
+
+ | Suffix | Role |
+ | --- | --- |
+ | `.types` | Data/option shapes — configs, options, result objects |
+ | `.interface` | Contract interfaces — capability boundaries, polymorphism points |
+ | `.manager` | Stateful manager class |
+ | `.tracker` | Tracking/counting utility |
+ | `.module` | NestJS module |
+ | `.explorer` | NestJS metadata scanner |
+ | `.decorator` | NestJS decorator definitions |
+ | `.health` | Health indicator |
+ | `.constants` | Constant values and DI tokens |
+ | `.mock` | Jest/Vitest spy double |
+ | `.fake` | Standalone fake implementation |
+ | `.container` | Test infrastructure wrapper |
+ | `.spec` | Unit test |
+ | `.integration.spec` | Integration test |
+
+ A suffix is added only when it carries information the name alone does not. When the filename already expresses the role — `handler.ts` is a handler, `pipeline.ts` is a pipeline, `queue.ts` is a queue — no suffix is needed.
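
As a quick sanity check of the table, here is an illustrative lookup (not shipped with the package) that maps a filename to its role, matching the longest suffix first so `.integration.spec` wins over `.spec`:

```typescript
// Illustrative helper, not part of the package. Longest suffix listed first.
const ROLE_BY_SUFFIX: ReadonlyArray<[string, string]> = [
  ['.integration.spec', 'Integration test'],
  ['.spec', 'Unit test'],
  ['.types', 'Data/option shapes'],
  ['.interface', 'Contract interface'],
  ['.manager', 'Stateful manager class'],
  ['.tracker', 'Tracking/counting utility'],
  ['.module', 'NestJS module'],
  ['.mock', 'Jest/Vitest spy double'],
  ['.fake', 'Standalone fake implementation'],
  ['.container', 'Test infrastructure wrapper'],
];

function roleOf(filename: string): string | undefined {
  const base = filename.replace(/\.ts$/, '');
  // No suffix: the name itself expresses the role (handler.ts, queue.ts).
  return ROLE_BY_SUFFIX.find(([suffix]) => base.endsWith(suffix))?.[1];
}
```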
+
+ ---
+
  ## Project structure
 
  ```text
@@ -2176,11 +2213,11 @@ src/
  │ │ │ # backpressure via queueHighWaterMark (pause/resume)
  │ │ │
  │ │ └── infra/
- │ │ ├── circuit-breaker.ts # CircuitBreakerManager — per groupId:topic:partition state
+ │ │ ├── circuit-breaker.manager.ts # CircuitBreakerManager — per groupId:topic:partition state
  │ │ │ # machine (CLOSED → OPEN → HALF_OPEN); sliding failure window
- │ │ ├── metrics-manager.ts # MetricsManager — in-process counters (processed / retry /
+ │ │ ├── metrics.manager.ts # MetricsManager — in-process counters (processed / retry /
  │ │ │ # dlq / dedup) per topic; getMetrics() / resetMetrics()
- │ │ └── inflight-tracker.ts # InFlightTracker — tracks running handlers for graceful
+ │ │ └── inflight.tracker.ts # InFlightTracker — tracks running handlers for graceful
  │ │ # shutdown drain (disconnect waits for all to settle)
  │ │
  │ └── __tests__/ # Unit tests — mocked @confluentinc/kafka-javascript
@@ -2240,13 +2277,13 @@ src/
 
  ├── testing/ # Testing utilities — no runtime Kafka deps
  │ ├── index.ts # Re-exports createMockKafkaClient, KafkaTestContainer
- │ ├── mock-client.ts # createMockKafkaClient<T>() — jest.fn() on every
+ │ ├── client.mock.ts # createMockKafkaClient<T>() — jest.fn() on every
  │ │ # IKafkaClient method with sensible defaults
- │ ├── test-container.ts # KafkaTestContainer — thin @testcontainers/kafka wrapper;
+ │ ├── test.container.ts # KafkaTestContainer — thin @testcontainers/kafka wrapper;
  │ │ # transaction coordinator warmup, topic pre-creation
  │ └── __tests__/
- │ ├── mock-client.spec.ts # Mock client method stubs and overrides
- │ └── test-container.spec.ts # Container start/stop lifecycle
+ │ ├── client.mock.spec.ts # Mock client method stubs and overrides
+ │ └── test.container.spec.ts # Container start/stop lifecycle
 
  ├── integration/ # Integration tests — require Docker (testcontainers)
  │ ├── global-setup.ts # Start shared Kafka container before all suites