@drarzter/kafka-client 0.3.1 → 0.5.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +163 -54
- package/dist/{chunk-A56D7HXR.mjs → chunk-P7GY4BLV.mjs} +343 -205
- package/dist/chunk-P7GY4BLV.mjs.map +1 -0
- package/dist/core.d.mts +18 -36
- package/dist/core.d.ts +18 -36
- package/dist/core.js +352 -204
- package/dist/core.js.map +1 -1
- package/dist/core.mjs +21 -1
- package/dist/{types-CtwJihJ3.d.mts → envelope-CPX1qudy.d.mts} +121 -24
- package/dist/{types-CtwJihJ3.d.ts → envelope-CPX1qudy.d.ts} +121 -24
- package/dist/index.d.mts +9 -3
- package/dist/index.d.ts +9 -3
- package/dist/index.js +363 -214
- package/dist/index.js.map +1 -1
- package/dist/index.mjs +32 -11
- package/dist/index.mjs.map +1 -1
- package/dist/otel.d.mts +27 -0
- package/dist/otel.d.ts +27 -0
- package/dist/otel.js +66 -0
- package/dist/otel.js.map +1 -0
- package/dist/otel.mjs +49 -0
- package/dist/otel.mjs.map +1 -0
- package/dist/testing.d.mts +2 -2
- package/dist/testing.d.ts +2 -2
- package/dist/testing.js +20 -14
- package/dist/testing.js.map +1 -1
- package/dist/testing.mjs +18 -12
- package/dist/testing.mjs.map +1 -1
- package/package.json +35 -9
- package/dist/chunk-A56D7HXR.mjs.map +0 -1
package/README.md (CHANGED)
@@ -4,11 +4,11 @@
 [](https://github.com/drarzter/kafka-client/actions/workflows/publish.yml)
 [](https://opensource.org/licenses/MIT)

-Type-safe Kafka client for Node.js. Framework-agnostic core with a first-class NestJS adapter. Built on top of [
+Type-safe Kafka client for Node.js. Framework-agnostic core with a first-class NestJS adapter. Built on top of [`@confluentinc/kafka-javascript`](https://github.com/confluentinc/confluent-kafka-javascript) (librdkafka).

 ## What is this?

-An opinionated, type-safe abstraction over
+An opinionated, type-safe abstraction over `@confluentinc/kafka-javascript` (librdkafka). Works standalone (Express, Fastify, raw Node) or as a NestJS DynamicModule. Not a full-featured framework — just a clean, typed layer for producing and consuming Kafka messages.

 ## Why?

@@ -22,7 +22,11 @@ An opinionated, type-safe abstraction over kafkajs. Works standalone (Express, F
 - **Partition key support** — route related messages to the same partition
 - **Custom headers** — attach metadata headers to messages
 - **Transactions** — exactly-once semantics with `producer.transaction()`
-- **
+- **EventEnvelope** — every consumed message is wrapped in `EventEnvelope<T>` with `eventId`, `correlationId`, `timestamp`, `schemaVersion`, `traceparent`, and Kafka metadata
+- **Correlation ID propagation** — auto-generated on send, auto-propagated through `AsyncLocalStorage` so nested sends inherit the same correlation ID
+- **OpenTelemetry support** — `@drarzter/kafka-client/otel` entrypoint with `otelInstrumentation()` for W3C Trace Context propagation
+- **Consumer interceptors** — before/after/onError hooks with `EventEnvelope` access
+- **Client-wide instrumentation** — `KafkaInstrumentation` hooks for cross-cutting concerns (tracing, metrics)
 - **Auto-create topics** — `autoCreateTopics: true` for dev mode — no need to pre-create topics
 - **Error classes** — `KafkaProcessingError` and `KafkaRetryExhaustedError` with topic, message, and attempt metadata
 - **Health check** — built-in health indicator for monitoring
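The correlation-ID propagation described in the hunk above can be sketched with Node's `AsyncLocalStorage`. This is a hypothetical model of the behavior the README claims (a send inherits the ambient correlation ID or mints a fresh UUID), not the library's actual implementation; `currentCorrelationId` and `withCorrelation` are illustrative names:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";
import { randomUUID } from "node:crypto";

// Ambient context carrying the correlation ID for nested sends.
const als = new AsyncLocalStorage<{ correlationId: string }>();

// A "send" would call this: reuse the context's ID, or generate a new one.
function currentCorrelationId(): string {
  return als.getStore()?.correlationId ?? randomUUID();
}

// Run fn with a fixed correlation ID; nested calls see the same ID.
function withCorrelation<T>(correlationId: string, fn: () => T): T {
  return als.run({ correlationId }, fn);
}

const outer = withCorrelation("abc-123", () => currentCorrelationId());
// outer === "abc-123" — the nested "send" inherited the ID
```

Outside any `withCorrelation` scope, `currentCorrelationId()` falls back to a new UUID, matching the "auto-generated on send" bullet.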
@@ -37,6 +41,21 @@ See the [Roadmap](./ROADMAP.md) for upcoming features and version history.
 npm install @drarzter/kafka-client
 ```

+`@confluentinc/kafka-javascript` uses a native librdkafka addon. On most systems it builds automatically. For faster installs (skips compilation), install the system library first:
+
+```bash
+# Arch / CachyOS
+sudo pacman -S librdkafka
+
+# Debian / Ubuntu
+sudo apt-get install librdkafka-dev
+
+# macOS
+brew install librdkafka
+```
+
+Then install with `BUILD_LIBRDKAFKA=0 npm install`.
+
 For NestJS projects, install peer dependencies: `@nestjs/common`, `@nestjs/core`, `reflect-metadata`, `rxjs`.

 For standalone usage (Express, Fastify, raw Node), no extra dependencies needed — import from `@drarzter/kafka-client/core`.
@@ -54,15 +73,31 @@ await kafka.connectProducer();
 // Send
 await kafka.sendMessage(OrderCreated, { orderId: '123', amount: 100 });

-// Consume
-await kafka.startConsumer([OrderCreated], async (
-  console.log(`${topic}:`,
+// Consume — handler receives an EventEnvelope
+await kafka.startConsumer([OrderCreated], async (envelope) => {
+  console.log(`${envelope.topic}:`, envelope.payload.orderId);
 });

 // Custom logger (winston, pino, etc.)
 const kafka2 = new KafkaClient('my-app', 'my-group', ['localhost:9092'], {
   logger: myWinstonLogger,
 });
+
+// All module options work in standalone mode too
+const kafka3 = new KafkaClient('my-app', 'my-group', ['localhost:9092'], {
+  autoCreateTopics: true, // auto-create topics on first use
+  numPartitions: 3, // partitions for auto-created topics
+  strictSchemas: false, // disable schema enforcement for string topic keys
+  instrumentation: [...], // client-wide tracing/metrics hooks
+});
+
+// Health check — available directly, no NestJS needed
+const status = await kafka.checkStatus();
+// { status: 'up', clientId: 'my-app', topics: ['order.created', ...] }
+
+// Stop all consumers without disconnecting the producer or admin
+// Useful when you want to re-subscribe with different options
+await kafka.stopConsumer();
 ```

 ## Quick start (NestJS)
@@ -99,7 +134,7 @@ export class AppModule {}
 ```typescript
 // app.service.ts
 import { Injectable } from '@nestjs/common';
-import { InjectKafkaClient, KafkaClient, SubscribeTo } from '@drarzter/kafka-client';
+import { InjectKafkaClient, KafkaClient, SubscribeTo, EventEnvelope } from '@drarzter/kafka-client';
 import { MyTopics } from './types';

 @Injectable()
@@ -113,8 +148,8 @@ export class AppService {
   }

   @SubscribeTo('hello')
-  async onHello(
-    console.log('Received:',
+  async onHello(envelope: EventEnvelope<MyTopics['hello']>) {
+    console.log('Received:', envelope.payload.text);
   }
 }
 ```
@@ -192,7 +227,7 @@ await kafka.transaction(async (tx) => {

 // Consuming (decorator)
 @SubscribeTo(OrderCreated)
-async handleOrder(
+async handleOrder(envelope: EventEnvelope<OrdersTopicMap['order.created']>) { ... }

 // Consuming (imperative)
 await kafka.startConsumer([OrderCreated], handler);
@@ -217,7 +252,7 @@ import { OrdersTopicMap } from './orders.types';
 export class OrdersModule {}
 ```

-`autoCreateTopics` calls `admin.createTopics()` (idempotent — no-op if topic already exists) before the first send
+`autoCreateTopics` calls `admin.createTopics()` (idempotent — no-op if topic already exists) before the first send **and** before each `startConsumer` / `startBatchConsumer` call. librdkafka errors on unknown topics at subscribe time, so consumer-side creation is required. Useful in development, not recommended for production.

 Or with `ConfigService`:

@@ -301,13 +336,13 @@ import { SubscribeTo } from '@drarzter/kafka-client';
 @Injectable()
 export class OrdersHandler {
   @SubscribeTo('order.created')
-  async handleOrderCreated(
-    console.log('New order:',
+  async handleOrderCreated(envelope: EventEnvelope<OrdersTopicMap['order.created']>) {
+    console.log('New order:', envelope.payload.orderId);
   }

   @SubscribeTo('order.completed', { retry: { maxRetries: 3 }, dlq: true })
-  async handleOrderCompleted(
-    console.log('Order completed:',
+  async handleOrderCompleted(envelope: EventEnvelope<OrdersTopicMap['order.completed']>) {
+    console.log('Order completed:', envelope.payload.orderId);
   }
 }
 ```
@@ -327,8 +362,8 @@ export class OrdersService implements OnModuleInit {
   async onModuleInit() {
     await this.kafka.startConsumer(
       ['order.created', 'order.completed'],
-      async (
-        console.log(`${topic}:`,
+      async (envelope) => {
+        console.log(`${envelope.topic}:`, envelope.payload);
       },
       {
         retry: { maxRetries: 3, backoffMs: 1000 },
@@ -343,7 +378,7 @@ export class OrdersService implements OnModuleInit {

 ### Per-consumer groupId

-Override the default consumer group for specific consumers. Each unique `groupId` creates a separate
+Override the default consumer group for specific consumers. Each unique `groupId` creates a separate librdkafka Consumer internally:

 ```typescript
 // Default group from constructor
@@ -354,7 +389,7 @@ await kafka.startConsumer(['orders'], auditHandler, { groupId: 'orders-audit' })

 // Works with @SubscribeTo too
 @SubscribeTo('orders', { groupId: 'orders-audit' })
-async auditOrders(
+async auditOrders(envelope) { ... }
 ```

 **Important:** You cannot mix `eachMessage` and `eachBatch` consumers on the same `groupId`. The library throws a clear error if you try:
@@ -404,7 +439,7 @@ Same with `@SubscribeTo()` — use `clientName` to target a specific named clien

 ```typescript
 @SubscribeTo('payment.received', { clientName: 'payments' }) // ← matches name: 'payments'
-async handlePayment(
+async handlePayment(envelope: EventEnvelope<PaymentsTopicMap['payment.received']>) {
   // ...
 }
 ```
@@ -460,16 +495,20 @@ await this.kafka.sendBatch('order.created', [

 ## Batch consuming

-Process messages in batches for higher throughput. The handler receives an array of
+Process messages in batches for higher throughput. The handler receives an array of `EventEnvelope`s and a `BatchMeta` object with offset management controls:

 ```typescript
 await this.kafka.startBatchConsumer(
   ['order.created'],
-  async (
-    //
-    for (const
-      await processOrder(
-      meta.resolveOffset(
+  async (envelopes, meta) => {
+    // envelopes: EventEnvelope<OrdersTopicMap['order.created']>[]
+    for (const env of envelopes) {
+      await processOrder(env.payload);
+      meta.resolveOffset(env.offset);
+
+      // Call heartbeat() during long-running batch processing to prevent
+      // the broker from considering the consumer dead (session.timeout.ms)
+      await meta.heartbeat();
     }
     await meta.commitOffsetsIfNecessary();
   },
@@ -477,18 +516,44 @@ await this.kafka.startBatchConsumer(
 );
 ```

+With `autoCommit: false` for full manual offset control:
+
+```typescript
+await this.kafka.startBatchConsumer(
+  ['order.created'],
+  async (envelopes, meta) => {
+    for (const env of envelopes) {
+      await processOrder(env.payload);
+      meta.resolveOffset(env.offset);
+    }
+    // commitOffsetsIfNecessary() commits only when autoCommit is off
+    // or when the commit interval has elapsed
+    await meta.commitOffsetsIfNecessary();
+  },
+  { autoCommit: false },
+);
+```
+
 With `@SubscribeTo()`:

 ```typescript
 @SubscribeTo('order.created', { batch: true })
-async handleOrders(
-
+async handleOrders(envelopes: EventEnvelope<OrdersTopicMap['order.created']>[], meta: BatchMeta) {
+  for (const env of envelopes) { ... }
 }
 ```

 Schema validation runs per-message — invalid messages are skipped (DLQ'd if enabled), valid ones are passed to the handler. Retry applies to the whole batch.

-`BatchMeta` exposes:
+`BatchMeta` exposes:
+
+| Property/Method | Description |
+| --------------- | ----------- |
+| `partition` | Partition number for this batch |
+| `highWatermark` | Latest offset in the partition (lag indicator) |
+| `heartbeat()` | Send a heartbeat to keep the consumer session alive — call during long processing loops |
+| `resolveOffset(offset)` | Mark offset as processed (required before `commitOffsetsIfNecessary`) |
+| `commitOffsetsIfNecessary()` | Commit resolved offsets; respects `autoCommit` setting |

 ## Transactions

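The `resolveOffset` / `commitOffsetsIfNecessary` contract from the hunk above can be modeled in a few lines. This is a purely illustrative toy (a single-partition `OffsetTracker` class invented for this sketch, not the library's code; real Kafka consumers track offsets per topic-partition): only offsets that were resolved get committed, and explicit commits are a no-op while auto-commit is on.

```typescript
// Toy model of batch offset bookkeeping (hypothetical, single partition).
class OffsetTracker {
  private resolved: string | null = null;
  committed: string | null = null;

  constructor(private readonly autoCommit: boolean) {}

  // Mark an offset as processed; later calls supersede earlier ones.
  resolveOffset(offset: string): void {
    this.resolved = offset;
  }

  // Commit the highest resolved offset, but only when auto-commit is off.
  commitOffsetsIfNecessary(): void {
    if (this.autoCommit) return; // the client commits on its own schedule
    if (this.resolved !== null) this.committed = this.resolved;
  }
}

const meta = new OffsetTracker(false); // autoCommit: false
meta.resolveOffset("41");
meta.resolveOffset("42");
meta.commitOffsetsIfNecessary();
// meta.committed === "42" — only resolved offsets are committed
```

This mirrors why the table calls `resolveOffset` "required before `commitOffsetsIfNecessary`": without a resolved offset there is nothing to commit.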
@@ -509,24 +574,34 @@ await this.kafka.transaction(async (tx) => {
 });
 ```

-`tx.sendBatch()` is also available inside transactions
+`tx.sendBatch()` is also available inside transactions:
+
+```typescript
+await this.kafka.transaction(async (tx) => {
+  await tx.sendBatch('order.created', [
+    { value: { orderId: '1', userId: '10', amount: 50 }, key: '1' },
+    { value: { orderId: '2', userId: '20', amount: 75 }, key: '2' },
+  ]);
+  // if anything throws, all messages are rolled back
+});
+```

 ## Consumer interceptors

-Add before/after/onError hooks to message processing
+Add before/after/onError hooks to message processing. Interceptors receive the full `EventEnvelope`:

 ```typescript
 import { ConsumerInterceptor } from '@drarzter/kafka-client';

 const loggingInterceptor: ConsumerInterceptor<OrdersTopicMap> = {
-  before: (
-    console.log(`Processing ${topic}`,
+  before: (envelope) => {
+    console.log(`Processing ${envelope.topic}`, envelope.payload);
   },
-  after: (
-    console.log(`Done ${topic}`);
+  after: (envelope) => {
+    console.log(`Done ${envelope.topic}`);
   },
-  onError: (
-    console.error(`Failed ${topic}:`, error.message);
+  onError: (envelope, error) => {
+    console.error(`Failed ${envelope.topic}:`, error.message);
   },
 };

@@ -537,23 +612,53 @@ await this.kafka.startConsumer(['order.created'], handler, {

 Multiple interceptors run in order. All hooks are optional.

+## Instrumentation
+
+For client-wide cross-cutting concerns (tracing, metrics), use `KafkaInstrumentation` hooks instead of per-consumer interceptors:
+
+```typescript
+import { otelInstrumentation } from '@drarzter/kafka-client/otel';
+
+const kafka = new KafkaClient('my-app', 'my-group', brokers, {
+  instrumentation: [otelInstrumentation()],
+});
+```
+
+`otelInstrumentation()` injects `traceparent` on send, extracts it on consume, and creates `CONSUMER` spans automatically. Requires `@opentelemetry/api` as a peer dependency.
+
+Custom instrumentation:
+
+```typescript
+import { KafkaInstrumentation } from '@drarzter/kafka-client';
+
+const metrics: KafkaInstrumentation = {
+  beforeSend(topic, headers) { /* inject headers, start timer */ },
+  afterSend(topic) { /* record send latency */ },
+  beforeConsume(envelope) { /* start span */ return () => { /* end span */ }; },
+  onConsumeError(envelope, error) { /* record error metric */ },
+};
+```
+
 ## Options reference

 ### Send options

 Options for `sendMessage()` — the third argument:

-| Option
-|
-| `key`
-| `headers` | —
+| Option | Default | Description |
+| ------ | ------- | ----------- |
+| `key` | — | Partition key for message routing |
+| `headers` | — | Custom metadata headers (merged with auto-generated envelope headers) |
+| `correlationId` | auto | Override the auto-propagated correlation ID (default: inherited from ALS context or new UUID) |
+| `schemaVersion` | `1` | Schema version for the payload |
+| `eventId` | auto | Override the auto-generated event ID (UUID v4) |

-`sendBatch()` accepts
+`sendBatch()` accepts the same options per message inside the array items.

 ### Consumer options

 | Option | Default | Description |
-|
+| ------ | ------- | ----------- |
 | `groupId` | constructor value | Override consumer group for this subscription |
 | `fromBeginning` | `false` | Read from the beginning of the topic |
 | `autoCommit` | `true` | Auto-commit offsets |
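The `traceparent` header that the new Instrumentation section says is injected on send follows the W3C Trace Context format: `version-traceid-spanid-flags`, all lowercase hex. A minimal self-contained sketch (the helper names `buildTraceparent` / `parseTraceparent` are invented for illustration and are not part of this library's API):

```typescript
// Shape of a W3C Trace Context as carried in the traceparent header.
interface TraceContext {
  traceId: string; // 32 lowercase hex chars
  spanId: string;  // 16 lowercase hex chars
  sampled: boolean; // trace-flags bit 0
}

// Serialize a context into a version-00 traceparent header value.
function buildTraceparent(ctx: TraceContext): string {
  return `00-${ctx.traceId}-${ctx.spanId}-${ctx.sampled ? "01" : "00"}`;
}

// Parse a version-00 traceparent header; returns null if malformed.
function parseTraceparent(header: string): TraceContext | null {
  const m = /^00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(header);
  if (!m) return null;
  return { traceId: m[1], spanId: m[2], sampled: m[3] === "01" };
}

const header = buildTraceparent({
  traceId: "4bf92f3577b34da6a3ce929d0e0e4736",
  spanId: "00f067aa0ba902b7",
  sampled: true,
});
// header === "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
```

In practice `otelInstrumentation()` delegates this to `@opentelemetry/api` propagators rather than hand-rolling the format; the sketch only shows what travels in the Kafka message headers.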
@@ -570,7 +675,7 @@ Options for `sendMessage()` — the third argument:
 Passed to `KafkaModule.register()` or returned from `registerAsync()` factory:

 | Option | Default | Description |
-|
+| ------ | ------- | ----------- |
 | `clientId` | — | Kafka client identifier (required) |
 | `groupId` | — | Default consumer group ID (required) |
 | `brokers` | — | Array of broker addresses (required) |
@@ -579,6 +684,7 @@ Passed to `KafkaModule.register()` or returned from `registerAsync()` factory:
 | `autoCreateTopics` | `false` | Auto-create topics on first send (dev only) |
 | `numPartitions` | `1` | Number of partitions for auto-created topics |
 | `strictSchemas` | `true` | Validate string topic keys against schemas registered via TopicDescriptor |
+| `instrumentation` | `[]` | Client-wide instrumentation hooks (e.g. OTel). Applied to both send and consume paths |

 **Module-scoped** (default) — import `KafkaModule` in each module that needs it:

@@ -641,7 +747,7 @@ err.cause; // the original error
 ```typescript
 // In an onError interceptor:
 const interceptor: ConsumerInterceptor<MyTopics> = {
-  onError: (
+  onError: (envelope, error) => {
     if (error instanceof KafkaRetryExhaustedError) {
       console.log(`Failed after ${error.attempts} attempts on ${error.topic}`);
       console.log('Last error:', error.cause);
@@ -658,7 +764,7 @@ When `retry.maxRetries` is set and all attempts fail, `KafkaRetryExhaustedError`
 import { KafkaValidationError } from '@drarzter/kafka-client';

 const interceptor: ConsumerInterceptor<MyTopics> = {
-  onError: (
+  onError: (envelope, error) => {
     if (error instanceof KafkaValidationError) {
       console.log(`Bad message on ${error.topic}:`, error.cause?.message);
     }
@@ -707,9 +813,9 @@ await kafka.sendMessage(OrderCreated, { orderId: '1', userId: '2', amount: -5 })

 ```typescript
 @SubscribeTo(OrderCreated, { dlq: true })
-async handleOrder(
-  // `
-  console.log(
+async handleOrder(envelope) {
+  // `envelope.payload` is guaranteed to match the schema
+  console.log(envelope.payload.orderId); // string — validated at runtime
 }
 ```

@@ -777,6 +883,8 @@ export class HealthService {

 Import from `@drarzter/kafka-client/testing` — zero runtime deps, only `jest` and `@testcontainers/kafka` as peer dependencies.

+> Unit tests mock `@confluentinc/kafka-javascript` — no Kafka broker needed. Integration tests use Testcontainers (requires Docker).
+
 #### `createMockKafkaClient<T>()`

 Fully typed mock with `jest.fn()` on every `IKafkaClient` method. All methods resolve to sensible defaults:
@@ -830,14 +938,14 @@ it('sends and receives', async () => {
 Options:

 | Option | Default | Description |
-|
+| ------ | ------- | ----------- |
 | `image` | `"confluentinc/cp-kafka:7.7.0"` | Docker image |
 | `transactionWarmup` | `true` | Warm up transaction coordinator on start |
 | `topics` | `[]` | Topics to pre-create (string or `{ topic, numPartitions }`) |

 ### Running tests

-Unit tests (mocked
+Unit tests (mocked `@confluentinc/kafka-javascript` — no broker needed):

 ```bash
 npm test
@@ -851,16 +959,17 @@ npm run test:integration

 The integration suite spins up a single-node KRaft Kafka container and tests sending, consuming, batching, transactions, retry + DLQ, interceptors, health checks, and `fromBeginning` — no mocks.

-Both suites run in CI on every push to `main
+Both suites run in CI on every push to `main` and on pull requests.

 ## Project structure

-```
+```text
 src/
-├── client/ # Core — KafkaClient, types, topic(),
+├── client/ # Core — KafkaClient, types, envelope, consumer pipeline, topic(), errors (0 framework deps)
 ├── nest/ # NestJS adapter — Module, Explorer, decorators, health
 ├── testing/ # Testing utilities — mock client, testcontainer wrapper
 ├── core.ts # Standalone entrypoint (@drarzter/kafka-client/core)
+├── otel.ts # OpenTelemetry entrypoint (@drarzter/kafka-client/otel)
 ├── testing.ts # Testing entrypoint (@drarzter/kafka-client/testing)
 └── index.ts # Full entrypoint — core + NestJS adapter
 ```