@venizia/ignis-docs 0.0.7-2 → 0.0.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  # Consumer
 
- The `KafkaConsumerHelper` is a thin wrapper around `@platformatic/kafka`'s `Consumer`. It manages creation, logging, and lifecycle.
+ The `KafkaConsumerHelper` wraps `@platformatic/kafka`'s `Consumer` with health tracking, graceful shutdown, message callbacks, consumer group event callbacks, and lag monitoring.
 
  ```typescript
  class KafkaConsumerHelper<
@@ -8,7 +8,7 @@ class KafkaConsumerHelper<
    ValueType = string,
    HeaderKeyType = string,
    HeaderValueType = string,
- > extends BaseHelper
+ > extends BaseKafkaHelper<Consumer<KeyType, ValueType, HeaderKeyType, HeaderValueType>>
  ```
 
  ## Helper API
@@ -16,35 +16,74 @@ class KafkaConsumerHelper<
  | Method | Signature | Description |
  |--------|-----------|-------------|
  | `newInstance(opts)` | `static newInstance<K,V,HK,HV>(opts): KafkaConsumerHelper<K,V,HK,HV>` | Factory method |
- | `getConsumer()` | `(): Consumer<KeyType, ValueType, HeaderKeyType, HeaderValueType>` | Access the underlying `Consumer` |
- | `close(isForce?)` | `(isForce?: boolean): Promise<void>` | Close the consumer. Default: `force=true` |
-
- > [!NOTE]
- > Consumer defaults to `force=true` on close (unlike producer which defaults to `false`). This is because consumers should leave the group promptly to trigger faster rebalancing.
-
- ## IKafkaConsumerOpts
+ | `getConsumer()` | `(): Consumer<K,V,HK,HV>` | Access the underlying `Consumer` |
+ | `getStream()` | `(): MessagesStream \| null` | Get the active stream (after `start()`) |
+ | `start(opts)` | `(opts: IKafkaConsumeStartOptions): Promise<void>` | Start consuming (creates stream, wires callbacks) |
+ | `startLagMonitoring(opts)` | `(opts: { topics: string[]; interval?: number }): void` | Start periodic lag monitoring |
+ | `stopLagMonitoring()` | `(): void` | Stop lag monitoring |
+ | `isHealthy()` | `(): boolean` | `true` when broker connected |
+ | `isReady()` | `(): boolean` | `isHealthy()` **and** `consumer.isActive()` |
+ | `getHealthStatus()` | `(): TKafkaHealthStatus` | `'connected'` \| `'disconnected'` \| `'unknown'` |
+ | `close(opts?)` | `(opts?: { isForce?: boolean }): Promise<void>` | Stop lag monitoring, close the stream, close the consumer |
+
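The health surface above maps naturally onto liveness/readiness probes. A minimal sketch, assuming only the `isHealthy()`/`isReady()` signatures from the table; the probe wiring itself is illustrative, not part of the helper:

```typescript
// Shape of the helper's health surface, per the table above.
interface ConsumerHealth {
  isHealthy(): boolean; // broker connected
  isReady(): boolean;   // broker connected AND consumer active
}

// Liveness only needs the broker link; readiness also requires
// an active consumer (e.g. gate traffic until start() has run).
function probeStatus(h: ConsumerHealth): { live: boolean; ready: boolean } {
  return { live: h.isHealthy(), ready: h.isReady() };
}

// Stubbed example: connected but not yet consuming.
probeStatus({ isHealthy: () => true, isReady: () => false });
// → { live: true, ready: false }
```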
+ ## IKafkaConsumerOptions
 
  ```typescript
- interface IKafkaConsumerOpts<KeyType, ValueType, HeaderKeyType, HeaderValueType>
+ interface IKafkaConsumerOptions<KeyType, ValueType, HeaderKeyType, HeaderValueType>
    extends IKafkaConnectionOptions
  ```
 
+ ### Consumer Configuration
+
  | Option | Type | Default | Description |
  |--------|------|---------|-------------|
  | `groupId` | `string` | — | Consumer group ID. **Required** |
  | `identifier` | `string` | `'kafka-consumer'` | Scoped logging identifier |
- | `deserializers` | `Partial<Deserializers<K,V,HK,HV>>` | — | Key/value/header deserializers. **Pass explicitly** |
- | `autocommit` | `boolean \| number` | `false` | Auto-commit offsets. `true` = default interval, `number` = custom interval in ms |
- | `sessionTimeout` | `number` | `30000` | Session timeout — if no heartbeat within this period, consumer is removed from group |
+ | `deserializers` | `Partial<Deserializers<K,V,HK,HV>>` | — | Key/value/header deserializers |
+ | `autocommit` | `boolean \| number` | `false` | Auto-commit offsets. `true` = default interval, `number` = custom interval in ms |
+ | `sessionTimeout` | `number` | `30000` | Session timeout — the consumer is removed from the group if no heartbeat arrives within this period |
  | `heartbeatInterval` | `number` | `3000` | Heartbeat interval — must be less than `sessionTimeout` |
- | `rebalanceTimeout` | `number` | `sessionTimeout` | Max time for rebalance — defaults to `sessionTimeout` value |
+ | `rebalanceTimeout` | `number` | `sessionTimeout` | Max time for a rebalance |
  | `highWaterMark` | `number` | `1024` | Stream buffer size (messages) |
- | `minBytes` | `number` | `1` | Min bytes per fetch response — broker waits until this threshold |
+ | `minBytes` | `number` | `1` | Min bytes per fetch response |
  | `maxBytes` | `number` | — | Max bytes per fetch response per partition |
  | `maxWaitTime` | `number` | — | Max time (ms) broker waits for `minBytes` |
- | `metadataMaxAge` | `number` | `300000` | Metadata cache TTL (ms) — how often to refresh topic/partition info |
+ | `metadataMaxAge` | `number` | `300000` | Metadata cache TTL (ms) |
  | `groupProtocol` | `'classic' \| 'consumer'` | `'classic'` | Consumer group protocol. `'consumer'` = KIP-848 (Kafka 3.7+) |
  | `groupInstanceId` | `string` | — | Static group membership ID — prevents rebalance on restart |
+ | `shutdownTimeout` | `number` | `30000` | Graceful shutdown timeout in ms |
+ | `registry` | `SchemaRegistry` | — | Schema registry for automatic deserialization |
+
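Since `deserializers` converts raw bytes into the typed key/value/header views, a JSON value deserializer is a common choice. A hedged sketch: the `key`/`value` property names and the `Buffer → value` signature are assumptions here — check the library's `Deserializers` type for the exact shape your version expects:

```typescript
// Hypothetical JSON deserializer: raw Buffer in, parsed value out.
const jsonDeserializer = (buf: Buffer): unknown =>
  JSON.parse(buf.toString('utf-8'));

// Sketch of wiring it into the options (property names assumed):
const deserializers = {
  key: (buf: Buffer) => buf.toString('utf-8'),
  value: jsonDeserializer,
};

deserializers.value(Buffer.from('{"orderId":42}'));
// → { orderId: 42 }
```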
+ ### Lifecycle Callbacks
+
+ | Option | Type | Description |
+ |--------|------|-------------|
+ | `onBrokerConnect` | `TKafkaBrokerEventCallback` | Called when a broker connects |
+ | `onBrokerDisconnect` | `TKafkaBrokerEventCallback` | Called when a broker disconnects |
+
+ ### Message Callbacks
+
+ | Option | Type | Description |
+ |--------|------|-------------|
+ | `onMessage` | `TKafkaMessageCallback<K,V,HK,HV>` | Called for each message. Receives `{ message }` |
+ | `onMessageDone` | `TKafkaMessageDoneCallback<K,V,HK,HV>` | Called after `onMessage` succeeds. Receives `{ message }` |
+ | `onMessageError` | `TKafkaMessageErrorCallback<K,V,HK,HV>` | Called on a processing error. Receives `{ error, message? }` |
+
+ ### Consumer Group Callbacks
+
+ | Option | Type | Description |
+ |--------|------|-------------|
+ | `onGroupJoin` | `TKafkaGroupJoinCallback` | Receives `{ groupId, memberId, generationId? }` |
+ | `onGroupLeave` | `TKafkaGroupLeaveCallback` | Receives `{ groupId, memberId }` |
+ | `onGroupRebalance` | `TKafkaGroupRebalanceCallback` | Receives `{ groupId }` |
+ | `onHeartbeatError` | `TKafkaHeartbeatErrorCallback` | Receives `{ error, groupId?, memberId? }` |
+
+ ### Lag Monitoring Callbacks
+
+ | Option | Type | Description |
+ |--------|------|-------------|
+ | `onLag` | `TKafkaLagCallback` | Receives `{ lag }` (offsets map) |
+ | `onLagError` | `TKafkaLagErrorCallback` | Receives `{ error }` |
 
  Plus all [Connection Options](./#connection-options).
 
@@ -59,61 +98,93 @@ const helper = KafkaConsumerHelper.newInstance({
    clientId: 'order-consumer',
    groupId: 'order-processing',
    deserializers: stringDeserializers,
-   autocommit: false,
+
+   // Message lifecycle
+   onMessage: async ({ message }) => {
+     console.log(`${message.topic}[${message.partition}] @${message.offset}: ${message.key} → ${message.value}`);
+     await message.commit();
+   },
+   onMessageDone: ({ message }) => {
+     console.log(`Done processing: ${message.key}`);
+   },
+   onMessageError: ({ error, message }) => {
+     console.error('Processing failed:', error.message, message?.key);
+   },
+
+   // Consumer group events
+   onGroupJoin: ({ groupId, memberId }) => console.log(`Joined ${groupId} as ${memberId}`),
+   onGroupLeave: ({ groupId }) => console.log(`Left ${groupId}`),
+   onGroupRebalance: ({ groupId }) => console.log(`Rebalance in ${groupId}`),
+   onHeartbeatError: ({ error }) => console.error('Heartbeat failed:', error),
+
+   // Broker events
+   onBrokerConnect: ({ broker }) => console.log(`Connected to ${broker.host}:${broker.port}`),
+   onBrokerDisconnect: ({ broker }) => console.log(`Disconnected from ${broker.host}`),
+
+   // Lag monitoring
+   onLag: ({ lag }) => {
+     for (const [topic, partitionLags] of lag) {
+       partitionLags.forEach((lagValue, partition) => {
+         if (lagValue > 1000n) {
+           console.warn(`High lag on ${topic}[${partition}]: ${lagValue}`);
+         }
+       });
+     }
+   },
+   onLagError: ({ error }) => console.error('Lag monitoring error:', error),
  });
 
- const consumer = helper.getConsumer();
+ // Start consuming
+ await helper.start({ topics: ['orders'] });
 
- const stream = await consumer.consume({
-   topics: ['orders'],
-   mode: 'committed',
-   fallbackMode: 'latest',
- });
+ // Start lag monitoring (optional)
+ helper.startLagMonitoring({ topics: ['orders'], interval: 10_000 });
 
- for await (const message of stream) {
-   console.log(`${message.topic}[${message.partition}] @${message.offset}: ${message.key} ${message.value}`);
-   await message.commit();
- }
+ // Health check
+ helper.isHealthy(); // true when broker connected
+ helper.isReady();   // true when broker connected AND consumer is active
 
- await stream.close();
+ // Shutdown
  await helper.close();
  ```
 
- ---
+ ## Message Callback Flow
 
- ## API Reference (`@platformatic/kafka`)
+ When `start()` is called, the helper creates a `MessagesStream` and wires the callbacks:
+
+ ```
+ Stream 'data' event
+   → onMessage({ message })
+       ├── success → onMessageDone({ message })
+       └── error → onMessageError({ error, message })
 
- After calling `helper.getConsumer()`, you have full access to the `Consumer` class.
+ Stream 'error' event
+   → onMessageError({ error }) (no message available)
+ ```
+
+ - `onMessage` is the main processing callback — do your business logic here
+ - `onMessageDone` fires only after `onMessage` resolves successfully — use it for logging, metrics, etc.
+ - `onMessageError` fires if `onMessage` or `onMessageDone` throws — use it for error tracking
+ - The stream `'error'` event also calls `onMessageError` (without `message`, since it is a stream-level error)
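The dispatch order above can be modeled as plain control flow. A simplified, synchronous sketch of the documented ordering — not the helper's actual implementation, which operates on an async stream:

```typescript
type MessageCallbacks<M> = {
  onMessage: (args: { message: M }) => void;
  onMessageDone?: (args: { message: M }) => void;
  onMessageError?: (args: { error: Error; message?: M }) => void;
};

// Documented order: onMessage first, onMessageDone on success,
// onMessageError if either of them throws.
function dispatch<M>(message: M, cb: MessageCallbacks<M>): void {
  try {
    cb.onMessage({ message });
    cb.onMessageDone?.({ message });
  } catch (err) {
    cb.onMessageError?.({ error: err as Error, message });
  }
}
```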
 
- ### `consumer.consume(options)`
+ ## start()
 
- Start consuming messages. Returns a `MessagesStream` (extends Node.js `Readable`).
+ `start()` creates the consume stream and wires all message callbacks. It must be called explicitly after construction.
 
  ```typescript
- interface ConsumeOptions<Key, Value, HeaderKey, HeaderValue> {
+ interface IKafkaConsumeStartOptions {
    topics: string[];
-   mode?: 'latest' | 'earliest' | 'committed' | 'manual';
-   fallbackMode?: 'latest' | 'earliest' | 'fail';
-   maxFetches?: number;
-   offsets?: { topic: string; partition: number; offset: bigint }[];
-   onCorruptedMessage?: CorruptedMessageHandler;
-   // Plus ConsumeBaseOptions (autocommit, minBytes, maxBytes, etc.)
-   // Plus GroupOptions (sessionTimeout, heartbeatInterval, etc.)
+   mode?: MessagesStreamModeValue; // Default: 'committed'
+   fallbackMode?: MessagesStreamFallbackModeValue; // Default: 'latest'
  }
  ```
 
- #### Stream Modes
-
  | Mode | Description |
  |------|-------------|
  | `'committed'` | Resume from last committed offset. **Recommended for production** |
  | `'latest'` | Start from the latest offset (skip existing messages) |
  | `'earliest'` | Start from the beginning of the topic |
- | `'manual'` | Start from explicitly provided offsets via `offsets` option |
-
- #### Fallback Modes
-
- Used when `mode: 'committed'` but no committed offset exists (new consumer group):
+ | `'manual'` | Start from explicitly provided offsets |
 
  | Fallback | Description |
  |----------|-------------|
@@ -122,97 +193,84 @@ Used when `mode: 'committed'` but no committed offset exists (new consumer group
  | `'fail'` | Throw an error |
 
  ```typescript
- // Production pattern: resume from committed, start from latest for new groups
- const stream = await consumer.consume({
-   topics: ['orders'],
-   mode: 'committed',
-   fallbackMode: 'latest',
- });
+ // Production pattern
+ await helper.start({ topics: ['orders'] });
 
  // Replay all historical messages
- const stream = await consumer.consume({
-   topics: ['orders'],
-   mode: 'earliest',
- });
+ await helper.start({ topics: ['orders'], mode: 'earliest' });
 
- // Start from specific offsets
- const stream = await consumer.consume({
+ // Custom mode
+ await helper.start({
    topics: ['orders'],
-   mode: 'manual',
-   offsets: [
-     { topic: 'orders', partition: 0, offset: 100n },
-     { topic: 'orders', partition: 1, offset: 200n },
-   ],
+   mode: 'committed',
+   fallbackMode: 'earliest',
  });
  ```
 
- ---
-
- ## MessagesStream
-
- `MessagesStream` extends Node.js `Readable`. It supports three consumption patterns:
+ Guards against duplicate starts — calling `start()` twice logs a warning and returns immediately.
 
- ### Pattern 1: Async Iterator (`for await`)
-
- Best for sequential processing with backpressure:
+ ## Lag Monitoring
 
  ```typescript
- const stream = await consumer.consume({ topics: ['orders'], mode: 'committed', fallbackMode: 'latest' });
+ // Start monitoring (polls every interval)
+ helper.startLagMonitoring({ topics: ['orders'], interval: 10_000 });
 
- for await (const message of stream) {
-   await processMessage(message);
-   await message.commit();
- }
+ // Stop monitoring
+ helper.stopLagMonitoring();
  ```
 
- ### Pattern 2: Event-Based (`.on('data')`)
+ Lag data is delivered via the `onLag` callback; errors via `onLagError`.
+
+ Guards against duplicate starts — calling `startLagMonitoring()` twice logs a warning.
 
- Best for high-throughput or when you need non-blocking processing:
+ For one-time lag checks, use the underlying consumer directly:
 
  ```typescript
- const stream = await consumer.consume({ topics: ['orders'], mode: 'committed', fallbackMode: 'latest' });
+ const lag = await helper.getConsumer().getLag({ topics: ['orders'] });
+ ```
 
- stream.on('data', (message) => {
-   console.log(JSON.stringify({
-     topic: message.topic,
-     partition: message.partition,
-     offset: message.offset,
-     key: message.key,
-     value: message.value,
-     headers: Object.fromEntries(message.headers ?? new Map()),
-     timestamp: message.timestamp,
-   }, (_key, value) => (typeof value === 'bigint' ? value.toString() : value)));
+ ## Graceful Shutdown
 
-   message.commit();
- });
+ `close()` implements an ordered shutdown:
 
- stream.on('error', (err) => {
-   console.error('Stream error:', err);
- });
- ```
+ 1. Stop lag monitoring
+ 2. Close the stream (leave the consumer group)
+ 3. Close the consumer client (graceful with timeout, or force)
+ 4. Set health status to `'disconnected'`
 
- > [!WARNING]
- > `message.offset` and `message.timestamp` are `bigint`. When using `JSON.stringify`, you must provide a custom replacer:
- > ```typescript
- > JSON.stringify(data, (_key, value) => (typeof value === 'bigint' ? value.toString() : value))
- > ```
+ ```typescript
+ // Graceful (recommended)
+ await helper.close();
+
+ // Force
+ await helper.close({ isForce: true });
+ ```
 
- ### Pattern 3: Pause/Resume
+ ## Direct Stream Access
 
- Manual flow control:
+ If you don't use the callback pattern, you can access the stream directly after `start()`:
 
  ```typescript
- stream.on('data', async (message) => {
-   stream.pause();
-   await heavyProcessing(message);
-   message.commit();
-   stream.resume();
+ // After start()
+ const stream = helper.getStream();
+
+ // Or use the consumer directly (bypasses the helper's start())
+ const consumer = helper.getConsumer();
+ const rawStream = await consumer.consume({
+   topics: ['orders'],
+   mode: 'committed',
+   fallbackMode: 'latest',
  });
+
+ for await (const message of rawStream) {
+   await processMessage(message);
+   await message.commit();
+ }
  ```
 
- ### Message Object
+ ## API Reference (`@platformatic/kafka`)
 
- Each message from the stream has the following shape:
+ ### Message Object
 
  ```typescript
  interface Message<Key, Value, HeaderKey, HeaderValue> {
@@ -229,220 +287,84 @@ interface Message<Key, Value, HeaderKey, HeaderValue> {
  }
  ```
 
- | Property | Type | Description |
- |----------|------|-------------|
- | `topic` | `string` | Source topic |
- | `partition` | `number` | Source partition |
- | `offset` | `bigint` | Message offset within the partition |
- | `key` | `KeyType` | Deserialized message key |
- | `value` | `ValueType` | Deserialized message value |
- | `headers` | `Map<HK, HV>` | Deserialized message headers |
- | `timestamp` | `bigint` | Message timestamp (ms since epoch) |
- | `metadata` | `Record<string, unknown>` | Additional metadata |
- | `commit()` | `() => void \| Promise<void>` | Commit this message's offset |
-
- ### Stream Events
-
- | Event | Payload | Description |
- |-------|---------|-------------|
- | `'data'` | `Message<K,V,HK,HV>` | New message received |
- | `'error'` | `Error` | Stream error |
- | `'close'` | — | Stream closed |
- | `'end'` | — | Stream ended (all data consumed) |
- | `'autocommit'` | `(err, offsets)` | Auto-commit completed (or failed) |
- | `'fetch'` | — | Fetch request sent |
- | `'offsets'` | — | Offsets updated |
- | `'pause'` | — | Stream paused |
- | `'resume'` | — | Stream resumed |
- | `'readable'` | — | Stream has data available |
-
- ### Stream Methods
-
- | Method | Returns | Description |
- |--------|---------|-------------|
- | `close()` | `Promise<void>` | Close the stream and leave the consumer group |
- | `isActive()` | `boolean` | Whether the stream is actively consuming |
- | `isConnected()` | `boolean` | Whether the underlying connection is active |
- | `pause()` | `this` | Pause consumption (stop fetching) |
- | `resume()` | `this` | Resume consumption |
- | `[Symbol.asyncIterator]()` | `AsyncIterator<Message>` | Async iteration support |
-
- ### Stream Properties
-
- | Property | Type | Description |
- |----------|------|-------------|
- | `consumer` | `Consumer` | Reference to the parent consumer |
- | `offsetsToFetch` | `Map<string, bigint>` | Next offsets to fetch per topic-partition |
- | `offsetsToCommit` | `Map<string, CommitOptionsPartition>` | Pending commit offsets |
- | `offsetsCommitted` | `Map<string, bigint>` | Last committed offsets |
- | `committedOffsets` | `Map<string, bigint>` | Alias for `offsetsCommitted` |
-
- ---
-
- ## Offset Management
-
- ### Manual Commit
-
- When `autocommit: false`, commit offsets explicitly after processing:
+ > [!WARNING]
+ > `message.offset` and `message.timestamp` are `bigint`. When using `JSON.stringify`, provide a custom replacer:
+ > ```typescript
+ > JSON.stringify(data, (_key, value) => (typeof value === 'bigint' ? value.toString() : value))
+ > ```
 
- ```typescript
- const stream = await consumer.consume({
-   topics: ['orders'],
-   mode: 'committed',
-   fallbackMode: 'latest',
- });
+ ### MessagesStream
+
+ `MessagesStream` extends Node.js `Readable`. Three consumption patterns:
 
+ **Async Iterator** (sequential, backpressure):
+ ```typescript
  for await (const message of stream) {
-   try {
-     await processMessage(message);
-     await message.commit(); // Commit on success
-   } catch (err) {
-     // Don't commit — message will be redelivered
-     console.error('Processing failed:', err);
-   }
+   await processMessage(message);
+   await message.commit();
  }
  ```
 
- ### Auto-Commit
-
- Commit offsets automatically at a configurable interval:
+ **Event-Based** (high-throughput):
+ ```typescript
+ stream.on('data', (message) => {
+   processMessage(message);
+   message.commit();
+ });
+ ```
 
+ **Pause/Resume** (manual flow control):
  ```typescript
- const helper = KafkaConsumerHelper.newInstance({
-   // ...
-   autocommit: true, // Default interval
-   // autocommit: 5000, // Custom: commit every 5 seconds
+ stream.on('data', async (message) => {
+   stream.pause();
+   await heavyProcessing(message);
+   message.commit();
+   stream.resume();
  });
  ```
 
- ### Bulk Commit via Consumer
+ ### Offset Management
 
  ```typescript
+ // Manual commit (when autocommit: false)
+ for await (const message of stream) {
+   await processMessage(message);
+   await message.commit();
+ }
+
+ // Bulk commit
  await consumer.commit({
    offsets: [
      { topic: 'orders', partition: 0, offset: 150n, leaderEpoch: 0 },
      { topic: 'orders', partition: 1, offset: 300n, leaderEpoch: 0 },
    ],
  });
- ```
-
- ### List Offsets
 
- ```typescript
- // Current log-end offsets
+ // List offsets
  const offsets = await consumer.listOffsets({ topics: ['orders'] });
- // Map<string, bigint[]> — topic → partition offsets
-
- // Committed offsets
  const committed = await consumer.listCommittedOffsets({
    topics: [{ topic: 'orders', partitions: [0, 1, 2] }],
  });
-
- // Offsets at a specific timestamp
- const historical = await consumer.listOffsetsWithTimestamps({
-   topics: ['orders'],
-   timestamp: BigInt(Date.now() - 3600_000), // 1 hour ago
- });
- ```
-
- ---
-
- ## Lag Monitoring
-
- ```typescript
- // One-time lag check
- const lag = await consumer.getLag({ topics: ['orders'] });
- // Map<string, bigint[]> — topic → lag per partition
-
- // Continuous monitoring (emits 'consumer:lag' events)
- consumer.startLagMonitoring({ topics: ['orders'] }, 5000); // Check every 5s
-
- consumer.on('consumer:lag', (lag) => {
-   for (const [topic, partitionLags] of lag) {
-     partitionLags.forEach((lagValue, partition) => {
-       if (lagValue > 1000n) {
-         console.warn(`High lag on ${topic}[${partition}]: ${lagValue}`);
-       }
-     });
-   }
- });
-
- consumer.on('consumer:lag:error', (error) => {
-   console.error('Lag monitoring error:', error);
- });
-
- // Stop monitoring
- consumer.stopLagMonitoring();
- ```
-
- ---
-
- ## Consumer Events
-
- The `Consumer` class emits lifecycle events via `consumer.on(event, handler)`:
-
- | Event | Payload | Description |
- |-------|---------|-------------|
- | `'consumer:group:join'` | `{ groupId, memberId, generationId, isLeader, assignments }` | Joined consumer group |
- | `'consumer:group:leave'` | `{ groupId, memberId, generationId }` | Left consumer group |
- | `'consumer:group:rejoin'` | — | Rejoining group after rebalance |
- | `'consumer:group:rebalance'` | `{ groupId }` | Partition rebalance triggered |
- | `'consumer:heartbeat:start'` | `{ groupId, memberId, generationId }` | Heartbeat started |
- | `'consumer:heartbeat:cancel'` | `{ groupId, memberId, generationId }` | Heartbeat cancelled |
- | `'consumer:heartbeat:end'` | `{ groupId, memberId, generationId }` | Heartbeat completed |
- | `'consumer:heartbeat:error'` | `{ groupId, memberId, generationId, error }` | Heartbeat failed |
- | `'consumer:lag'` | `Map<string, bigint[]>` | Lag report (from `startLagMonitoring`) |
- | `'consumer:lag:error'` | `Error` | Lag monitoring error |
-
- **Base client events** (shared with Producer and Admin):
-
- | Event | Payload | Description |
- |-------|---------|-------------|
- | `'client:broker:connect'` | `{ node, host, port }` | Connected to broker |
- | `'client:broker:disconnect'` | `{ node, host, port }` | Disconnected from broker |
- | `'client:broker:failed'` | `{ node, host, port }` | Broker connection failed |
- | `'client:metadata'` | `ClusterMetadata` | Metadata refreshed |
- | `'client:close'` | — | Client closed |
-
- ```typescript
- consumer.on('consumer:group:join', ({ groupId, memberId, assignments }) => {
-   console.log(`Joined group ${groupId} as ${memberId}`);
-   console.log('Assigned partitions:', assignments);
- });
-
- consumer.on('consumer:group:rebalance', ({ groupId }) => {
-   console.log(`Rebalance triggered for group ${groupId}`);
- });
  ```
 
- ---
-
- ## Consumer Group Management
+ ### Consumer Group Management
 
  ```typescript
- // Consumer properties
  consumer.groupId;      // string
  consumer.memberId;     // string | null
  consumer.generationId; // number
  consumer.assignments;  // GroupAssignment[] | null
  consumer.isActive();   // boolean
- consumer.lastHeartbeat; // Date | null
-
- // Manually leave and rejoin group
- await consumer.leaveGroup();
- await consumer.joinGroup();
 
  // Static membership — prevents rebalance on restart
  const helper = KafkaConsumerHelper.newInstance({
-   // ...
-   groupInstanceId: 'worker-1', // Unique per instance
-   sessionTimeout: 60_000, // Longer timeout for static members
+   // ...
+   groupInstanceId: 'worker-1',
+   sessionTimeout: 60_000,
  });
  ```
 
- ---
-
- ## Consumer Group Partitioning
+ ### Consumer Group Partitioning
 
  When multiple consumers share the same `groupId`, Kafka distributes topic partitions across group members:
 
@@ -458,16 +380,5 @@ Topic "orders" (3 partitions)
  - If consumers > partitions, excess consumers sit idle
  - Messages within a partition are processed **in order**
 
- ```bash
- # Terminal 1 — gets partition 0
- bun run consumer.ts --clientId=worker-1
-
- # Terminal 2 — gets partition 1
- bun run consumer.ts --clientId=worker-2
-
- # Terminal 3 — gets partition 2
- bun run consumer.ts --clientId=worker-3
- ```
-
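The distribution can be illustrated with a simple round-robin assignment. A sketch only — Kafka's real assignors (range, round-robin, sticky) are negotiated by the group coordinator and are more involved:

```typescript
// Spread partition numbers across consumer ids, one at a time.
function roundRobinAssign(
  partitions: number[],
  consumers: string[],
): Map<string, number[]> {
  const out = new Map<string, number[]>(
    consumers.map((c): [string, number[]] => [c, []]),
  );
  partitions.forEach((p, i) => {
    out.get(consumers[i % consumers.length])!.push(p);
  });
  return out;
}

// 3 partitions, 2 consumers: one consumer ends up with two partitions.
roundRobinAssign([0, 1, 2], ['worker-1', 'worker-2']);
// worker-1 → [0, 2], worker-2 → [1]
```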
  > [!TIP]
  > Create topics with enough partitions for your expected parallelism. You can increase partitions later with `admin.createPartitions()`, but you cannot decrease them.