@venizia/ignis-docs 0.0.7-1 → 0.0.7-2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@venizia/ignis-docs",
3
- "version": "0.0.7-1",
3
+ "version": "0.0.7-2",
4
4
  "description": "Interactive documentation site and MCP (Model Context Protocol) server for the Ignis Framework. Includes a VitePress-powered documentation site with guides, API references, and best practices. Ships an MCP server (CLI: ignis-docs-mcp) with 11 tools for AI assistants to search docs, browse source code, verify dependencies, and access real-time framework knowledge. Built with Mastra MCP SDK and Fuse.js fuzzy search.",
5
5
  "keywords": [
6
6
  "ai",
@@ -0,0 +1,173 @@
1
+ # Admin
2
+
3
+ The `KafkaAdminHelper` is a thin wrapper around `@platformatic/kafka`'s `Admin`. The helper handles client creation, scoped logging, and connection lifecycle.
4
+
5
+ ```typescript
6
+ class KafkaAdminHelper extends BaseHelper
7
+ ```
8
+
9
+ > [!NOTE]
10
+ > `KafkaAdminHelper` has **no generic type parameters** — the Admin client does not deal with serialized messages.
11
+
12
+ ## Helper API
13
+
14
+ | Method | Signature | Description |
15
+ |--------|-----------|-------------|
16
+ | `newInstance(opts)` | `static newInstance(opts): KafkaAdminHelper` | Factory method |
17
+ | `getAdmin()` | `(): Admin` | Access the underlying `Admin` |
18
+ | `close()` | `(): Promise<void>` | Close the admin connection |
19
+
20
+ ## IKafkaAdminOpts
21
+
22
+ ```typescript
23
+ interface IKafkaAdminOpts extends IKafkaConnectionOptions {
24
+ identifier?: string; // Default: 'kafka-admin'
25
+ }
26
+ ```
27
+
28
+ Plus all [Connection Options](./#connection-options).
29
+
30
+ ## Basic Example
31
+
32
+ ```typescript
33
+ import { KafkaAdminHelper } from '@venizia/ignis-helpers/kafka';
34
+
35
+ const helper = KafkaAdminHelper.newInstance({
36
+ bootstrapBrokers: ['localhost:9092'],
37
+ clientId: 'my-admin',
38
+ });
39
+
40
+ const admin = helper.getAdmin();
41
+
42
+ // Create a topic with 3 partitions and 2 replicas
43
+ await admin.createTopics({ topics: ['orders'], partitions: 3, replicas: 2 });
44
+
45
+ // List all topics
46
+ const topics = await admin.listTopics();
47
+
48
+ // Describe a consumer group
49
+ const groups = await admin.describeGroups({ groups: ['order-processing'] });
50
+
51
+ await helper.close();
52
+ ```
53
+
54
+ ---
55
+
56
+ ## API Reference (`@platformatic/kafka`)
57
+
58
+ After calling `helper.getAdmin()`, you have full access to the `Admin` class.
59
+
60
+ ### Topic Management
61
+
62
+ | Method | Signature | Description |
63
+ |--------|-----------|-------------|
64
+ | `createTopics(opts)` | `(opts: { topics: string[], partitions?: number, replicas?: number, configs?: Config[] }): Promise<CreatedTopic[]>` | Create topics |
65
+ | `deleteTopics(opts)` | `(opts: { topics: string[] }): Promise<void>` | Delete topics |
66
+ | `listTopics(opts?)` | `(opts?: { includeInternals?: boolean }): Promise<string[]>` | List all topics |
67
+ | `createPartitions(opts)` | `(opts: { topics: CreatePartitionsRequestTopic[], validateOnly?: boolean }): Promise<void>` | Add partitions to existing topics |
68
+ | `deleteRecords(opts)` | `(opts: { topics: { name, partitions: { partition, offset }[] }[] }): Promise<DeletedRecordsTopic[]>` | Delete records up to offset |
69
+
70
+ ```typescript
71
+ // Create with custom configuration
72
+ await admin.createTopics({
73
+ topics: ['orders'],
74
+ partitions: 6,
75
+ replicas: 3,
76
+ configs: [
77
+ { name: 'retention.ms', value: '604800000' }, // 7 days
78
+ { name: 'cleanup.policy', value: 'compact' },
79
+ { name: 'compression.type', value: 'zstd' },
80
+ ],
81
+ });
82
+
83
+ // Add partitions (can only increase, never decrease)
84
+ await admin.createPartitions({
85
+ topics: [{ name: 'orders', count: 12 }],
86
+ });
87
+
88
+ // Delete records before offset 1000 on partition 0
89
+ await admin.deleteRecords({
90
+ topics: [{ name: 'orders', partitions: [{ partition: 0, offset: 1000n }] }],
91
+ });
92
+ ```
93
+
94
+ ### Consumer Group Management
95
+
96
+ | Method | Signature | Description |
97
+ |--------|-----------|-------------|
98
+ | `listGroups(opts?)` | `(opts?: { states?: string[], types?: string[] }): Promise<Map<string, GroupBase>>` | List consumer groups |
99
+ | `describeGroups(opts)` | `(opts: { groups: string[] }): Promise<Map<string, Group>>` | Describe consumer groups (members, assignments) |
100
+ | `deleteGroups(opts)` | `(opts: { groups: string[] }): Promise<void>` | Delete consumer groups |
101
+ | `removeMembersFromConsumerGroup(opts)` | `(opts: { groupId, members? }): Promise<void>` | Remove specific members |
102
+
103
+ ```typescript
104
+ // List all active groups
105
+ const groups = await admin.listGroups({ states: ['STABLE'] });
106
+
107
+ // Describe group members and partition assignments
108
+ const details = await admin.describeGroups({ groups: ['order-processing'] });
109
+ for (const [groupId, group] of details) {
110
+ console.log(`Group: ${groupId}, State: ${group.state}`);
111
+ for (const [memberId, member] of group.members) {
112
+ console.log(` Member: ${member.clientId} (${member.clientHost})`);
113
+ if (member.assignments) {
114
+ for (const [topic, assignment] of member.assignments) {
115
+ console.log(` ${topic}: partitions ${assignment.partitions.join(', ')}`);
116
+ }
117
+ }
118
+ }
119
+ }
120
+ ```
121
+
122
+ ### Offset Management
123
+
124
+ | Method | Signature | Description |
125
+ |--------|-----------|-------------|
126
+ | `listOffsets(opts)` | `(opts): Promise<ListedOffsetsTopic[]>` | List partition offsets at timestamps |
127
+ | `listConsumerGroupOffsets(opts)` | `(opts: { groups }): Promise<ListConsumerGroupOffsetsGroup[]>` | List committed offsets for groups |
128
+ | `alterConsumerGroupOffsets(opts)` | `(opts: { groupId, topics }): Promise<void>` | Reset/alter committed offsets |
129
+ | `deleteConsumerGroupOffsets(opts)` | `(opts: { groupId, topics }): Promise<...>` | Delete committed offsets |
130
+
131
+ ```typescript
132
+ // Reset consumer group offsets to earliest
133
+ await admin.alterConsumerGroupOffsets({
134
+ groupId: 'order-processing',
135
+ topics: [{
136
+ name: 'orders',
137
+ partitionOffsets: [
138
+ { partition: 0, offset: 0n },
139
+ { partition: 1, offset: 0n },
140
+ { partition: 2, offset: 0n },
141
+ ],
142
+ }],
143
+ });
144
+ ```
145
+
146
+ ### Configuration Management
147
+
148
+ | Method | Signature | Description |
149
+ |--------|-----------|-------------|
150
+ | `describeConfigs(opts)` | `(opts: { resources, includeSynonyms?, includeDocumentation? }): Promise<ConfigDescription[]>` | Describe broker/topic configurations |
151
+ | `alterConfigs(opts)` | `(opts: { resources, validateOnly? }): Promise<void>` | Replace topic/broker configs |
152
+ | `incrementalAlterConfigs(opts)` | `(opts: { resources, validateOnly? }): Promise<void>` | Incrementally modify configs |
153
+
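`alterConfigs` replaces a resource's whole config set, while `incrementalAlterConfigs` changes only the entries you list, so the incremental form is usually the safer choice. A minimal sketch of building such a request follows; the exact shape of a `resources` entry (the `type`/`name`/`configs` field names) is an assumption here and should be verified against the library's exported types:

```typescript
// Sketch only: the `resources` entry shape below is an assumption, not
// confirmed against @platformatic/kafka's exported AlterConfigs types.
interface ConfigEntry {
  name: string;
  value: string;
}

function buildTopicConfigUpdate(topic: string, configs: ConfigEntry[]) {
  return {
    resources: [{ type: 'topic', name: topic, configs }],
    validateOnly: true, // dry-run first; set to false to actually apply
  };
}

const opts = buildTopicConfigUpdate('orders', [
  { name: 'retention.ms', value: '86400000' }, // 1 day
]);
// await admin.incrementalAlterConfigs(opts); // hypothetical call; needs a live broker
console.log(opts.resources[0].name);
```

Using `validateOnly: true` on the first pass lets the broker reject malformed config names without touching the topic.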
154
+ ### ACL Management
155
+
156
+ | Method | Signature | Description |
157
+ |--------|-----------|-------------|
158
+ | `createAcls(opts)` | `(opts: { creations: Acl[] }): Promise<void>` | Create access control lists |
159
+ | `describeAcls(opts)` | `(opts: { filter: AclFilter }): Promise<DescribeAclsResponseResource[]>` | Describe ACLs |
160
+ | `deleteAcls(opts)` | `(opts: { filters: AclFilter[] }): Promise<Acl[]>` | Delete ACLs |
161
+
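An ACL entry follows Kafka's ACL model: a resource pattern, a principal, an operation, and a permission. The field names below mirror that wire model but are assumptions, not verified against the library's `Acl` type; check them before use:

```typescript
// Hypothetical ACL entry — field names follow Kafka's ACL wire model and are
// NOT verified against @platformatic/kafka's exported Acl type.
const readAcl = {
  resourceType: 'TOPIC',
  resourceName: 'orders',
  patternType: 'LITERAL',
  principal: 'User:order-service',
  host: '*',
  operation: 'READ',
  permissionType: 'ALLOW',
};

// await admin.createAcls({ creations: [readAcl] }); // sketch; requires a live broker
console.log(`${readAcl.principal} may ${readAcl.operation} ${readAcl.resourceName}`);
```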
162
+ ### Quota Management
163
+
164
+ | Method | Signature | Description |
165
+ |--------|-----------|-------------|
166
+ | `describeClientQuotas(opts)` | `(opts): Promise<DescribeClientQuotasResponseEntry[]>` | Describe client quotas |
167
+ | `alterClientQuotas(opts)` | `(opts): Promise<AlterClientQuotasResponseEntries[]>` | Alter client quotas |
168
+
169
+ ### Log Management
170
+
171
+ | Method | Signature | Description |
172
+ |--------|-----------|-------------|
173
+ | `describeLogDirs(opts)` | `(opts: { topics }): Promise<BrokerLogDirDescription[]>` | Describe broker log directories |
@@ -0,0 +1,473 @@
1
+ # Consumer
2
+
3
+ The `KafkaConsumerHelper` is a thin wrapper around `@platformatic/kafka`'s `Consumer`. The helper handles client creation, scoped logging, and connection lifecycle.
4
+
5
+ ```typescript
6
+ class KafkaConsumerHelper<
7
+ KeyType = string,
8
+ ValueType = string,
9
+ HeaderKeyType = string,
10
+ HeaderValueType = string,
11
+ > extends BaseHelper
12
+ ```
13
+
14
+ ## Helper API
15
+
16
+ | Method | Signature | Description |
17
+ |--------|-----------|-------------|
18
+ | `newInstance(opts)` | `static newInstance<K,V,HK,HV>(opts): KafkaConsumerHelper<K,V,HK,HV>` | Factory method |
19
+ | `getConsumer()` | `(): Consumer<KeyType, ValueType, HeaderKeyType, HeaderValueType>` | Access the underlying `Consumer` |
20
+ | `close(isForce?)` | `(isForce?: boolean): Promise<void>` | Close the consumer. Default: `force=true` |
21
+
22
+ > [!NOTE]
23
+ > Consumer defaults to `force=true` on close (unlike producer which defaults to `false`). This is because consumers should leave the group promptly to trigger faster rebalancing.
24
+
25
+ ## IKafkaConsumerOpts
26
+
27
+ ```typescript
28
+ interface IKafkaConsumerOpts<KeyType, ValueType, HeaderKeyType, HeaderValueType>
29
+ extends IKafkaConnectionOptions
30
+ ```
31
+
32
+ | Option | Type | Default | Description |
33
+ |--------|------|---------|-------------|
34
+ | `groupId` | `string` | — | Consumer group ID. **Required** |
35
+ | `identifier` | `string` | `'kafka-consumer'` | Scoped logging identifier |
36
+ | `deserializers` | `Partial<Deserializers<K,V,HK,HV>>` | — | Key/value/header deserializers. **Pass explicitly** |
37
+ | `autocommit` | `boolean \| number` | `false` | Auto-commit offsets. `true` = default interval, `number` = custom interval in ms |
38
+ | `sessionTimeout` | `number` | `30000` | Session timeout — if no heartbeat within this period, consumer is removed from group |
39
+ | `heartbeatInterval` | `number` | `3000` | Heartbeat interval — must be less than `sessionTimeout` |
40
+ | `rebalanceTimeout` | `number` | `sessionTimeout` | Max time for rebalance — defaults to `sessionTimeout` value |
41
+ | `highWaterMark` | `number` | `1024` | Stream buffer size (messages) |
42
+ | `minBytes` | `number` | `1` | Min bytes per fetch response — broker waits until this threshold |
43
+ | `maxBytes` | `number` | — | Max bytes per fetch response per partition |
44
+ | `maxWaitTime` | `number` | — | Max time (ms) broker waits for `minBytes` |
45
+ | `metadataMaxAge` | `number` | `300000` | Metadata cache TTL (ms) — how often to refresh topic/partition info |
46
+ | `groupProtocol` | `'classic' \| 'consumer'` | `'classic'` | Consumer group protocol. `'consumer'` = KIP-848 (Kafka 3.7+) |
47
+ | `groupInstanceId` | `string` | — | Static group membership ID — prevents rebalance on restart |
48
+
49
+ Plus all [Connection Options](./#connection-options).
50
+
51
+ ## Basic Example
52
+
53
+ ```typescript
54
+ import { KafkaConsumerHelper } from '@venizia/ignis-helpers/kafka';
55
+ import { stringDeserializers } from '@platformatic/kafka';
56
+
57
+ const helper = KafkaConsumerHelper.newInstance({
58
+ bootstrapBrokers: ['localhost:9092'],
59
+ clientId: 'order-consumer',
60
+ groupId: 'order-processing',
61
+ deserializers: stringDeserializers,
62
+ autocommit: false,
63
+ });
64
+
65
+ const consumer = helper.getConsumer();
66
+
67
+ const stream = await consumer.consume({
68
+ topics: ['orders'],
69
+ mode: 'committed',
70
+ fallbackMode: 'latest',
71
+ });
72
+
73
+ for await (const message of stream) {
74
+ console.log(`${message.topic}[${message.partition}] @${message.offset}: ${message.key} → ${message.value}`);
75
+ await message.commit();
76
+ }
77
+
78
+ await stream.close();
79
+ await helper.close();
80
+ ```
81
+
82
+ ---
83
+
84
+ ## API Reference (`@platformatic/kafka`)
85
+
86
+ After calling `helper.getConsumer()`, you have full access to the `Consumer` class.
87
+
88
+ ### `consumer.consume(options)`
89
+
90
+ Start consuming messages. Returns a `MessagesStream` (extends Node.js `Readable`).
91
+
92
+ ```typescript
93
+ interface ConsumeOptions<Key, Value, HeaderKey, HeaderValue> {
94
+ topics: string[];
95
+ mode?: 'latest' | 'earliest' | 'committed' | 'manual';
96
+ fallbackMode?: 'latest' | 'earliest' | 'fail';
97
+ maxFetches?: number;
98
+ offsets?: { topic: string; partition: number; offset: bigint }[];
99
+ onCorruptedMessage?: CorruptedMessageHandler;
100
+ // Plus ConsumeBaseOptions (autocommit, minBytes, maxBytes, etc.)
101
+ // Plus GroupOptions (sessionTimeout, heartbeatInterval, etc.)
102
+ }
103
+ ```
104
+
105
+ #### Stream Modes
106
+
107
+ | Mode | Description |
108
+ |------|-------------|
109
+ | `'committed'` | Resume from last committed offset. **Recommended for production** |
110
+ | `'latest'` | Start from the latest offset (skip existing messages) |
111
+ | `'earliest'` | Start from the beginning of the topic |
112
+ | `'manual'` | Start from explicitly provided offsets via `offsets` option |
113
+
114
+ #### Fallback Modes
115
+
116
+ Used when `mode: 'committed'` is set but no committed offset exists yet (e.g. for a new consumer group):
117
+
118
+ | Fallback | Description |
119
+ |----------|-------------|
120
+ | `'latest'` | Start from latest (default) — ignore historical messages |
121
+ | `'earliest'` | Start from beginning — process all historical messages |
122
+ | `'fail'` | Throw an error |
123
+
124
+ ```typescript
125
+ // Production pattern: resume from committed, start from latest for new groups
126
+ const stream = await consumer.consume({
127
+ topics: ['orders'],
128
+ mode: 'committed',
129
+ fallbackMode: 'latest',
130
+ });
131
+
132
+ // Replay all historical messages
133
+ const replayStream = await consumer.consume({
134
+ topics: ['orders'],
135
+ mode: 'earliest',
136
+ });
137
+
138
+ // Start from specific offsets
139
+ const manualStream = await consumer.consume({
140
+ topics: ['orders'],
141
+ mode: 'manual',
142
+ offsets: [
143
+ { topic: 'orders', partition: 0, offset: 100n },
144
+ { topic: 'orders', partition: 1, offset: 200n },
145
+ ],
146
+ });
147
+ ```
148
+
149
+ ---
150
+
151
+ ## MessagesStream
152
+
153
+ `MessagesStream` extends Node.js `Readable`. It supports three consumption patterns:
154
+
155
+ ### Pattern 1: Async Iterator (`for await`)
156
+
157
+ Best for sequential processing with backpressure:
158
+
159
+ ```typescript
160
+ const stream = await consumer.consume({ topics: ['orders'], mode: 'committed', fallbackMode: 'latest' });
161
+
162
+ for await (const message of stream) {
163
+ await processMessage(message);
164
+ await message.commit();
165
+ }
166
+ ```
167
+
168
+ ### Pattern 2: Event-Based (`.on('data')`)
169
+
170
+ Best for high-throughput or when you need non-blocking processing:
171
+
172
+ ```typescript
173
+ const stream = await consumer.consume({ topics: ['orders'], mode: 'committed', fallbackMode: 'latest' });
174
+
175
+ stream.on('data', (message) => {
176
+ console.log(JSON.stringify({
177
+ topic: message.topic,
178
+ partition: message.partition,
179
+ offset: message.offset,
180
+ key: message.key,
181
+ value: message.value,
182
+ headers: Object.fromEntries(message.headers ?? new Map()),
183
+ timestamp: message.timestamp,
184
+ }, (_key, value) => (typeof value === 'bigint' ? value.toString() : value)));
185
+
186
+ message.commit();
187
+ });
188
+
189
+ stream.on('error', (err) => {
190
+ console.error('Stream error:', err);
191
+ });
192
+ ```
193
+
194
+ > [!WARNING]
195
+ > `message.offset` and `message.timestamp` are `bigint`. When using `JSON.stringify`, you must provide a custom replacer:
196
+ > ```typescript
197
+ > JSON.stringify(data, (_key, value) => (typeof value === 'bigint' ? value.toString() : value))
198
+ > ```
199
+
200
+ ### Pattern 3: Pause/Resume
201
+
202
+ Manual flow control:
203
+
204
+ ```typescript
205
+ stream.on('data', async (message) => {
206
+ stream.pause();
207
+ await heavyProcessing(message);
208
+ message.commit();
209
+ stream.resume();
210
+ });
211
+ ```
212
+
213
+ ### Message Object
214
+
215
+ Each message from the stream has the following shape:
216
+
217
+ ```typescript
218
+ interface Message<Key, Value, HeaderKey, HeaderValue> {
219
+ topic: string;
220
+ key: Key;
221
+ value: Value;
222
+ partition: number;
223
+ offset: bigint;
224
+ timestamp: bigint;
225
+ headers: Map<HeaderKey, HeaderValue>;
226
+ metadata: Record<string, unknown>;
227
+ commit(callback?: (error?: Error) => void): void | Promise<void>;
228
+ toJSON(): MessageJSON<Key, Value, HeaderKey, HeaderValue>;
229
+ }
230
+ ```
231
+
232
+ | Property | Type | Description |
233
+ |----------|------|-------------|
234
+ | `topic` | `string` | Source topic |
235
+ | `partition` | `number` | Source partition |
236
+ | `offset` | `bigint` | Message offset within the partition |
237
+ | `key` | `KeyType` | Deserialized message key |
238
+ | `value` | `ValueType` | Deserialized message value |
239
+ | `headers` | `Map<HK, HV>` | Deserialized message headers |
240
+ | `timestamp` | `bigint` | Message timestamp (ms since epoch) |
241
+ | `metadata` | `Record<string, unknown>` | Additional metadata |
242
+ | `commit()` | `() => void \| Promise<void>` | Commit this message's offset |
243
+
244
+ ### Stream Events
245
+
246
+ | Event | Payload | Description |
247
+ |-------|---------|-------------|
248
+ | `'data'` | `Message<K,V,HK,HV>` | New message received |
249
+ | `'error'` | `Error` | Stream error |
250
+ | `'close'` | — | Stream closed |
251
+ | `'end'` | — | Stream ended (all data consumed) |
252
+ | `'autocommit'` | `(err, offsets)` | Auto-commit completed (or failed) |
253
+ | `'fetch'` | — | Fetch request sent |
254
+ | `'offsets'` | — | Offsets updated |
255
+ | `'pause'` | — | Stream paused |
256
+ | `'resume'` | — | Stream resumed |
257
+ | `'readable'` | — | Stream has data available |
258
+
259
+ ### Stream Methods
260
+
261
+ | Method | Returns | Description |
262
+ |--------|---------|-------------|
263
+ | `close()` | `Promise<void>` | Close the stream and leave the consumer group |
264
+ | `isActive()` | `boolean` | Whether the stream is actively consuming |
265
+ | `isConnected()` | `boolean` | Whether the underlying connection is active |
266
+ | `pause()` | `this` | Pause consumption (stop fetching) |
267
+ | `resume()` | `this` | Resume consumption |
268
+ | `[Symbol.asyncIterator]()` | `AsyncIterator<Message>` | Async iteration support |
269
+
270
+ ### Stream Properties
271
+
272
+ | Property | Type | Description |
273
+ |----------|------|-------------|
274
+ | `consumer` | `Consumer` | Reference to the parent consumer |
275
+ | `offsetsToFetch` | `Map<string, bigint>` | Next offsets to fetch per topic-partition |
276
+ | `offsetsToCommit` | `Map<string, CommitOptionsPartition>` | Pending commit offsets |
277
+ | `offsetsCommitted` | `Map<string, bigint>` | Last committed offsets |
278
+ | `committedOffsets` | `Map<string, bigint>` | Alias for `offsetsCommitted` |
279
+
280
+ ---
281
+
282
+ ## Offset Management
283
+
284
+ ### Manual Commit
285
+
286
+ When `autocommit: false`, commit offsets explicitly after processing:
287
+
288
+ ```typescript
289
+ const stream = await consumer.consume({
290
+ topics: ['orders'],
291
+ mode: 'committed',
292
+ fallbackMode: 'latest',
293
+ });
294
+
295
+ for await (const message of stream) {
296
+ try {
297
+ await processMessage(message);
298
+ await message.commit(); // Commit on success
299
+ } catch (err) {
300
+ // Don't commit — message will be redelivered
301
+ console.error('Processing failed:', err);
302
+ }
303
+ }
304
+ ```
305
+
306
+ ### Auto-Commit
307
+
308
+ Commit offsets automatically at a configurable interval:
309
+
310
+ ```typescript
311
+ const helper = KafkaConsumerHelper.newInstance({
312
+ // ...
313
+ autocommit: true, // Default interval
314
+ // autocommit: 5000, // Custom: commit every 5 seconds
315
+ });
316
+ ```
317
+
318
+ ### Bulk Commit via Consumer
319
+
320
+ ```typescript
321
+ await consumer.commit({
322
+ offsets: [
323
+ { topic: 'orders', partition: 0, offset: 150n, leaderEpoch: 0 },
324
+ { topic: 'orders', partition: 1, offset: 300n, leaderEpoch: 0 },
325
+ ],
326
+ });
327
+ ```
328
+
329
+ ### List Offsets
330
+
331
+ ```typescript
332
+ // Current log-end offsets
333
+ const offsets = await consumer.listOffsets({ topics: ['orders'] });
334
+ // Map<string, bigint[]> — topic → partition offsets
335
+
336
+ // Committed offsets
337
+ const committed = await consumer.listCommittedOffsets({
338
+ topics: [{ topic: 'orders', partitions: [0, 1, 2] }],
339
+ });
340
+
341
+ // Offsets at a specific timestamp
342
+ const historical = await consumer.listOffsetsWithTimestamps({
343
+ topics: ['orders'],
344
+ timestamp: BigInt(Date.now() - 3600_000), // 1 hour ago
345
+ });
346
+ ```
347
+
348
+ ---
349
+
350
+ ## Lag Monitoring
351
+
352
+ ```typescript
353
+ // One-time lag check
354
+ const lag = await consumer.getLag({ topics: ['orders'] });
355
+ // Map<string, bigint[]> — topic → lag per partition
356
+
357
+ // Continuous monitoring (emits 'consumer:lag' events)
358
+ consumer.startLagMonitoring({ topics: ['orders'] }, 5000); // Check every 5s
359
+
360
+ consumer.on('consumer:lag', (lag) => {
361
+ for (const [topic, partitionLags] of lag) {
362
+ partitionLags.forEach((lagValue, partition) => {
363
+ if (lagValue > 1000n) {
364
+ console.warn(`High lag on ${topic}[${partition}]: ${lagValue}`);
365
+ }
366
+ });
367
+ }
368
+ });
369
+
370
+ consumer.on('consumer:lag:error', (error) => {
371
+ console.error('Lag monitoring error:', error);
372
+ });
373
+
374
+ // Stop monitoring
375
+ consumer.stopLagMonitoring();
376
+ ```
377
+
378
+ ---
379
+
380
+ ## Consumer Events
381
+
382
+ The `Consumer` class emits lifecycle events via `consumer.on(event, handler)`:
383
+
384
+ | Event | Payload | Description |
385
+ |-------|---------|-------------|
386
+ | `'consumer:group:join'` | `{ groupId, memberId, generationId, isLeader, assignments }` | Joined consumer group |
387
+ | `'consumer:group:leave'` | `{ groupId, memberId, generationId }` | Left consumer group |
388
+ | `'consumer:group:rejoin'` | — | Rejoining group after rebalance |
389
+ | `'consumer:group:rebalance'` | `{ groupId }` | Partition rebalance triggered |
390
+ | `'consumer:heartbeat:start'` | `{ groupId, memberId, generationId }` | Heartbeat started |
391
+ | `'consumer:heartbeat:cancel'` | `{ groupId, memberId, generationId }` | Heartbeat cancelled |
392
+ | `'consumer:heartbeat:end'` | `{ groupId, memberId, generationId }` | Heartbeat completed |
393
+ | `'consumer:heartbeat:error'` | `{ groupId, memberId, generationId, error }` | Heartbeat failed |
394
+ | `'consumer:lag'` | `Map<string, bigint[]>` | Lag report (from `startLagMonitoring`) |
395
+ | `'consumer:lag:error'` | `Error` | Lag monitoring error |
396
+
397
+ **Base client events** (shared with Producer and Admin):
398
+
399
+ | Event | Payload | Description |
400
+ |-------|---------|-------------|
401
+ | `'client:broker:connect'` | `{ node, host, port }` | Connected to broker |
402
+ | `'client:broker:disconnect'` | `{ node, host, port }` | Disconnected from broker |
403
+ | `'client:broker:failed'` | `{ node, host, port }` | Broker connection failed |
404
+ | `'client:metadata'` | `ClusterMetadata` | Metadata refreshed |
405
+ | `'client:close'` | — | Client closed |
406
+
407
+ ```typescript
408
+ consumer.on('consumer:group:join', ({ groupId, memberId, assignments }) => {
409
+ console.log(`Joined group ${groupId} as ${memberId}`);
410
+ console.log('Assigned partitions:', assignments);
411
+ });
412
+
413
+ consumer.on('consumer:group:rebalance', ({ groupId }) => {
414
+ console.log(`Rebalance triggered for group ${groupId}`);
415
+ });
416
+ ```
417
+
418
+ ---
419
+
420
+ ## Consumer Group Management
421
+
422
+ ```typescript
423
+ // Consumer properties
424
+ consumer.groupId; // string
425
+ consumer.memberId; // string | null
426
+ consumer.generationId; // number
427
+ consumer.assignments; // GroupAssignment[] | null
428
+ consumer.isActive(); // boolean
429
+ consumer.lastHeartbeat; // Date | null
430
+
431
+ // Manually leave and rejoin group
432
+ await consumer.leaveGroup();
433
+ await consumer.joinGroup();
434
+
435
+ // Static membership — prevents rebalance on restart
436
+ const helper = KafkaConsumerHelper.newInstance({
437
+ // ...
438
+ groupInstanceId: 'worker-1', // Unique per instance
439
+ sessionTimeout: 60_000, // Longer timeout for static members
440
+ });
441
+ ```
442
+
443
+ ---
444
+
445
+ ## Consumer Group Partitioning
446
+
447
+ When multiple consumers share the same `groupId`, Kafka distributes topic partitions across group members:
448
+
449
+ ```
450
+ Topic "orders" (3 partitions)
451
+ ├── Partition 0 → Consumer A
452
+ ├── Partition 1 → Consumer B
453
+ └── Partition 2 → Consumer C
454
+ ```
455
+
456
+ - Each partition is assigned to **exactly one** consumer in the group
457
+ - If a consumer leaves/crashes, its partitions are redistributed (**rebalance**)
458
+ - If consumers > partitions, excess consumers sit idle
459
+ - Messages within a partition are processed **in order**
460
+
461
+ ```bash
462
+ # Terminal 1 — gets partition 0
463
+ bun run consumer.ts --clientId=worker-1
464
+
465
+ # Terminal 2 — gets partition 1
466
+ bun run consumer.ts --clientId=worker-2
467
+
468
+ # Terminal 3 — gets partition 2
469
+ bun run consumer.ts --clientId=worker-3
470
+ ```
471
+
472
+ > [!TIP]
473
+ > Create topics with enough partitions for your expected parallelism. You can increase partitions later with `admin.createPartitions()`, but you cannot decrease them.