@venizia/ignis-docs 0.0.7-1 → 0.0.7

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@venizia/ignis-docs",
-   "version": "0.0.7-1",
+   "version": "0.0.7",
    "description": "Interactive documentation site and MCP (Model Context Protocol) server for the Ignis Framework. Includes a VitePress-powered documentation site with guides, API references, and best practices. Ships an MCP server (CLI: ignis-docs-mcp) with 11 tools for AI assistants to search docs, browse source code, verify dependencies, and access real-time framework knowledge. Built with Mastra MCP SDK and Fuse.js fuzzy search.",
    "keywords": [
      "ai",
@@ -0,0 +1,178 @@
# Admin

The `KafkaAdminHelper` wraps `@platformatic/kafka`'s `Admin` with health tracking, graceful shutdown, and broker event callbacks. Use `getAdmin()` to access the full Admin API directly.

```typescript
class KafkaAdminHelper extends BaseKafkaHelper<Admin>
```

> [!NOTE]
> `KafkaAdminHelper` has **no generic type parameters** — the Admin client does not deal with serialized messages.

## Helper API

| Method | Signature | Description |
|--------|-----------|-------------|
| `newInstance(opts)` | `static newInstance(opts): KafkaAdminHelper` | Factory method |
| `getAdmin()` | `(): Admin` | Access the underlying `Admin` |
| `isHealthy()` | `(): boolean` | `true` when broker connected |
| `isReady()` | `(): boolean` | Same as `isHealthy()` |
| `getHealthStatus()` | `(): TKafkaHealthStatus` | `'connected'` \| `'disconnected'` \| `'unknown'` |
| `close(opts?)` | `(opts?: { isForce?: boolean }): Promise<void>` | Close the admin connection (default: graceful) |

## IKafkaAdminOptions

```typescript
interface IKafkaAdminOptions extends IKafkaConnectionOptions {
  identifier?: string; // Default: 'kafka-admin'
  shutdownTimeout?: number; // Default: 30000ms
  onBrokerConnect?: TKafkaBrokerEventCallback;
  onBrokerDisconnect?: TKafkaBrokerEventCallback;
}
```

Plus all [Connection Options](./#connection-options).

## Basic Example

```typescript
import { KafkaAdminHelper } from '@venizia/ignis-helpers/kafka';

const helper = KafkaAdminHelper.newInstance({
  bootstrapBrokers: ['localhost:9092'],
  clientId: 'my-admin',
  onBrokerConnect: ({ broker }) => console.log(`Connected to ${broker.host}:${broker.port}`),
  onBrokerDisconnect: ({ broker }) => console.log(`Disconnected from ${broker.host}`),
});

const admin = helper.getAdmin();

// Create a topic
await admin.createTopics({ topics: ['orders'], partitions: 3, replicas: 2 });

// List all topics
const topics = await admin.listTopics();

// Health check
helper.isHealthy(); // true when connected

// Graceful close
await helper.close();

// Or force close
await helper.close({ isForce: true });
```
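
The health methods above map naturally onto a readiness probe. A minimal sketch — the `AdminHelperLike` shape and `toProbe` function are hypothetical illustrations mirroring the Helper API table, not part of the package:

```typescript
// Hypothetical wrapper around the helper's health API shown above.
type THealthStatus = 'connected' | 'disconnected' | 'unknown';

interface AdminHelperLike {
  isHealthy(): boolean;
  getHealthStatus(): THealthStatus;
}

// Map helper health into a readiness-probe style payload.
function toProbe(helper: AdminHelperLike): { ok: boolean; status: THealthStatus } {
  return { ok: helper.isHealthy(), status: helper.getHealthStatus() };
}

// Stub standing in for a connected helper:
const stub: AdminHelperLike = { isHealthy: () => true, getHealthStatus: () => 'connected' };
console.log(toProbe(stub)); // { ok: true, status: 'connected' }
```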

## API Reference (`@platformatic/kafka`)

After calling `helper.getAdmin()`, you have full access to the `Admin` class.

### Topic Management

| Method | Signature | Description |
|--------|-----------|-------------|
| `createTopics(opts)` | `(opts: { topics: string[], partitions?: number, replicas?: number, configs?: Config[] }): Promise<CreatedTopic[]>` | Create topics |
| `deleteTopics(opts)` | `(opts: { topics: string[] }): Promise<void>` | Delete topics |
| `listTopics(opts?)` | `(opts?: { includeInternals?: boolean }): Promise<string[]>` | List all topics |
| `createPartitions(opts)` | `(opts: { topics: CreatePartitionsRequestTopic[], validateOnly?: boolean }): Promise<void>` | Add partitions to existing topics |
| `deleteRecords(opts)` | `(opts: { topics: { name, partitions: { partition, offset }[] }[] }): Promise<DeletedRecordsTopic[]>` | Delete records up to offset |

```typescript
// Create with custom configuration
await admin.createTopics({
  topics: ['orders'],
  partitions: 6,
  replicas: 3,
  configs: [
    { name: 'retention.ms', value: '604800000' }, // 7 days
    { name: 'cleanup.policy', value: 'compact' },
    { name: 'compression.type', value: 'zstd' },
  ],
});

// Add partitions (can only increase, never decrease)
await admin.createPartitions({
  topics: [{ name: 'orders', count: 12 }],
});

// Delete records before offset 1000 on partition 0
await admin.deleteRecords({
  topics: [{ name: 'orders', partitions: [{ partition: 0, offset: 1000n }] }],
});
```
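
The `retention.ms` value above is just 7 days expressed in milliseconds. A quick check of the arithmetic — the `daysToMs` helper is illustrative, not part of the package:

```typescript
// Convert a day count to the millisecond string Kafka topic configs expect.
const daysToMs = (days: number): string => String(days * 24 * 60 * 60 * 1000);

console.log(daysToMs(7)); // 604800000 — matches the retention.ms above
```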

### Consumer Group Management

| Method | Signature | Description |
|--------|-----------|-------------|
| `listGroups(opts?)` | `(opts?: { states?: string[], types?: string[] }): Promise<Map<string, GroupBase>>` | List consumer groups |
| `describeGroups(opts)` | `(opts: { groups: string[] }): Promise<Map<string, Group>>` | Describe consumer groups (members, assignments) |
| `deleteGroups(opts)` | `(opts: { groups: string[] }): Promise<void>` | Delete consumer groups |
| `removeMembersFromConsumerGroup(opts)` | `(opts: { groupId, members? }): Promise<void>` | Remove specific members |

```typescript
// List all active groups
const groups = await admin.listGroups({ states: ['STABLE'] });

// Describe group members and partition assignments
const details = await admin.describeGroups({ groups: ['order-processing'] });
for (const [groupId, group] of details) {
  console.log(`Group: ${groupId}, State: ${group.state}`);
  for (const [memberId, member] of group.members) {
    console.log(`  Member: ${member.clientId} (${member.clientHost})`);
  }
}
```

### Offset Management

| Method | Signature | Description |
|--------|-----------|-------------|
| `listOffsets(opts)` | `(opts): Promise<ListedOffsetsTopic[]>` | List partition offsets at timestamps |
| `listConsumerGroupOffsets(opts)` | `(opts: { groups }): Promise<ListConsumerGroupOffsetsGroup[]>` | List committed offsets for groups |
| `alterConsumerGroupOffsets(opts)` | `(opts: { groupId, topics }): Promise<void>` | Reset/alter committed offsets |
| `deleteConsumerGroupOffsets(opts)` | `(opts: { groupId, topics }): Promise<...>` | Delete committed offsets |

```typescript
// Reset consumer group offsets to earliest
await admin.alterConsumerGroupOffsets({
  groupId: 'order-processing',
  topics: [{
    name: 'orders',
    partitionOffsets: [
      { partition: 0, offset: 0n },
      { partition: 1, offset: 0n },
      { partition: 2, offset: 0n },
    ],
  }],
});
```
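
Writing one entry per partition gets repetitive for wide topics. A small illustrative builder for the `partitionOffsets` array above — `buildPartitionOffsets` is a hypothetical helper, not part of the library:

```typescript
// Build { partition, offset } entries for partitions 0..n-1, all set to one offset.
const buildPartitionOffsets = (partitions: number, offset: bigint) =>
  Array.from({ length: partitions }, (_, partition) => ({ partition, offset }));

console.log(buildPartitionOffsets(3, 0n));
// → [ { partition: 0, offset: 0n }, { partition: 1, offset: 0n }, { partition: 2, offset: 0n } ]
```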

### Configuration Management

| Method | Signature | Description |
|--------|-----------|-------------|
| `describeConfigs(opts)` | `(opts: { resources, includeSynonyms?, includeDocumentation? }): Promise<ConfigDescription[]>` | Describe broker/topic configurations |
| `alterConfigs(opts)` | `(opts: { resources, validateOnly? }): Promise<void>` | Replace topic/broker configs |
| `incrementalAlterConfigs(opts)` | `(opts: { resources, validateOnly? }): Promise<void>` | Incrementally modify configs |

### ACL Management

| Method | Signature | Description |
|--------|-----------|-------------|
| `createAcls(opts)` | `(opts: { creations: Acl[] }): Promise<void>` | Create access control lists |
| `describeAcls(opts)` | `(opts: { filter: AclFilter }): Promise<DescribeAclsResponseResource[]>` | Describe ACLs |
| `deleteAcls(opts)` | `(opts: { filters: AclFilter[] }): Promise<Acl[]>` | Delete ACLs |

### Quota Management

| Method | Signature | Description |
|--------|-----------|-------------|
| `describeClientQuotas(opts)` | `(opts): Promise<DescribeClientQuotasResponseEntry[]>` | Describe client quotas |
| `alterClientQuotas(opts)` | `(opts): Promise<AlterClientQuotasResponseEntries[]>` | Alter client quotas |

### Log Management

| Method | Signature | Description |
|--------|-----------|-------------|
| `describeLogDirs(opts)` | `(opts: { topics }): Promise<BrokerLogDirDescription[]>` | Describe broker log directories |
@@ -0,0 +1,384 @@
# Consumer

The `KafkaConsumerHelper` wraps `@platformatic/kafka`'s `Consumer` with health tracking, graceful shutdown, message callbacks, consumer group event callbacks, and lag monitoring.

```typescript
class KafkaConsumerHelper<
  KeyType = string,
  ValueType = string,
  HeaderKeyType = string,
  HeaderValueType = string,
> extends BaseKafkaHelper<Consumer<KeyType, ValueType, HeaderKeyType, HeaderValueType>>
```

## Helper API

| Method | Signature | Description |
|--------|-----------|-------------|
| `newInstance(opts)` | `static newInstance<K,V,HK,HV>(opts): KafkaConsumerHelper<K,V,HK,HV>` | Factory method |
| `getConsumer()` | `(): Consumer<K,V,HK,HV>` | Access the underlying `Consumer` |
| `getStream()` | `(): MessagesStream \| null` | Get the active stream (after `start()`) |
| `start(opts)` | `(opts: IKafkaConsumeStartOptions): Promise<void>` | Start consuming (creates stream, wires callbacks) |
| `startLagMonitoring(opts)` | `(opts: { topics: string[]; interval?: number }): void` | Start periodic lag monitoring |
| `stopLagMonitoring()` | `(): void` | Stop lag monitoring |
| `isHealthy()` | `(): boolean` | `true` when broker connected |
| `isReady()` | `(): boolean` | `isHealthy()` **and** `consumer.isActive()` |
| `getHealthStatus()` | `(): TKafkaHealthStatus` | `'connected'` \| `'disconnected'` \| `'unknown'` |
| `close(opts?)` | `(opts?: { isForce?: boolean }): Promise<void>` | Stop lag, close stream, close consumer |

## IKafkaConsumerOptions

```typescript
interface IKafkaConsumerOptions<KeyType, ValueType, HeaderKeyType, HeaderValueType>
  extends IKafkaConnectionOptions
```

### Consumer Configuration

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `groupId` | `string` | — | Consumer group ID. **Required** |
| `identifier` | `string` | `'kafka-consumer'` | Scoped logging identifier |
| `deserializers` | `Partial<Deserializers<K,V,HK,HV>>` | — | Key/value/header deserializers |
| `autocommit` | `boolean \| number` | `false` | Auto-commit offsets. `true` = default interval, `number` = custom interval in ms |
| `sessionTimeout` | `number` | `30000` | Session timeout — consumer removed from group if no heartbeat |
| `heartbeatInterval` | `number` | `3000` | Heartbeat interval — must be less than `sessionTimeout` |
| `rebalanceTimeout` | `number` | `sessionTimeout` | Max time for rebalance |
| `highWaterMark` | `number` | `1024` | Stream buffer size (messages) |
| `minBytes` | `number` | `1` | Min bytes per fetch response |
| `maxBytes` | `number` | — | Max bytes per fetch response per partition |
| `maxWaitTime` | `number` | — | Max time (ms) broker waits for `minBytes` |
| `metadataMaxAge` | `number` | `300000` | Metadata cache TTL (ms) |
| `groupProtocol` | `'classic' \| 'consumer'` | `'classic'` | Consumer group protocol. `'consumer'` = KIP-848 (Kafka 3.7+) |
| `groupInstanceId` | `string` | — | Static group membership ID — prevents rebalance on restart |
| `shutdownTimeout` | `number` | `30000` | Graceful shutdown timeout in ms |
| `registry` | `SchemaRegistry` | — | Schema registry for automatic deserialization |

### Lifecycle Callbacks

| Option | Type | Description |
|--------|------|-------------|
| `onBrokerConnect` | `TKafkaBrokerEventCallback` | Called when broker connects |
| `onBrokerDisconnect` | `TKafkaBrokerEventCallback` | Called when broker disconnects |

### Message Callbacks

| Option | Type | Description |
|--------|------|-------------|
| `onMessage` | `TKafkaMessageCallback<K,V,HK,HV>` | Called for each message. Receives `{ message }` |
| `onMessageDone` | `TKafkaMessageDoneCallback<K,V,HK,HV>` | Called after `onMessage` succeeds. Receives `{ message }` |
| `onMessageError` | `TKafkaMessageErrorCallback<K,V,HK,HV>` | Called on processing error. Receives `{ error, message? }` |

### Consumer Group Callbacks

| Option | Type | Description |
|--------|------|-------------|
| `onGroupJoin` | `TKafkaGroupJoinCallback` | Receives `{ groupId, memberId, generationId? }` |
| `onGroupLeave` | `TKafkaGroupLeaveCallback` | Receives `{ groupId, memberId }` |
| `onGroupRebalance` | `TKafkaGroupRebalanceCallback` | Receives `{ groupId }` |
| `onHeartbeatError` | `TKafkaHeartbeatErrorCallback` | Receives `{ error, groupId?, memberId? }` |

### Lag Monitoring Callbacks

| Option | Type | Description |
|--------|------|-------------|
| `onLag` | `TKafkaLagCallback` | Receives `{ lag }` (offsets map) |
| `onLagError` | `TKafkaLagErrorCallback` | Receives `{ error }` |

Plus all [Connection Options](./#connection-options).

## Basic Example

```typescript
import { KafkaConsumerHelper } from '@venizia/ignis-helpers/kafka';
import { stringDeserializers } from '@platformatic/kafka';

const helper = KafkaConsumerHelper.newInstance({
  bootstrapBrokers: ['localhost:9092'],
  clientId: 'order-consumer',
  groupId: 'order-processing',
  deserializers: stringDeserializers,

  // Message lifecycle
  onMessage: async ({ message }) => {
    console.log(`${message.topic}[${message.partition}] @${message.offset}: ${message.key} → ${message.value}`);
    await message.commit();
  },
  onMessageDone: ({ message }) => {
    console.log(`Done processing: ${message.key}`);
  },
  onMessageError: ({ error, message }) => {
    console.error('Processing failed:', error.message, message?.key);
  },

  // Consumer group events
  onGroupJoin: ({ groupId, memberId }) => console.log(`Joined ${groupId} as ${memberId}`),
  onGroupLeave: ({ groupId }) => console.log(`Left ${groupId}`),
  onGroupRebalance: ({ groupId }) => console.log(`Rebalance in ${groupId}`),
  onHeartbeatError: ({ error }) => console.error('Heartbeat failed:', error),

  // Broker events
  onBrokerConnect: ({ broker }) => console.log(`Connected to ${broker.host}:${broker.port}`),
  onBrokerDisconnect: ({ broker }) => console.log(`Disconnected from ${broker.host}`),

  // Lag monitoring
  onLag: ({ lag }) => {
    for (const [topic, partitionLags] of lag) {
      partitionLags.forEach((lagValue, partition) => {
        if (lagValue > 1000n) {
          console.warn(`High lag on ${topic}[${partition}]: ${lagValue}`);
        }
      });
    }
  },
  onLagError: ({ error }) => console.error('Lag monitoring error:', error),
});

// Start consuming
await helper.start({ topics: ['orders'] });

// Start lag monitoring (optional)
helper.startLagMonitoring({ topics: ['orders'], interval: 10_000 });

// Health check
helper.isHealthy(); // true when broker connected
helper.isReady();   // true when broker connected AND consumer is active

// Shutdown
await helper.close();
```

## Message Callback Flow

When `start()` is called, the helper creates a `MessagesStream` and wires the callbacks:

```
Stream 'data' event
  → onMessage({ message })
      ├── success → onMessageDone({ message })
      └── error   → onMessageError({ error, message })

Stream 'error' event
  → onMessageError({ error })   (no message available)
```

- `onMessage` is the main processing callback — do your business logic here
- `onMessageDone` fires only after `onMessage` resolves successfully — use for logging, metrics, etc.
- `onMessageError` fires if `onMessage` OR `onMessageDone` throws — use for error tracking
- The stream `'error'` event also calls `onMessageError` (without `message` since it's a stream-level error)
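
The ordering above can be sketched as a small dispatch function. This is an illustrative model of the described behavior, not the helper's actual internals — the `Callbacks` type and `dispatch` name are invented for the sketch:

```typescript
// Illustrative dispatch mirroring the callback flow described above.
type Callbacks<M> = {
  onMessage: (args: { message: M }) => void | Promise<void>;
  onMessageDone?: (args: { message: M }) => void;
  onMessageError?: (args: { error: Error; message?: M }) => void;
};

async function dispatch<M>(message: M, cb: Callbacks<M>): Promise<void> {
  try {
    await cb.onMessage({ message });
    // Runs only if onMessage resolved; a throw here still routes to onMessageError.
    cb.onMessageDone?.({ message });
  } catch (error) {
    cb.onMessageError?.({ error: error as Error, message });
  }
}
```

With this shape, an error thrown in either `onMessage` or `onMessageDone` lands in `onMessageError`, matching the bullets above.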

## start()

`start()` creates the consume stream and wires all message callbacks. It must be called explicitly after construction.

```typescript
interface IKafkaConsumeStartOptions {
  topics: string[];
  mode?: MessagesStreamModeValue; // Default: 'committed'
  fallbackMode?: MessagesStreamFallbackModeValue; // Default: 'latest'
}
```

| Mode | Description |
|------|-------------|
| `'committed'` | Resume from last committed offset. **Recommended for production** |
| `'latest'` | Start from the latest offset (skip existing messages) |
| `'earliest'` | Start from the beginning of the topic |
| `'manual'` | Start from explicitly provided offsets |

| Fallback | Description |
|----------|-------------|
| `'latest'` | Start from latest (default) — ignore historical messages |
| `'earliest'` | Start from beginning — process all historical messages |
| `'fail'` | Throw an error |

```typescript
// Production pattern
await helper.start({ topics: ['orders'] });

// Replay all historical messages
await helper.start({ topics: ['orders'], mode: 'earliest' });

// Custom mode
await helper.start({
  topics: ['orders'],
  mode: 'committed',
  fallbackMode: 'earliest',
});
```

`start()` guards against duplicate starts — calling it twice logs a warning and returns immediately.

## Lag Monitoring

```typescript
// Start monitoring (polls every interval)
helper.startLagMonitoring({ topics: ['orders'], interval: 10_000 });

// Stop monitoring
helper.stopLagMonitoring();
```

Lag data is delivered via the `onLag` callback; errors via `onLagError`.

`startLagMonitoring()` guards against duplicate starts — calling it twice logs a warning.

For one-time lag checks, use the underlying consumer directly:

```typescript
const lag = await helper.getConsumer().getLag({ topics: ['orders'] });
```
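
However you obtain it, the lag map reduces to alerts with plain iteration. A sketch assuming the topic → partition → `bigint` shape used in the `onLag` example — the `highLag` helper is not part of the library:

```typescript
// Collect partitions whose lag exceeds a threshold from a topic → partition → lag map.
function highLag(
  lag: Map<string, Map<number, bigint>>,
  threshold: bigint,
): { topic: string; partition: number; lag: bigint }[] {
  const flagged: { topic: string; partition: number; lag: bigint }[] = [];
  for (const [topic, partitions] of lag) {
    for (const [partition, value] of partitions) {
      if (value > threshold) flagged.push({ topic, partition, lag: value });
    }
  }
  return flagged;
}

const sample = new Map([['orders', new Map([[0, 50n], [1, 2500n]])]]);
console.log(highLag(sample, 1000n)); // [ { topic: 'orders', partition: 1, lag: 2500n } ]
```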

## Graceful Shutdown

`close()` implements an ordered shutdown:

1. Stop lag monitoring
2. Close the stream (leave consumer group)
3. Close the consumer client (graceful with timeout, or force)
4. Set health status to `'disconnected'`

```typescript
// Graceful (recommended)
await helper.close();

// Force
await helper.close({ isForce: true });
```

## Direct Stream Access

If you don't use the callback pattern, you can access the stream directly after `start()`:

```typescript
// After start()
const stream = helper.getStream();

// Or use the consumer directly (bypass helper's start())
const consumer = helper.getConsumer();
const directStream = await consumer.consume({
  topics: ['orders'],
  mode: 'committed',
  fallbackMode: 'latest',
});

for await (const message of directStream) {
  await processMessage(message);
  await message.commit();
}
```

## API Reference (`@platformatic/kafka`)

### Message Object

```typescript
interface Message<Key, Value, HeaderKey, HeaderValue> {
  topic: string;
  key: Key;
  value: Value;
  partition: number;
  offset: bigint;
  timestamp: bigint;
  headers: Map<HeaderKey, HeaderValue>;
  metadata: Record<string, unknown>;
  commit(callback?: (error?: Error) => void): void | Promise<void>;
  toJSON(): MessageJSON<Key, Value, HeaderKey, HeaderValue>;
}
```

> [!WARNING]
> `message.offset` and `message.timestamp` are `bigint`. When using `JSON.stringify`, provide a custom replacer:
> ```typescript
> JSON.stringify(data, (_key, value) => (typeof value === 'bigint' ? value.toString() : value))
> ```
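
For example, serializing a message-shaped object with that replacer (plain Node.js behavior, nothing library-specific):

```typescript
// JSON.stringify throws a TypeError on bigint values unless a replacer converts them.
const bigintReplacer = (_key: string, value: unknown) =>
  typeof value === 'bigint' ? value.toString() : value;

const json = JSON.stringify({ topic: 'orders', partition: 0, offset: 42n }, bigintReplacer);
console.log(json); // {"topic":"orders","partition":0,"offset":"42"}
```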

### MessagesStream

`MessagesStream` extends Node.js `Readable`. Three consumption patterns:

**Async Iterator** (sequential, backpressure):
```typescript
for await (const message of stream) {
  await processMessage(message);
  await message.commit();
}
```

**Event-Based** (high-throughput):
```typescript
stream.on('data', (message) => {
  processMessage(message);
  message.commit();
});
```

**Pause/Resume** (manual flow control):
```typescript
stream.on('data', async (message) => {
  stream.pause();
  await heavyProcessing(message);
  message.commit();
  stream.resume();
});
```

### Offset Management

```typescript
// Manual commit (when autocommit: false)
for await (const message of stream) {
  await processMessage(message);
  await message.commit();
}

// Bulk commit
await consumer.commit({
  offsets: [
    { topic: 'orders', partition: 0, offset: 150n, leaderEpoch: 0 },
    { topic: 'orders', partition: 1, offset: 300n, leaderEpoch: 0 },
  ],
});

// List offsets
const offsets = await consumer.listOffsets({ topics: ['orders'] });
const committed = await consumer.listCommittedOffsets({
  topics: [{ topic: 'orders', partitions: [0, 1, 2] }],
});
```

### Consumer Group Management

```typescript
consumer.groupId;       // string
consumer.memberId;      // string | null
consumer.generationId;  // number
consumer.assignments;   // GroupAssignment[] | null
consumer.isActive();    // boolean

// Static membership — prevents rebalance on restart
const helper = KafkaConsumerHelper.newInstance({
  // ...
  groupInstanceId: 'worker-1',
  sessionTimeout: 60_000,
});
```

### Consumer Group Partitioning

When multiple consumers share the same `groupId`, Kafka distributes topic partitions across group members:

```
Topic "orders" (3 partitions)
├── Partition 0 → Consumer A
├── Partition 1 → Consumer B
└── Partition 2 → Consumer C
```

- Each partition is assigned to **exactly one** consumer in the group
- If a consumer leaves/crashes, its partitions are redistributed (**rebalance**)
- If consumers > partitions, excess consumers sit idle
- Messages within a partition are processed **in order**
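
The distribution rules above can be modeled with a simple round-robin sketch. This is illustrative only — the library's actual partition assignor may place partitions differently:

```typescript
// Round-robin partition assignment: each partition goes to exactly one consumer.
function assignPartitions(partitions: number, consumers: string[]): Map<string, number[]> {
  const assignment = new Map<string, number[]>(
    consumers.map((c): [string, number[]] => [c, []]),
  );
  for (let p = 0; p < partitions; p++) {
    assignment.get(consumers[p % consumers.length])!.push(p);
  }
  return assignment;
}

assignPartitions(3, ['A', 'B', 'C']);      // → A:[0], B:[1], C:[2]
assignPartitions(3, ['A', 'B', 'C', 'D']); // → D:[] — the extra consumer sits idle
```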

> [!TIP]
> Create topics with enough partitions for your expected parallelism. You can increase partitions later with `admin.createPartitions()`, but you cannot decrease them.