@venizia/ignis-docs 0.0.7-2 → 0.0.8-0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/mcp-server/common/paths.d.ts +4 -2
- package/dist/mcp-server/common/paths.d.ts.map +1 -1
- package/dist/mcp-server/common/paths.js +8 -6
- package/dist/mcp-server/common/paths.js.map +1 -1
- package/dist/mcp-server/tools/docs/get-document-content.tool.d.ts +1 -1
- package/dist/mcp-server/tools/docs/get-document-content.tool.d.ts.map +1 -1
- package/dist/mcp-server/tools/docs/get-document-content.tool.js +7 -7
- package/dist/mcp-server/tools/docs/get-document-metadata.tool.js +3 -3
- package/dist/mcp-server/tools/docs/get-package-overview.tool.d.ts +1 -1
- package/dist/mcp-server/tools/docs/get-package-overview.tool.js +1 -1
- package/package.json +1 -1
- package/wiki/best-practices/api-usage-examples.md +9 -9
- package/wiki/best-practices/architectural-patterns.md +19 -3
- package/wiki/best-practices/architecture-decisions.md +6 -6
- package/wiki/best-practices/code-style-standards/advanced-patterns.md +1 -1
- package/wiki/best-practices/code-style-standards/control-flow.md +1 -1
- package/wiki/best-practices/code-style-standards/function-patterns.md +2 -2
- package/wiki/best-practices/code-style-standards/index.md +2 -2
- package/wiki/best-practices/code-style-standards/naming-conventions.md +1 -1
- package/wiki/best-practices/code-style-standards/route-definitions.md +4 -4
- package/wiki/best-practices/data-modeling.md +1 -1
- package/wiki/best-practices/deployment-strategies.md +1 -1
- package/wiki/best-practices/error-handling.md +2 -2
- package/wiki/best-practices/performance-optimization.md +3 -3
- package/wiki/best-practices/security-guidelines.md +2 -2
- package/wiki/best-practices/troubleshooting-tips.md +1 -1
- package/wiki/{references → extensions}/components/authentication/api.md +12 -20
- package/wiki/{references → extensions}/components/authentication/errors.md +1 -1
- package/wiki/{references → extensions}/components/authentication/index.md +5 -8
- package/wiki/{references → extensions}/components/authentication/usage.md +20 -36
- package/wiki/{references → extensions}/components/authorization/api.md +62 -13
- package/wiki/{references → extensions}/components/authorization/errors.md +12 -7
- package/wiki/{references → extensions}/components/authorization/index.md +93 -6
- package/wiki/{references → extensions}/components/authorization/usage.md +42 -4
- package/wiki/{references → extensions}/components/health-check.md +5 -4
- package/wiki/{references → extensions}/components/index.md +2 -0
- package/wiki/{references → extensions}/components/mail/index.md +1 -1
- package/wiki/{references → extensions}/components/request-tracker.md +1 -1
- package/wiki/{references → extensions}/components/socket-io/api.md +2 -2
- package/wiki/{references → extensions}/components/socket-io/errors.md +2 -0
- package/wiki/{references → extensions}/components/socket-io/index.md +24 -20
- package/wiki/{references → extensions}/components/socket-io/usage.md +2 -2
- package/wiki/{references → extensions}/components/static-asset/api.md +14 -15
- package/wiki/{references → extensions}/components/static-asset/errors.md +3 -1
- package/wiki/{references → extensions}/components/static-asset/index.md +158 -89
- package/wiki/{references → extensions}/components/static-asset/usage.md +8 -5
- package/wiki/{references → extensions}/components/swagger.md +3 -3
- package/wiki/{references → extensions}/components/template/index.md +4 -4
- package/wiki/{references → extensions}/components/template/setup-page.md +1 -1
- package/wiki/{references → extensions}/components/template/single-page.md +1 -1
- package/wiki/{references → extensions}/components/websocket/api.md +7 -6
- package/wiki/{references → extensions}/components/websocket/errors.md +17 -3
- package/wiki/{references → extensions}/components/websocket/index.md +17 -11
- package/wiki/{references → extensions}/components/websocket/usage.md +2 -2
- package/wiki/{references → extensions}/helpers/crypto/index.md +1 -1
- package/wiki/{references → extensions}/helpers/env/index.md +9 -5
- package/wiki/{references → extensions}/helpers/error/index.md +2 -7
- package/wiki/{references → extensions}/helpers/index.md +18 -6
- package/wiki/{references → extensions}/helpers/kafka/admin.md +33 -16
- package/wiki/extensions/helpers/kafka/consumer.md +384 -0
- package/wiki/extensions/helpers/kafka/examples.md +361 -0
- package/wiki/extensions/helpers/kafka/index.md +639 -0
- package/wiki/{references → extensions}/helpers/kafka/producer.md +100 -96
- package/wiki/extensions/helpers/kafka/schema-registry.md +214 -0
- package/wiki/{references → extensions}/helpers/logger/index.md +2 -2
- package/wiki/{references → extensions}/helpers/queue/index.md +400 -4
- package/wiki/{references → extensions}/helpers/storage/api.md +170 -10
- package/wiki/{references → extensions}/helpers/storage/index.md +44 -8
- package/wiki/{references → extensions}/helpers/template/index.md +1 -1
- package/wiki/{references → extensions}/helpers/testing/index.md +4 -4
- package/wiki/{references → extensions}/helpers/types/index.md +63 -16
- package/wiki/{references → extensions}/helpers/websocket/index.md +1 -1
- package/wiki/extensions/index.md +48 -0
- package/wiki/guides/core-concepts/application/bootstrapping.md +55 -37
- package/wiki/guides/core-concepts/application/index.md +95 -35
- package/wiki/guides/core-concepts/components-guide.md +23 -19
- package/wiki/guides/core-concepts/components.md +34 -10
- package/wiki/guides/core-concepts/dependency-injection.md +99 -34
- package/wiki/guides/core-concepts/grpc-controllers.md +295 -0
- package/wiki/guides/core-concepts/persistent/datasources.md +27 -8
- package/wiki/guides/core-concepts/persistent/models.md +43 -1
- package/wiki/guides/core-concepts/persistent/repositories.md +75 -8
- package/wiki/guides/core-concepts/persistent/transactions.md +38 -8
- package/wiki/guides/core-concepts/{controllers.md → rest-controllers.md} +30 -33
- package/wiki/guides/core-concepts/services.md +19 -5
- package/wiki/guides/get-started/5-minute-quickstart.md +6 -7
- package/wiki/guides/get-started/philosophy.md +1 -1
- package/wiki/guides/index.md +2 -2
- package/wiki/guides/reference/glossary.md +7 -7
- package/wiki/guides/reference/mcp-docs-server.md +1 -1
- package/wiki/guides/tutorials/building-a-crud-api.md +2 -2
- package/wiki/guides/tutorials/complete-installation.md +17 -14
- package/wiki/guides/tutorials/ecommerce-api.md +18 -18
- package/wiki/guides/tutorials/realtime-chat.md +8 -8
- package/wiki/guides/tutorials/testing.md +2 -2
- package/wiki/index.md +4 -3
- package/wiki/references/base/application.md +341 -21
- package/wiki/references/base/bootstrapping.md +43 -13
- package/wiki/references/base/components.md +259 -8
- package/wiki/references/base/controllers.md +556 -253
- package/wiki/references/base/datasources.md +159 -79
- package/wiki/references/base/dependency-injection.md +299 -48
- package/wiki/references/base/filter-system/application-usage.md +18 -2
- package/wiki/references/base/filter-system/array-operators.md +14 -6
- package/wiki/references/base/filter-system/comparison-operators.md +9 -3
- package/wiki/references/base/filter-system/default-filter.md +28 -3
- package/wiki/references/base/filter-system/fields-order-pagination.md +17 -13
- package/wiki/references/base/filter-system/index.md +169 -11
- package/wiki/references/base/filter-system/json-filtering.md +51 -18
- package/wiki/references/base/filter-system/list-operators.md +4 -3
- package/wiki/references/base/filter-system/logical-operators.md +7 -2
- package/wiki/references/base/filter-system/null-operators.md +50 -0
- package/wiki/references/base/filter-system/quick-reference.md +82 -243
- package/wiki/references/base/filter-system/range-operators.md +7 -1
- package/wiki/references/base/filter-system/tips.md +34 -7
- package/wiki/references/base/filter-system/use-cases.md +6 -5
- package/wiki/references/base/grpc-controllers.md +984 -0
- package/wiki/references/base/index.md +32 -24
- package/wiki/references/base/middleware.md +347 -0
- package/wiki/references/base/models.md +390 -46
- package/wiki/references/base/providers.md +14 -14
- package/wiki/references/base/repositories/advanced.md +84 -69
- package/wiki/references/base/repositories/index.md +447 -12
- package/wiki/references/base/repositories/mixins.md +103 -98
- package/wiki/references/base/repositories/relations.md +129 -45
- package/wiki/references/base/repositories/soft-deletable.md +104 -23
- package/wiki/references/base/services.md +94 -14
- package/wiki/references/index.md +12 -10
- package/wiki/references/quick-reference.md +98 -65
- package/wiki/references/utilities/crypto.md +21 -4
- package/wiki/references/utilities/date.md +25 -7
- package/wiki/references/utilities/index.md +26 -24
- package/wiki/references/utilities/jsx.md +54 -54
- package/wiki/references/utilities/module.md +8 -6
- package/wiki/references/utilities/parse.md +16 -9
- package/wiki/references/utilities/performance.md +22 -7
- package/wiki/references/utilities/promise.md +19 -16
- package/wiki/references/utilities/request.md +48 -26
- package/wiki/references/utilities/schema.md +69 -6
- package/wiki/references/utilities/statuses.md +131 -140
- package/wiki/references/helpers/kafka/consumer.md +0 -473
- package/wiki/references/helpers/kafka/examples.md +0 -234
- package/wiki/references/helpers/kafka/index.md +0 -482
- /package/wiki/{references → extensions}/components/mail/api.md +0 -0
- /package/wiki/{references → extensions}/components/mail/errors.md +0 -0
- /package/wiki/{references → extensions}/components/mail/usage.md +0 -0
- /package/wiki/{references → extensions}/components/template/api-page.md +0 -0
- /package/wiki/{references → extensions}/components/template/errors-page.md +0 -0
- /package/wiki/{references → extensions}/components/template/usage-page.md +0 -0
- /package/wiki/{references → extensions}/helpers/cron/index.md +0 -0
- /package/wiki/{references → extensions}/helpers/inversion/index.md +0 -0
- /package/wiki/{references → extensions}/helpers/network/api.md +0 -0
- /package/wiki/{references → extensions}/helpers/network/index.md +0 -0
- /package/wiki/{references → extensions}/helpers/redis/index.md +0 -0
- /package/wiki/{references → extensions}/helpers/socket-io/api.md +0 -0
- /package/wiki/{references → extensions}/helpers/socket-io/index.md +0 -0
- /package/wiki/{references → extensions}/helpers/template/single-page.md +0 -0
- /package/wiki/{references → extensions}/helpers/uid/index.md +0 -0
- /package/wiki/{references → extensions}/helpers/websocket/api.md +0 -0
- /package/wiki/{references → extensions}/helpers/worker-thread/index.md +0 -0
- /package/wiki/{references → extensions}/src-details/mcp-server.md +0 -0
@@ -1,473 +0,0 @@
-# Consumer
-
-The `KafkaConsumerHelper` is a thin wrapper around `@platformatic/kafka`'s `Consumer`. It manages creation, logging, and lifecycle.
-
-```typescript
-class KafkaConsumerHelper<
-  KeyType = string,
-  ValueType = string,
-  HeaderKeyType = string,
-  HeaderValueType = string,
-> extends BaseHelper
-```
-
-## Helper API
-
-| Method | Signature | Description |
-|--------|-----------|-------------|
-| `newInstance(opts)` | `static newInstance<K,V,HK,HV>(opts): KafkaConsumerHelper<K,V,HK,HV>` | Factory method |
-| `getConsumer()` | `(): Consumer<KeyType, ValueType, HeaderKeyType, HeaderValueType>` | Access the underlying `Consumer` |
-| `close(isForce?)` | `(isForce?: boolean): Promise<void>` | Close the consumer. Default: `force=true` |
-
-> [!NOTE]
-> Consumer defaults to `force=true` on close (unlike producer which defaults to `false`). This is because consumers should leave the group promptly to trigger faster rebalancing.
-
-## IKafkaConsumerOpts
-
-```typescript
-interface IKafkaConsumerOpts<KeyType, ValueType, HeaderKeyType, HeaderValueType>
-  extends IKafkaConnectionOptions
-```
-
-| Option | Type | Default | Description |
-|--------|------|---------|-------------|
-| `groupId` | `string` | — | Consumer group ID. **Required** |
-| `identifier` | `string` | `'kafka-consumer'` | Scoped logging identifier |
-| `deserializers` | `Partial<Deserializers<K,V,HK,HV>>` | — | Key/value/header deserializers. **Pass explicitly** |
-| `autocommit` | `boolean \| number` | `false` | Auto-commit offsets. `true` = default interval, `number` = custom interval in ms |
-| `sessionTimeout` | `number` | `30000` | Session timeout — if no heartbeat within this period, consumer is removed from group |
-| `heartbeatInterval` | `number` | `3000` | Heartbeat interval — must be less than `sessionTimeout` |
-| `rebalanceTimeout` | `number` | `sessionTimeout` | Max time for rebalance — defaults to `sessionTimeout` value |
-| `highWaterMark` | `number` | `1024` | Stream buffer size (messages) |
-| `minBytes` | `number` | `1` | Min bytes per fetch response — broker waits until this threshold |
-| `maxBytes` | `number` | — | Max bytes per fetch response per partition |
-| `maxWaitTime` | `number` | — | Max time (ms) broker waits for `minBytes` |
-| `metadataMaxAge` | `number` | `300000` | Metadata cache TTL (ms) — how often to refresh topic/partition info |
-| `groupProtocol` | `'classic' \| 'consumer'` | `'classic'` | Consumer group protocol. `'consumer'` = KIP-848 (Kafka 3.7+) |
-| `groupInstanceId` | `string` | — | Static group membership ID — prevents rebalance on restart |
-
-Plus all [Connection Options](./#connection-options).
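The timing constraints in the table above (`heartbeatInterval` must be less than `sessionTimeout`) can be checked before constructing a helper. A hypothetical pre-flight sketch — `validateConsumerTimings` is not part of the helper API, and the 1/3 rule of thumb is our assumption, not a library requirement:

```typescript
// Hypothetical sanity check for the timing options documented above.
// Not part of KafkaConsumerHelper; shown only to illustrate the constraints.
interface ConsumerTimings {
  sessionTimeout?: number;    // default 30000
  heartbeatInterval?: number; // default 3000
}

function validateConsumerTimings(opts: ConsumerTimings): string[] {
  const session = opts.sessionTimeout ?? 30_000;
  const heartbeat = opts.heartbeatInterval ?? 3_000;
  const findings: string[] = [];
  if (heartbeat >= session) {
    findings.push(`heartbeatInterval (${heartbeat}) must be less than sessionTimeout (${session})`);
  }
  // Common rule of thumb (assumption): heartbeat at most 1/3 of the session timeout,
  // so several heartbeats can be missed before the broker evicts the member.
  if (heartbeat > session / 3) {
    findings.push(`heartbeatInterval (${heartbeat}) should be at most sessionTimeout / 3`);
  }
  return findings;
}

// validateConsumerTimings({}) → [] (the documented defaults are consistent)
```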
-
-## Basic Example
-
-```typescript
-import { KafkaConsumerHelper } from '@venizia/ignis-helpers/kafka';
-import { stringDeserializers } from '@platformatic/kafka';
-
-const helper = KafkaConsumerHelper.newInstance({
-  bootstrapBrokers: ['localhost:9092'],
-  clientId: 'order-consumer',
-  groupId: 'order-processing',
-  deserializers: stringDeserializers,
-  autocommit: false,
-});
-
-const consumer = helper.getConsumer();
-
-const stream = await consumer.consume({
-  topics: ['orders'],
-  mode: 'committed',
-  fallbackMode: 'latest',
-});
-
-for await (const message of stream) {
-  console.log(`${message.topic}[${message.partition}] @${message.offset}: ${message.key} → ${message.value}`);
-  await message.commit();
-}
-
-await stream.close();
-await helper.close();
-```
-
----
-
-## API Reference (`@platformatic/kafka`)
-
-After calling `helper.getConsumer()`, you have full access to the `Consumer` class.
-
-### `consumer.consume(options)`
-
-Start consuming messages. Returns a `MessagesStream` (extends Node.js `Readable`).
-
-```typescript
-interface ConsumeOptions<Key, Value, HeaderKey, HeaderValue> {
-  topics: string[];
-  mode?: 'latest' | 'earliest' | 'committed' | 'manual';
-  fallbackMode?: 'latest' | 'earliest' | 'fail';
-  maxFetches?: number;
-  offsets?: { topic: string; partition: number; offset: bigint }[];
-  onCorruptedMessage?: CorruptedMessageHandler;
-  // Plus ConsumeBaseOptions (autocommit, minBytes, maxBytes, etc.)
-  // Plus GroupOptions (sessionTimeout, heartbeatInterval, etc.)
-}
-```
-
-#### Stream Modes
-
-| Mode | Description |
-|------|-------------|
-| `'committed'` | Resume from last committed offset. **Recommended for production** |
-| `'latest'` | Start from the latest offset (skip existing messages) |
-| `'earliest'` | Start from the beginning of the topic |
-| `'manual'` | Start from explicitly provided offsets via `offsets` option |
-
-#### Fallback Modes
-
-Used when `mode: 'committed'` but no committed offset exists (new consumer group):
-
-| Fallback | Description |
-|----------|-------------|
-| `'latest'` | Start from latest (default) — ignore historical messages |
-| `'earliest'` | Start from beginning — process all historical messages |
-| `'fail'` | Throw an error |
-
-```typescript
-// Production pattern: resume from committed, start from latest for new groups
-const stream = await consumer.consume({
-  topics: ['orders'],
-  mode: 'committed',
-  fallbackMode: 'latest',
-});
-
-// Replay all historical messages
-const stream = await consumer.consume({
-  topics: ['orders'],
-  mode: 'earliest',
-});
-
-// Start from specific offsets
-const stream = await consumer.consume({
-  topics: ['orders'],
-  mode: 'manual',
-  offsets: [
-    { topic: 'orders', partition: 0, offset: 100n },
-    { topic: 'orders', partition: 1, offset: 200n },
-  ],
-});
-```
-
----
-
-## MessagesStream
-
-`MessagesStream` extends Node.js `Readable`. It supports three consumption patterns:
-
-### Pattern 1: Async Iterator (`for await`)
-
-Best for sequential processing with backpressure:
-
-```typescript
-const stream = await consumer.consume({ topics: ['orders'], mode: 'committed', fallbackMode: 'latest' });
-
-for await (const message of stream) {
-  await processMessage(message);
-  await message.commit();
-}
-```
-
-### Pattern 2: Event-Based (`.on('data')`)
-
-Best for high-throughput or when you need non-blocking processing:
-
-```typescript
-const stream = await consumer.consume({ topics: ['orders'], mode: 'committed', fallbackMode: 'latest' });
-
-stream.on('data', (message) => {
-  console.log(JSON.stringify({
-    topic: message.topic,
-    partition: message.partition,
-    offset: message.offset,
-    key: message.key,
-    value: message.value,
-    headers: Object.fromEntries(message.headers ?? new Map()),
-    timestamp: message.timestamp,
-  }, (_key, value) => (typeof value === 'bigint' ? value.toString() : value)));
-
-  message.commit();
-});
-
-stream.on('error', (err) => {
-  console.error('Stream error:', err);
-});
-```
-
-> [!WARNING]
-> `message.offset` and `message.timestamp` are `bigint`. When using `JSON.stringify`, you must provide a custom replacer:
-> ```typescript
-> JSON.stringify(data, (_key, value) => (typeof value === 'bigint' ? value.toString() : value))
-> ```
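The replacer shown in the warning can be wrapped once and reused. A small sketch — the `stringifyWithBigInt` name is ours, not from the library:

```typescript
// Reusable JSON.stringify wrapper that converts bigint values to strings,
// matching the replacer from the warning above. Hypothetical helper name.
function stringifyWithBigInt(data: unknown, space?: number): string {
  return JSON.stringify(
    data,
    (_key, value) => (typeof value === 'bigint' ? value.toString() : value),
    space,
  );
}

// stringifyWithBigInt({ offset: 42n, topic: 'orders' }) → '{"offset":"42","topic":"orders"}'
```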
-
-### Pattern 3: Pause/Resume
-
-Manual flow control:
-
-```typescript
-stream.on('data', async (message) => {
-  stream.pause();
-  await heavyProcessing(message);
-  message.commit();
-  stream.resume();
-});
-```
-
-### Message Object
-
-Each message from the stream has the following shape:
-
-```typescript
-interface Message<Key, Value, HeaderKey, HeaderValue> {
-  topic: string;
-  key: Key;
-  value: Value;
-  partition: number;
-  offset: bigint;
-  timestamp: bigint;
-  headers: Map<HeaderKey, HeaderValue>;
-  metadata: Record<string, unknown>;
-  commit(callback?: (error?: Error) => void): void | Promise<void>;
-  toJSON(): MessageJSON<Key, Value, HeaderKey, HeaderValue>;
-}
-```
-
-| Property | Type | Description |
-|----------|------|-------------|
-| `topic` | `string` | Source topic |
-| `partition` | `number` | Source partition |
-| `offset` | `bigint` | Message offset within the partition |
-| `key` | `KeyType` | Deserialized message key |
-| `value` | `ValueType` | Deserialized message value |
-| `headers` | `Map<HK, HV>` | Deserialized message headers |
-| `timestamp` | `bigint` | Message timestamp (ms since epoch) |
-| `metadata` | `Record<string, unknown>` | Additional metadata |
-| `commit()` | `() => void \| Promise<void>` | Commit this message's offset |
-
-### Stream Events
-
-| Event | Payload | Description |
-|-------|---------|-------------|
-| `'data'` | `Message<K,V,HK,HV>` | New message received |
-| `'error'` | `Error` | Stream error |
-| `'close'` | — | Stream closed |
-| `'end'` | — | Stream ended (all data consumed) |
-| `'autocommit'` | `(err, offsets)` | Auto-commit completed (or failed) |
-| `'fetch'` | — | Fetch request sent |
-| `'offsets'` | — | Offsets updated |
-| `'pause'` | — | Stream paused |
-| `'resume'` | — | Stream resumed |
-| `'readable'` | — | Stream has data available |
-
-### Stream Methods
-
-| Method | Returns | Description |
-|--------|---------|-------------|
-| `close()` | `Promise<void>` | Close the stream and leave the consumer group |
-| `isActive()` | `boolean` | Whether the stream is actively consuming |
-| `isConnected()` | `boolean` | Whether the underlying connection is active |
-| `pause()` | `this` | Pause consumption (stop fetching) |
-| `resume()` | `this` | Resume consumption |
-| `[Symbol.asyncIterator]()` | `AsyncIterator<Message>` | Async iteration support |
-
-### Stream Properties
-
-| Property | Type | Description |
-|----------|------|-------------|
-| `consumer` | `Consumer` | Reference to the parent consumer |
-| `offsetsToFetch` | `Map<string, bigint>` | Next offsets to fetch per topic-partition |
-| `offsetsToCommit` | `Map<string, CommitOptionsPartition>` | Pending commit offsets |
-| `offsetsCommitted` | `Map<string, bigint>` | Last committed offsets |
-| `committedOffsets` | `Map<string, bigint>` | Alias for `offsetsCommitted` |
-
----
-
-## Offset Management
-
-### Manual Commit
-
-When `autocommit: false`, commit offsets explicitly after processing:
-
-```typescript
-const stream = await consumer.consume({
-  topics: ['orders'],
-  mode: 'committed',
-  fallbackMode: 'latest',
-});
-
-for await (const message of stream) {
-  try {
-    await processMessage(message);
-    await message.commit(); // Commit on success
-  } catch (err) {
-    // Don't commit — message will be redelivered
-    console.error('Processing failed:', err);
-  }
-}
-```
-
-### Auto-Commit
-
-Commit offsets automatically at a configurable interval:
-
-```typescript
-const helper = KafkaConsumerHelper.newInstance({
-  // ...
-  autocommit: true, // Default interval
-  // autocommit: 5000, // Custom: commit every 5 seconds
-});
-```
-
-### Bulk Commit via Consumer
-
-```typescript
-await consumer.commit({
-  offsets: [
-    { topic: 'orders', partition: 0, offset: 150n, leaderEpoch: 0 },
-    { topic: 'orders', partition: 1, offset: 300n, leaderEpoch: 0 },
-  ],
-});
-```
-
-### List Offsets
-
-```typescript
-// Current log-end offsets
-const offsets = await consumer.listOffsets({ topics: ['orders'] });
-// Map<string, bigint[]> — topic → partition offsets
-
-// Committed offsets
-const committed = await consumer.listCommittedOffsets({
-  topics: [{ topic: 'orders', partitions: [0, 1, 2] }],
-});
-
-// Offsets at a specific timestamp
-const historical = await consumer.listOffsetsWithTimestamps({
-  topics: ['orders'],
-  timestamp: BigInt(Date.now() - 3600_000), // 1 hour ago
-});
-```
-
----
-
-## Lag Monitoring
-
-```typescript
-// One-time lag check
-const lag = await consumer.getLag({ topics: ['orders'] });
-// Map<string, bigint[]> — topic → lag per partition
-
-// Continuous monitoring (emits 'consumer:lag' events)
-consumer.startLagMonitoring({ topics: ['orders'] }, 5000); // Check every 5s
-
-consumer.on('consumer:lag', (lag) => {
-  for (const [topic, partitionLags] of lag) {
-    partitionLags.forEach((lagValue, partition) => {
-      if (lagValue > 1000n) {
-        console.warn(`High lag on ${topic}[${partition}]: ${lagValue}`);
-      }
-    });
-  }
-});
-
-consumer.on('consumer:lag:error', (error) => {
-  console.error('Lag monitoring error:', error);
-});
-
-// Stop monitoring
-consumer.stopLagMonitoring();
-```
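Since the lag report is a `Map<string, bigint[]>` of per-partition values, an aggregate figure has to be reduced from it. A small sketch — `totalLag` is a hypothetical helper, not part of the library:

```typescript
// Sum per-partition lag values from the Map<string, bigint[]> shape
// documented above. Hypothetical helper, not part of @platformatic/kafka.
function totalLag(lag: Map<string, bigint[]>): bigint {
  let total = 0n;
  for (const partitionLags of lag.values()) {
    for (const value of partitionLags) total += value;
  }
  return total;
}

// Example: alert on aggregate lag across all monitored topics
const snapshot = new Map<string, bigint[]>([['orders', [120n, 30n, 0n]]]);
if (totalLag(snapshot) > 100n) {
  console.warn(`Aggregate lag: ${totalLag(snapshot)}`);
}
```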
-
----
-
-## Consumer Events
-
-The `Consumer` class emits lifecycle events via `consumer.on(event, handler)`:
-
-| Event | Payload | Description |
-|-------|---------|-------------|
-| `'consumer:group:join'` | `{ groupId, memberId, generationId, isLeader, assignments }` | Joined consumer group |
-| `'consumer:group:leave'` | `{ groupId, memberId, generationId }` | Left consumer group |
-| `'consumer:group:rejoin'` | — | Rejoining group after rebalance |
-| `'consumer:group:rebalance'` | `{ groupId }` | Partition rebalance triggered |
-| `'consumer:heartbeat:start'` | `{ groupId, memberId, generationId }` | Heartbeat started |
-| `'consumer:heartbeat:cancel'` | `{ groupId, memberId, generationId }` | Heartbeat cancelled |
-| `'consumer:heartbeat:end'` | `{ groupId, memberId, generationId }` | Heartbeat completed |
-| `'consumer:heartbeat:error'` | `{ groupId, memberId, generationId, error }` | Heartbeat failed |
-| `'consumer:lag'` | `Map<string, bigint[]>` | Lag report (from `startLagMonitoring`) |
-| `'consumer:lag:error'` | `Error` | Lag monitoring error |
-
-**Base client events** (shared with Producer and Admin):
-
-| Event | Payload | Description |
-|-------|---------|-------------|
-| `'client:broker:connect'` | `{ node, host, port }` | Connected to broker |
-| `'client:broker:disconnect'` | `{ node, host, port }` | Disconnected from broker |
-| `'client:broker:failed'` | `{ node, host, port }` | Broker connection failed |
-| `'client:metadata'` | `ClusterMetadata` | Metadata refreshed |
-| `'client:close'` | — | Client closed |
-
-```typescript
-consumer.on('consumer:group:join', ({ groupId, memberId, assignments }) => {
-  console.log(`Joined group ${groupId} as ${memberId}`);
-  console.log('Assigned partitions:', assignments);
-});
-
-consumer.on('consumer:group:rebalance', ({ groupId }) => {
-  console.log(`Rebalance triggered for group ${groupId}`);
-});
-```
-
----
-
-## Consumer Group Management
-
-```typescript
-// Consumer properties
-consumer.groupId;       // string
-consumer.memberId;      // string | null
-consumer.generationId;  // number
-consumer.assignments;   // GroupAssignment[] | null
-consumer.isActive();    // boolean
-consumer.lastHeartbeat; // Date | null
-
-// Manually leave and rejoin group
-await consumer.leaveGroup();
-await consumer.joinGroup();
-
-// Static membership — prevents rebalance on restart
-const helper = KafkaConsumerHelper.newInstance({
-  // ...
-  groupInstanceId: 'worker-1', // Unique per instance
-  sessionTimeout: 60_000,      // Longer timeout for static members
-});
-```
-
----
-
-## Consumer Group Partitioning
-
-When multiple consumers share the same `groupId`, Kafka distributes topic partitions across group members:
-
-```
-Topic "orders" (3 partitions)
-├── Partition 0 → Consumer A
-├── Partition 1 → Consumer B
-└── Partition 2 → Consumer C
-```
-
-- Each partition is assigned to **exactly one** consumer in the group
-- If a consumer leaves/crashes, its partitions are redistributed (**rebalance**)
-- If consumers > partitions, excess consumers sit idle
-- Messages within a partition are processed **in order**
-
-```bash
-# Terminal 1 — gets partition 0
-bun run consumer.ts --clientId=worker-1
-
-# Terminal 2 — gets partition 1
-bun run consumer.ts --clientId=worker-2
-
-# Terminal 3 — gets partition 2
-bun run consumer.ts --clientId=worker-3
-```
-
-> [!TIP]
-> Create topics with enough partitions for your expected parallelism. You can increase partitions later with `admin.createPartitions()`, but you cannot decrease them.
|
|

# Examples & Troubleshooting

Complete examples and common issue resolution for the Kafka helpers.

## Producer: Interval-Based Message Sending

```typescript
import { Producer, serializersFrom, jsonSerializer, stringSerializer } from '@platformatic/kafka';

const producer = new Producer({
  clientId: 'interval-producer',
  bootstrapBrokers: ['broker1:9092', 'broker2:9092', 'broker3:9092'],
  serializers: { ...serializersFrom(jsonSerializer), key: stringSerializer },
  sasl: { mechanism: 'SCRAM-SHA-512', username: 'user', password: 'pass' },
  connectTimeout: 30_000,
  requestTimeout: 30_000,
});

let count = 0;
const interval = setInterval(async () => {
  const key = `key-${count % 3}`;
  const value = { index: count, timestamp: new Date().toISOString() };

  await producer.send({ messages: [{ topic: 'events', key, value }] });
  console.log(JSON.stringify({ topic: 'events', key, value }));
  count++;
}, 100);

process.on('SIGINT', async () => {
  clearInterval(interval);
  console.log(`Shutting down... (sent ${count} messages)`);
  await producer.close();
  process.exit(0);
});
```
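One detail worth calling out: the key cycles through only three values, and with Kafka's default keyed partitioning a given key always maps to the same partition, so per-key ordering is preserved. A tiny sketch of the cycling (`keyFor` is a hypothetical helper that just mirrors the inline expression above):

```typescript
// Mirrors the `key-${count % 3}` expression from the producer example:
// the key space is bounded to key-0, key-1, key-2.
function keyFor(count: number): string {
  return `key-${count % 3}`;
}

console.log([0, 1, 2, 3, 4].map(keyFor));
// [ 'key-0', 'key-1', 'key-2', 'key-0', 'key-1' ]
```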

## Consumer: Event-Based with Manual Commit

```typescript
import { Consumer, deserializersFrom, jsonDeserializer, stringDeserializer } from '@platformatic/kafka';

const consumer = new Consumer({
  clientId: 'event-consumer',
  bootstrapBrokers: ['broker1:9092', 'broker2:9092', 'broker3:9092'],
  groupId: 'processing-group',
  deserializers: { ...deserializersFrom(jsonDeserializer), key: stringDeserializer },
  sasl: { mechanism: 'SCRAM-SHA-512', username: 'user', password: 'pass' },
  connectTimeout: 30_000,
  requestTimeout: 30_000,
  autocommit: false,
});

const stream = await consumer.consume({
  topics: ['events'],
  mode: 'committed',
  fallbackMode: 'latest',
});

stream.on('data', (message) => {
  console.log(JSON.stringify({
    topic: message.topic,
    partition: message.partition,
    offset: message.offset,
    key: message.key,
    value: message.value,
    headers: Object.fromEntries(message.headers ?? new Map()),
    timestamp: message.timestamp,
  }, (_key, value) => (typeof value === 'bigint' ? value.toString() : value)));

  message.commit();
});

stream.on('error', (err) => console.error('Stream error:', err));

process.on('SIGINT', async () => {
  await stream.close();
  await consumer.close();
  process.exit(0);
});
```
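The inline replacer above exists because `message.offset` and `message.timestamp` are `bigint`, which `JSON.stringify` refuses to serialize. Extracted into a standalone function it becomes reusable and easy to unit-test; `bigintReplacer` is just a local name for illustration, not part of the library:

```typescript
// JSON.stringify throws "Do not know how to serialize a BigInt" by default;
// this replacer converts bigint fields (e.g. message.offset) to strings.
function bigintReplacer(_key: string, value: unknown): unknown {
  return typeof value === 'bigint' ? value.toString() : value;
}

const payload = { topic: 'events', offset: 42n, timestamp: 1700000000000n };
console.log(JSON.stringify(payload, bigintReplacer));
// {"topic":"events","offset":"42","timestamp":"1700000000000"}
```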

## Admin: Topic Setup Script

```typescript
import { KafkaAdminHelper } from '@venizia/ignis-helpers/kafka';

async function setupTopics() {
  const helper = KafkaAdminHelper.newInstance({
    bootstrapBrokers: ['localhost:9092'],
    clientId: 'topic-setup',
  });

  const admin = helper.getAdmin();

  // Create topics
  await admin.createTopics({
    topics: ['orders', 'inventory', 'notifications'],
    partitions: 6,
    replicas: 3,
    configs: [
      { name: 'retention.ms', value: '604800000' },
      { name: 'compression.type', value: 'zstd' },
    ],
  });

  // Verify
  const topics = await admin.listTopics({ includeInternals: false });
  console.log('Topics:', topics);

  await helper.close();
}

setupTopics().catch(console.error);
```
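Raw millisecond values like `retention.ms: '604800000'` are hard to audit at a glance. Deriving them from a day count makes the intent explicit; `daysToMs` is a hypothetical convenience for the script, not part of the admin API:

```typescript
// Derive broker config values from human-readable durations.
// 7 days * 24 h * 60 min * 60 s * 1000 ms = 604800000 ms,
// matching the retention.ms value used in the setup script above.
function daysToMs(days: number): string {
  return String(days * 24 * 60 * 60 * 1000);
}

const configs = [
  { name: 'retention.ms', value: daysToMs(7) },
  { name: 'compression.type', value: 'zstd' },
];
console.log(configs[0].value); // "604800000"
```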

## Using Helpers with Ignis IoC

```typescript
import { KafkaProducerHelper, KafkaConsumerHelper } from '@venizia/ignis-helpers/kafka';
import { stringSerializers, stringDeserializers } from '@platformatic/kafka';
import { inject, injectable } from '@venizia/ignis-inversion';

@injectable()
export class OrderEventService {
  private producer: KafkaProducerHelper;
  private consumer: KafkaConsumerHelper;

  constructor(
    @inject({ key: 'kafka.producer' }) producer: KafkaProducerHelper,
    @inject({ key: 'kafka.consumer' }) consumer: KafkaConsumerHelper,
  ) {
    this.producer = producer;
    this.consumer = consumer;
  }

  async publishOrderCreated(orderId: string, data: Record<string, unknown>) {
    const producer = this.producer.getProducer();
    await producer.send({
      messages: [{ topic: 'order-events', key: orderId, value: JSON.stringify(data) }],
    });
  }

  async startConsuming() {
    const consumer = this.consumer.getConsumer();
    const stream = await consumer.consume({
      topics: ['order-events'],
      mode: 'committed',
      fallbackMode: 'latest',
    });

    for await (const message of stream) {
      await this.handleOrderEvent(message.key!, JSON.parse(message.value!));
      await message.commit();
    }
  }

  private async handleOrderEvent(orderId: string, data: Record<string, unknown>) {
    // Process order event
  }
}

// Register in application
app.bind('kafka.producer').to(
  KafkaProducerHelper.newInstance({
    bootstrapBrokers: ['localhost:9092'],
    clientId: 'order-service-producer',
    serializers: stringSerializers,
  }),
);

app.bind('kafka.consumer').to(
  KafkaConsumerHelper.newInstance({
    bootstrapBrokers: ['localhost:9092'],
    clientId: 'order-service-consumer',
    groupId: 'order-service',
    deserializers: stringDeserializers,
  }),
);
```

---

## Troubleshooting

### Common Issues

| Error | Cause | Fix |
|-------|-------|-----|
| `ECONNREFUSED localhost:9092` | Broker `advertised.listeners` set to `localhost` but connecting remotely | Set `KAFKA_ADVERTISED_LISTENERS` with the correct external host IP |
| `Request timed out` | SASL handshake or broker unreachable | Add `connectTimeout: 30_000, requestTimeout: 30_000` |
| `Connection closed` | Connecting without SASL to a SASL-required listener | Check `KAFKA_LISTENER_SECURITY_PROTOCOL_MAP` — use `SASL_PLAINTEXT` |
| `Cannot find a suitable SASL mechanism` | Wrong mechanism (e.g., `PLAIN` when broker only supports `SCRAM-SHA-512`) | Check the error message for supported mechanisms and match `mechanism` |
| `Failed to deserialize a message` | Mismatch between serializer used for producing and deserializer used for consuming | Ensure matching serde. For old data, use a new consumer group or recreate the topic |
| `JSON.stringify cannot serialize BigInt` | `message.offset` and `message.timestamp` are `bigint` | Use a custom replacer: `(_k, v) => typeof v === 'bigint' ? v.toString() : v` |
| Consumer idle (no messages) | More consumers than partitions | Ensure `numPartitions >= numConsumers` |

### Docker Kafka Configuration

When running Kafka in Docker and connecting from outside the container:

```yaml
environment:
  DOCKER_HOST_IP: '192.168.1.100' # Your host machine's IP
  KAFKA_ADVERTISED_LISTENERS: >
    INTERNAL://kafka-1:29092,
    EXTERNAL://${DOCKER_HOST_IP}:19092
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: >
    INTERNAL:PLAINTEXT,
    EXTERNAL:SASL_PLAINTEXT,
    CONTROLLER:PLAINTEXT
```

- `INTERNAL` — used for inter-broker communication
- `EXTERNAL` — used for client connections from outside Docker
- `CONTROLLER` — used for KRaft controller communication
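The listener string can be assembled rather than hand-edited, which makes the host/listener relationship explicit. This is a sketch only: `advertisedListeners` is a hypothetical helper, and the broker/port names are the ones from the compose snippet above.

```typescript
// Build the KAFKA_ADVERTISED_LISTENERS value from the Docker host IP.
// Clients outside Docker must see EXTERNAL with a host they can actually reach;
// INTERNAL stays on the Docker network name.
function advertisedListeners(hostIp: string): string {
  return [
    'INTERNAL://kafka-1:29092',
    `EXTERNAL://${hostIp}:19092`,
  ].join(',');
}

console.log(advertisedListeners('192.168.1.100'));
// INTERNAL://kafka-1:29092,EXTERNAL://192.168.1.100:19092
```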

---

## See Also

- **Kafka Pages:**
  - [Overview & Fundamentals](./) — Connection, serialization, constants, compression
  - [Producer](./producer) — Producer helper & API reference
  - [Consumer](./consumer) — Consumer helper & API reference
  - [Admin](./admin) — Admin helper & API reference

- **Other Helpers:**
  - [Queue Helper](../queue/) — BullMQ, MQTT, and in-memory queues
  - [Redis Helper](../redis/) — Redis connection management

- **External Resources:**
  - [@platformatic/kafka](https://github.com/platformatic/kafka) — Underlying Kafka client library
  - [Apache Kafka Documentation](https://kafka.apache.org/documentation/) — Official Kafka docs
  - [KIP-848](https://cwiki.apache.org/confluence/display/KAFKA/KIP-848) — New consumer group protocol