@eqxjs/kafka-server-confluent-kafka 0.0.1-alpha.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +27 -0
- package/FEATURES.md +278 -0
- package/KAFKA-CONFIG.md +304 -0
- package/README.md +593 -0
- package/dist/dto/header.dto.d.ts +25 -0
- package/dist/dto/header.dto.js +3 -0
- package/dist/dto/header.dto.js.map +1 -0
- package/dist/dto/m1.dto.d.ts +7 -0
- package/dist/dto/m1.dto.js +3 -0
- package/dist/dto/m1.dto.js.map +1 -0
- package/dist/dto/m2.dto.d.ts +5 -0
- package/dist/dto/m2.dto.js +3 -0
- package/dist/dto/m2.dto.js.map +1 -0
- package/dist/dto/m3.dto.d.ts +5 -0
- package/dist/dto/m3.dto.js +3 -0
- package/dist/dto/m3.dto.js.map +1 -0
- package/dist/dto/protocol.dto.d.ts +14 -0
- package/dist/dto/protocol.dto.js +3 -0
- package/dist/dto/protocol.dto.js.map +1 -0
- package/dist/dto/service.dto.d.ts +29 -0
- package/dist/dto/service.dto.js +3 -0
- package/dist/dto/service.dto.js.map +1 -0
- package/dist/index.d.ts +2 -0
- package/dist/index.js +9 -0
- package/dist/index.js.map +1 -0
- package/dist/kafka.server.d.ts +52 -0
- package/dist/kafka.server.js +478 -0
- package/dist/kafka.server.js.map +1 -0
- package/dist/tsconfig.tsbuildinfo +1 -0
- package/dist/utils/get-mem.d.ts +9 -0
- package/dist/utils/get-mem.js +22 -0
- package/dist/utils/get-mem.js.map +1 -0
- package/dist/utils/parse-env.d.ts +6 -0
- package/dist/utils/parse-env.js +35 -0
- package/dist/utils/parse-env.js.map +1 -0
- package/dist/utils/time.d.ts +6 -0
- package/dist/utils/time.js +17 -0
- package/dist/utils/time.js.map +1 -0
- package/package.json +27 -0
- package/tsconfig.json +21 -0
package/CHANGELOG.md
ADDED
# Changelog

## v2.2.0

- **Feature:** Topic monitor polling interval configurable via `KAFKA_TOPIC_MONITOR_INTERVAL_MS` env var (default: `300000` ms / 5 minutes)
- **Feature:** `KAFKA_DISABLE_TOPIC_MONITOR=true` disables the topic monitor entirely; a warning is logged at startup when set
- **Refactor:** `monitorTopicNewChange` rewritten from a `while (true)` async loop to `setInterval`, stored in `topicMonitorInterval` (public property, same pattern as `consumeInterval`)
- **Fix:** `monitorTopicNewChange` now clears any existing interval before starting a new one, preventing duplicate monitors on reconnect
- **Fix:** `close()` now clears `topicMonitorInterval` before disconnecting, ensuring the monitor loop is stopped cleanly on shutdown

## v2.1.0

- **Feature:** Heap-based consumer back-pressure — the consumer automatically pauses and resumes based on Node.js heap usage. Configure the threshold via the 4th constructor argument or the `KAFKA_HEAP_LIMIT_PERCENT` env var (default `85%`, clamped to `10–99`)
- **Feature:** Heap calculation extracted to `utils/get-mem.ts` as `getHeapUsage()` — returns `usedPercent`, `usedMB`, `limitMB`, `isOverLimit(threshold)`, and `format()` helpers
- **Feature:** Consumer `ready` event now logs `client name`, `group.id`, `assignment.strategy`, `bootstrap broker`, and the full `broker list` sourced from the server response metadata
- **Feature:** Rebalance callback now logs current partition assignment state on both ASSIGN and REVOKE events, showing `before` / `assigned` / `revoking` / `remaining` maps per topic
- **Fix:** Cooperative rebalance now correctly maintains `memberAssignment` and `assignment` incrementally (merge on ASSIGN, filter on REVOKE) instead of replacing the full state with the delta
- **Fix:** `group.id` is always set to a stable value (the `groupId` config or `"nestjs-kafka-consumer"`), preventing the "different consumer group" behaviour caused by librdkafka generating a random group ID when none is provided
- **Fix:** `group.instance.id` is automatically set to `os.hostname()` to enable static membership and reduce unnecessary rebalances on pod/container restarts

## v2.0.2

- **Fix:** `rb_callback` now supports both **EAGER** and **COOPERATIVE** rebalance protocols. When `partition.assignment.strategy` is set to `cooperative-sticky` (or any cooperative variant), the callback automatically detects the protocol on the first rebalance event and switches to `incremental_assign(assignment)` / `incremental_unassign(assignment)` for all subsequent rebalances. The protocol is auto-detected at runtime — no configuration change required.

## v2.0.1

- **Fix:** `consumer.assign()` / `consumer.unassign()` are now guarded with `isConnected()` and wrapped in try/catch inside `rb_callback` to prevent the "Local: Erroneous state" (`ERR__STATE`) crash during partition rebalance when the consumer is in a transitional (disconnecting / reconnecting) state.
package/FEATURES.md
ADDED
# Features

## Stable Consumer Group Identity

`group.id` is always written to `consumerConfig`, defaulting to `"nestjs-kafka-consumer"` when not provided. This ensures offsets are persisted across restarts and the consumer always resumes from where it left off.

Additionally, `group.instance.id` is automatically set to `os.hostname()`, enabling **static membership**. This prevents unnecessary rebalances when a container restarts — Kafka recognises the returning member by its static ID instead of treating it as a new joiner.

**Relevant config:**
```typescript
consumer: { groupId: 'my-service-consumer' }
// or
consumerConfig['group.id'] = 'my-service-consumer';
```
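The defaulting described above can be sketched as follows (a minimal, illustrative sketch; `applyGroupIdentity` is a hypothetical helper name, not part of the package API):

```typescript
import * as os from 'os';

// Illustrative sketch of the identity defaulting described above;
// applyGroupIdentity is a hypothetical helper, not the package's API.
function applyGroupIdentity(
  consumerConfig: Record<string, unknown>,
  groupId?: string,
): Record<string, unknown> {
  // Stable group.id: explicit config wins, then groupId, then the documented default.
  consumerConfig['group.id'] =
    consumerConfig['group.id'] ?? groupId ?? 'nestjs-kafka-consumer';
  // Static membership: the host name identifies the returning member
  // across container restarts, avoiding a full rebalance.
  consumerConfig['group.instance.id'] =
    consumerConfig['group.instance.id'] ?? os.hostname();
  return consumerConfig;
}
```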
---

## Consumer Connected Log

On the `ready` event, the consumer logs a one-line summary built from the **server response metadata** — not just local config:

```
Kafka consumer connected — client=rdkafka#consumer-1, group.id=my-group, assignment.strategy=range, bootstrap.broker=kafka-1:9092, brokers=[kafka-1:9092, kafka-2:9092, kafka-3:9092]
```

| Field | Source |
|---|---|
| `client` | `ReadyInfo.name` — librdkafka-assigned client identity |
| `group.id` | Configured `consumerConfig["group.id"]` |
| `assignment.strategy` | Configured `consumerConfig["partition.assignment.strategy"]` |
| `bootstrap.broker` | `Metadata.orig_broker_name` — first broker the client connected to |
| `brokers` | `Metadata.brokers[]` — full broker list advertised by the cluster |
---

## Rebalance Logging with Partition State

Every rebalance event logs the consumer's full partition state before and after the event, per topic.

**ASSIGN** — logs what is being assigned and the resulting full assignment:
```
Rebalance ASSIGN [Assign] — group=my-group, strategy=range | assigned={"orders":[0,1]}
Rebalance ASSIGN — current assignment: {"orders":[0,1],"events":[0]}
```

**REVOKE** — logs the current assignment, what is being revoked, and what remains:
```
Rebalance REVOKE [Revoke] — group=my-group, strategy=range | before={"orders":[0,1],"events":[0]} | revoking={"events":[0]}
Rebalance REVOKE — remaining assignment: {"orders":[0,1]}
```

The current assignment is also always readable at runtime:
```typescript
const server = CustomServerConfluentKafka.getInstance();
console.log(server.memberAssignment); // { 'orders': [0, 1], 'events': [0] }
console.log(server.assignment);       // TopicPartition[]
```
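The per-topic maps in these logs can be derived from a flat `TopicPartition[]` with a small fold. A sketch (illustrative helper, not the package source):

```typescript
interface TopicPartition { topic: string; partition: number; }

// Fold a flat TopicPartition[] into the per-topic partition map
// shown in the log lines above (illustrative helper).
function toMemberAssignment(assignment: TopicPartition[]): Record<string, number[]> {
  const map: Record<string, number[]> = {};
  for (const { topic, partition } of assignment) {
    (map[topic] ??= []).push(partition);
  }
  // Sort partitions so the logged maps are stable across rebalances.
  for (const topic of Object.keys(map)) map[topic].sort((x, y) => x - y);
  return map;
}
```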
---

## EAGER and COOPERATIVE Rebalance Protocol Support

The rebalance callback supports both protocols, auto-detected from `partition.assignment.strategy` at construction time.

| Protocol | Strategies | Behaviour |
|---|---|---|
| **Eager** | `range`, `roundrobin` | Full revoke then full reassign — `assign()` / `unassign()` |
| **Cooperative** | `cooperative-sticky` | Incremental delta only — `incrementalAssign()` / `incrementalUnassign()` |

For **cooperative**, `memberAssignment` and `assignment` are maintained incrementally:
- **ASSIGN**: incoming partitions are merged into the existing assignment
- **REVOKE**: only the revoked partitions are removed; the rest remain untouched

All assign/unassign calls are guarded with `isConnected()` and wrapped in try/catch to prevent `ERR__STATE` crashes during shutdown or reconnection.

**Configuration:**
```typescript
consumerConfig['partition.assignment.strategy'] = 'cooperative-sticky';
```
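The merge-on-ASSIGN / filter-on-REVOKE bookkeeping can be sketched as follows (illustrative helpers assuming a simple `TopicPartition` shape, not the package source):

```typescript
interface TopicPartition { topic: string; partition: number; }

const key = (tp: TopicPartition) => `${tp.topic}:${tp.partition}`;

// ASSIGN: the cooperative delta is merged into the existing assignment.
function mergeAssign(current: TopicPartition[], delta: TopicPartition[]): TopicPartition[] {
  const have = new Set(current.map(key));
  return current.concat(delta.filter(tp => !have.has(key(tp))));
}

// REVOKE: only the revoked partitions are dropped; the rest stay untouched.
function filterRevoke(current: TopicPartition[], revoked: TopicPartition[]): TopicPartition[] {
  const gone = new Set(revoked.map(key));
  return current.filter(tp => !gone.has(key(tp)));
}
```

Replacing the full state with the delta (the pre-v2.1.0 bug noted in the changelog) would silently forget partitions that were not part of the current rebalance event.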
---

## Heap-based Consumer Back-pressure

Every second, before calling `consumer.consume()`, the server checks Node.js heap usage via `v8.getHeapStatistics()`. If `used_heap_size / heap_size_limit` exceeds the configured threshold, the consume tick is **skipped** — no messages are fetched until the heap recovers.

The consumer logs `PAUSED` once on the first over-limit tick, then `RESUMED` when usage drops back below the threshold, avoiding log spam under sustained pressure.

```
Consumer PAUSED — heap usage 87.3% (used=694.1 MB, limit=796.0 MB) exceeds limit 85%
Consumer RESUMED — heap usage recovered to 81.2% (used=646.4 MB, limit=796.0 MB)
```

> The pause does **not** unsubscribe or trigger a rebalance. The consumer stays in the group and simply stops fetching until memory recovers.

**Configuration:**
```typescript
// 4th constructor argument (default: 85, clamped to [10, 99])
new CustomServerConfluentKafka(options, consumerConfig, producerConfig, 80);

// Or via environment variable
KAFKA_HEAP_LIMIT_PERCENT=80
```

**Heap utility (`utils/get-mem`):**
```typescript
import { getHeapUsage } from '@eqxjs/kafka-server-confluent-kafka/utils/get-mem';

const heap = getHeapUsage();
heap.usedPercent     // 87.3
heap.usedMB          // 694.1
heap.limitMB         // 796.0
heap.isOverLimit(85) // true
heap.format()        // "87.3% (used=694.1 MB, limit=796.0 MB)"
```
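Under the hood this amounts to a ratio over `v8.getHeapStatistics()`. A minimal sketch with the same shape as `getHeapUsage()` above (an assumed reconstruction, not the package source):

```typescript
import * as v8 from 'v8';

// Minimal sketch of a heap gauge matching the getHeapUsage() shape above.
function getHeapUsageSketch() {
  const { used_heap_size, heap_size_limit } = v8.getHeapStatistics();
  const usedMB = used_heap_size / 1024 / 1024;
  const limitMB = heap_size_limit / 1024 / 1024;
  const usedPercent = (used_heap_size / heap_size_limit) * 100;
  return {
    usedPercent,
    usedMB,
    limitMB,
    // A consume tick is skipped while this returns true for the threshold.
    isOverLimit: (thresholdPercent: number) => usedPercent > thresholdPercent,
    format: () =>
      `${usedPercent.toFixed(1)}% (used=${usedMB.toFixed(1)} MB, limit=${limitMB.toFixed(1)} MB)`,
  };
}
```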
---

## librdkafka Config via Environment Variables

All `ConsumerGlobalConfig` and `ProducerGlobalConfig` keys can be provided via environment variables without writing any code.

**Convention:** `KAFKA_CONSUMER_{KEY}` → consumer config, `KAFKA_PRODUCER_{KEY}` → producer config, where `KEY` is the librdkafka config key name with `.` replaced by `_` and uppercased.

```
KAFKA_CONSUMER_GROUP_ID            → group.id
KAFKA_CONSUMER_SESSION_TIMEOUT_MS  → session.timeout.ms
KAFKA_CONSUMER_BOOTSTRAP_SERVERS   → bootstrap.servers
KAFKA_PRODUCER_COMPRESSION_TYPE    → compression.type
KAFKA_PRODUCER_LINGER_MS           → linger.ms
```

Env vars are merged before explicit constructor arguments, so code-level config always takes precedence. Values are automatically coerced — `"true"`/`"false"` → `boolean`, numeric strings → `number`, everything else stays a `string`.

For the full list of `ConsumerGlobalConfig` and `ProducerGlobalConfig` keys, defaults, and descriptions, see **[KAFKA-CONFIG.md](KAFKA-CONFIG.md)**.

```bash
# Boot a consumer with no code-level config at all
KAFKA_CONSUMER_BOOTSTRAP_SERVERS=kafka:9092
KAFKA_CONSUMER_GROUP_ID=my-service
KAFKA_CONSUMER_SESSION_TIMEOUT_MS=30000
KAFKA_PRODUCER_COMPRESSION_TYPE=snappy
```
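The naming convention and coercion rules above can be sketched as a plain transform (`envToConfig` and `coerce` are hypothetical names; the package's `utils/parse-env` may differ in detail):

```typescript
// Coerce values per the documented rules: "true"/"false" -> boolean,
// numeric strings -> number, everything else stays a string.
function coerce(value: string): string | number | boolean {
  if (value === 'true') return true;
  if (value === 'false') return false;
  const n = Number(value);
  return value.trim() !== '' && !Number.isNaN(n) ? n : value;
}

// Reverse the naming convention: KAFKA_CONSUMER_SESSION_TIMEOUT_MS -> session.timeout.ms
function envToConfig(
  env: Record<string, string | undefined>,
  prefix: 'KAFKA_CONSUMER_' | 'KAFKA_PRODUCER_',
): Record<string, string | number | boolean> {
  const config: Record<string, string | number | boolean> = {};
  for (const [name, value] of Object.entries(env)) {
    if (value === undefined || !name.startsWith(prefix)) continue;
    const configKey = name.slice(prefix.length).toLowerCase().replace(/_/g, '.');
    config[configKey] = coerce(value);
  }
  return config;
}
```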
---

## Automatic Topic Monitoring

After the consumer connects, a `setInterval` loop (stored in the public `topicMonitorInterval` property) checks for topic changes at a configurable interval:

- **New topics**: if a topic matching a registered `@EventPattern` handler appears on the broker, the consumer unsubscribes and re-subscribes to include it
- **Deleted topics**: if a subscribed topic disappears from broker metadata, it is removed from the subscription and logged as an error

This means handlers registered at startup will automatically begin consuming newly created topics without a restart. The monitor logs its interval at startup:
```
Topic monitor started — interval=300000ms
```

Before starting, any existing monitor interval is cleared to prevent duplicate monitors on reconnect. `close()` also clears the interval as part of graceful shutdown.

**Disable via environment variable:**
```bash
KAFKA_DISABLE_TOPIC_MONITOR=true
```
When disabled, a warning is logged at startup and the consumer subscribes only to topics available at connect time.

**Configure polling interval:**
```bash
KAFKA_TOPIC_MONITOR_INTERVAL_MS=60000  # default: 300000 (5 minutes)
```
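The clear-before-set pattern described above is the standard guard against stacked intervals; a minimal sketch (illustrative class, not the package source):

```typescript
// Clear-before-set guard against stacked intervals (illustrative sketch).
class TopicMonitorSketch {
  topicMonitorInterval?: ReturnType<typeof setInterval>;

  start(checkTopics: () => void, intervalMs = 300_000): void {
    // Reconnects call start() again; clearing first prevents a second loop.
    if (this.topicMonitorInterval) clearInterval(this.topicMonitorInterval);
    this.topicMonitorInterval = setInterval(checkTopics, intervalMs);
  }

  close(): void {
    // Graceful shutdown stops the monitor before disconnecting.
    if (this.topicMonitorInterval) {
      clearInterval(this.topicMonitorInterval);
      this.topicMonitorInterval = undefined;
    }
  }
}
```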
---

## Consumer Throughput Control

Two environment variables jointly control how fast messages are consumed:

**Polling interval** — how often the consume tick fires:
```bash
KAFKA_CONSUME_INTERVAL_MS=500  # default: 1000 (1 second)
```

**Batch size** — how many messages are fetched per tick:
```bash
KAFKA_CONSUME_MESSAGES_PER_INTERVAL=50  # default: 10
```

Combined with heap back-pressure, these provide three independent levers for rate control.
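The ceiling these two knobs imply is simple arithmetic, assuming every tick fetches a full batch and heap back-pressure never pauses:

```typescript
// Upper bound on consumption rate implied by the two env vars above,
// assuming full batches and no heap-pressure pauses.
function maxThroughputPerSecond(intervalMs: number, messagesPerTick: number): number {
  return messagesPerTick / (intervalMs / 1000);
}

maxThroughputPerSecond(1000, 10); // defaults: 10 msg/s
maxThroughputPerSecond(500, 50);  // the example values above: 100 msg/s
```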
---

## Dual Producer Support

Every instance initialises two producers:

| Producer | Class | Use case |
|---|---|---|
| `highProducer` | `HighLevelProducer` | `produce()` — async with per-message callback, returns committed offset |
| `producer` | `Producer` | `sendMessage()` — fire-and-forget, lower overhead |

Both producers emit delivery reports, which feed into optional success/error callbacks:

```typescript
kafkaServer.setSuccessCallback((err, report) => { /* ... */ });
kafkaServer.setErrorCallback((err, report) => { /* ... */ });
```
---

## Authentication Support

### SASL (PLAIN / SCRAM)

```typescript
sasl: {
  mechanism: 'plain', // or 'scram-sha-256', 'scram-sha-512'
  username: 'my-username',
  password: 'my-password',
}
```

Maps to librdkafka: `sasl.username`, `sasl.password`, `sasl.mechanisms`, `security.protocol=sasl_plaintext`.

### SSL / TLS

```typescript
ssl: true
// or
ssl: {
  ca: fs.readFileSync('./ca-cert.pem'),
  cert: fs.readFileSync('./client-cert.pem'),
  key: fs.readFileSync('./client-key.pem'),
}
```

Maps to: `ssl.ca.pem`, `ssl.certificate.pem`, `ssl.key.pem`, `security.protocol=sasl_ssl`.
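The SASL mapping above can be sketched as a plain transform (illustrative `saslToRdkafka` helper; the upper-casing of the mechanism name is an assumption based on librdkafka's documented mechanism values):

```typescript
// Illustrative transform of the sasl option block into librdkafka keys,
// per the mapping described above. Not the package's actual code.
function saslToRdkafka(sasl: { mechanism: string; username: string; password: string }) {
  return {
    'security.protocol': 'sasl_plaintext',
    // librdkafka documents upper-case mechanism names, e.g. SCRAM-SHA-256
    'sasl.mechanisms': sasl.mechanism.toUpperCase(),
    'sasl.username': sasl.username,
    'sasl.password': sasl.password,
  };
}
```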
---

## Native Config Passthrough

Any librdkafka option not exposed via `KafkaOptions` can be set directly via the `consumerConfig` and `producerConfig` constructor arguments. These are merged with (and take priority over) options derived from the first `KafkaOptions` parameter.

```typescript
new CustomServerConfluentKafka(
  kafkaOptions,
  { 'fetch.wait.max.ms': 100, 'max.poll.interval.ms': 300000 },
  { 'compression.type': 'snappy', 'linger.ms': 10 },
);
```

See the [librdkafka configuration reference](https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md) for all available options.
---

## Singleton Access

The server registers itself as a static singleton at construction time, allowing access from anywhere in the application without dependency injection:

```typescript
const server = CustomServerConfluentKafka.getInstance();
server.produce('my-topic', message);
server.memberAssignment;
server.isKafkaConnected();
```
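The register-at-construction pattern reduces to a few lines (minimal sketch with a hypothetical class name, not the package source):

```typescript
// Minimal register-at-construction singleton sketch.
class ServerSketch {
  private static instance?: ServerSketch;

  constructor() {
    // The constructor records the instance, so getInstance() works
    // anywhere in the application without dependency injection.
    ServerSketch.instance = this;
  }

  static getInstance(): ServerSketch {
    if (!ServerSketch.instance) throw new Error('server not constructed yet');
    return ServerSketch.instance;
  }
}
```

Note that with this pattern the last-constructed instance wins, which is why it pairs naturally with a framework that constructs the server exactly once.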
---

## Direct Client Access

Use `unwrap()` to get the underlying librdkafka consumer and producer instances for operations not covered by this wrapper:

```typescript
const { consumer, producer } = kafkaServer.unwrap<{
  consumer: KafkaConsumer;
  producer: Producer;
}>();
```
package/KAFKA-CONFIG.md
ADDED
# Kafka Configuration Reference

Configuration reference for `ConsumerGlobalConfig` and `ProducerGlobalConfig`, based on **librdkafka 2.13.2**.

Properties are passed directly to the `CustomServerConfluentKafka` constructor or set via environment variables (see [Environment Variables via Env](#environment-variables-via-env)).

---

## Table of Contents

- [Shared (GlobalConfig)](#shared-globalconfig)
  - [Core](#core)
  - [Network & Socket](#network--socket)
  - [SSL / TLS](#ssl--tls)
  - [SASL Authentication](#sasl-authentication)
  - [Metadata & Topic](#metadata--topic)
  - [Retry & Backoff](#retry--backoff)
  - [Logging & Debug](#logging--debug)
  - [Misc](#misc)
- [ConsumerGlobalConfig](#consumerglobalconfig)
  - [Group & Rebalance](#group--rebalance)
  - [Offset & Commit](#offset--commit)
  - [Fetch & Queue](#fetch--queue)
  - [Consumer Misc](#consumer-misc)
- [ProducerGlobalConfig](#producerglobalconfig)
  - [Delivery & Reliability](#delivery--reliability)
  - [Batching & Throughput](#batching--throughput)
  - [Producer Misc](#producer-misc)
- [Environment Variables via Env](#environment-variables-via-env)

---
## Shared (GlobalConfig)

Properties available to **both** consumer and producer.

### Core

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| `client.id` | `string` | `rdkafka` | Client identifier |
| `bootstrap.servers` | `string` | — | Initial broker list as CSV of `host` or `host:port`. Alias: `metadata.broker.list` |
| `security.protocol` | `'plaintext'\|'ssl'\|'sasl_plaintext'\|'sasl_ssl'` | `plaintext` | Protocol used to communicate with brokers |
| `client.rack` | `string` | — | Rack identifier for this client (matches broker `broker.rack`) |
| `client.dns.lookup` | `'use_all_dns_ips'\|'resolve_canonical_bootstrap_servers_only'` | `use_all_dns_ips` | Controls how the client uses DNS lookups |
| `allow.auto.create.topics` | `boolean` | `false` | Allow automatic topic creation on subscribe/assign when the broker also has it enabled |
| `builtin.features` | `string` | *(build-dependent)* | Read-only: lists features compiled into librdkafka |

### Network & Socket

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| `socket.timeout.ms` | `number` | `60000` | Default timeout for network requests |
| `socket.send.buffer.bytes` | `number` | `0` | Broker socket send buffer size (0 = system default) |
| `socket.receive.buffer.bytes` | `number` | `0` | Broker socket receive buffer size (0 = system default) |
| `socket.keepalive.enable` | `boolean` | `false` | Enable TCP keep-alives (`SO_KEEPALIVE`) |
| `socket.nagle.disable` | `boolean` | `true` | Disable the Nagle algorithm (`TCP_NODELAY`) |
| `socket.max.fails` | `number` | `1` | Disconnect from broker after this many send failures (0 = disable) |
| `socket.connection.setup.timeout.ms` | `number` | `30000` | Max time for broker connection setup including SSL/SASL handshake |
| `connections.max.idle.ms` | `number` | `0` | Close idle broker connections after this many ms (0 = disable) |
| `broker.address.ttl` | `number` | `1000` | How long to cache broker address resolution results (ms) |
| `broker.address.family` | `'any'\|'v4'\|'v6'` | `any` | Allowed broker IP address families |
| `reconnect.backoff.ms` | `number` | `100` | Initial wait before reconnecting to a broker |
| `reconnect.backoff.max.ms` | `number` | `10000` | Maximum wait before reconnecting to a broker |
| `max.in.flight.requests.per.connection` | `number` | `1000000` | Maximum in-flight requests per broker connection. Alias: `max.in.flight` |
| `message.max.bytes` | `number` | `1000000` | Maximum Kafka protocol request message size |
| `receive.message.max.bytes` | `number` | `100000000` | Maximum Kafka protocol response message size |
### SSL / TLS

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| `ssl.key.location` | `string` | — | Path to client private key (PEM) |
| `ssl.key.password` | `string` | — | Private key passphrase |
| `ssl.key.pem` | `string` | — | Client private key string (PEM format) |
| `ssl.certificate.location` | `string` | — | Path to client public key (PEM) |
| `ssl.certificate.pem` | `string` | — | Client public key string (PEM format) |
| `ssl.ca.location` | `string` | — | Path to CA certificate(s) for verifying broker key |
| `ssl.ca.pem` | `string` | — | CA certificate string (PEM format) for verifying broker key |
| `ssl.ca.certificate.stores` | `string` | `Root` | Windows Certificate stores to load CA certificates from |
| `ssl.crl.location` | `string` | — | Path to CRL for verifying broker certificate validity |
| `ssl.keystore.location` | `string` | — | Path to client keystore (PKCS#12) |
| `ssl.keystore.password` | `string` | — | Client keystore password |
| `ssl.cipher.suites` | `string` | — | TLS cipher suite list |
| `ssl.curves.list` | `string` | — | Supported curves extension for TLS ClientHello |
| `ssl.sigalgs.list` | `string` | — | Signature/hash algorithm pairs for TLS ClientHello |
| `ssl.providers` | `string` | — | Comma-separated OpenSSL 3.0.x implementation providers |
| `ssl.engine.id` | `string` | `dynamic` | OpenSSL engine id |
| `enable.ssl.certificate.verification` | `boolean` | `true` | Enable OpenSSL's built-in broker certificate verification |
| `ssl.endpoint.identification.algorithm` | `'none'\|'https'` | `https` | Endpoint identification algorithm to validate broker hostname |
| `https.ca.location` | `string` | — | Path to CA certificate(s) for verifying HTTPS endpoints (e.g. OAUTHBEARER) |
| `https.ca.pem` | `string` | — | CA certificate string (PEM) for verifying HTTPS endpoints |
### SASL Authentication

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| `sasl.mechanisms` | `string` | `GSSAPI` | SASL mechanism: `GSSAPI`, `PLAIN`, `SCRAM-SHA-256`, `SCRAM-SHA-512`, `OAUTHBEARER`. Alias: `sasl.mechanism` |
| `sasl.username` | `string` | — | SASL username (PLAIN / SCRAM) |
| `sasl.password` | `string` | — | SASL password (PLAIN / SCRAM) |
| `sasl.kerberos.service.name` | `string` | `kafka` | Kerberos principal name Kafka runs as |
| `sasl.kerberos.principal` | `string` | `kafkaclient` | This client's Kerberos principal name |
| `sasl.kerberos.keytab` | `string` | — | Path to Kerberos keytab file |
| `sasl.kerberos.kinit.cmd` | `string` | *(kinit command)* | Shell command to refresh/acquire Kerberos ticket |
| `sasl.kerberos.min.time.before.relogin` | `number` | `60000` | Minimum ms between Kerberos key refresh attempts (0 = disable) |
| `sasl.oauthbearer.config` | `string` | — | SASL/OAUTHBEARER configuration string |
| `sasl.oauthbearer.method` | `'default'\|'oidc'` | `default` | Login method for OAUTHBEARER |
| `sasl.oauthbearer.client.id` | `string` | — | OIDC client ID. Alias: `sasl.oauthbearer.client.credentials.client.id` |
| `sasl.oauthbearer.client.secret` | `string` | — | OIDC client secret. Alias: `sasl.oauthbearer.client.credentials.client.secret` |
| `sasl.oauthbearer.scope` | `string` | — | OIDC access scope |
| `sasl.oauthbearer.extensions` | `string` | — | Additional OIDC key=value pairs |
| `sasl.oauthbearer.token.endpoint.url` | `string` | — | OIDC token endpoint URL |
| `sasl.oauthbearer.grant.type` | `'client_credentials'\|'urn:ietf:params:oauth:grant-type:jwt-bearer'` | `client_credentials` | OAuth grant type |
| `sasl.oauthbearer.assertion.algorithm` | `'RS256'\|'ES256'` | `RS256` | Algorithm for JWT assertion signing |
| `sasl.oauthbearer.assertion.private.key.file` | `string` | — | Path to private key (PEM) for JWT assertion |
| `sasl.oauthbearer.assertion.private.key.pem` | `string` | — | Private key (PEM string) for JWT assertion |
| `sasl.oauthbearer.assertion.private.key.passphrase` | `string` | — | Passphrase for JWT assertion private key |
| `sasl.oauthbearer.assertion.file` | `string` | — | Path to assertion file |
| `sasl.oauthbearer.assertion.claim.aud` | `string` | — | JWT audience claim |
| `sasl.oauthbearer.assertion.claim.iss` | `string` | — | JWT issuer claim |
| `sasl.oauthbearer.assertion.claim.sub` | `string` | — | JWT subject claim |
| `sasl.oauthbearer.assertion.claim.exp.seconds` | `number` | `300` | JWT assertion expiration in seconds |
| `sasl.oauthbearer.assertion.claim.nbf.seconds` | `number` | `60` | JWT assertion not-before time in seconds |
| `sasl.oauthbearer.assertion.claim.jti.include` | `boolean` | `false` | Include a random UUID as the JWT ID claim |
| `sasl.oauthbearer.assertion.jwt.template.file` | `string` | — | Path to JWT template file |
| `sasl.oauthbearer.metadata.authentication.type` | `'none'\|'azure_imds'` | `none` | Metadata-based authentication type |
| `enable.sasl.oauthbearer.unsecure.jwt` | `boolean` | `false` | Enable built-in unsecure JWT handler (dev/test only) |
### Metadata & Topic

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| `topic.metadata.refresh.interval.ms` | `number` | `300000` | Interval for proactive topic and broker metadata refresh (-1 = disable) |
| `metadata.max.age.ms` | `number` | `900000` | Metadata cache max age |
| `topic.metadata.refresh.fast.interval.ms` | `number` | `100` | Fast metadata refresh interval after a leader is lost |
| `topic.metadata.refresh.sparse` | `boolean` | `true` | Sparse metadata requests (reduces network bandwidth) |
| `topic.metadata.propagation.max.ms` | `number` | `30000` | Delay before marking a newly created topic as non-existent |
| `topic.blacklist` | `any` | — | Comma-separated regex list of topics to ignore in broker metadata |
| `metadata.recovery.strategy` | `'none'\|'rebootstrap'` | `rebootstrap` | Client recovery strategy when no brokers are available |
| `metadata.recovery.rebootstrap.trigger.ms` | `number` | `300000` | Interval before triggering rebootstrap when metadata is unavailable |
### Retry & Backoff

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| `retry.backoff.ms` | `number` | `100` | Initial backoff before retrying a protocol request |
| `retry.backoff.max.ms` | `number` | `1000` | Maximum backoff for exponentially backed-off requests |
### Logging & Debug

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| `debug` | `string` | — | Comma-separated debug contexts. Producer: `broker,topic,msg`. Consumer: `consumer,cgrp,topic,fetch` |
| `log_level` | `number` | `6` | Logging level (syslog levels 0–7) |
| `log.queue` | `boolean` | `false` | Enqueue log messages instead of spontaneous log callbacks |
| `log.thread.name` | `boolean` | `true` | Print internal thread name in log messages |
| `log.connection.close` | `boolean` | `true` | Log broker disconnects |
| `statistics.interval.ms` | `number` | `0` | Stats emit interval in ms (0 = disable) |
### Misc

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| `enable.random.seed` | `boolean` | `true` | Initialize PRNG with current time on first `rd_kafka_new()` |
| `enable.metrics.push` | `boolean` | `true` | Enable pushing client metrics to the cluster |
| `api.version.request` | `boolean` | `true` | Request broker supported API versions |
| `api.version.request.timeout.ms` | `number` | `10000` | Timeout for broker API version requests |
| `internal.termination.signal` | `number` | `0` | Signal used to quickly terminate on destroy |
| `plugin.library.paths` | `string` | — | Semicolon-separated list of plugin libraries to load |

---
## ConsumerGlobalConfig
|
|
174
|
+
|
|
175
|
+
Properties specific to the **consumer** (extends `GlobalConfig`).
|
|
176
|
+
|
|
177
|
+
### Group & Rebalance

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| `group.id` | `string` | — | **Required.** Consumer group ID. All consumers sharing the same `group.id` form one group |
| `group.instance.id` | `string` | — | Static member ID for static group membership (avoids rebalance on restart within `session.timeout.ms`) |
| `partition.assignment.strategy` | `string` | `range,roundrobin` | Comma-separated partition assignment strategies. Options: `range`, `roundrobin`, `cooperative-sticky` |
| `session.timeout.ms` | `number` | `45000` | Group session timeout. Consumer is removed from group if no heartbeat is received within this interval |
| `heartbeat.interval.ms` | `number` | `3000` | Frequency of heartbeats to the broker |
| `max.poll.interval.ms` | `number` | `300000` | Maximum time between `consume()` calls before the consumer is considered failed |
| `coordinator.query.interval.ms` | `number` | `600000` | How often to query for the current group coordinator |
| `group.protocol` | `'classic'\|'consumer'` | `classic` | Group protocol to use (`classic` = original, `consumer` = KIP-848) |
| `group.protocol.type` | `string` | `consumer` | Group protocol type (classic protocol only) |
| `group.remote.assignor` | `string` | — | Server-side assignor for `group.protocol=consumer` (`uniform` or `range`) |

### Offset & Commit

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| `enable.auto.commit` | `boolean` | `true` | Automatically commit offsets in the background |
| `auto.commit.interval.ms` | `number` | `5000` | Frequency (ms) for automatic offset commits (0 = disable) |
| `enable.auto.offset.store` | `boolean` | `true` | Automatically store offset of last message delivered to the application |
| `isolation.level` | `'read_uncommitted'\|'read_committed'` | `read_committed` | Controls visibility of transactional messages |

### Fetch & Queue

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| `fetch.min.bytes` | `number` | `1` | Minimum bytes the broker responds with per fetch |
| `fetch.wait.max.ms` | `number` | `500` | Maximum time the broker waits to fill the fetch response |
| `fetch.message.max.bytes` | `number` | `1048576` | Initial max bytes per partition per fetch request. Alias: `max.partition.fetch.bytes` |
| `fetch.max.bytes` | `number` | `52428800` | Maximum bytes returned for a single Fetch request (50 MB default) |
| `fetch.error.backoff.ms` | `number` | `500` | How long to postpone the next fetch after an error |
| `fetch.queue.backoff.ms` | `number` | `1000` | How long to postpone the next fetch when queue thresholds are exceeded |
| `queued.min.messages` | `number` | `100000` | Minimum messages per topic+partition librdkafka tries to maintain in the local queue |
| `queued.max.messages.kbytes` | `number` | `65536` | Maximum kilobytes of queued pre-fetched messages in the local consumer queue |

### Consumer Misc

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| `enable.partition.eof` | `boolean` | `false` | Emit `ERR__PARTITION_EOF` event when consumer reaches end of partition |
| `check.crcs` | `boolean` | `false` | Verify CRC32 of consumed messages |
| `offset.store.method` | `'none'\|'file'\|'broker'` | `broker` | Offset commit store method (**`file` is deprecated**) |
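
Taken together, the consumer properties above might combine into a config object like this sketch. Keys are the librdkafka property names from the tables; the values and the manual-commit choice are illustrative, not this library's defaults:

```typescript
// Sketch: a consumer configuration built from the properties documented above.
// Values are example choices, not defaults of this package.
const consumerConfig = {
  'group.id': 'my-service',                 // required: consumers sharing this ID form one group
  'session.timeout.ms': 30000,              // removed from group if no heartbeat within 30 s
  'heartbeat.interval.ms': 3000,
  'partition.assignment.strategy': 'cooperative-sticky',
  'enable.auto.commit': false,              // commit explicitly after processing instead
  'enable.auto.offset.store': false,
  'isolation.level': 'read_committed',      // hide aborted transactional messages
  'fetch.max.bytes': 52428800,
};
```

The exact accepted shape depends on the underlying client's `ConsumerGlobalConfig` typings.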

---

## ProducerGlobalConfig

Properties specific to the **producer** (extends `GlobalConfig`).

### Delivery & Reliability

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| `enable.idempotence` | `boolean` | `false` | Enable exactly-once delivery and original produce order |
| `transactional.id` | `string` | — | Transactional producer ID. Enables transactional semantics (requires broker >= 0.11) |
| `transaction.timeout.ms` | `number` | `60000` | Transaction coordinator timeout. Adjusts `message.timeout.ms` and `socket.timeout.ms` automatically |
| `message.send.max.retries` | `number` | `2147483647` | How many times to retry a failing message. Alias: `retries` |
| `enable.gapless.guarantee` | `boolean` | `false` | *(Experimental)* Raise fatal error on any gap in produced message sequence |
| `dr_cb` | `boolean\|Function` | — | Delivery report callback |

### Batching & Throughput

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| `queue.buffering.max.messages` | `number` | `100000` | Maximum messages on the producer queue (0 = unlimited) |
| `queue.buffering.max.kbytes` | `number` | `1048576` | Maximum total message size on the producer queue (1 GB default) |
| `queue.buffering.max.ms` | `number` | `5` | Delay (ms) to accumulate messages before transmitting (linger). Alias: `linger.ms` |
| `queue.buffering.backpressure.threshold` | `number` | `1` | Outstanding un-transmitted requests threshold before backpressure kicks in |
| `batch.num.messages` | `number` | `10000` | Maximum messages per MessageSet batch |
| `batch.size` | `number` | `1000000` | Maximum bytes per MessageSet batch (1 MB default) |
| `sticky.partitioning.linger.ms` | `number` | `10` | How long to use the same sticky partition for null-key messages |
| `compression.codec` | `'none'\|'gzip'\|'snappy'\|'lz4'\|'zstd'` | `none` | Compression codec for messages. Alias: `compression.type` |
| `delivery.report.only.error` | `boolean` | `false` | Only emit delivery reports for failed messages |

### Producer Misc

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| `allow.auto.create.topics` | `boolean` | `true` | Allow automatic topic creation on produce (producer default differs from consumer) |
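
As a sketch, a throughput-leaning producer could combine the reliability and batching properties above. Idempotence trades a little latency for ordering and duplicate protection; the values shown are illustrative choices, not this library's defaults:

```typescript
// Sketch: a producer configuration tuned for batching throughput,
// using property names from the tables above. Values are examples.
const producerConfig = {
  'enable.idempotence': true,              // safe retries, original produce order preserved
  'compression.codec': 'lz4',
  'queue.buffering.max.ms': 10,            // linger up to 10 ms so batches fill
  'batch.num.messages': 10000,
  'batch.size': 1000000,                   // 1 MB cap per MessageSet
  'queue.buffering.max.messages': 100000,  // bound producer-side memory
};
```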

---

## Environment Variables via Env

Any property above can be configured without code using the `KAFKA_CONSUMER_*` / `KAFKA_PRODUCER_*` environment variable convention supported by this library.

**Conversion rule:** strip prefix → lowercase → replace `_` with `.`

```
KAFKA_CONSUMER_GROUP_ID → group.id
KAFKA_CONSUMER_SESSION_TIMEOUT_MS → session.timeout.ms
KAFKA_CONSUMER_PARTITION_ASSIGNMENT_STRATEGY → partition.assignment.strategy
KAFKA_CONSUMER_ENABLE_AUTO_COMMIT → enable.auto.commit
KAFKA_CONSUMER_ISOLATION_LEVEL → isolation.level
KAFKA_PRODUCER_COMPRESSION_TYPE → compression.type (= compression.codec)
KAFKA_PRODUCER_LINGER_MS → linger.ms (= queue.buffering.max.ms)
KAFKA_PRODUCER_BATCH_SIZE → batch.size
KAFKA_PRODUCER_ENABLE_IDEMPOTENCE → enable.idempotence
KAFKA_PRODUCER_TRANSACTIONAL_ID → transactional.id
```
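
The conversion rule can be sketched as a small helper. The function name is illustrative, not this package's actual export; note that properties containing a literal underscore (such as `log_level`) are not reachable under this rule:

```typescript
// Sketch of the documented conversion: strip the KAFKA_CONSUMER_/KAFKA_PRODUCER_
// prefix, lowercase the remainder, and replace every '_' with '.'.
function envToProperty(envName: string): string {
  return envName
    .replace(/^KAFKA_(CONSUMER|PRODUCER)_/, '')
    .toLowerCase()
    .replace(/_/g, '.');
}

// envToProperty('KAFKA_CONSUMER_SESSION_TIMEOUT_MS') → 'session.timeout.ms'
```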

Values are automatically coerced:

- `"true"` / `"false"` → `boolean`
- Numeric strings (e.g. `"30000"`) → `number`
- All other strings remain as `string`
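
The coercion rules above can be sketched as follows (an illustrative helper, not this package's actual `parse-env` implementation):

```typescript
// Sketch of the documented coercion order: booleans first, then numbers,
// falling back to the raw string for everything else.
function coerce(value: string): string | number | boolean {
  if (value === 'true') return true;
  if (value === 'false') return false;
  if (value.trim() !== '' && !Number.isNaN(Number(value))) return Number(value);
  return value;
}

// coerce('30000') → 30000; coerce('read_committed') → 'read_committed'
```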

Code-level config passed to the constructor always takes precedence over env vars.

```bash
# Example .env
KAFKA_CONSUMER_BOOTSTRAP_SERVERS=kafka:9092
KAFKA_CONSUMER_GROUP_ID=my-service
KAFKA_CONSUMER_SESSION_TIMEOUT_MS=30000
KAFKA_CONSUMER_MAX_POLL_INTERVAL_MS=300000
KAFKA_CONSUMER_ENABLE_AUTO_COMMIT=true
KAFKA_CONSUMER_AUTO_COMMIT_INTERVAL_MS=5000
KAFKA_CONSUMER_ISOLATION_LEVEL=read_committed
KAFKA_CONSUMER_FETCH_MAX_BYTES=52428800
KAFKA_PRODUCER_BOOTSTRAP_SERVERS=kafka:9092
KAFKA_PRODUCER_COMPRESSION_TYPE=snappy
KAFKA_PRODUCER_LINGER_MS=10
KAFKA_PRODUCER_BATCH_SIZE=65536
KAFKA_PRODUCER_ENABLE_IDEMPOTENCE=false
```
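
The precedence rule (code-level config over env vars) behaves like a plain object merge with code-level keys applied last. This is a sketch of the observable behavior, not this library's internals:

```typescript
// Sketch: env-derived config is the base; constructor config is spread
// last, so its keys win on conflict.
const fromEnv = { 'group.id': 'from-env', 'session.timeout.ms': 30000 };
const fromCode = { 'group.id': 'my-service' };
const effective = { ...fromEnv, ...fromCode };

// effective['group.id'] → 'my-service' (code wins)
// effective['session.timeout.ms'] → 30000 (env value kept)
```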
> For the full librdkafka configuration reference, see the [upstream CONFIGURATION.md](https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md).