logstash-integration-kafka 10.1.0-java → 10.5.1-java

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 254abccf066d63d45cf0660dafa06b603c97fb5557c1f978ecc41b41078c6ead
- data.tar.gz: a6bcb799f703db46d80a4412f35809b7a7d13bcdf8eaf91e12ef06befc700a93
+ metadata.gz: 93e42aba43873cfb3c8165a31b184e8178f738787329a36bcde79b234d2f642a
+ data.tar.gz: 41cca0d58b15c3072d51d009399e01e6854dea4c1d0cc3791e52430365599d73
  SHA512:
- metadata.gz: 9551a410f21e1015e56ebd2d2881d75c1eb3d13e5a3aff609e98ac4111376764bcc1978612bb9b239b2335757e129897a94be69bac02d996dfbf31d50ffc9614
- data.tar.gz: 106b3fa2816631035f132a4771f4b5776fd9a79811305200d37a1b539a3ba1600079e8b9d9c1ec718ff9a2cecdea9115fb31b2ccd425edfe9097114da7ca752f
+ metadata.gz: b40a479faa7568a326b53fb721f692bdb4f66e0c87e3553d39b29456b3e8b61f4849d0c325c21c3a8247989c16467d4c72268673b0c7e7ee41de7af1d939a523
+ data.tar.gz: 1df3fe42154e9b035145a159fbca4605f3c82d8d61765af3242ec6a2ff571b9b9d2bd536e484dfba766ca353722976c0c81b1e8f70676f083cec0ab3b6983a4e
@@ -1,3 +1,23 @@
+ ## 10.5.1
+ - [DOC] Replaced plugin_header file with plugin_header-integration file. [#46](https://github.com/logstash-plugins/logstash-integration-kafka/pull/46)
+ - [DOC] Update kafka client version across kafka integration docs [#47](https://github.com/logstash-plugins/logstash-integration-kafka/pull/47)
+ - [DOC] Replace hard-coded kafka client and doc path version numbers with attributes to simplify doc maintenance [#48](https://github.com/logstash-plugins/logstash-integration-kafka/pull/48)
+
+ ## 10.5.0
+ - Changed: retry sending messages only for retriable exceptions [#29](https://github.com/logstash-plugins/logstash-integration-kafka/pull/29)
+
+ ## 10.4.1
+ - [DOC] Fixed formatting issues and made minor content edits [#43](https://github.com/logstash-plugins/logstash-integration-kafka/pull/43)
+
+ ## 10.4.0
+ - added the input `isolation_level` to allow fine control of whether to return transactional messages [#44](https://github.com/logstash-plugins/logstash-integration-kafka/pull/44)
+
+ ## 10.3.0
+ - added the input and output `client_dns_lookup` parameter to allow control of how DNS requests are made [#28](https://github.com/logstash-plugins/logstash-integration-kafka/pull/28)
+
+ ## 10.2.0
+ - Changed: config defaults to be aligned with Kafka client defaults [#30](https://github.com/logstash-plugins/logstash-integration-kafka/pull/30)
+
  ## 10.1.0
  - updated kafka client (and its dependencies) to version 2.4.1 ([#16](https://github.com/logstash-plugins/logstash-integration-kafka/pull/16))
  - added the input `client_rack` parameter to enable support for follower fetching
@@ -12,6 +12,7 @@ Contributors:
  * Kurt Hurtado (kurtado)
  * Ry Biesemeyer (yaauie)
  * Rob Cowart (robcowart)
+ * Tim te Beek (timtebeek)
 
  Note: If you've sent us patches, bug reports, or otherwise contributed to
  Logstash, and you aren't on the list above and want to be, please let us know
@@ -1,6 +1,7 @@
  :plugin: kafka
  :type: integration
  :no_codec:
+ :kafka_client: 2.4
 
  ///////////////////////////////////////////
  START - GENERATED VARIABLES, DO NOT EDIT!
@@ -21,11 +22,15 @@ include::{include_path}/plugin_header.asciidoc[]
 
  ==== Description
 
- The Kafka Integration Plugin provides integrated plugins for working with the https://kafka.apache.org/[Kafka] distributed streaming platform.
+ The Kafka Integration Plugin provides integrated plugins for working with the
+ https://kafka.apache.org/[Kafka] distributed streaming platform.
 
  - {logstash-ref}/plugins-inputs-kafka.html[Kafka Input Plugin]
  - {logstash-ref}/plugins-outputs-kafka.html[Kafka Output Plugin]
 
- This plugin uses Kafka Client 2.3.0. For broker compatibility, see the official https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka compatibility reference]. If the linked compatibility wiki is not up-to-date, please contact Kafka support/community to confirm compatibility.
+ This plugin uses Kafka Client {kafka_client}. For broker compatibility, see the official
+ https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka
+ compatibility reference]. If the linked compatibility wiki is not up-to-date,
+ please contact Kafka support/community to confirm compatibility.
 
  :no_codec!:
@@ -1,6 +1,9 @@
+ :integration: kafka
  :plugin: kafka
  :type: input
  :default_codec: plain
+ :kafka_client: 2.4
+ :kafka_client_doc: 24
 
  ///////////////////////////////////////////
  START - GENERATED VARIABLES, DO NOT EDIT!
@@ -17,15 +20,20 @@ END - GENERATED VARIABLES, DO NOT EDIT!
 
  === Kafka input plugin
 
- include::{include_path}/plugin_header.asciidoc[]
+ include::{include_path}/plugin_header-integration.asciidoc[]
 
  ==== Description
 
  This input will read events from a Kafka topic.
 
- This plugin uses Kafka Client 2.1.0. For broker compatibility, see the official https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka compatibility reference]. If the linked compatibility wiki is not up-to-date, please contact Kafka support/community to confirm compatibility.
+ This plugin uses Kafka Client {kafka_client}. For broker compatibility, see the
+ official
+ https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka
+ compatibility reference]. If the linked compatibility wiki is not up-to-date,
+ please contact Kafka support/community to confirm compatibility.
 
- If you require features not yet available in this plugin (including client version upgrades), please file an issue with details about what you need.
+ If you require features not yet available in this plugin (including client
+ version upgrades), please file an issue with details about what you need.
 
  This input supports connecting to Kafka over:
 
@@ -46,9 +54,9 @@ the same `group_id`.
  Ideally you should have as many threads as the number of partitions for a perfect balance --
  more threads than partitions means that some threads will be idle
 
- For more information see http://kafka.apache.org/documentation.html#theconsumer
+ For more information see https://kafka.apache.org/{kafka_client_doc}/documentation.html#theconsumer
 
- Kafka consumer configuration: http://kafka.apache.org/documentation.html#consumerconfigs
+ Kafka consumer configuration: https://kafka.apache.org/{kafka_client_doc}/documentation.html#consumerconfigs
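
[Editor's note: a minimal pipeline sketch of the threads-to-partitions guidance above. The broker address, topic name, and partition count are hypothetical.]

```
input {
  kafka {
    bootstrap_servers => "kafka1:9092"   # hypothetical broker
    topics => ["events"]                 # assume this topic has 6 partitions
    group_id => "logstash"
    consumer_threads => 6                # one thread per partition for balance
  }
}
```

With fewer threads than partitions, each thread consumes several partitions; with more, the surplus threads sit idle.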
 
  ==== Metadata fields
 
@@ -59,7 +67,11 @@ The following metadata from Kafka broker are added under the `[@metadata]` field
  * `[@metadata][kafka][partition]`: Partition info for this message.
  * `[@metadata][kafka][offset]`: Original record offset for this message.
  * `[@metadata][kafka][key]`: Record key, if any.
- * `[@metadata][kafka][timestamp]`: Timestamp in the Record. Depending on your broker configuration, this can be either when the record was created (default) or when it was received by the broker. See more about property log.message.timestamp.type at https://kafka.apache.org/10/documentation.html#brokerconfigs
+ * `[@metadata][kafka][timestamp]`: Timestamp in the Record.
+ Depending on your broker configuration, this can be
+ either when the record was created (default) or when it was received by the
+ broker. See more about property log.message.timestamp.type at
+ https://kafka.apache.org/{kafka_client_doc}/documentation.html#brokerconfigs
 
  Metadata is only added to the event if the `decorate_events` option is set to true (it defaults to false).
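
[Editor's note: a minimal sketch of using the metadata fields above. Since `[@metadata]` is not serialized by outputs, a `mutate` filter copies a field into the event proper; broker and topic names are hypothetical.]

```
input {
  kafka {
    bootstrap_servers => "kafka1:9092"  # hypothetical broker
    topics => ["events"]
    decorate_events => true             # required for [@metadata][kafka] fields
  }
}
filter {
  mutate {
    # copy metadata into the event so it survives output serialization
    add_field => { "kafka_topic" => "%{[@metadata][kafka][topic]}" }
  }
}
```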
 
@@ -71,45 +83,50 @@ inserted into your original event, you'll have to use the `mutate` filter to man
 
  This plugin supports these configuration options plus the <<plugins-{type}s-{plugin}-common-options>> described later.
 
- NOTE: Some of these options map to a Kafka option. See the https://kafka.apache.org/documentation for more details.
+ NOTE: Some of these options map to a Kafka option. Defaults usually reflect the Kafka default setting,
+ and might change if Kafka's consumer defaults change.
+ See the https://kafka.apache.org/{kafka_client_doc}/documentation for more details.
 
  [cols="<,<,<",options="header",]
  |=======================================================================
  |Setting |Input type|Required
79
- | <<plugins-{type}s-{plugin}-auto_commit_interval_ms>> |<<string,string>>|No
93
+ | <<plugins-{type}s-{plugin}-auto_commit_interval_ms>> |<<number,number>>|No
80
94
  | <<plugins-{type}s-{plugin}-auto_offset_reset>> |<<string,string>>|No
81
95
  | <<plugins-{type}s-{plugin}-bootstrap_servers>> |<<string,string>>|No
82
- | <<plugins-{type}s-{plugin}-check_crcs>> |<<string,string>>|No
96
+ | <<plugins-{type}s-{plugin}-check_crcs>> |<<boolean,boolean>>|No
97
+ | <<plugins-{type}s-{plugin}-client_dns_lookup>> |<<string,string>>|No
83
98
  | <<plugins-{type}s-{plugin}-client_id>> |<<string,string>>|No
84
- | <<plugins-{type}s-{plugin}-connections_max_idle_ms>> |<<string,string>>|No
99
+ | <<plugins-{type}s-{plugin}-client_rack>> |<<string,string>>|No
100
+ | <<plugins-{type}s-{plugin}-connections_max_idle_ms>> |<<number,number>>|No
85
101
  | <<plugins-{type}s-{plugin}-consumer_threads>> |<<number,number>>|No
86
102
  | <<plugins-{type}s-{plugin}-decorate_events>> |<<boolean,boolean>>|No
87
- | <<plugins-{type}s-{plugin}-enable_auto_commit>> |<<string,string>>|No
103
+ | <<plugins-{type}s-{plugin}-enable_auto_commit>> |<<boolean,boolean>>|No
88
104
  | <<plugins-{type}s-{plugin}-exclude_internal_topics>> |<<string,string>>|No
89
- | <<plugins-{type}s-{plugin}-fetch_max_bytes>> |<<string,string>>|No
90
- | <<plugins-{type}s-{plugin}-fetch_max_wait_ms>> |<<string,string>>|No
91
- | <<plugins-{type}s-{plugin}-fetch_min_bytes>> |<<string,string>>|No
105
+ | <<plugins-{type}s-{plugin}-fetch_max_bytes>> |<<number,number>>|No
106
+ | <<plugins-{type}s-{plugin}-fetch_max_wait_ms>> |<<number,number>>|No
107
+ | <<plugins-{type}s-{plugin}-fetch_min_bytes>> |<<number,number>>|No
92
108
  | <<plugins-{type}s-{plugin}-group_id>> |<<string,string>>|No
93
- | <<plugins-{type}s-{plugin}-heartbeat_interval_ms>> |<<string,string>>|No
109
+ | <<plugins-{type}s-{plugin}-heartbeat_interval_ms>> |<<number,number>>|No
110
+ | <<plugins-{type}s-{plugin}-isolation_level>> |<<string,string>>|No
94
111
  | <<plugins-{type}s-{plugin}-jaas_path>> |a valid filesystem path|No
95
112
  | <<plugins-{type}s-{plugin}-kerberos_config>> |a valid filesystem path|No
96
113
  | <<plugins-{type}s-{plugin}-key_deserializer_class>> |<<string,string>>|No
97
- | <<plugins-{type}s-{plugin}-max_partition_fetch_bytes>> |<<string,string>>|No
98
- | <<plugins-{type}s-{plugin}-max_poll_interval_ms>> |<<string,string>>|No
99
- | <<plugins-{type}s-{plugin}-max_poll_records>> |<<string,string>>|No
100
- | <<plugins-{type}s-{plugin}-metadata_max_age_ms>> |<<string,string>>|No
114
+ | <<plugins-{type}s-{plugin}-max_partition_fetch_bytes>> |<<number,number>>|No
115
+ | <<plugins-{type}s-{plugin}-max_poll_interval_ms>> |<<number,number>>|No
116
+ | <<plugins-{type}s-{plugin}-max_poll_records>> |<<number,number>>|No
117
+ | <<plugins-{type}s-{plugin}-metadata_max_age_ms>> |<<number,number>>|No
101
118
  | <<plugins-{type}s-{plugin}-partition_assignment_strategy>> |<<string,string>>|No
102
119
  | <<plugins-{type}s-{plugin}-poll_timeout_ms>> |<<number,number>>|No
103
- | <<plugins-{type}s-{plugin}-receive_buffer_bytes>> |<<string,string>>|No
104
- | <<plugins-{type}s-{plugin}-reconnect_backoff_ms>> |<<string,string>>|No
105
- | <<plugins-{type}s-{plugin}-request_timeout_ms>> |<<string,string>>|No
106
- | <<plugins-{type}s-{plugin}-retry_backoff_ms>> |<<string,string>>|No
120
+ | <<plugins-{type}s-{plugin}-receive_buffer_bytes>> |<<number,number>>|No
121
+ | <<plugins-{type}s-{plugin}-reconnect_backoff_ms>> |<<number,number>>|No
122
+ | <<plugins-{type}s-{plugin}-request_timeout_ms>> |<<number,number>>|No
123
+ | <<plugins-{type}s-{plugin}-retry_backoff_ms>> |<<number,number>>|No
107
124
  | <<plugins-{type}s-{plugin}-sasl_jaas_config>> |<<string,string>>|No
108
125
  | <<plugins-{type}s-{plugin}-sasl_kerberos_service_name>> |<<string,string>>|No
109
126
  | <<plugins-{type}s-{plugin}-sasl_mechanism>> |<<string,string>>|No
110
127
  | <<plugins-{type}s-{plugin}-security_protocol>> |<<string,string>>, one of `["PLAINTEXT", "SSL", "SASL_PLAINTEXT", "SASL_SSL"]`|No
111
- | <<plugins-{type}s-{plugin}-send_buffer_bytes>> |<<string,string>>|No
112
- | <<plugins-{type}s-{plugin}-session_timeout_ms>> |<<string,string>>|No
128
+ | <<plugins-{type}s-{plugin}-send_buffer_bytes>> |<<number,number>>|No
129
+ | <<plugins-{type}s-{plugin}-session_timeout_ms>> |<<number,number>>|No
113
130
  | <<plugins-{type}s-{plugin}-ssl_endpoint_identification_algorithm>> |<<string,string>>|No
114
131
  | <<plugins-{type}s-{plugin}-ssl_key_password>> |<<password,password>>|No
115
132
  | <<plugins-{type}s-{plugin}-ssl_keystore_location>> |a valid filesystem path|No
@@ -121,7 +138,6 @@ NOTE: Some of these options map to a Kafka option. See the https://kafka.apache.
  | <<plugins-{type}s-{plugin}-topics>> |<<array,array>>|No
  | <<plugins-{type}s-{plugin}-topics_pattern>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-value_deserializer_class>> |<<string,string>>|No
- | <<plugins-{type}s-{plugin}-client_rack>> |<<string,string>>|No
  |=======================================================================
 
  Also see <<plugins-{type}s-{plugin}-common-options>> for a list of options supported by all
@@ -132,8 +148,8 @@ input plugins.
  [id="plugins-{type}s-{plugin}-auto_commit_interval_ms"]
  ===== `auto_commit_interval_ms`
 
- * Value type is <<string,string>>
- * Default value is `"5000"`
+ * Value type is <<number,number>>
+ * Default value is `5000`.
 
  The frequency in milliseconds that the consumer offsets are committed to Kafka.
 
@@ -165,12 +181,23 @@ case a server is down).
  [id="plugins-{type}s-{plugin}-check_crcs"]
  ===== `check_crcs`
 
+ * Value type is <<boolean,boolean>>
+ * Default value is `true`
+
+ Automatically check the CRC32 of the records consumed.
+ This ensures no on-the-wire or on-disk corruption to the messages occurred.
+ This check adds some overhead, so it may be disabled in cases seeking extreme performance.
+
+ [id="plugins-{type}s-{plugin}-client_dns_lookup"]
+ ===== `client_dns_lookup`
+
  * Value type is <<string,string>>
- * There is no default value for this setting.
+ * Default value is `"default"`
 
- Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk
- corruption to the messages occurred. This check adds some overhead, so it may be
- disabled in cases seeking extreme performance.
+ How DNS lookups should be done. If set to `use_all_dns_ips`, when the lookup returns multiple
+ IP addresses for a hostname, they will all be attempted to connect to before failing the
+ connection. If the value is `resolve_canonical_bootstrap_servers_only` each entry will be
+ resolved and expanded into a list of canonical names.
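
[Editor's note: a minimal sketch of the `client_dns_lookup` option described above, assuming a hypothetical bootstrap hostname that resolves to several IP addresses.]

```
input {
  kafka {
    bootstrap_servers => "kafka.example.com:9092"  # hypothetical multi-IP hostname
    topics => ["events"]
    client_dns_lookup => "use_all_dns_ips"  # try every resolved IP before failing
  }
}
```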
 
  [id="plugins-{type}s-{plugin}-client_id"]
  ===== `client_id`
@@ -182,12 +209,25 @@ The id string to pass to the server when making requests. The purpose of this
  is to be able to track the source of requests beyond just ip/port by allowing
  a logical application name to be included.
 
- [id="plugins-{type}s-{plugin}-connections_max_idle_ms"]
- ===== `connections_max_idle_ms`
+ [id="plugins-{type}s-{plugin}-client_rack"]
+ ===== `client_rack`
 
  * Value type is <<string,string>>
  * There is no default value for this setting.
 
+ A rack identifier for the Kafka consumer.
+ Used to select the physically closest rack for the consumer to read from.
+ The setting corresponds with Kafka's `broker.rack` configuration.
+
+ NOTE: Available only for Kafka 2.4.0 and higher. See
+ https://cwiki.apache.org/confluence/display/KAFKA/KIP-392%3A+Allow+consumers+to+fetch+from+closest+replica[KIP-392].
+
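
[Editor's note: a minimal sketch of the `client_rack` option described above for follower fetching (KIP-392). The broker address and rack identifier are hypothetical and must match a `broker.rack` value configured on the cluster.]

```
input {
  kafka {
    bootstrap_servers => "kafka1:9092"  # hypothetical broker
    topics => ["events"]
    client_rack => "us-east-1a"  # matches a broker.rack value; Kafka 2.4.0+
  }
}
```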
+ [id="plugins-{type}s-{plugin}-connections_max_idle_ms"]
+ ===== `connections_max_idle_ms`
+
+ * Value type is <<number,number>>
+ * Default value is `540000` milliseconds (9 minutes).
+
  Close idle connections after the number of milliseconds specified by this config.
 
  [id="plugins-{type}s-{plugin}-consumer_threads"]
@@ -217,8 +257,8 @@ This will add a field named `kafka` to the logstash event containing the followi
  [id="plugins-{type}s-{plugin}-enable_auto_commit"]
  ===== `enable_auto_commit`
 
- * Value type is <<string,string>>
- * Default value is `"true"`
+ * Value type is <<boolean,boolean>>
+ * Default value is `true`
 
  This committed offset will be used when the process fails as the position from
  which the consumption will begin.
@@ -239,8 +279,8 @@ If set to true the only way to receive records from an internal topic is subscri
  [id="plugins-{type}s-{plugin}-fetch_max_bytes"]
  ===== `fetch_max_bytes`
 
- * Value type is <<string,string>>
- * There is no default value for this setting.
+ * Value type is <<number,number>>
+ * Default value is `52428800` (50MB)
 
  The maximum amount of data the server should return for a fetch request. This is not an
  absolute maximum, if the first message in the first non-empty partition of the fetch is larger
@@ -249,8 +289,8 @@ than this value, the message will still be returned to ensure that the consumer
  [id="plugins-{type}s-{plugin}-fetch_max_wait_ms"]
  ===== `fetch_max_wait_ms`
 
- * Value type is <<string,string>>
- * There is no default value for this setting.
+ * Value type is <<number,number>>
+ * Default value is `500` milliseconds.
 
  The maximum amount of time the server will block before answering the fetch request if
  there isn't sufficient data to immediately satisfy `fetch_min_bytes`. This
@@ -259,7 +299,7 @@ should be less than or equal to the timeout used in `poll_timeout_ms`
  [id="plugins-{type}s-{plugin}-fetch_min_bytes"]
  ===== `fetch_min_bytes`
 
- * Value type is <<string,string>>
+ * Value type is <<number,number>>
  * There is no default value for this setting.
 
  The minimum amount of data the server should return for a fetch request. If insufficient
@@ -279,8 +319,8 @@ Logstash instances with the same `group_id`
  [id="plugins-{type}s-{plugin}-heartbeat_interval_ms"]
  ===== `heartbeat_interval_ms`
 
- * Value type is <<string,string>>
- * There is no default value for this setting.
+ * Value type is <<number,number>>
+ * Default value is `3000` milliseconds (3 seconds).
 
  The expected time between heartbeats to the consumer coordinator. Heartbeats are used to ensure
  that the consumer's session stays active and to facilitate rebalancing when new
@@ -288,6 +328,17 @@ consumers join or leave the group. The value must be set lower than
  `session.timeout.ms`, but typically should be set no higher than 1/3 of that value.
  It can be adjusted even lower to control the expected time for normal rebalances.
 
+ [id="plugins-{type}s-{plugin}-isolation_level"]
+ ===== `isolation_level`
+
+ * Value type is <<string,string>>
+ * Default value is `"read_uncommitted"`
+
+ Controls how to read messages written transactionally. If set to `read_committed`, polling messages will only return
+ transactional messages which have been committed. If set to `read_uncommitted` (the default), polling messages will
+ return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned
+ unconditionally in either mode.
+
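
[Editor's note: a minimal sketch of the `isolation_level` option described above; broker and topic names are hypothetical.]

```
input {
  kafka {
    bootstrap_servers => "kafka1:9092"   # hypothetical broker
    topics => ["transactions"]
    isolation_level => "read_committed"  # skip records from aborted transactions
  }
}
```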
  [id="plugins-{type}s-{plugin}-jaas_path"]
  ===== `jaas_path`
 
@@ -330,8 +381,8 @@ Java Class used to deserialize the record's key
  [id="plugins-{type}s-{plugin}-max_partition_fetch_bytes"]
  ===== `max_partition_fetch_bytes`
 
- * Value type is <<string,string>>
- * There is no default value for this setting.
+ * Value type is <<number,number>>
+ * Default value is `1048576` (1MB).
 
  The maximum amount of data per-partition the server will return. The maximum total memory used for a
  request will be `#partitions * max.partition.fetch.bytes`. This size must be at least
@@ -342,28 +393,28 @@ to fetch a large message on a certain partition.
  [id="plugins-{type}s-{plugin}-max_poll_interval_ms"]
  ===== `max_poll_interval_ms`
 
- * Value type is <<string,string>>
- * There is no default value for this setting.
+ * Value type is <<number,number>>
+ * Default value is `300000` milliseconds (5 minutes).
 
  The maximum delay between invocations of poll() when using consumer group management. This places
  an upper bound on the amount of time that the consumer can be idle before fetching more records.
  If poll() is not called before expiration of this timeout, then the consumer is considered failed and
  the group will rebalance in order to reassign the partitions to another member.
- The value of the configuration `request_timeout_ms` must always be larger than max_poll_interval_ms
+ The value of the configuration `request_timeout_ms` must always be larger than `max_poll_interval_ms`.
 
  [id="plugins-{type}s-{plugin}-max_poll_records"]
  ===== `max_poll_records`
 
- * Value type is <<string,string>>
- * There is no default value for this setting.
+ * Value type is <<number,number>>
+ * Default value is `500`.
 
  The maximum number of records returned in a single call to poll().
 
  [id="plugins-{type}s-{plugin}-metadata_max_age_ms"]
  ===== `metadata_max_age_ms`
 
- * Value type is <<string,string>>
- * There is no default value for this setting.
+ * Value type is <<number,number>>
+ * Default value is `300000` milliseconds (5 minutes).
 
  The period of time in milliseconds after which we force a refresh of metadata even if
  we haven't seen any partition leadership changes to proactively discover any new brokers or partitions
@@ -382,30 +433,35 @@ partition ownership amongst consumer instances, supported options are:
  * `sticky`
  * `cooperative_sticky`
 
- These map to Kafka's corresponding https://kafka.apache.org/24/javadoc/org/apache/kafka/clients/consumer/ConsumerPartitionAssignor.html[`ConsumerPartitionAssignor`]
+ These map to Kafka's corresponding https://kafka.apache.org/{kafka_client_doc}/javadoc/org/apache/kafka/clients/consumer/ConsumerPartitionAssignor.html[`ConsumerPartitionAssignor`]
  implementations.
 
  [id="plugins-{type}s-{plugin}-poll_timeout_ms"]
  ===== `poll_timeout_ms`
 
  * Value type is <<number,number>>
- * Default value is `100`
+ * Default value is `100` milliseconds.
+
+ Time the Kafka consumer will wait to receive new messages from topics.
 
- Time kafka consumer will wait to receive new messages from topics
+ After subscribing to a set of topics, the Kafka consumer automatically joins the group when polling.
+ Polling in a loop ensures consumer liveness.
+ Under the covers, the Kafka client sends periodic heartbeats to the server.
+ The timeout specifies the time to block while waiting for input on each poll.
 
  [id="plugins-{type}s-{plugin}-receive_buffer_bytes"]
  ===== `receive_buffer_bytes`
 
- * Value type is <<string,string>>
- * There is no default value for this setting.
+ * Value type is <<number,number>>
+ * Default value is `32768` (32KB).
 
  The size of the TCP receive buffer (SO_RCVBUF) to use when reading data.
 
  [id="plugins-{type}s-{plugin}-reconnect_backoff_ms"]
  ===== `reconnect_backoff_ms`
 
- * Value type is <<string,string>>
- * There is no default value for this setting.
+ * Value type is <<number,number>>
+ * Default value is `50` milliseconds.
 
  The amount of time to wait before attempting to reconnect to a given host.
  This avoids repeatedly connecting to a host in a tight loop.
@@ -414,8 +470,8 @@ This backoff applies to all requests sent by the consumer to the broker.
  [id="plugins-{type}s-{plugin}-request_timeout_ms"]
  ===== `request_timeout_ms`
 
- * Value type is <<string,string>>
- * There is no default value for this setting.
+ * Value type is <<number,number>>
+ * Default value is `40000` milliseconds (40 seconds).
 
  The configuration controls the maximum amount of time the client will wait
  for the response of a request. If the response is not received before the timeout
@@ -425,8 +481,8 @@ retries are exhausted.
  [id="plugins-{type}s-{plugin}-retry_backoff_ms"]
  ===== `retry_backoff_ms`
 
- * Value type is <<string,string>>
- * There is no default value for this setting.
+ * Value type is <<number,number>>
+ * Default value is `100` milliseconds.
 
  The amount of time to wait before attempting to retry a failed fetch request
  to a given topic partition. This avoids repeated fetching-and-failing in a tight loop.
@@ -479,16 +535,16 @@ Security protocol to use, which can be either of PLAINTEXT,SSL,SASL_PLAINTEXT,SA
  [id="plugins-{type}s-{plugin}-send_buffer_bytes"]
  ===== `send_buffer_bytes`
 
- * Value type is <<string,string>>
- * There is no default value for this setting.
+ * Value type is <<number,number>>
+ * Default value is `131072` (128KB).
 
  The size of the TCP send buffer (SO_SNDBUF) to use when sending data
 
  [id="plugins-{type}s-{plugin}-session_timeout_ms"]
  ===== `session_timeout_ms`
 
- * Value type is <<string,string>>
- * There is no default value for this setting.
+ * Value type is <<number,number>>
+ * Default value is `10000` milliseconds (10 seconds).
 
  The timeout after which, if the `poll_timeout_ms` is not invoked, the consumer is marked dead
  and a rebalance operation is triggered for the group identified by `group_id`
@@ -548,7 +604,7 @@ The JKS truststore path to validate the Kafka broker's certificate.
  * Value type is <<password,password>>
  * There is no default value for this setting.
 
- The truststore password
+ The truststore password.
 
  [id="plugins-{type}s-{plugin}-ssl_truststore_type"]
  ===== `ssl_truststore_type`
@@ -583,19 +639,6 @@ The topics configuration will be ignored when using this configuration.
 
  Java Class used to deserialize the record's value
 
-
- [id="plugins-{type}s-{plugin}-client_rack"]
- ===== `client_rack`
-
- * Value type is <<string,string>>
- * There is no default value for this setting.
-
- A rack identifier for the Kafka consumer.
- Used to select the physically closest rack for the consumer to read from.
- The setting corresponds with Kafka's `broker.rack` configuration.
-
- NOTE: Only available for Kafka 2.4.0 and higher; see https://cwiki.apache.org/confluence/display/KAFKA/KIP-392%3A+Allow+consumers+to+fetch+from+closest+replica[KIP-392].
-
  [id="plugins-{type}s-{plugin}-common-options"]
  include::{include_path}/{type}.asciidoc[]