logstash-integration-kafka 10.0.1-java → 10.5.0-java
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +24 -0
- data/CONTRIBUTORS +2 -0
- data/LICENSE +199 -10
- data/docs/index.asciidoc +1 -1
- data/docs/input-kafka.asciidoc +118 -70
- data/docs/output-kafka.asciidoc +68 -23
- data/lib/logstash-integration-kafka_jars.rb +3 -3
- data/lib/logstash/inputs/kafka.rb +90 -54
- data/lib/logstash/outputs/kafka.rb +83 -45
- data/logstash-integration-kafka.gemspec +2 -2
- data/spec/integration/inputs/kafka_spec.rb +81 -112
- data/spec/integration/outputs/kafka_spec.rb +89 -72
- data/spec/unit/inputs/kafka_spec.rb +63 -1
- data/spec/unit/outputs/kafka_spec.rb +62 -9
- data/vendor/jar-dependencies/com/github/luben/zstd-jni/1.4.3-1/zstd-jni-1.4.3-1.jar +0 -0
- data/vendor/jar-dependencies/org/apache/kafka/kafka-clients/2.4.1/kafka-clients-2.4.1.jar +0 -0
- data/vendor/jar-dependencies/org/slf4j/slf4j-api/1.7.28/slf4j-api-1.7.28.jar +0 -0
- metadata +6 -6
- data/vendor/jar-dependencies/com/github/luben/zstd-jni/1.4.2-1/zstd-jni-1.4.2-1.jar +0 -0
- data/vendor/jar-dependencies/org/apache/kafka/kafka-clients/2.3.0/kafka-clients-2.3.0.jar +0 -0
- data/vendor/jar-dependencies/org/slf4j/slf4j-api/1.7.26/slf4j-api-1.7.26.jar +0 -0
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: c35575aafc1330e1fac14f137c818b3836f399ee1f4514a86a8a7387c4d8e8e9
+  data.tar.gz: 7fb89bca8ec2b25e07ab411b75f7de8fc4edc97f22fd2b0d5869452b10a529d9
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 7d0185e11c203175272ac52eb89a1fb3c1a09906832bc34dde3e2eca513575d9b47c79ca741be609c07f3d8f3d191f4371447d6987ad715516da786f7c1622a3
+  data.tar.gz: 93063bdf2cb603134865fc745c31c37a134d2fca09fac7bd9d68c798c792f471c1388fb29a600b45cc56d506c35a700a2c1236577d0fcbf1a65097cea91bf3f4
data/CHANGELOG.md
CHANGED
@@ -1,7 +1,31 @@
+## 10.5.0
+- Changed: retry sending messages only for retriable exceptions [#27](https://github.com/logstash-plugins/logstash-integration-kafka/pull/29)
+
+## 10.4.1
+- [DOC] Fixed formatting issues and made minor content edits [#43](https://github.com/logstash-plugins/logstash-integration-kafka/pull/43)
+
+## 10.4.0
+- added the input `isolation_level` to allow fine control of whether to return transactional messages [#44](https://github.com/logstash-plugins/logstash-integration-kafka/pull/44)
+
+## 10.3.0
+- added the input and output `client_dns_lookup` parameter to allow control of how DNS requests are made [#28](https://github.com/logstash-plugins/logstash-integration-kafka/pull/28)
+
+## 10.2.0
+- Changed: config defaults to be aligned with Kafka client defaults [#30](https://github.com/logstash-plugins/logstash-integration-kafka/pull/30)
+
+## 10.1.0
+- updated kafka client (and its dependencies) to version 2.4.1 ([#16](https://github.com/logstash-plugins/logstash-integration-kafka/pull/16))
+- added the input `client_rack` parameter to enable support for follower fetching
+- added the output `partitioner` parameter for tuning partitioning strategy
+- Refactor: normalized error logging a bit - make sure exception type is logged
+- Fix: properly handle empty ssl_endpoint_identification_algorithm [#8](https://github.com/logstash-plugins/logstash-integration-kafka/pull/8)
+- Refactor: made `partition_assignment_strategy` option easier to configure by accepting simple values from an enumerated set instead of requiring lengthy class paths ([#25](https://github.com/logstash-plugins/logstash-integration-kafka/pull/25))
+
 ## 10.0.1
 - Fix links in changelog pointing to stand-alone plugin changelogs.
 - Refactor: scope java_import to plugin class
 
+
 ## 10.0.0
 - Initial release of the Kafka Integration Plugin, which combines
 previously-separate Kafka plugins and shared dependencies into a single
data/CONTRIBUTORS
CHANGED
@@ -11,6 +11,8 @@ Contributors:
 * João Duarte (jsvd)
 * Kurt Hurtado (kurtado)
 * Ry Biesemeyer (yaauie)
+* Rob Cowart (robcowart)
+* Tim te Beek (timtebeek)
 
 Note: If you've sent us patches, bug reports, or otherwise contributed to
 Logstash, and you aren't on the list above and want to be, please let us know
data/LICENSE
CHANGED
@@ -1,13 +1,202 @@
-Copyright (c) 2012-2018 Elasticsearch <http://www.elastic.co>
 
-
-
-
+Apache License
+Version 2.0, January 2004
+http://www.apache.org/licenses/
 
-
+TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
 
-
-
-
-
-
+1. Definitions.
+
+"License" shall mean the terms and conditions for use, reproduction,
+and distribution as defined by Sections 1 through 9 of this document.
+
+"Licensor" shall mean the copyright owner or entity authorized by
+the copyright owner that is granting the License.
+
+"Legal Entity" shall mean the union of the acting entity and all
+other entities that control, are controlled by, or are under common
+control with that entity. For the purposes of this definition,
+"control" means (i) the power, direct or indirect, to cause the
+direction or management of such entity, whether by contract or
+otherwise, or (ii) ownership of fifty percent (50%) or more of the
+outstanding shares, or (iii) beneficial ownership of such entity.
+
+"You" (or "Your") shall mean an individual or Legal Entity
+exercising permissions granted by this License.
+
+"Source" form shall mean the preferred form for making modifications,
+including but not limited to software source code, documentation
+source, and configuration files.
+
+"Object" form shall mean any form resulting from mechanical
+transformation or translation of a Source form, including but
+not limited to compiled object code, generated documentation,
+and conversions to other media types.
+
+"Work" shall mean the work of authorship, whether in Source or
+Object form, made available under the License, as indicated by a
+copyright notice that is included in or attached to the work
+(an example is provided in the Appendix below).
+
+"Derivative Works" shall mean any work, whether in Source or Object
+form, that is based on (or derived from) the Work and for which the
+editorial revisions, annotations, elaborations, or other modifications
+represent, as a whole, an original work of authorship. For the purposes
+of this License, Derivative Works shall not include works that remain
+separable from, or merely link (or bind by name) to the interfaces of,
+the Work and Derivative Works thereof.
+
+"Contribution" shall mean any work of authorship, including
+the original version of the Work and any modifications or additions
+to that Work or Derivative Works thereof, that is intentionally
+submitted to Licensor for inclusion in the Work by the copyright owner
+or by an individual or Legal Entity authorized to submit on behalf of
+the copyright owner. For the purposes of this definition, "submitted"
+means any form of electronic, verbal, or written communication sent
+to the Licensor or its representatives, including but not limited to
+communication on electronic mailing lists, source code control systems,
+and issue tracking systems that are managed by, or on behalf of, the
+Licensor for the purpose of discussing and improving the Work, but
+excluding communication that is conspicuously marked or otherwise
+designated in writing by the copyright owner as "Not a Contribution."
+
+"Contributor" shall mean Licensor and any individual or Legal Entity
+on behalf of whom a Contribution has been received by Licensor and
+subsequently incorporated within the Work.
+
+2. Grant of Copyright License. Subject to the terms and conditions of
+this License, each Contributor hereby grants to You a perpetual,
+worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+copyright license to reproduce, prepare Derivative Works of,
+publicly display, publicly perform, sublicense, and distribute the
+Work and such Derivative Works in Source or Object form.
+
+3. Grant of Patent License. Subject to the terms and conditions of
+this License, each Contributor hereby grants to You a perpetual,
+worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+(except as stated in this section) patent license to make, have made,
+use, offer to sell, sell, import, and otherwise transfer the Work,
+where such license applies only to those patent claims licensable
+by such Contributor that are necessarily infringed by their
+Contribution(s) alone or by combination of their Contribution(s)
+with the Work to which such Contribution(s) was submitted. If You
+institute patent litigation against any entity (including a
+cross-claim or counterclaim in a lawsuit) alleging that the Work
+or a Contribution incorporated within the Work constitutes direct
+or contributory patent infringement, then any patent licenses
+granted to You under this License for that Work shall terminate
+as of the date such litigation is filed.
+
+4. Redistribution. You may reproduce and distribute copies of the
+Work or Derivative Works thereof in any medium, with or without
+modifications, and in Source or Object form, provided that You
+meet the following conditions:
+
+(a) You must give any other recipients of the Work or
+Derivative Works a copy of this License; and
+
+(b) You must cause any modified files to carry prominent notices
+stating that You changed the files; and
+
+(c) You must retain, in the Source form of any Derivative Works
+that You distribute, all copyright, patent, trademark, and
+attribution notices from the Source form of the Work,
+excluding those notices that do not pertain to any part of
+the Derivative Works; and
+
+(d) If the Work includes a "NOTICE" text file as part of its
+distribution, then any Derivative Works that You distribute must
+include a readable copy of the attribution notices contained
+within such NOTICE file, excluding those notices that do not
+pertain to any part of the Derivative Works, in at least one
+of the following places: within a NOTICE text file distributed
+as part of the Derivative Works; within the Source form or
+documentation, if provided along with the Derivative Works; or,
+within a display generated by the Derivative Works, if and
+wherever such third-party notices normally appear. The contents
+of the NOTICE file are for informational purposes only and
+do not modify the License. You may add Your own attribution
+notices within Derivative Works that You distribute, alongside
+or as an addendum to the NOTICE text from the Work, provided
+that such additional attribution notices cannot be construed
+as modifying the License.
+
+You may add Your own copyright statement to Your modifications and
+may provide additional or different license terms and conditions
+for use, reproduction, or distribution of Your modifications, or
+for any such Derivative Works as a whole, provided Your use,
+reproduction, and distribution of the Work otherwise complies with
+the conditions stated in this License.
+
+5. Submission of Contributions. Unless You explicitly state otherwise,
+any Contribution intentionally submitted for inclusion in the Work
+by You to the Licensor shall be under the terms and conditions of
+this License, without any additional terms or conditions.
+Notwithstanding the above, nothing herein shall supersede or modify
+the terms of any separate license agreement you may have executed
+with Licensor regarding such Contributions.
+
+6. Trademarks. This License does not grant permission to use the trade
+names, trademarks, service marks, or product names of the Licensor,
+except as required for reasonable and customary use in describing the
+origin of the Work and reproducing the content of the NOTICE file.
+
+7. Disclaimer of Warranty. Unless required by applicable law or
+agreed to in writing, Licensor provides the Work (and each
+Contributor provides its Contributions) on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+implied, including, without limitation, any warranties or conditions
+of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+PARTICULAR PURPOSE. You are solely responsible for determining the
+appropriateness of using or redistributing the Work and assume any
+risks associated with Your exercise of permissions under this License.
+
+8. Limitation of Liability. In no event and under no legal theory,
+whether in tort (including negligence), contract, or otherwise,
+unless required by applicable law (such as deliberate and grossly
+negligent acts) or agreed to in writing, shall any Contributor be
+liable to You for damages, including any direct, indirect, special,
+incidental, or consequential damages of any character arising as a
+result of this License or out of the use or inability to use the
+Work (including but not limited to damages for loss of goodwill,
+work stoppage, computer failure or malfunction, or any and all
+other commercial damages or losses), even if such Contributor
+has been advised of the possibility of such damages.
+
+9. Accepting Warranty or Additional Liability. While redistributing
+the Work or Derivative Works thereof, You may choose to offer,
+and charge a fee for, acceptance of support, warranty, indemnity,
+or other liability obligations and/or rights consistent with this
+License. However, in accepting such obligations, You may act only
+on Your own behalf and on Your sole responsibility, not on behalf
+of any other Contributor, and only if You agree to indemnify,
+defend, and hold each Contributor harmless for any liability
+incurred by, or claims asserted against, such Contributor by reason
+of your accepting any such warranty or additional liability.
+
+END OF TERMS AND CONDITIONS
+
+APPENDIX: How to apply the Apache License to your work.
+
+To apply the Apache License to your work, attach the following
+boilerplate notice, with the fields enclosed by brackets "[]"
+replaced with your own identifying information. (Don't include
+the brackets!) The text should be enclosed in the appropriate
+comment syntax for the file format. We also recommend that a
+file or class name and description of purpose be included on the
+same "printed page" as the copyright notice for easier
+identification within third-party archives.
+
+Copyright 2020 Elastic and contributors
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
data/docs/index.asciidoc
CHANGED
@@ -26,6 +26,6 @@ The Kafka Integration Plugin provides integrated plugins for working with the ht
 - {logstash-ref}/plugins-inputs-kafka.html[Kafka Input Plugin]
 - {logstash-ref}/plugins-outputs-kafka.html[Kafka Output Plugin]
 
-This plugin uses Kafka Client 2.
+This plugin uses Kafka Client 2.4. For broker compatibility, see the official https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka compatibility reference]. If the linked compatibility wiki is not up-to-date, please contact Kafka support/community to confirm compatibility.
 
 :no_codec!:
data/docs/input-kafka.asciidoc
CHANGED
@@ -23,7 +23,7 @@ include::{include_path}/plugin_header.asciidoc[]
 
 This input will read events from a Kafka topic.
 
-This plugin uses Kafka Client 2.
+This plugin uses Kafka Client 2.3.0. For broker compatibility, see the official https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka compatibility reference]. If the linked compatibility wiki is not up-to-date, please contact Kafka support/community to confirm compatibility.
 
 If you require features not yet available in this plugin (including client version upgrades), please file an issue with details about what you need.
 
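For orientation, a minimal pipeline exercising the input documented above might look like the sketch below; the broker address, topic name, and group id are placeholders and do not come from this diff:

[source,ruby]
----
input {
  kafka {
    bootstrap_servers => "localhost:9092"   # placeholder broker list
    topics            => ["example-topic"]  # placeholder topic
    group_id          => "logstash"         # consumers sharing this id form one consumer group
  }
}
----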
@@ -46,9 +46,9 @@ the same `group_id`.
 Ideally you should have as many threads as the number of partitions for a perfect balance --
 more threads than partitions means that some threads will be idle
 
-For more information see
+For more information see https://kafka.apache.org/24/documentation.html#theconsumer
 
-Kafka consumer configuration:
+Kafka consumer configuration: https://kafka.apache.org/24/documentation.html#consumerconfigs
 
 ==== Metadata fields
 
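As a sketch of the threading advice above, and assuming a hypothetical topic with four partitions, each Logstash instance in the group could be configured like this (all values are illustrative):

[source,ruby]
----
input {
  kafka {
    bootstrap_servers => "localhost:9092"    # placeholder
    topics            => ["example-topic"]   # assumed to have 4 partitions
    group_id          => "example-group"     # same value on every instance sharing the work
    consumer_threads  => 4                   # ideally matches the partition count
  }
}
----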
@@ -71,46 +71,50 @@ inserted into your original event, you'll have to use the `mutate` filter to man
 
 This plugin supports these configuration options plus the <<plugins-{type}s-{plugin}-common-options>> described later.
 
-NOTE: Some of these options map to a Kafka option.
-
+NOTE: Some of these options map to a Kafka option. Defaults usually reflect the Kafka default setting,
+and might change if Kafka's consumer defaults change.
+See the https://kafka.apache.org/24/documentation for more details.
 
 [cols="<,<,<",options="header",]
 |=======================================================================
 |Setting |Input type|Required
-| <<plugins-{type}s-{plugin}-auto_commit_interval_ms>> |<<
+| <<plugins-{type}s-{plugin}-auto_commit_interval_ms>> |<<number,number>>|No
 | <<plugins-{type}s-{plugin}-auto_offset_reset>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-bootstrap_servers>> |<<string,string>>|No
-| <<plugins-{type}s-{plugin}-check_crcs>> |<<
+| <<plugins-{type}s-{plugin}-check_crcs>> |<<boolean,boolean>>|No
+| <<plugins-{type}s-{plugin}-client_dns_lookup>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-client_id>> |<<string,string>>|No
-| <<plugins-{type}s-{plugin}-
+| <<plugins-{type}s-{plugin}-client_rack>> |<<string,string>>|No
+| <<plugins-{type}s-{plugin}-connections_max_idle_ms>> |<<number,number>>|No
 | <<plugins-{type}s-{plugin}-consumer_threads>> |<<number,number>>|No
 | <<plugins-{type}s-{plugin}-decorate_events>> |<<boolean,boolean>>|No
-| <<plugins-{type}s-{plugin}-enable_auto_commit>> |<<
+| <<plugins-{type}s-{plugin}-enable_auto_commit>> |<<boolean,boolean>>|No
 | <<plugins-{type}s-{plugin}-exclude_internal_topics>> |<<string,string>>|No
-| <<plugins-{type}s-{plugin}-fetch_max_bytes>> |<<
-| <<plugins-{type}s-{plugin}-fetch_max_wait_ms>> |<<
-| <<plugins-{type}s-{plugin}-fetch_min_bytes>> |<<
+| <<plugins-{type}s-{plugin}-fetch_max_bytes>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-fetch_max_wait_ms>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-fetch_min_bytes>> |<<number,number>>|No
 | <<plugins-{type}s-{plugin}-group_id>> |<<string,string>>|No
-| <<plugins-{type}s-{plugin}-heartbeat_interval_ms>> |<<
+| <<plugins-{type}s-{plugin}-heartbeat_interval_ms>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-isolation_level>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-jaas_path>> |a valid filesystem path|No
 | <<plugins-{type}s-{plugin}-kerberos_config>> |a valid filesystem path|No
 | <<plugins-{type}s-{plugin}-key_deserializer_class>> |<<string,string>>|No
-| <<plugins-{type}s-{plugin}-max_partition_fetch_bytes>> |<<
-| <<plugins-{type}s-{plugin}-max_poll_interval_ms>> |<<
-| <<plugins-{type}s-{plugin}-max_poll_records>> |<<
-| <<plugins-{type}s-{plugin}-metadata_max_age_ms>> |<<
+| <<plugins-{type}s-{plugin}-max_partition_fetch_bytes>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-max_poll_interval_ms>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-max_poll_records>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-metadata_max_age_ms>> |<<number,number>>|No
 | <<plugins-{type}s-{plugin}-partition_assignment_strategy>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-poll_timeout_ms>> |<<number,number>>|No
-| <<plugins-{type}s-{plugin}-receive_buffer_bytes>> |<<
-| <<plugins-{type}s-{plugin}-reconnect_backoff_ms>> |<<
-| <<plugins-{type}s-{plugin}-request_timeout_ms>> |<<
-| <<plugins-{type}s-{plugin}-retry_backoff_ms>> |<<
+| <<plugins-{type}s-{plugin}-receive_buffer_bytes>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-reconnect_backoff_ms>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-request_timeout_ms>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-retry_backoff_ms>> |<<number,number>>|No
 | <<plugins-{type}s-{plugin}-sasl_jaas_config>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-sasl_kerberos_service_name>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-sasl_mechanism>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-security_protocol>> |<<string,string>>, one of `["PLAINTEXT", "SSL", "SASL_PLAINTEXT", "SASL_SSL"]`|No
-| <<plugins-{type}s-{plugin}-send_buffer_bytes>> |<<
-| <<plugins-{type}s-{plugin}-session_timeout_ms>> |<<
+| <<plugins-{type}s-{plugin}-send_buffer_bytes>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-session_timeout_ms>> |<<number,number>>|No
 | <<plugins-{type}s-{plugin}-ssl_endpoint_identification_algorithm>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-ssl_key_password>> |<<password,password>>|No
 | <<plugins-{type}s-{plugin}-ssl_keystore_location>> |a valid filesystem path|No
@@ -132,8 +136,8 @@ input plugins.
 [id="plugins-{type}s-{plugin}-auto_commit_interval_ms"]
 ===== `auto_commit_interval_ms`
 
-* Value type is <<
-* Default value is `
+* Value type is <<number,number>>
+* Default value is `5000`.
 
 The frequency in milliseconds that the consumer offsets are committed to Kafka.
 
@@ -165,12 +169,23 @@ case a server is down).
 [id="plugins-{type}s-{plugin}-check_crcs"]
 ===== `check_crcs`
 
+* Value type is <<boolean,boolean>>
+* Default value is `true`
+
+Automatically check the CRC32 of the records consumed.
+This ensures no on-the-wire or on-disk corruption to the messages occurred.
+This check adds some overhead, so it may be disabled in cases seeking extreme performance.
+
+[id="plugins-{type}s-{plugin}-client_dns_lookup"]
+===== `client_dns_lookup`
+
 * Value type is <<string,string>>
-*
+* Default value is `"default"`
 
-
-
-
+How DNS lookups should be done. If set to `use_all_dns_ips`, when the lookup returns multiple
+IP addresses for a hostname, they will all be attempted to connect to before failing the
+connection. If the value is `resolve_canonical_bootstrap_servers_only` each entry will be
+resolved and expanded into a list of canonical names.
 
 [id="plugins-{type}s-{plugin}-client_id"]
 ===== `client_id`
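To illustrate the `client_dns_lookup` option added in 10.3.0, a consumer that should try every IP address behind a broker hostname before giving up could be sketched as follows (hostname and topic are placeholders):

[source,ruby]
----
input {
  kafka {
    bootstrap_servers => "kafka.example.com:9092"  # placeholder hostname resolving to several IPs
    topics            => ["example-topic"]         # placeholder topic
    client_dns_lookup => "use_all_dns_ips"         # try each resolved IP before failing the connection
  }
}
----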
@@ -182,12 +197,25 @@ The id string to pass to the server when making requests. The purpose of this
 is to be able to track the source of requests beyond just ip/port by allowing
 a logical application name to be included.
 
-[id="plugins-{type}s-{plugin}-
-===== `
+[id="plugins-{type}s-{plugin}-client_rack"]
+===== `client_rack`
 
 * Value type is <<string,string>>
 * There is no default value for this setting.
 
+A rack identifier for the Kafka consumer.
+Used to select the physically closest rack for the consumer to read from.
+The setting corresponds with Kafka's `broker.rack` configuration.
+
+NOTE: Available only for Kafka 2.4.0 and higher. See
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-392%3A+Allow+consumers+to+fetch+from+closest+replica[KIP-392].
+
+[id="plugins-{type}s-{plugin}-connections_max_idle_ms"]
+===== `connections_max_idle_ms`
+
+* Value type is <<number,number>>
+* Default value is `540000` milliseconds (9 minutes).
+
 Close idle connections after the number of milliseconds specified by this config.
 
 [id="plugins-{type}s-{plugin}-consumer_threads"]
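As a sketch of the follower-fetching support described above: the rack id below is hypothetical and must match a `broker.rack` value configured on the (Kafka 2.4.0+) brokers.

[source,ruby]
----
input {
  kafka {
    bootstrap_servers => "kafka.example.com:9092"  # placeholder
    topics            => ["example-topic"]         # placeholder
    client_rack       => "us-east-1a"              # hypothetical rack id matching broker.rack
  }
}
----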
@@ -217,8 +245,8 @@ This will add a field named `kafka` to the logstash event containing the followi
 [id="plugins-{type}s-{plugin}-enable_auto_commit"]
 ===== `enable_auto_commit`
 
-* Value type is <<
-* Default value is `
+* Value type is <<boolean,boolean>>
+* Default value is `true`
 
 This committed offset will be used when the process fails as the position from
 which the consumption will begin.
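Tying the two offset-commit settings together, a consumer that commits offsets in the background every ten seconds could be sketched like this (the interval is illustrative, not a recommendation from this diff):

[source,ruby]
----
input {
  kafka {
    bootstrap_servers       => "localhost:9092"   # placeholder
    topics                  => ["example-topic"]  # placeholder
    enable_auto_commit      => true               # commit offsets periodically in the background
    auto_commit_interval_ms => 10000              # every 10 seconds instead of the 5000 ms default
  }
}
----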
@@ -239,8 +267,8 @@ If set to true the only way to receive records from an internal topic is subscri
 [id="plugins-{type}s-{plugin}-fetch_max_bytes"]
 ===== `fetch_max_bytes`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `52428800` (50MB)
 
 The maximum amount of data the server should return for a fetch request. This is not an
 absolute maximum, if the first message in the first non-empty partition of the fetch is larger
@@ -249,8 +277,8 @@ than this value, the message will still be returned to ensure that the consumer
 [id="plugins-{type}s-{plugin}-fetch_max_wait_ms"]
 ===== `fetch_max_wait_ms`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `500` milliseconds.
 
 The maximum amount of time the server will block before answering the fetch request if
 there isn't sufficient data to immediately satisfy `fetch_min_bytes`. This
@@ -259,7 +287,7 @@ should be less than or equal to the timeout used in `poll_timeout_ms`
 [id="plugins-{type}s-{plugin}-fetch_min_bytes"]
 ===== `fetch_min_bytes`
 
-* Value type is <<
+* Value type is <<number,number>>
 * There is no default value for this setting.
 
 The minimum amount of data the server should return for a fetch request. If insufficient
@@ -279,8 +307,8 @@ Logstash instances with the same `group_id`
 [id="plugins-{type}s-{plugin}-heartbeat_interval_ms"]
 ===== `heartbeat_interval_ms`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `3000` milliseconds (3 seconds).
 
 The expected time between heartbeats to the consumer coordinator. Heartbeats are used to ensure
 that the consumer's session stays active and to facilitate rebalancing when new
@@ -288,6 +316,17 @@ consumers join or leave the group. The value must be set lower than
 `session.timeout.ms`, but typically should be set no higher than 1/3 of that value.
 It can be adjusted even lower to control the expected time for normal rebalances.
 
+[id="plugins-{type}s-{plugin}-isolation_level"]
+===== `isolation_level`
+
+* Value type is <<string,string>>
+* Default value is `"read_uncommitted"`
+
+Controls how to read messages written transactionally. If set to `read_committed`, polling messages will only return
+transactional messages which have been committed. If set to `read_uncommitted` (the default), polling messages will
+return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned
+unconditionally in either mode.
+
 [id="plugins-{type}s-{plugin}-jaas_path"]
 ===== `jaas_path`
 
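For example, a pipeline that should only ever see committed transactional records (the 10.4.0 addition documented above) might be configured as follows; broker and topic names are placeholders:

[source,ruby]
----
input {
  kafka {
    bootstrap_servers => "localhost:9092"   # placeholder
    topics            => ["example-topic"]  # placeholder topic written by transactional producers
    isolation_level   => "read_committed"   # skip records from aborted transactions
  }
}
----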
@@ -330,8 +369,8 @@ Java Class used to deserialize the record's key
 [id="plugins-{type}s-{plugin}-max_partition_fetch_bytes"]
 ===== `max_partition_fetch_bytes`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `1048576` (1MB).
 
 The maximum amount of data per-partition the server will return. The maximum total memory used for a
 request will be `#partitions * max.partition.fetch.bytes`. This size must be at least
@@ -342,28 +381,28 @@ to fetch a large message on a certain partition.
 [id="plugins-{type}s-{plugin}-max_poll_interval_ms"]
 ===== `max_poll_interval_ms`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `300000` milliseconds (5 minutes).
 
 The maximum delay between invocations of poll() when using consumer group management. This places
 an upper bound on the amount of time that the consumer can be idle before fetching more records.
 If poll() is not called before expiration of this timeout, then the consumer is considered failed and
 the group will rebalance in order to reassign the partitions to another member.
-The value of the configuration `request_timeout_ms` must always be larger than max_poll_interval_ms
+The value of the configuration `request_timeout_ms` must always be larger than `max_poll_interval_ms`. ???
 
 [id="plugins-{type}s-{plugin}-max_poll_records"]
 ===== `max_poll_records`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `500`.
 
 The maximum number of records returned in a single call to poll().
 
 [id="plugins-{type}s-{plugin}-metadata_max_age_ms"]
 ===== `metadata_max_age_ms`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `300000` milliseconds (5 minutes).
 
 The period of time in milliseconds after which we force a refresh of metadata even if
 we haven't seen any partition leadership changes to proactively discover any new brokers or partitions
@@ -374,32 +413,43 @@ we haven't seen any partition leadership changes to proactively discover any new
 * Value type is <<string,string>>
 * There is no default value for this setting.
 
-The
-
-
-`
+The name of the partition assignment strategy that the client uses to distribute
+partition ownership amongst consumer instances, supported options are:
+
+* `range`
+* `round_robin`
+* `sticky`
+* `cooperative_sticky`
+
+These map to Kafka's corresponding https://kafka.apache.org/24/javadoc/org/apache/kafka/clients/consumer/ConsumerPartitionAssignor.html[`ConsumerPartitionAssignor`]
+implementations.
 
 [id="plugins-{type}s-{plugin}-poll_timeout_ms"]
 ===== `poll_timeout_ms`
 
 * Value type is <<number,number>>
-* Default value is `100`
+* Default value is `100` milliseconds.
 
-Time
+Time Kafka consumer will wait to receive new messages from topics.
+
+After subscribing to a set of topics, the Kafka consumer automatically joins the group when polling.
+The plugin poll-ing in a loop ensures consumer liveness.
+Underneath the covers, Kafka client sends periodic heartbeats to the server.
+The timeout specified the time to block waiting for input on each poll.
 
 [id="plugins-{type}s-{plugin}-receive_buffer_bytes"]
 ===== `receive_buffer_bytes`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `32768` (32KB).
 
 The size of the TCP receive buffer (SO_RCVBUF) to use when reading data.
 
 [id="plugins-{type}s-{plugin}-reconnect_backoff_ms"]
 ===== `reconnect_backoff_ms`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `50` milliseconds.
 
 The amount of time to wait before attempting to reconnect to a given host.
 This avoids repeatedly connecting to a host in a tight loop.
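The simplified strategy names introduced in 10.1.0 can be used directly; as a sketch, a hypothetical consumer group using the sticky assignor could be configured like this (all other values are placeholders):

[source,ruby]
----
input {
  kafka {
    bootstrap_servers             => "localhost:9092"   # placeholder
    topics                        => ["example-topic"]  # placeholder
    group_id                      => "example-group"    # placeholder
    partition_assignment_strategy => "sticky"           # or: range, round_robin, cooperative_sticky
  }
}
----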
@@ -408,8 +458,8 @@ This backoff applies to all requests sent by the consumer to the broker.
 [id="plugins-{type}s-{plugin}-request_timeout_ms"]
 ===== `request_timeout_ms`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `40000` milliseconds (40 seconds).
 
 The configuration controls the maximum amount of time the client will wait
 for the response of a request. If the response is not received before the timeout
@@ -419,8 +469,8 @@ retries are exhausted.
 [id="plugins-{type}s-{plugin}-retry_backoff_ms"]
 ===== `retry_backoff_ms`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `100` milliseconds.
 
 The amount of time to wait before attempting to retry a failed fetch request
 to a given topic partition. This avoids repeated fetching-and-failing in a tight loop.
@@ -473,16 +523,16 @@ Security protocol to use, which can be either of PLAINTEXT,SSL,SASL_PLAINTEXT,SA
 [id="plugins-{type}s-{plugin}-send_buffer_bytes"]
 ===== `send_buffer_bytes`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `131072` (128KB).
 
 The size of the TCP send buffer (SO_SNDBUF) to use when sending data
 
 [id="plugins-{type}s-{plugin}-session_timeout_ms"]
 ===== `session_timeout_ms`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `10000` milliseconds (10 seconds).
 
 The timeout after which, if the `poll_timeout_ms` is not invoked, the consumer is marked dead
 and a rebalance operation is triggered for the group identified by `group_id`
@@ -542,7 +592,7 @@ The JKS truststore path to validate the Kafka broker's certificate.
 * Value type is <<password,password>>
 * There is no default value for this setting.
 
-The truststore password
+The truststore password.
 
 [id="plugins-{type}s-{plugin}-ssl_truststore_type"]
 ===== `ssl_truststore_type`
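As a sketch of an SSL-secured consumer using the truststore options documented here, something along these lines should work; the broker address, truststore path, and password are placeholders:

[source,ruby]
----
input {
  kafka {
    bootstrap_servers       => "kafka.example.com:9093"              # placeholder TLS listener
    topics                  => ["example-topic"]                     # placeholder
    security_protocol       => "SSL"
    ssl_truststore_location => "/etc/logstash/kafka.truststore.jks"  # hypothetical JKS path
    ssl_truststore_password => "changeit"                            # placeholder password
  }
}
----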
@@ -577,8 +627,6 @@ The topics configuration will be ignored when using this configuration.
 
 Java Class used to deserialize the record's value
 
-
-
 [id="plugins-{type}s-{plugin}-common-options"]
 include::{include_path}/{type}.asciidoc[]
 