logstash-integration-kafka 10.0.0-java → 10.4.0-java
- checksums.yaml +5 -5
- data/CHANGELOG.md +24 -2
- data/CONTRIBUTORS +2 -0
- data/LICENSE +199 -10
- data/docs/index.asciidoc +4 -1
- data/docs/input-kafka.asciidoc +122 -71
- data/docs/output-kafka.asciidoc +50 -18
- data/lib/logstash-integration-kafka_jars.rb +3 -3
- data/lib/logstash/inputs/kafka.rb +90 -54
- data/lib/logstash/outputs/kafka.rb +59 -32
- data/logstash-integration-kafka.gemspec +3 -3
- data/spec/integration/inputs/kafka_spec.rb +81 -112
- data/spec/integration/outputs/kafka_spec.rb +89 -72
- data/spec/unit/inputs/kafka_spec.rb +63 -1
- data/spec/unit/outputs/kafka_spec.rb +26 -5
- data/vendor/jar-dependencies/com/github/luben/zstd-jni/1.4.3-1/zstd-jni-1.4.3-1.jar +0 -0
- data/vendor/jar-dependencies/org/apache/kafka/kafka-clients/2.4.1/kafka-clients-2.4.1.jar +0 -0
- data/vendor/jar-dependencies/org/slf4j/slf4j-api/1.7.28/slf4j-api-1.7.28.jar +0 -0
- metadata +9 -9
- data/vendor/jar-dependencies/com/github/luben/zstd-jni/1.4.2-1/zstd-jni-1.4.2-1.jar +0 -0
- data/vendor/jar-dependencies/org/apache/kafka/kafka-clients/2.3.0/kafka-clients-2.3.0.jar +0 -0
- data/vendor/jar-dependencies/org/slf4j/slf4j-api/1.7.26/slf4j-api-1.7.26.jar +0 -0
checksums.yaml CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 6ebbcd2d18d130e9fac997330c3c4b4bd9a959a982fe83215762b03638497ba4
+  data.tar.gz: 2b54ba231d9f74344a5ec321e0dcdec256ea5664001e7c3dc0323b2150761e30
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: a8c2aa5c2123fa001f58fc3670bd90face614fed72cf24f17ad645ba4de3bd689923d51ba5b5dd3a9201507657f6ed54326ed48495d274bb7c2284525470bdf7
+  data.tar.gz: cebe4abeda34edd6d5d1872c96f1b119abfa7abb2e40c52fb061e2c0953789441223e4b5a93a0d2fd7e3de1918c592dce9f5fee91bb6713b0e16f167033c13ce
data/CHANGELOG.md CHANGED

@@ -1,6 +1,28 @@
+## 10.4.0
+  - added the input `isolation_level` to allow fine control of whether to return transactional messages [#44](https://github.com/logstash-plugins/logstash-integration-kafka/pull/44)
+
+## 10.3.0
+  - added the input and output `client_dns_lookup` parameter to allow control of how DNS requests are made
+
+## 10.2.0
+  - Changed: config defaults to be aligned with Kafka client defaults [#30](https://github.com/logstash-plugins/logstash-integration-kafka/pull/30)
+
+## 10.1.0
+  - updated kafka client (and its dependencies) to version 2.4.1 ([#16](https://github.com/logstash-plugins/logstash-integration-kafka/pull/16))
+  - added the input `client_rack` parameter to enable support for follower fetching
+  - added the output `partitioner` parameter for tuning partitioning strategy
+  - Refactor: normalized error logging a bit - make sure exception type is logged
+  - Fix: properly handle empty ssl_endpoint_identification_algorithm [#8](https://github.com/logstash-plugins/logstash-integration-kafka/pull/8)
+  - Refactor: made `partition_assignment_strategy` option easier to configure by accepting simple values from an enumerated set instead of requiring lengthy class paths ([#25](https://github.com/logstash-plugins/logstash-integration-kafka/pull/25))
+
+## 10.0.1
+  - Fix links in changelog pointing to stand-alone plugin changelogs.
+  - Refactor: scope java_import to plugin class
+
 ## 10.0.0
   - Initial release of the Kafka Integration Plugin, which combines
     previously-separate Kafka plugins and shared dependencies into a single
     codebase; independent changelogs for previous versions can be found:
-    - [Kafka Input Plugin @9.1.0](https://github.com/logstash-plugins/logstash-input-
-    - [Kafka Output Plugin @8.1.0](https://github.com/logstash-plugins/logstash-output-
+    - [Kafka Input Plugin @9.1.0](https://github.com/logstash-plugins/logstash-input-kafka/blob/v9.1.0/CHANGELOG.md)
+    - [Kafka Output Plugin @8.1.0](https://github.com/logstash-plugins/logstash-output-kafka/blob/v8.1.0/CHANGELOG.md)
data/CONTRIBUTORS CHANGED

@@ -11,6 +11,8 @@ Contributors:
 * João Duarte (jsvd)
 * Kurt Hurtado (kurtado)
 * Ry Biesemeyer (yaauie)
+* Rob Cowart (robcowart)
+* Tim te Beek (timtebeek)
 
 Note: If you've sent us patches, bug reports, or otherwise contributed to
 Logstash, and you aren't on the list above and want to be, please let us know
data/LICENSE CHANGED

@@ -1,13 +1,202 @@
-Copyright (c) 2012-2018 Elasticsearch <http://www.elastic.co>
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!) The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright 2020 Elastic and contributors
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
data/docs/index.asciidoc CHANGED

@@ -23,6 +23,9 @@ include::{include_path}/plugin_header.asciidoc[]
 
 The Kafka Integration Plugin provides integrated plugins for working with the https://kafka.apache.org/[Kafka] distributed streaming platform.
 
-
+- {logstash-ref}/plugins-inputs-kafka.html[Kafka Input Plugin]
+- {logstash-ref}/plugins-outputs-kafka.html[Kafka Output Plugin]
+
+This plugin uses Kafka Client 2.4. For broker compatibility, see the official https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka compatibility reference]. If the linked compatibility wiki is not up-to-date, please contact Kafka support/community to confirm compatibility.
 
 :no_codec!:
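The integrated plugins are configured exactly like the previously stand-alone input and output. As an illustrative sketch (the broker address and topic names are placeholders, not part of the diff):

```
input {
  kafka {
    bootstrap_servers => "localhost:9092"   # placeholder broker address
    topics => ["source_topic"]              # placeholder topic
    group_id => "logstash"
  }
}
output {
  kafka {
    bootstrap_servers => "localhost:9092"
    topic_id => "destination_topic"         # placeholder topic
  }
}
```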
data/docs/input-kafka.asciidoc CHANGED

@@ -23,7 +23,7 @@ include::{include_path}/plugin_header.asciidoc[]
 
 This input will read events from a Kafka topic.
 
-This plugin uses Kafka Client 2.
+This plugin uses Kafka Client 2.3.0. For broker compatibility, see the official https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka compatibility reference]. If the linked compatibility wiki is not up-to-date, please contact Kafka support/community to confirm compatibility.
 
 If you require features not yet available in this plugin (including client version upgrades), please file an issue with details about what you need.
 
@@ -46,9 +46,9 @@ the same `group_id`.
 Ideally you should have as many threads as the number of partitions for a perfect balance --
 more threads than partitions means that some threads will be idle
 
-For more information see
+For more information see https://kafka.apache.org/24/documentation.html#theconsumer
 
-Kafka consumer configuration:
+Kafka consumer configuration: https://kafka.apache.org/24/documentation.html#consumerconfigs
 
 ==== Metadata fields
 
@@ -71,46 +71,50 @@ inserted into your original event, you'll have to use the `mutate` filter to man
 
 This plugin supports these configuration options plus the <<plugins-{type}s-{plugin}-common-options>> described later.
 
-NOTE: Some of these options map to a Kafka option.
-
+NOTE: Some of these options map to a Kafka option. Defaults usually reflect the Kafka default setting,
+and might change if Kafka's consumer defaults change.
+See the https://kafka.apache.org/24/documentation for more details.
 
 [cols="<,<,<",options="header",]
 |=======================================================================
 |Setting |Input type|Required
-| <<plugins-{type}s-{plugin}-auto_commit_interval_ms>> |<<
+| <<plugins-{type}s-{plugin}-auto_commit_interval_ms>> |<<number,number>>|No
 | <<plugins-{type}s-{plugin}-auto_offset_reset>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-bootstrap_servers>> |<<string,string>>|No
-| <<plugins-{type}s-{plugin}-check_crcs>> |<<
+| <<plugins-{type}s-{plugin}-check_crcs>> |<<boolean,boolean>>|No
+| <<plugins-{type}s-{plugin}-client_dns_lookup>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-client_id>> |<<string,string>>|No
-| <<plugins-{type}s-{plugin}-
+| <<plugins-{type}s-{plugin}-client_rack>> |<<string,string>>|No
+| <<plugins-{type}s-{plugin}-connections_max_idle_ms>> |<<number,number>>|No
 | <<plugins-{type}s-{plugin}-consumer_threads>> |<<number,number>>|No
 | <<plugins-{type}s-{plugin}-decorate_events>> |<<boolean,boolean>>|No
-| <<plugins-{type}s-{plugin}-enable_auto_commit>> |<<
+| <<plugins-{type}s-{plugin}-enable_auto_commit>> |<<boolean,boolean>>|No
 | <<plugins-{type}s-{plugin}-exclude_internal_topics>> |<<string,string>>|No
-| <<plugins-{type}s-{plugin}-fetch_max_bytes>> |<<
-| <<plugins-{type}s-{plugin}-fetch_max_wait_ms>> |<<
-| <<plugins-{type}s-{plugin}-fetch_min_bytes>> |<<
+| <<plugins-{type}s-{plugin}-fetch_max_bytes>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-fetch_max_wait_ms>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-fetch_min_bytes>> |<<number,number>>|No
 | <<plugins-{type}s-{plugin}-group_id>> |<<string,string>>|No
-| <<plugins-{type}s-{plugin}-heartbeat_interval_ms>> |<<
+| <<plugins-{type}s-{plugin}-heartbeat_interval_ms>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-isolation_level>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-jaas_path>> |a valid filesystem path|No
 | <<plugins-{type}s-{plugin}-kerberos_config>> |a valid filesystem path|No
 | <<plugins-{type}s-{plugin}-key_deserializer_class>> |<<string,string>>|No
-| <<plugins-{type}s-{plugin}-max_partition_fetch_bytes>> |<<
-| <<plugins-{type}s-{plugin}-max_poll_interval_ms>> |<<
-| <<plugins-{type}s-{plugin}-max_poll_records>> |<<
-| <<plugins-{type}s-{plugin}-metadata_max_age_ms>> |<<
+| <<plugins-{type}s-{plugin}-max_partition_fetch_bytes>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-max_poll_interval_ms>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-max_poll_records>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-metadata_max_age_ms>> |<<number,number>>|No
 | <<plugins-{type}s-{plugin}-partition_assignment_strategy>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-poll_timeout_ms>> |<<number,number>>|No
-| <<plugins-{type}s-{plugin}-receive_buffer_bytes>> |<<
-| <<plugins-{type}s-{plugin}-reconnect_backoff_ms>> |<<
-| <<plugins-{type}s-{plugin}-request_timeout_ms>> |<<
-| <<plugins-{type}s-{plugin}-retry_backoff_ms>> |<<
+| <<plugins-{type}s-{plugin}-receive_buffer_bytes>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-reconnect_backoff_ms>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-request_timeout_ms>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-retry_backoff_ms>> |<<number,number>>|No
 | <<plugins-{type}s-{plugin}-sasl_jaas_config>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-sasl_kerberos_service_name>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-sasl_mechanism>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-security_protocol>> |<<string,string>>, one of `["PLAINTEXT", "SSL", "SASL_PLAINTEXT", "SASL_SSL"]`|No
-| <<plugins-{type}s-{plugin}-send_buffer_bytes>> |<<
-| <<plugins-{type}s-{plugin}-session_timeout_ms>> |<<
+| <<plugins-{type}s-{plugin}-send_buffer_bytes>> |<<number,number>>|No
+| <<plugins-{type}s-{plugin}-session_timeout_ms>> |<<number,number>>|No
 | <<plugins-{type}s-{plugin}-ssl_endpoint_identification_algorithm>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-ssl_key_password>> |<<password,password>>|No
 | <<plugins-{type}s-{plugin}-ssl_keystore_location>> |a valid filesystem path|No
@@ -132,8 +136,8 @@ input plugins.
 [id="plugins-{type}s-{plugin}-auto_commit_interval_ms"]
 ===== `auto_commit_interval_ms`
 
-* Value type is <<
-* Default value is `
+* Value type is <<number,number>>
+* Default value is `5000`.
 
 The frequency in milliseconds that the consumer offsets are committed to Kafka.
 
@@ -165,12 +169,23 @@ case a server is down).
 [id="plugins-{type}s-{plugin}-check_crcs"]
 ===== `check_crcs`
 
+* Value type is <<boolean,boolean>>
+* Default value is `true`
+
+Automatically check the CRC32 of the records consumed.
+This ensures no on-the-wire or on-disk corruption to the messages occurred.
+This check adds some overhead, so it may be disabled in cases seeking extreme performance.
+
+[id="plugins-{type}s-{plugin}-client_dns_lookup"]
+===== `client_dns_lookup`
+
 * Value type is <<string,string>>
-*
+* Default value is `"default"`
 
-
-
-
+How DNS lookups should be done. If set to `use_all_dns_ips`, when the lookup returns multiple
+IP addresses for a hostname, they will all be attempted to connect to before failing the
+connection. If the value is `resolve_canonical_bootstrap_servers_only` each entry will be
+resolved and expanded into a list of canonical names.
 
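As a sketch, the `client_dns_lookup` option added in 10.3.0 is set like any other consumer option; the broker hostname below is a placeholder:

```
input {
  kafka {
    bootstrap_servers => "kafka.example.com:9092"  # placeholder; may resolve to several IPs
    topics => ["logs"]                             # placeholder topic
    client_dns_lookup => "use_all_dns_ips"         # try every resolved IP before failing the connection
  }
}
```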
 [id="plugins-{type}s-{plugin}-client_id"]
 ===== `client_id`

@@ -182,12 +197,25 @@ The id string to pass to the server when making requests. The purpose of this
 is to be able to track the source of requests beyond just ip/port by allowing
 a logical application name to be included.
 
-[id="plugins-{type}s-{plugin}-
-===== `
+[id="plugins-{type}s-{plugin}-client_rack"]
+===== `client_rack`
 
 * Value type is <<string,string>>
 * There is no default value for this setting.
 
+A rack identifier for the Kafka consumer.
+Used to select the physically closest rack for the consumer to read from.
+The setting corresponds with Kafka's `broker.rack` configuration.
+
+NOTE: Available only for Kafka 2.4.0 and higher. See
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-392%3A+Allow+consumers+to+fetch+from+closest+replica[KIP-392].
+
+[id="plugins-{type}s-{plugin}-connections_max_idle_ms"]
+===== `connections_max_idle_ms`
+
+* Value type is <<number,number>>
+* Default value is `540000` milliseconds (9 minutes).
+
 Close idle connections after the number of milliseconds specified by this config.
 
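A follower-fetching sketch for the `client_rack` option added in 10.1.0; the rack identifier and broker address are hypothetical and must match the `broker.rack` values of your brokers:

```
input {
  kafka {
    bootstrap_servers => "broker1:9092"  # placeholder broker
    topics => ["logs"]                   # placeholder topic
    client_rack => "us-east-1a"          # hypothetical rack id matching the brokers' broker.rack
  }
}
```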
 [id="plugins-{type}s-{plugin}-consumer_threads"]

@@ -217,13 +245,16 @@ This will add a field named `kafka` to the logstash event containing the followi
 [id="plugins-{type}s-{plugin}-enable_auto_commit"]
 ===== `enable_auto_commit`
 
-* Value type is <<
-* Default value is `
+* Value type is <<boolean,boolean>>
+* Default value is `true`
 
-If true, periodically commit to Kafka the offsets of messages already returned by the consumer.
 This committed offset will be used when the process fails as the position from
 which the consumption will begin.
 
+If true, periodically commit to Kafka the offsets of messages already returned by
+the consumer. If value is `false` however, the offset is committed every time the
+consumer fetches the data from the topic.
+
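A sketch combining the two auto-commit options (placeholder broker and topic; the interval shown is an illustrative tuning choice, not a recommendation):

```
input {
  kafka {
    bootstrap_servers => "localhost:9092"  # placeholder broker
    topics => ["logs"]                     # placeholder topic
    enable_auto_commit => true             # the default
    auto_commit_interval_ms => 1000        # commit offsets every second instead of the default 5000 ms
  }
}
```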
 [id="plugins-{type}s-{plugin}-exclude_internal_topics"]
 ===== `exclude_internal_topics`
 
@@ -236,8 +267,8 @@ If set to true the only way to receive records from an internal topic is subscri
 [id="plugins-{type}s-{plugin}-fetch_max_bytes"]
 ===== `fetch_max_bytes`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `52428800` (50MB)
 
 The maximum amount of data the server should return for a fetch request. This is not an
 absolute maximum, if the first message in the first non-empty partition of the fetch is larger
@@ -246,8 +277,8 @@ than this value, the message will still be returned to ensure that the consumer
 [id="plugins-{type}s-{plugin}-fetch_max_wait_ms"]
 ===== `fetch_max_wait_ms`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `500` milliseconds.
 
 The maximum amount of time the server will block before answering the fetch request if
 there isn't sufficient data to immediately satisfy `fetch_min_bytes`. This
@@ -256,7 +287,7 @@ should be less than or equal to the timeout used in `poll_timeout_ms`
 [id="plugins-{type}s-{plugin}-fetch_min_bytes"]
 ===== `fetch_min_bytes`
 
-* Value type is <<
+* Value type is <<number,number>>
 * There is no default value for this setting.
 
 The minimum amount of data the server should return for a fetch request. If insufficient
@@ -276,8 +307,8 @@ Logstash instances with the same `group_id`
 [id="plugins-{type}s-{plugin}-heartbeat_interval_ms"]
 ===== `heartbeat_interval_ms`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `3000` milliseconds (3 seconds).
 
 The expected time between heartbeats to the consumer coordinator. Heartbeats are used to ensure
 that the consumer's session stays active and to facilitate rebalancing when new
@@ -285,6 +316,17 @@ consumers join or leave the group. The value must be set lower than
 `session.timeout.ms`, but typically should be set no higher than 1/3 of that value.
 It can be adjusted even lower to control the expected time for normal rebalances.
 
+[id="plugins-{type}s-{plugin}-isolation_level"]
+===== `isolation_level`
+
+* Value type is <<string,string>>
+* Default value is `"read_uncommitted"`
+
+Controls how to read messages written transactionally. If set to `read_committed`, polling messages will only return
+transactional messages which have been committed. If set to `read_uncommitted` (the default), polling messages will
+return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned
+unconditionally in either mode.
+
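A sketch of the `isolation_level` option added in 10.4.0, for consumers that should only see committed transactional writes (broker and topic are placeholders):

```
input {
  kafka {
    bootstrap_servers => "localhost:9092"  # placeholder broker
    topics => ["transactions"]             # placeholder topic
    isolation_level => "read_committed"    # skip messages from aborted transactions
  }
}
```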
 [id="plugins-{type}s-{plugin}-jaas_path"]
 ===== `jaas_path`
 
@@ -327,8 +369,8 @@ Java Class used to deserialize the record's key
 [id="plugins-{type}s-{plugin}-max_partition_fetch_bytes"]
 ===== `max_partition_fetch_bytes`
 
-* Value type is <<
-*
+* Value type is <<number,number>>
+* Default value is `1048576` (1MB).
 
 The maximum amount of data per-partition the server will return. The maximum total memory used for a
 request will be `#partitions * max.partition.fetch.bytes`. This size must be at least
@@ -339,28 +381,28 @@ to fetch a large message on a certain partition.
|
|
339
381
|
[id="plugins-{type}s-{plugin}-max_poll_interval_ms"]
|
340
382
|
===== `max_poll_interval_ms`
|
341
383
|
|
342
|
-
* Value type is <<
|
343
|
-
*
|
384
|
+
* Value type is <<number,number>>
|
385
|
+
* Default value is `300000` milliseconds (5 minutes).
|
344
386
|
|
345
387
|
The maximum delay between invocations of poll() when using consumer group management. This places
|
346
388
|
an upper bound on the amount of time that the consumer can be idle before fetching more records.
|
347
389
|
If poll() is not called before expiration of this timeout, then the consumer is considered failed and
|
348
390
|
the group will rebalance in order to reassign the partitions to another member.
|
349
|
-
The value of the configuration `request_timeout_ms` must always be larger than max_poll_interval_ms
|
391
|
+
The value of the configuration `request_timeout_ms` must always be larger than `max_poll_interval_ms`.
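A sketch of a configuration that honors this constraint (all values here are placeholders chosen only to show the relationship, assuming the defaults described above):

```
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["example-topic"]
    # Allow up to 5 minutes between poll() invocations...
    max_poll_interval_ms => 300000
    # ...and keep the request timeout strictly larger
    request_timeout_ms => 310000
  }
}
```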
|
350
392
|
|
351
393
|
[id="plugins-{type}s-{plugin}-max_poll_records"]
|
352
394
|
===== `max_poll_records`
|
353
395
|
|
354
|
-
* Value type is <<
|
355
|
-
*
|
396
|
+
* Value type is <<number,number>>
|
397
|
+
* Default value is `500`.
|
356
398
|
|
357
399
|
The maximum number of records returned in a single call to poll().
|
358
400
|
|
359
401
|
[id="plugins-{type}s-{plugin}-metadata_max_age_ms"]
|
360
402
|
===== `metadata_max_age_ms`
|
361
403
|
|
362
|
-
* Value type is <<
|
363
|
-
*
|
404
|
+
* Value type is <<number,number>>
|
405
|
+
* Default value is `300000` milliseconds (5 minutes).
|
364
406
|
|
365
407
|
The period of time in milliseconds after which we force a refresh of metadata even if
|
366
408
|
we haven't seen any partition leadership changes to proactively discover any new brokers or partitions
|
@@ -371,32 +413,43 @@ we haven't seen any partition leadership changes to proactively discover any new
|
|
371
413
|
* Value type is <<string,string>>
|
372
414
|
* There is no default value for this setting.
|
373
415
|
|
374
|
-
The
|
375
|
-
|
376
|
-
|
377
|
-
`
|
416
|
+
The name of the partition assignment strategy that the client uses to distribute
|
417
|
+
partition ownership amongst consumer instances. Supported options are:
|
418
|
+
|
419
|
+
* `range`
|
420
|
+
* `round_robin`
|
421
|
+
* `sticky`
|
422
|
+
* `cooperative_sticky`
|
423
|
+
|
424
|
+
These map to Kafka's corresponding https://kafka.apache.org/24/javadoc/org/apache/kafka/clients/consumer/ConsumerPartitionAssignor.html[`ConsumerPartitionAssignor`]
|
425
|
+
implementations.
|
378
426
|
|
379
427
|
[id="plugins-{type}s-{plugin}-poll_timeout_ms"]
|
380
428
|
===== `poll_timeout_ms`
|
381
429
|
|
382
430
|
* Value type is <<number,number>>
|
383
|
-
* Default value is `100`
|
431
|
+
* Default value is `100` milliseconds.
|
384
432
|
|
385
|
-
Time
|
433
|
+
Time Kafka consumer will wait to receive new messages from topics.
|
434
|
+
|
435
|
+
After subscribing to a set of topics, the Kafka consumer automatically joins the group when polling.
|
436
|
+
The plugin polls in a loop to ensure consumer liveness.
|
437
|
+
Under the covers, the Kafka client sends periodic heartbeats to the server.
|
438
|
+
The timeout specifies how long to block waiting for input on each poll.
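As a sketch, raising the poll timeout slightly trades per-poll latency for fewer empty polls (the values and names below are placeholders, not recommendations):

```
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["example-topic"]
    # Block up to 250 ms on each poll before returning control to the plugin loop
    poll_timeout_ms => 250
  }
}
```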
|
386
439
|
|
387
440
|
[id="plugins-{type}s-{plugin}-receive_buffer_bytes"]
|
388
441
|
===== `receive_buffer_bytes`
|
389
442
|
|
390
|
-
* Value type is <<
|
391
|
-
*
|
443
|
+
* Value type is <<number,number>>
|
444
|
+
* Default value is `32768` (32KB).
|
392
445
|
|
393
446
|
The size of the TCP receive buffer (SO_RCVBUF) to use when reading data.
|
394
447
|
|
395
448
|
[id="plugins-{type}s-{plugin}-reconnect_backoff_ms"]
|
396
449
|
===== `reconnect_backoff_ms`
|
397
450
|
|
398
|
-
* Value type is <<
|
399
|
-
*
|
451
|
+
* Value type is <<number,number>>
|
452
|
+
* Default value is `50` milliseconds.
|
400
453
|
|
401
454
|
The amount of time to wait before attempting to reconnect to a given host.
|
402
455
|
This avoids repeatedly connecting to a host in a tight loop.
|
@@ -405,8 +458,8 @@ This backoff applies to all requests sent by the consumer to the broker.
|
|
405
458
|
[id="plugins-{type}s-{plugin}-request_timeout_ms"]
|
406
459
|
===== `request_timeout_ms`
|
407
460
|
|
408
|
-
* Value type is <<
|
409
|
-
*
|
461
|
+
* Value type is <<number,number>>
|
462
|
+
* Default value is `40000` milliseconds (40 seconds).
|
410
463
|
|
411
464
|
The configuration controls the maximum amount of time the client will wait
|
412
465
|
for the response of a request. If the response is not received before the timeout
|
@@ -416,8 +469,8 @@ retries are exhausted.
|
|
416
469
|
[id="plugins-{type}s-{plugin}-retry_backoff_ms"]
|
417
470
|
===== `retry_backoff_ms`
|
418
471
|
|
419
|
-
* Value type is <<
|
420
|
-
*
|
472
|
+
* Value type is <<number,number>>
|
473
|
+
* Default value is `100` milliseconds.
|
421
474
|
|
422
475
|
The amount of time to wait before attempting to retry a failed fetch request
|
423
476
|
to a given topic partition. This avoids repeated fetching-and-failing in a tight loop.
|
@@ -470,16 +523,16 @@ Security protocol to use, which can be either of PLAINTEXT,SSL,SASL_PLAINTEXT,SA
|
|
470
523
|
[id="plugins-{type}s-{plugin}-send_buffer_bytes"]
|
471
524
|
===== `send_buffer_bytes`
|
472
525
|
|
473
|
-
* Value type is <<
|
474
|
-
*
|
526
|
+
* Value type is <<number,number>>
|
527
|
+
* Default value is `131072` (128KB).
|
475
528
|
|
476
529
|
The size of the TCP send buffer (SO_SNDBUF) to use when sending data.
|
477
530
|
|
478
531
|
[id="plugins-{type}s-{plugin}-session_timeout_ms"]
|
479
532
|
===== `session_timeout_ms`
|
480
533
|
|
481
|
-
* Value type is <<
|
482
|
-
*
|
534
|
+
* Value type is <<number,number>>
|
535
|
+
* Default value is `10000` milliseconds (10 seconds).
|
483
536
|
|
484
537
|
The timeout after which, if no poll is made within `poll_timeout_ms`, the consumer is marked dead
|
485
538
|
and a rebalance operation is triggered for the group identified by `group_id`
|
@@ -539,7 +592,7 @@ The JKS truststore path to validate the Kafka broker's certificate.
|
|
539
592
|
* Value type is <<password,password>>
|
540
593
|
* There is no default value for this setting.
|
541
594
|
|
542
|
-
The truststore password
|
595
|
+
The truststore password.
|
543
596
|
|
544
597
|
[id="plugins-{type}s-{plugin}-ssl_truststore_type"]
|
545
598
|
===== `ssl_truststore_type`
|
@@ -574,8 +627,6 @@ The topics configuration will be ignored when using this configuration.
|
|
574
627
|
|
575
628
|
Java Class used to deserialize the record's value
|
576
629
|
|
577
|
-
|
578
|
-
|
579
630
|
[id="plugins-{type}s-{plugin}-common-options"]
|
580
631
|
include::{include_path}/{type}.asciidoc[]
|
581
632
|
|