logstash-integration-kafka 10.2.0-java → 10.3.0-java

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 88d87b3c5b9e443c0bc4ba8064b6316243f24bb0c0d6ce150ca0a19dd25ecb2a
- data.tar.gz: 5fc09468e8cdc7094bc6099c24d70b1c19bc339c3ec3dfa3562e0c389fa85753
+ metadata.gz: cbbffe559ff38eaa26c9922c62a3722b2b1404446602238bcc7de309ca428da6
+ data.tar.gz: 0ce7e58400e06ae5e1c7f689a14220600efe7ad9ba46f285a52323a05568f69f
  SHA512:
- metadata.gz: 4e0fc4586ca91967e9f8afa4a2eef95b7a5f8cd13f3b6689b4ace12b64d3ea628bc5112735d54edcb20c0fe1c6debac1e4d32983e1555eb9a84d380ab5d68a4e
- data.tar.gz: 10ae655222832b9b5f6b9d8aa155d0907da91e38f04c7ad74c949d7bf4c8ff5d222bde6b2e1779eca5bd4bebb99aaf54701c8e2fddd4cb5a3cff47fabe9831fe
+ metadata.gz: 7be900607b24146853783d4e3a2e5a6d351853ab4dda0b30d10769adf8b13df21540cea2699d94660bd9e56580c9bf88e9684d5762ff7684f5d3a6804637b602
+ data.tar.gz: cd1f72db84b5af7de286d683f0f312e4b826c8d75b9b8fc475f693d8bf62b517c61950dd5d4887767f79f69df17b2c9bf1223e03880b2724333aa0b695b81525
CHANGELOG.md CHANGED
@@ -1,3 +1,6 @@
+ ## 10.3.0
+ - added the input and output `client_dns_lookup` parameter to allow control of how DNS requests are made
+
  ## 10.2.0
  - Changed: config defaults to be aligned with Kafka client defaults [#30](https://github.com/logstash-plugins/logstash-integration-kafka/pull/30)

docs/input-kafka.asciidoc CHANGED
@@ -23,7 +23,7 @@ include::{include_path}/plugin_header.asciidoc[]

  This input will read events from a Kafka topic.

- This plugin uses Kafka Client 2.1.0. For broker compatibility, see the official https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka compatibility reference]. If the linked compatibility wiki is not up-to-date, please contact Kafka support/community to confirm compatibility.
+ This plugin uses Kafka Client 2.3.0. For broker compatibility, see the official https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka compatibility reference]. If the linked compatibility wiki is not up-to-date, please contact Kafka support/community to confirm compatibility.

  If you require features not yet available in this plugin (including client version upgrades), please file an issue with details about what you need.

@@ -82,6 +82,7 @@ See the https://kafka.apache.org/24/documentation for more details.
  | <<plugins-{type}s-{plugin}-auto_offset_reset>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-bootstrap_servers>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-check_crcs>> |<<boolean,boolean>>|No
+ | <<plugins-{type}s-{plugin}-client_dns_lookup>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-client_id>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-client_rack>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-connections_max_idle_ms>> |<<number,number>>|No
@@ -174,6 +175,17 @@ Automatically check the CRC32 of the records consumed.
  This ensures no on-the-wire or on-disk corruption to the messages occurred.
  This check adds some overhead, so it may be disabled in cases seeking extreme performance.

+ [id="plugins-{type}s-{plugin}-client_dns_lookup"]
+ ===== `client_dns_lookup`
+
+ * Value type is <<string,string>>
+ * Default value is `"default"`
+
+ How DNS lookups should be done. If set to `use_all_dns_ips`, when the lookup returns multiple
+ IP addresses for a hostname, they will all be attempted to connect to before failing the
+ connection. If the value is `resolve_canonical_bootstrap_servers_only` each entry will be
+ resolved and expanded into a list of canonical names.
+
  [id="plugins-{type}s-{plugin}-client_id"]
  ===== `client_id`

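For reference, a minimal input configuration exercising the new option might look like the sketch below; the broker address and topic name are illustrative placeholders, not values taken from this release:

[source,ruby]
----
input {
  kafka {
    bootstrap_servers => "kafka1.example.com:9092"   # placeholder broker address
    topics            => ["example-topic"]           # placeholder topic
    # try every IP address returned for a broker hostname before failing the connection
    client_dns_lookup => "use_all_dns_ips"
  }
}
----

Leaving the option at `"default"` keeps the previous lookup behavior.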
docs/output-kafka.asciidoc CHANGED
@@ -23,7 +23,7 @@ include::{include_path}/plugin_header.asciidoc[]

  Write events to a Kafka topic.

- This plugin uses Kafka Client 2.1.0. For broker compatibility, see the official https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka compatibility reference]. If the linked compatibility wiki is not up-to-date, please contact Kafka support/community to confirm compatibility.
+ This plugin uses Kafka Client 2.3.0. For broker compatibility, see the official https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka compatibility reference]. If the linked compatibility wiki is not up-to-date, please contact Kafka support/community to confirm compatibility.

  If you require features not yet available in this plugin (including client version upgrades), please file an issue with details about what you need.

@@ -67,6 +67,7 @@ See the https://kafka.apache.org/24/documentation for more details.
  | <<plugins-{type}s-{plugin}-batch_size>> |<<number,number>>|No
  | <<plugins-{type}s-{plugin}-bootstrap_servers>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-buffer_memory>> |<<number,number>>|No
+ | <<plugins-{type}s-{plugin}-client_dns_lookup>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-client_id>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-compression_type>> |<<string,string>>, one of `["none", "gzip", "snappy", "lz4"]`|No
  | <<plugins-{type}s-{plugin}-jaas_path>> |a valid filesystem path|No
@@ -149,6 +150,17 @@ subset of brokers.

  The total bytes of memory the producer can use to buffer records waiting to be sent to the server.

+ [id="plugins-{type}s-{plugin}-client_dns_lookup"]
+ ===== `client_dns_lookup`
+
+ * Value type is <<string,string>>
+ * Default value is `"default"`
+
+ How DNS lookups should be done. If set to `use_all_dns_ips`, when the lookup returns multiple
+ IP addresses for a hostname, they will all be attempted to connect to before failing the
+ connection. If the value is `resolve_canonical_bootstrap_servers_only` each entry will be
+ resolved and expanded into a list of canonical names.
+
  [id="plugins-{type}s-{plugin}-client_id"]
  ===== `client_id`

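Similarly, a sketch of an output configuration using the other non-default value; again, the broker address and `topic_id` are illustrative placeholders:

[source,ruby]
----
output {
  kafka {
    bootstrap_servers => "kafka1.example.com:9092"   # placeholder broker address
    topic_id          => "example-topic"             # placeholder topic
    # resolve each bootstrap entry to its canonical name(s) before connecting
    client_dns_lookup => "resolve_canonical_bootstrap_servers_only"
  }
}
----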
lib/logstash/inputs/kafka.rb CHANGED
@@ -71,6 +71,11 @@ class LogStash::Inputs::Kafka < LogStash::Inputs::Base
  # corruption to the messages occurred. This check adds some overhead, so it may be
  # disabled in cases seeking extreme performance.
  config :check_crcs, :validate => :boolean, :default => true
+ # How DNS lookups should be done. If set to `use_all_dns_ips`, when the lookup returns multiple
+ # IP addresses for a hostname, they will all be attempted to connect to before failing the
+ # connection. If the value is `resolve_canonical_bootstrap_servers_only` each entry will be
+ # resolved and expanded into a list of canonical names.
+ config :client_dns_lookup, :validate => ["default", "use_all_dns_ips", "resolve_canonical_bootstrap_servers_only"], :default => "default"
  # The id string to pass to the server when making requests. The purpose of this
  # is to be able to track the source of requests beyond just ip/port by allowing
  # a logical application name to be included.
@@ -296,6 +301,7 @@ class LogStash::Inputs::Kafka < LogStash::Inputs::Base
  props.put(kafka::AUTO_OFFSET_RESET_CONFIG, auto_offset_reset) unless auto_offset_reset.nil?
  props.put(kafka::BOOTSTRAP_SERVERS_CONFIG, bootstrap_servers)
  props.put(kafka::CHECK_CRCS_CONFIG, check_crcs.to_s) unless check_crcs.nil?
+ props.put(kafka::CLIENT_DNS_LOOKUP_CONFIG, client_dns_lookup)
  props.put(kafka::CLIENT_ID_CONFIG, client_id)
  props.put(kafka::CONNECTIONS_MAX_IDLE_MS_CONFIG, connections_max_idle_ms.to_s) unless connections_max_idle_ms.nil?
  props.put(kafka::ENABLE_AUTO_COMMIT_CONFIG, enable_auto_commit.to_s)
lib/logstash/outputs/kafka.rb CHANGED
@@ -79,6 +79,11 @@ class LogStash::Outputs::Kafka < LogStash::Outputs::Base
  # The compression type for all data generated by the producer.
  # The default is none (i.e. no compression). Valid values are none, gzip, or snappy.
  config :compression_type, :validate => ["none", "gzip", "snappy", "lz4"], :default => "none"
+ # How DNS lookups should be done. If set to `use_all_dns_ips`, when the lookup returns multiple
+ # IP addresses for a hostname, they will all be attempted to connect to before failing the
+ # connection. If the value is `resolve_canonical_bootstrap_servers_only` each entry will be
+ # resolved and expanded into a list of canonical names.
+ config :client_dns_lookup, :validate => ["default", "use_all_dns_ips", "resolve_canonical_bootstrap_servers_only"], :default => "default"
  # The id string to pass to the server when making requests.
  # The purpose of this is to be able to track the source of requests beyond just
  # ip/port by allowing a logical application name to be included with the request
@@ -318,6 +323,7 @@ class LogStash::Outputs::Kafka < LogStash::Outputs::Base
  props.put(kafka::BOOTSTRAP_SERVERS_CONFIG, bootstrap_servers)
  props.put(kafka::BUFFER_MEMORY_CONFIG, buffer_memory.to_s)
  props.put(kafka::COMPRESSION_TYPE_CONFIG, compression_type)
+ props.put(kafka::CLIENT_DNS_LOOKUP_CONFIG, client_dns_lookup)
  props.put(kafka::CLIENT_ID_CONFIG, client_id) unless client_id.nil?
  props.put(kafka::KEY_SERIALIZER_CLASS_CONFIG, key_serializer)
  props.put(kafka::LINGER_MS_CONFIG, linger_ms.to_s)
logstash-integration-kafka.gemspec CHANGED
@@ -1,6 +1,6 @@
  Gem::Specification.new do |s|
  s.name = 'logstash-integration-kafka'
- s.version = '10.2.0'
+ s.version = '10.3.0'
  s.licenses = ['Apache-2.0']
  s.summary = "Integration with Kafka - input and output plugins"
  s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline "+
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: logstash-integration-kafka
  version: !ruby/object:Gem::Version
- version: 10.2.0
+ version: 10.3.0
  platform: java
  authors:
  - Elastic
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2020-04-30 00:00:00.000000000 Z
+ date: 2020-06-18 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    requirement: !ruby/object:Gem::Requirement