logstash-output-kafka 5.1.11 → 6.0.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: ab8c64c41a0113d3e77d4d74adac247424d6a7c5
- data.tar.gz: ea85a76f270acd51915e1a1a7a80da0e83e9e82e
+ metadata.gz: 3fbe7d272d3d724287711d102a7f95c9ebb0a12d
+ data.tar.gz: 26bca0940efcb7a509ee256f939043aada0618d0
  SHA512:
- metadata.gz: 18927151780d0b0d085e5472b9d8d6a07601bd847163a3bd6578b9359227c73344d4f24ef1ed498d756e3fea8594860e3cafd0ddb15a2eb566b14ed3a340f890
- data.tar.gz: 76f4b59eec073cefafe18db46fae720cdede9a234d39aab6da3acc4e6588921e089e076afc8a2cd4542ba81572322229fd9e9332a66d38ea8420e5f7a49899f4
+ metadata.gz: 6947a968db077a2b434565940083e5f2727e1b303af01ffb33e7b7a80629707c2efaeaf84ad85ad02e0abfbc1eee625ff317d40e582b37def0d48bfcd32bfab3
+ data.tar.gz: 92eae1ac1ca9812e11bf042167cefa518616c6b2a142d14c354cd47461bca823c5ce0cd79a43b7314b80c2077d7acb47ce0f4b4f6a41a49d67010c4a9facfe54
data/CHANGELOG.md CHANGED
@@ -1,44 +1,5 @@
- ## 5.1.11
- - Bugfix: Sends are now retried until successful. Previously, failed transmissions to Kafka
- could have been lost by the KafkaProducer library. Now we verify transmission explicitly.
- This changes the default 'retry' from 0 to retry-forever. It was a bug that we defaulted
- to a retry count of 0.
- https://github.com/logstash-plugins/logstash-output-kafka/pull/151
-
- - Docs: Fix misleading info about the default codec
-
- ## 5.1.10
- - Doc fixes
-
- ## 5.1.9
- - Doc fixes
-
- ## 5.1.8
- - Docs: Fix topic title
-
- ## 5.1.7
- - Docs: Fix asciidoc
-
- ## 5.1.6
- - Fix a bug when SASL_SSL+PLAIN (no Kerberos) was specified.
-
- ## 5.1.5
- - Fix a bug where the consumer was not correctly set up when the `SASL_SSL` option was specified.
-
- ## 5.1.4
- - Docs: Move info about security features out of the compatibility matrix and into the main text.
-
- ## 5.1.3
- - Docs: Clarify compatibility matrix and remove it from the changelog to avoid duplication.
-
- ## 5.1.2
- - Bump the slf4j version dep to 1.7.21 to align with the input plugin and kafka-clients Maven deps
-
- ## 5.1.1
- - Docs: Update the doc to mention the plain codec instead of the json codec.
-
- ## 5.1.0
- - Add Kerberos authentication feature.
+ ## 6.0.0
+ - BREAKING: Update to the 0.10.1.0 client protocol. Not backwards compatible with the 5.x series (Kafka protocol versions <= 0.10.0.1).

  ## 5.0.5
  - Fix logging
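
The 5.1.11 entry above describes verifying transmission explicitly instead of trusting the KafkaProducer library's internal retries. As a rough illustration of that idea, here is a hedged sketch (not the plugin's exact code; the real implementation is the `retrying_send` method removed further down in this diff, and `send_until_successful` is a name invented here):

```ruby
# Sketch: wait on the Future returned by KafkaProducer#send and retry
# every record that failed, until all records are acknowledged.
def send_until_successful(producer, records)
  until records.empty?
    failures = []
    records.each do |record|
      begin
        future = producer.send(record) # returns a java.util.concurrent.Future
        future.get                     # blocks; raises if the send ultimately failed
      rescue => e
        failures << record             # keep the record for the next round
      end
    end
    records = failures               # retry only the failures, forever
    sleep(0.1) unless records.empty? # assumed 100 ms backoff between rounds
  end
end
```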
data/README.md CHANGED
@@ -6,6 +6,19 @@ This is a plugin for [Logstash](https://github.com/elastic/logstash).

  It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.

+ ## Kafka Compatibility
+
+ Here's the compatibility matrix for Kafka broker support. Remember that it is good advice to upgrade brokers before consumers/producers, since brokers target backwards compatibility: the 0.9 broker will work with both the 0.8 and 0.9 consumer APIs, but not the other way around.
+
+ | Kafka Broker Version | Logstash Version | Input Plugin | Output Plugin | Why? |
+ |:---------------:|:------------------:|:--------------:|:---------------:|:------|
+ | 0.8 | 2.0 - 2.x | < 3.0.0 | < 3.0.0 | Legacy, 0.8 is still popular |
+ | 0.9 | 2.0 - 2.3.x | 3.0.0 | 3.0.0 | Intermediate release before 0.10 that works with the old Ruby Event API `[]` |
+ | 0.9 | 2.4, 5.0 | 4.0.0 | 4.0.0 | Intermediate release before 0.10 with the new get/set API |
+ | 0.10.0.x | 2.4, 5.0 | 5.0.0 | 5.0.0 | Track latest Kafka release. Not compatible with the 0.9 broker |
+ | 0.10.1.x | 2.4, 5.0 | 6.0.0 | 6.0.0 | Track latest Kafka release. Not compatible with < 0.10.1.x brokers |
+
+
  ## Documentation

  Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation so any comments in the source code will be first converted into asciidoc and then into html. All plugin documentation is placed under one [central location](http://www.elastic.co/guide/en/logstash/current/).
data/lib/logstash/outputs/kafka.rb CHANGED
@@ -11,25 +11,19 @@ require 'logstash-output-kafka_jars.rb'
  #
  # [options="header"]
  # |==========================================================
- # |Kafka Client Version |Logstash Version |Plugin Version |Why?
- # |0.8 |2.0.0 - 2.x.x |<3.0.0 |Legacy, 0.8 is still popular
- # |0.9 |2.0.0 - 2.3.x | 3.x.x |Works with the old Ruby Event API (`event['product']['price'] = 10`)
- # |0.9 |2.4.x - 5.x.x | 4.x.x |Works with the new getter/setter APIs (`event.set('[product][price]', 10)`)
- # |0.10.0.x |2.4.x - 5.x.x | 5.x.x |Not compatible with the <= 0.9 broker
+ # |Kafka Client Version |Logstash Version |Plugin Version |Security Features |Why?
+ # |0.8 |2.0.0 - 2.x.x |<3.0.0 | |Legacy, 0.8 is still popular
+ # |0.9 |2.0.0 - 2.3.x | 3.x.x |Basic Auth, SSL |Works with the old Ruby Event API (`event['product']['price'] = 10`)
+ # |0.9 |2.4.0 - 5.0.x | 4.x.x |Basic Auth, SSL |Works with the new getter/setter APIs (`event.set('[product][price]', 10)`)
+ # |0.10.0.x |2.4.0 - 5.0.x | 5.x.x |Basic Auth, SSL |Not compatible with the <= 0.9 broker
+ # |0.10.1.x |2.4.0 - 5.0.x | 6.x.x |Basic Auth, SSL |Not compatible with <= 0.10.0.x broker
  # |==========================================================
- #
+ #
  # NOTE: We recommend that you use matching Kafka client and broker versions. During upgrades, you should
  # upgrade brokers before clients because brokers target backwards compatibility. For example, the 0.9 broker
  # is compatible with both the 0.8 consumer and 0.9 consumer APIs, but not the other way around.
  #
- # This output supports connecting to Kafka over:
- #
- # * SSL (requires plugin version 3.0.0 or later)
- # * Kerberos SASL (requires plugin version 5.1.0 or later)
- #
- # By default security is disabled but can be turned on as needed.
- #
- # The only required configuration is the topic_id. The default codec is plain,
+ # The only required configuration is the topic_id. The default codec is json,
  # so events will be persisted on the broker in json format. If you select a codec of plain,
  # Logstash will encode your messages with not only the message itself but also a timestamp and
  # hostname. If you do not want anything but your message passing through, you should make the output
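
To ground the comment block above, here is a minimal usage sketch in the style of the unit specs later in this diff (`event` stands in for a `LogStash::Event` built elsewhere; `topic_id` is the only required option):

```ruby
require "logstash/outputs/kafka"

kafka = LogStash::Outputs::Kafka.new("topic_id" => "logstash")
kafka.register       # builds the underlying KafkaProducer
kafka.receive(event) # 6.0.0 encodes and sends one event at a time
kafka.close          # shuts the producer down
```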
@@ -111,63 +105,23 @@ class LogStash::Outputs::Kafka < LogStash::Outputs::Base
  # elapses the client will resend the request if necessary or fail the request if
  # retries are exhausted.
  config :request_timeout_ms, :validate => :string
- # The default retry behavior is to retry until successful. To prevent data loss,
- # the use of this setting is discouraged.
- #
- # If you choose to set `retries`, a value greater than zero will cause the
- # client to only retry a fixed number of times. This will result in data loss
- # if a transient error outlasts your retry count.
- #
- # A value less than zero is a configuration error.
- config :retries, :validate => :number
+ # Setting a value greater than zero will cause the client to
+ # resend any record whose send fails with a potentially transient error.
+ config :retries, :validate => :number, :default => 0
  # The amount of time to wait before attempting to retry a failed produce request to a given topic partition.
  config :retry_backoff_ms, :validate => :number, :default => 100
  # The size of the TCP send buffer to use when sending data.
  config :send_buffer_bytes, :validate => :number, :default => 131072
  # Enable SSL/TLS secured communication to Kafka broker.
- config :ssl, :validate => :boolean, :default => false, :deprecated => "Use security_protocol => 'ssl'"
- # The truststore type.
- config :ssl_truststore_type, :validate => :string
+ config :ssl, :validate => :boolean, :default => false
  # The JKS truststore path to validate the Kafka broker's certificate.
  config :ssl_truststore_location, :validate => :path
  # The truststore password
  config :ssl_truststore_password, :validate => :password
- # The keystore type.
- config :ssl_keystore_type, :validate => :string
  # If client authentication is required, this setting stores the keystore path.
  config :ssl_keystore_location, :validate => :path
  # If client authentication is required, this setting stores the keystore password
  config :ssl_keystore_password, :validate => :password
- # The password of the private key in the key store file.
- config :ssl_key_password, :validate => :password
- # Security protocol to use, which can be either of PLAINTEXT,SSL,SASL_PLAINTEXT,SASL_SSL
- config :security_protocol, :validate => ["PLAINTEXT", "SSL", "SASL_PLAINTEXT", "SASL_SSL"], :default => "PLAINTEXT"
- # http://kafka.apache.org/documentation.html#security_sasl[SASL mechanism] used for client connections.
- # This may be any mechanism for which a security provider is available.
- # GSSAPI is the default mechanism.
- config :sasl_mechanism, :validate => :string, :default => "GSSAPI"
- # The Kerberos principal name that Kafka broker runs as.
- # This can be defined either in Kafka's JAAS config or in Kafka's config.
- config :sasl_kerberos_service_name, :validate => :string
- # The Java Authentication and Authorization Service (JAAS) API supplies user authentication and authorization
- # services for Kafka. This setting provides the path to the JAAS file. Sample JAAS file for Kafka client:
- # [source,java]
- # ----------------------------------
- # KafkaClient {
- # com.sun.security.auth.module.Krb5LoginModule required
- # useTicketCache=true
- # renewTicket=true
- # serviceName="kafka";
- # };
- # ----------------------------------
- #
- # Please note that specifying `jaas_path` and `kerberos_config` in the config file will add these
- # to the global JVM system properties. This means if you have multiple Kafka inputs, all of them would be sharing the same
- # `jaas_path` and `kerberos_config`. If this is not desirable, you would have to run separate instances of Logstash on
- # different JVM instances.
- config :jaas_path, :validate => :path
- # Optional path to kerberos config file. This is krb5.conf style as detailed in https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html
- config :kerberos_config, :validate => :path
  # The configuration controls the maximum amount of time the server will wait for acknowledgments
  # from followers to meet the acknowledgment requirements the producer has specified with the
  # acks configuration. If the requested number of acknowledgments are not met when the timeout
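
For illustration, the two retry settings above flow into the Java producer configuration as shown in the `create_producer` hunk below. Written out with the 6.0.0 defaults (and assuming `kafka` is aliased to `org.apache.kafka.clients.producer.ProducerConfig`, an alias that sits outside the diff context):

```ruby
props = java.util.Properties.new
props.put(kafka::RETRIES_CONFIG, "0")            # :retries default: no client-side retries
props.put(kafka::RETRY_BACKOFF_MS_CONFIG, "100") # :retry_backoff_ms default
```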
@@ -181,17 +135,6 @@ class LogStash::Outputs::Kafka < LogStash::Outputs::Base

  public
  def register
- @thread_batch_map = Concurrent::Hash.new
-
- if !@retries.nil?
- if @retries < 0
- raise ConfigurationError, "A negative retry count (#{@retries}) is not valid. Must be a value >= 0"
- end
-
- @logger.warn("Kafka output is configured with finite retry. This instructs Logstash to LOSE DATA after a set number of send attempts fails. If you do not want to lose data if Kafka is down, then you must remove the retry setting.", :retries => @retries)
- end
-
-
  @producer = create_producer
  @codec.on_event do |event, data|
  begin
@@ -200,7 +143,7 @@ class LogStash::Outputs::Kafka < LogStash::Outputs::Base
  else
  record = org.apache.kafka.clients.producer.ProducerRecord.new(event.sprintf(@topic_id), event.sprintf(@message_key), data)
  end
- prepare(record)
+ @producer.send(record)
  rescue LogStash::ShutdownSignal
  @logger.debug('Kafka producer got shutdown signal')
  rescue => e
@@ -208,89 +151,14 @@ class LogStash::Outputs::Kafka < LogStash::Outputs::Base
  :exception => e)
  end
  end
- end # def register
-
- def prepare(record)
- # This output is threadsafe, so we need to keep a batch per thread.
- @thread_batch_map[Thread.current].add(record)
- end
-
- def multi_receive(events)
- t = Thread.current
- if !@thread_batch_map.include?(t)
- @thread_batch_map[t] = java.util.ArrayList.new(events.size)
- end
-
- events.each do |event|
- break if event == LogStash::SHUTDOWN
- @codec.encode(event)
- end
-
- batch = @thread_batch_map[t]
- if batch.any?
- retrying_send(batch)
- batch.clear
- end
- end

- def retrying_send(batch)
- remaining = @retries
-
- while batch.any?
- if !remaining.nil?
- if remaining < 0
- # TODO(sissel): Offer to DLQ? Then again, if it's a transient fault,
- # DLQing would make things worse (you dlq data that would be successful
- # after the fault is repaired)
- logger.info("Exhausted user-configured retry count when sending to Kafka. Dropping these events.",
- :max_retries => @retries, :drop_count => batch.count)
- break
- end
-
- remaining -= 1
- end
-
- failures = []
-
- futures = batch.collect do |record|
- begin
- # send() can throw an exception even before the future is created.
- @producer.send(record)
- rescue org.apache.kafka.common.errors.TimeoutException => e
- failures << record
- nil
- rescue org.apache.kafka.common.errors.InterruptException => e
- failures << record
- nil
- rescue org.apache.kafka.common.errors.SerializationException => e
- # TODO(sissel): Retrying will fail because the data itself has a problem serializing.
- # TODO(sissel): Let's add DLQ here.
- failures << record
- nil
- end
- end.compact
-
- futures.each_with_index do |future, i|
- begin
- result = future.get()
- rescue => e
- # TODO(sissel): Add metric to count failures, possibly by exception type.
- logger.debug? && logger.debug("KafkaProducer.send() failed: #{e}", :exception => e)
- failures << batch[i]
- end
- end
-
- # No failures? Cool. Let's move on.
- break if failures.empty?
+ end # def register

- # Otherwise, retry with any failed transmissions
- batch = failures
- delay = @retry_backoff_ms / 1000.0
- logger.info("Sending batch to Kafka failed. Will retry after a delay.", :batch_size => batch.size,
- :failures => failures.size, :sleep => delay)
- sleep(delay)
+ def receive(event)
+ if event == LogStash::SHUTDOWN
+ return
  end
-
+ @codec.encode(event)
  end

  def close
@@ -314,23 +182,24 @@ class LogStash::Outputs::Kafka < LogStash::Outputs::Base
  props.put(kafka::MAX_REQUEST_SIZE_CONFIG, max_request_size.to_s)
  props.put(kafka::RECONNECT_BACKOFF_MS_CONFIG, reconnect_backoff_ms) unless reconnect_backoff_ms.nil?
  props.put(kafka::REQUEST_TIMEOUT_MS_CONFIG, request_timeout_ms) unless request_timeout_ms.nil?
- props.put(kafka::RETRIES_CONFIG, retries.to_s) unless retries.nil?
- props.put(kafka::RETRY_BACKOFF_MS_CONFIG, retry_backoff_ms.to_s)
+ props.put(kafka::RETRIES_CONFIG, retries.to_s)
+ props.put(kafka::RETRY_BACKOFF_MS_CONFIG, retry_backoff_ms.to_s)
  props.put(kafka::SEND_BUFFER_CONFIG, send_buffer_bytes.to_s)
  props.put(kafka::VALUE_SERIALIZER_CLASS_CONFIG, value_serializer)

- props.put("security.protocol", security_protocol) unless security_protocol.nil?
+ if ssl
+ if ssl_truststore_location.nil?
+ raise LogStash::ConfigurationError, "ssl_truststore_location must be set when SSL is enabled"
+ end
+ props.put("security.protocol", "SSL")
+ props.put("ssl.truststore.location", ssl_truststore_location)
+ props.put("ssl.truststore.password", ssl_truststore_password.value) unless ssl_truststore_password.nil?

- if security_protocol == "SSL" || ssl
- set_trustore_keystore_config(props)
- elsif security_protocol == "SASL_PLAINTEXT"
- set_sasl_config(props)
- elsif security_protocol == "SASL_SSL"
- set_trustore_keystore_config(props)
- set_sasl_config(props)
+ # Client auth stuff
+ props.put("ssl.keystore.location", ssl_keystore_location) unless ssl_keystore_location.nil?
+ props.put("ssl.keystore.password", ssl_keystore_password.value) unless ssl_keystore_password.nil?
  end

-
  org.apache.kafka.clients.producer.KafkaProducer.new(props)
  rescue => e
  logger.error("Unable to create Kafka producer from given configuration", :kafka_error_message => e)
@@ -338,31 +207,4 @@ class LogStash::Outputs::Kafka < LogStash::Outputs::Base
  end
  end

- def set_trustore_keystore_config(props)
- if ssl_truststore_location.nil?
- raise LogStash::ConfigurationError, "ssl_truststore_location must be set when SSL is enabled"
- end
- props.put("ssl.truststore.type", ssl_truststore_type) unless ssl_truststore_type.nil?
- props.put("ssl.truststore.location", ssl_truststore_location)
- props.put("ssl.truststore.password", ssl_truststore_password.value) unless ssl_truststore_password.nil?
-
- # Client auth stuff
- props.put("ssl.keystore.type", ssl_keystore_type) unless ssl_keystore_type.nil?
- props.put("ssl.key.password", ssl_key_password.value) unless ssl_key_password.nil?
- props.put("ssl.keystore.location", ssl_keystore_location) unless ssl_keystore_location.nil?
- props.put("ssl.keystore.password", ssl_keystore_password.value) unless ssl_keystore_password.nil?
- end
-
- def set_sasl_config(props)
- java.lang.System.setProperty("java.security.auth.login.config",jaas_path) unless jaas_path.nil?
- java.lang.System.setProperty("java.security.krb5.conf",kerberos_config) unless kerberos_config.nil?
-
- props.put("sasl.mechanism",sasl_mechanism)
- if sasl_mechanism == "GSSAPI" && sasl_kerberos_service_name.nil?
- raise LogStash::ConfigurationError, "sasl_kerberos_service_name must be specified when SASL mechanism is GSSAPI"
- end
-
- props.put("sasl.kerberos.service.name",sasl_kerberos_service_name) unless sasl_kerberos_service_name.nil?
- end
-
  end #class LogStash::Outputs::Kafka
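
Putting the new SSL path together, a hedged configuration sketch based on the `create_producer` logic above and the unit spec below (the truststore path and password are hypothetical):

```ruby
kafka = LogStash::Outputs::Kafka.new(
  "topic_id" => "logstash",
  "ssl" => "true",
  "ssl_truststore_location" => "/etc/ssl/kafka-truststore.jks", # hypothetical path
  "ssl_truststore_password" => "changeit"                       # hypothetical password
)
kafka.register # raises LogStash::ConfigurationError if ssl_truststore_location is unset
```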
data/logstash-output-kafka.gemspec CHANGED
@@ -1,7 +1,7 @@
  Gem::Specification.new do |s|

  s.name = 'logstash-output-kafka'
- s.version = '5.1.11'
+ s.version = '6.0.0'
  s.licenses = ['Apache License (2.0)']
  s.summary = 'Output events to a Kafka topic. This uses the Kafka Producer API to write messages to a topic on the broker'
  s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program"
@@ -19,7 +19,7 @@ Gem::Specification.new do |s|
  # Special flag to let us know this is actually a logstash plugin
  s.metadata = { 'logstash_plugin' => 'true', 'group' => 'output'}

- s.requirements << "jar 'org.apache.kafka:kafka-clients', '0.10.0.1'"
+ s.requirements << "jar 'org.apache.kafka:kafka-clients', '0.10.1.0'"
  s.requirements << "jar 'org.slf4j:slf4j-log4j12', '1.7.21'"

  s.add_development_dependency 'jar-dependencies', '~> 0.3.2'
data/spec/integration/outputs/kafka_spec.rb CHANGED
@@ -157,7 +157,7 @@ describe "outputs/kafka", :integration => true do
  def load_kafka_data(config)
  kafka = LogStash::Outputs::Kafka.new(config)
  kafka.register
- kafka.multi_receive(num_events.times.collect { event })
+ num_events.times { kafka.receive(event) }
  kafka.close
  end

data/spec/unit/outputs/kafka_spec.rb CHANGED
@@ -25,118 +25,34 @@ describe "outputs/kafka" do
  context 'when outputting messages' do
  it 'should send logstash event to kafka broker' do
  expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send)
- .with(an_instance_of(org.apache.kafka.clients.producer.ProducerRecord)).and_call_original
+ .with(an_instance_of(org.apache.kafka.clients.producer.ProducerRecord))
  kafka = LogStash::Outputs::Kafka.new(simple_kafka_config)
  kafka.register
- kafka.multi_receive([event])
+ kafka.receive(event)
  end

  it 'should support Event#sprintf placeholders in topic_id' do
  topic_field = 'topic_name'
  expect(org.apache.kafka.clients.producer.ProducerRecord).to receive(:new)
- .with("my_topic", event.to_s).and_call_original
- expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send).and_call_original
+ .with("my_topic", event.to_s)
+ expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send)
  kafka = LogStash::Outputs::Kafka.new({'topic_id' => "%{#{topic_field}}"})
  kafka.register
- kafka.multi_receive([event])
+ kafka.receive(event)
  end

  it 'should support field referenced message_keys' do
  expect(org.apache.kafka.clients.producer.ProducerRecord).to receive(:new)
- .with("test", "172.0.0.1", event.to_s).and_call_original
- expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send).and_call_original
+ .with("test", "172.0.0.1", event.to_s)
+ expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send)
  kafka = LogStash::Outputs::Kafka.new(simple_kafka_config.merge({"message_key" => "%{host}"}))
  kafka.register
- kafka.multi_receive([event])
+ kafka.receive(event)
  end

  it 'should raise config error when truststore location is not set and ssl is enabled' do
- kafka = LogStash::Outputs::Kafka.new(simple_kafka_config.merge("security_protocol" => "SSL"))
+ kafka = LogStash::Outputs::Kafka.new(simple_kafka_config.merge({"ssl" => "true"}))
  expect { kafka.register }.to raise_error(LogStash::ConfigurationError, /ssl_truststore_location must be set when SSL is enabled/)
  end
  end
-
- context "when KafkaProducer#send() raises an exception" do
- let(:failcount) { (rand * 10).to_i }
- let(:sendcount) { failcount + 1 }
-
- let(:exception_classes) { [
- org.apache.kafka.common.errors.TimeoutException,
- org.apache.kafka.common.errors.InterruptException,
- org.apache.kafka.common.errors.SerializationException
- ] }
-
- before do
- count = 0
- expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send)
- .exactly(sendcount).times
- .and_wrap_original do |m, *args|
- if count < failcount # fail 'failcount' times in a row.
- count += 1
- # Pick an exception at random
- raise exception_classes.shuffle.first.new("injected exception for testing")
- else
- m.call(*args) # call original
- end
- end
- end
-
- it "should retry until successful" do
- kafka = LogStash::Outputs::Kafka.new(simple_kafka_config)
- kafka.register
- kafka.multi_receive([event])
- end
- end
-
- context "when a send fails" do
- context "and the default retries behavior is used" do
- # Fail this many times and then finally succeed.
- let(:failcount) { (rand * 10).to_i }
-
- # Expect KafkaProducer.send() to get called again after every failure, plus the successful one.
- let(:sendcount) { failcount + 1 }
-
- it "should retry until successful" do
- count = 0
-
- expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send)
- .exactly(sendcount).times
- .and_wrap_original do |m, *args|
- if count < failcount
- count += 1
- # inject some failures.
-
- # Return a custom Future that will raise an exception to simulate a Kafka send() problem.
- future = java.util.concurrent.FutureTask.new { raise "Failed" }
- future.run
- future
- else
- m.call(*args)
- end
- end
- kafka = LogStash::Outputs::Kafka.new(simple_kafka_config)
- kafka.register
- kafka.multi_receive([event])
- end
- end
-
- context "and when retries is set by the user" do
- let(:retries) { (rand * 10).to_i }
- let(:max_sends) { retries + 1 }
-
- it "should give up after retries are exhausted" do
- expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send)
- .at_most(max_sends).times
- .and_wrap_original do |m, *args|
- # Always fail.
- future = java.util.concurrent.FutureTask.new { raise "Failed" }
- future.run
- future
- end
- kafka = LogStash::Outputs::Kafka.new(simple_kafka_config.merge("retries" => retries))
- kafka.register
- kafka.multi_receive([event])
- end
- end
- end
  end
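
With batching and retrying gone, a failure test for 6.0.0 would be much simpler than the removed contexts above. A hypothetical spec in the same style (6.0.0 no longer retries, so the exception is rescued and logged inside the codec callback shown earlier in this diff):

```ruby
it 'logs and drops the event when send raises' do
  expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer)
    .to receive(:send).and_raise("injected failure for testing")
  kafka = LogStash::Outputs::Kafka.new(simple_kafka_config)
  kafka.register
  expect { kafka.receive(event) }.not_to raise_error
end
```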
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: logstash-output-kafka
  version: !ruby/object:Gem::Version
- version: 5.1.11
+ version: 6.0.0
  platform: ruby
  authors:
  - Elasticsearch
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2017-10-09 00:00:00.000000000 Z
+ date: 2016-11-08 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  requirement: !ruby/object:Gem::Requirement
@@ -132,7 +132,7 @@ files:
  - logstash-output-kafka.gemspec
  - spec/integration/outputs/kafka_spec.rb
  - spec/unit/outputs/kafka_spec.rb
- - vendor/jar-dependencies/runtime-jars/kafka-clients-0.10.0.1.jar
+ - vendor/jar-dependencies/runtime-jars/kafka-clients-0.10.1.0.jar
  - vendor/jar-dependencies/runtime-jars/log4j-1.2.17.jar
  - vendor/jar-dependencies/runtime-jars/lz4-1.3.0.jar
  - vendor/jar-dependencies/runtime-jars/slf4j-api-1.7.21.jar
@@ -159,7 +159,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements:
- - jar 'org.apache.kafka:kafka-clients', '0.10.0.1'
+ - jar 'org.apache.kafka:kafka-clients', '0.10.1.0'
  - jar 'org.slf4j:slf4j-log4j12', '1.7.21'
  rubyforge_project:
  rubygems_version: 2.4.8