logstash-integration-kafka 10.6.0-java → 10.7.4-java

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 1d8b40d779e91e9c05dece65249660ab5c272b6833658cbca51b977d92936f42
- data.tar.gz: 58323e216be645aede9f0b49c27824958b6627485bdf79a6774cd5f87b818245
+ metadata.gz: efc6c33cf871ecd41fc07468d3d6e47dc1a71c4dbd1800fe99127da703547dd2
+ data.tar.gz: 644f506705807c95e15fc035aac7f5d57233dd5809ae37f2c71939324ab9c3e7
  SHA512:
- metadata.gz: 0f05eec028758745a2ab04b90d721128c088c46d0bb9a01923c389118c99a718a561d2ca2420fc2e206e4cf75e4e3695e1704a17d9eff9f4f18e66da3d3ccb85
- data.tar.gz: 85e117a64d14d013674869ccadfb5487a04991ec83c6a9d6874496b862be200bf4e7203d3ac4fc779d67131ecbe82212fafd66884114337d17ed33dda7ad0963
+ metadata.gz: 28341e37050a860c8e87d0b74a4da4d1fdd37eff3e9c95bd5596dff4d9a613149f4dd09052a949f6002fb6da835601df12ca6d37d03ea25819ff6517833da37c
+ data.tar.gz: eb196288b02dd30b92bf3a4e083be0b0d8f005307203e20e8892262d46e48041e4d4212182be685ecae668970530c7050c7132f4c2aaa3be50f3f00e46a04122
data/CHANGELOG.md CHANGED
@@ -1,3 +1,20 @@
+ ## 10.7.4
+ - Docs: make sure Kafka clients version is updated in docs [#83](https://github.com/logstash-plugins/logstash-integration-kafka/pull/83)
+   Since **10.6.0** Kafka client was updated to **2.5.1**
+
+ ## 10.7.3
+ - Changed `decorate_events` to add also Kafka headers [#78](https://github.com/logstash-plugins/logstash-integration-kafka/pull/78)
+
+ ## 10.7.2
+ - Update Jersey dependency to version 2.33 [#75](https://github.com/logstash-plugins/logstash-integration-kafka/pull/75)
+
+ ## 10.7.1
+ - Fix: dropped usage of SHUTDOWN event deprecated since Logstash 5.0 [#71](https://github.com/logstash-plugins/logstash-integration-kafka/pull/71)
+
+ ## 10.7.0
+ - Switched use from Faraday to Manticore as HTTP client library to access Schema Registry service
+   to fix issue [#63](https://github.com/logstash-plugins/logstash-integration-kafka/pull/63)
+
  ## 10.6.0
  - Added functionality to Kafka input to use Avro deserializer in retrieving data from Kafka. The schema is retrieved
    from an instance of Confluent's Schema Registry service [#51](https://github.com/logstash-plugins/logstash-integration-kafka/pull/51)
data/README.md CHANGED
@@ -1,6 +1,6 @@
  # Logstash Plugin

- [![Travis Build Status](https://travis-ci.org/logstash-plugins/logstash-integration-kafka.svg)](https://travis-ci.org/logstash-plugins/logstash-integration-kafka)
+ [![Travis Build Status](https://travis-ci.com/logstash-plugins/logstash-integration-kafka.svg)](https://travis-ci.com/logstash-plugins/logstash-integration-kafka)

  This is a plugin for [Logstash](https://github.com/elastic/logstash).

data/docs/index.asciidoc CHANGED
@@ -1,7 +1,7 @@
  :plugin: kafka
  :type: integration
  :no_codec:
- :kafka_client: 2.4
+ :kafka_client: 2.5.1
 
  ///////////////////////////////////////////
  START - GENERATED VARIABLES, DO NOT EDIT!
@@ -2,8 +2,8 @@
  :plugin: kafka
  :type: input
  :default_codec: plain
- :kafka_client: 2.4
- :kafka_client_doc: 24
+ :kafka_client: 2.5
+ :kafka_client_doc: 25
 
  ///////////////////////////////////////////
  START - GENERATED VARIABLES, DO NOT EDIT!
@@ -73,7 +73,7 @@ either when the record was created (default) or when it was received by the
  broker. See more about property log.message.timestamp.type at
  https://kafka.apache.org/{kafka_client_doc}/documentation.html#brokerconfigs
 
- Metadata is only added to the event if the `decorate_events` option is set to true (it defaults to false).
+ Metadata is only added to the event if the `decorate_events` option is set to `basic` or `extended` (it defaults to `none`).
 
  Please note that `@metadata` fields are not part of any of your events at output time. If you need these information to be
  inserted into your original event, you'll have to use the `mutate` filter to manually copy the required fields into your `event`.
@@ -99,7 +99,7 @@ See the https://kafka.apache.org/{kafka_client_doc}/documentation for more detai
  | <<plugins-{type}s-{plugin}-client_rack>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-connections_max_idle_ms>> |<<number,number>>|No
  | <<plugins-{type}s-{plugin}-consumer_threads>> |<<number,number>>|No
- | <<plugins-{type}s-{plugin}-decorate_events>> |<<boolean,boolean>>|No
+ | <<plugins-{type}s-{plugin}-decorate_events>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-enable_auto_commit>> |<<boolean,boolean>>|No
  | <<plugins-{type}s-{plugin}-exclude_internal_topics>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-fetch_max_bytes>> |<<number,number>>|No
@@ -246,10 +246,16 @@ balance — more threads than partitions means that some threads will be idl
  [id="plugins-{type}s-{plugin}-decorate_events"]
  ===== `decorate_events`
 
- * Value type is <<boolean,boolean>>
- * Default value is `false`
-
- Option to add Kafka metadata like topic, message size to the event.
+ * Value type is <<string,string>>
+ * Accepted values are:
+   - `none`: no metadata is added
+   - `basic`: record's attributes are added
+   - `extended`: record's attributes, headers are added
+   - `false`: deprecated alias for `none`
+   - `true`: deprecated alias for `basic`
+ * Default value is `none`
+
+ Option to add Kafka metadata like topic, message size and header key values to the event.
  This will add a field named `kafka` to the logstash event containing the following attributes:
 
  * `topic`: The topic this message is associated with
@@ -2,8 +2,8 @@
  :plugin: kafka
  :type: output
  :default_codec: plain
- :kafka_client: 2.4
- :kafka_client_doc: 24
+ :kafka_client: 2.5
+ :kafka_client_doc: 25
 
  ///////////////////////////////////////////
  START - GENERATED VARIABLES, DO NOT EDIT!
@@ -9,7 +9,7 @@ require_jar('io.confluent', 'kafka-schema-registry-client', '5.5.1')
  require_jar('org.apache.kafka', 'kafka_2.12', '2.5.1')
  require_jar('io.confluent', 'common-utils', '5.5.1')
  require_jar('javax.ws.rs', 'javax.ws.rs-api', '2.1.1')
- require_jar('org.glassfish.jersey.core', 'jersey-common', '2.30')
+ require_jar('org.glassfish.jersey.core', 'jersey-common', '2.33')
  require_jar('org.apache.kafka', 'kafka-clients', '2.5.1')
  require_jar('com.github.luben', 'zstd-jni', '1.4.4-7')
  require_jar('org.slf4j', 'slf4j-api', '1.7.30')
@@ -4,10 +4,11 @@ require 'stud/interval'
  require 'java'
  require 'logstash-integration-kafka_jars.rb'
  require 'logstash/plugin_mixins/kafka_support'
- require "faraday"
+ require 'manticore'
  require "json"
  require "logstash/json"
  require_relative '../plugin_mixins/common'
+ require 'logstash/plugin_mixins/deprecation_logger_support'
 
  # This input will read events from a Kafka topic. It uses the 0.10 version of
  # the consumer API provided by Kafka to read messages from the broker.
@@ -58,6 +59,7 @@ class LogStash::Inputs::Kafka < LogStash::Inputs::Base
 
  include LogStash::PluginMixins::KafkaSupport
  include ::LogStash::PluginMixins::KafkaAvroSchemaRegistry
+ include LogStash::PluginMixins::DeprecationLoggerSupport
 
  config_name 'kafka'
 
@@ -233,22 +235,49 @@ class LogStash::Inputs::Kafka < LogStash::Inputs::Base
  config :sasl_jaas_config, :validate => :string
  # Optional path to kerberos config file. This is krb5.conf style as detailed in https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html
  config :kerberos_config, :validate => :path
- # Option to add Kafka metadata like topic, message size to the event.
- # This will add a field named `kafka` to the logstash event containing the following attributes:
+ # Option to add Kafka metadata like topic, message size and header key values to the event.
+ # With `basic` this will add a field named `kafka` to the logstash event containing the following attributes:
  # `topic`: The topic this message is associated with
  # `consumer_group`: The consumer group used to read in this event
  # `partition`: The partition this message is associated with
  # `offset`: The offset from the partition this message is associated with
  # `key`: A ByteBuffer containing the message key
  # `timestamp`: The timestamp of this message
- config :decorate_events, :validate => :boolean, :default => false
+ # While with `extended` it adds also all the key values present in the Kafka header if the key is valid UTF-8 else
+ # silently skip it.
+ config :decorate_events, :validate => %w(none basic extended false true), :default => "none"
+
+ attr_reader :metadata_mode
 
  public
  def register
  @runner_threads = []
+ @metadata_mode = extract_metadata_level(@decorate_events)
  check_schema_registry_parameters
  end
 
+ METADATA_NONE = Set[].freeze
+ METADATA_BASIC = Set[:record_props].freeze
+ METADATA_EXTENDED = Set[:record_props, :headers].freeze
+ METADATA_DEPRECATION_MAP = { 'true' => 'basic', 'false' => 'none' }
+
+ private
+ def extract_metadata_level(decorate_events_setting)
+ metadata_enabled = decorate_events_setting
+
+ if METADATA_DEPRECATION_MAP.include?(metadata_enabled)
+ canonical_value = METADATA_DEPRECATION_MAP[metadata_enabled]
+ deprecation_logger.deprecated("Deprecated value `#{decorate_events_setting}` for `decorate_events` option; use `#{canonical_value}` instead.")
+ metadata_enabled = canonical_value
+ end
+
+ case metadata_enabled
+ when 'none' then METADATA_NONE
+ when 'basic' then METADATA_BASIC
+ when 'extended' then METADATA_EXTENDED
+ end
+ end
+
  public
  def run(logstash_queue)
  @runner_consumers = consumer_threads.times.map { |i| create_consumer("#{client_id}-#{i}") }
@@ -292,7 +321,7 @@ class LogStash::Inputs::Kafka < LogStash::Inputs::Base
  end
  event.remove("message")
  end
- if @decorate_events
+ if @metadata_mode.include?(:record_props)
  event.set("[@metadata][kafka][topic]", record.topic)
  event.set("[@metadata][kafka][consumer_group]", @group_id)
  event.set("[@metadata][kafka][partition]", record.partition)
@@ -300,6 +329,15 @@ class LogStash::Inputs::Kafka < LogStash::Inputs::Base
  event.set("[@metadata][kafka][key]", record.key)
  event.set("[@metadata][kafka][timestamp]", record.timestamp)
  end
+ if @metadata_mode.include?(:headers)
+ record.headers.each do |header|
+ s = String.from_java_bytes(header.value)
+ s.force_encoding(Encoding::UTF_8)
+ if s.valid_encoding?
+ event.set("[@metadata][kafka][headers]["+header.key+"]", s)
+ end
+ end
+ end
  logstash_queue << event
  end
  end
@@ -224,7 +224,6 @@ class LogStash::Outputs::Kafka < LogStash::Outputs::Base
  end
 
  events.each do |event|
- break if event == LogStash::SHUTDOWN
  @codec.encode(event)
  end
 
@@ -45,20 +45,21 @@ module LogStash
 
  private
  def check_for_schema_registry_connectivity_and_subjects
- client = Faraday.new(@schema_registry_url.to_s) do |conn|
- if schema_registry_proxy && !schema_registry_proxy.empty?
- conn.proxy = schema_registry_proxy.to_s
- end
- if schema_registry_key and !schema_registry_key.empty?
- conn.basic_auth(schema_registry_key, schema_registry_secret.value)
- end
+ options = {}
+ if schema_registry_proxy && !schema_registry_proxy.empty?
+ options[:proxy] = schema_registry_proxy.to_s
  end
+ if schema_registry_key and !schema_registry_key.empty?
+ options[:auth] = {:user => schema_registry_key, :password => schema_registry_secret.value}
+ end
+ client = Manticore::Client.new(options)
+
  begin
- response = client.get('/subjects')
- rescue Faraday::Error => e
+ response = client.get(@schema_registry_url.to_s + '/subjects').body
+ rescue Manticore::ManticoreException => e
  raise LogStash::ConfigurationError.new("Schema registry service doesn't respond, error: #{e.message}")
  end
- registered_subjects = JSON.parse response.body
+ registered_subjects = JSON.parse response
  expected_subjects = @topics.map { |t| "#{t}-value"}
  if (expected_subjects & registered_subjects).size != expected_subjects.size
  undefined_topic_subjects = expected_subjects - registered_subjects
@@ -1,6 +1,6 @@
  Gem::Specification.new do |s|
  s.name = 'logstash-integration-kafka'
- s.version = '10.6.0'
+ s.version = '10.7.4'
  s.licenses = ['Apache-2.0']
  s.summary = "Integration with Kafka - input and output plugins"
  s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline "+
@@ -46,6 +46,8 @@ Gem::Specification.new do |s|
  s.add_runtime_dependency 'logstash-codec-json'
  s.add_runtime_dependency 'logstash-codec-plain'
  s.add_runtime_dependency 'stud', '>= 0.0.22', '< 0.1.0'
+ s.add_runtime_dependency "manticore", '>= 0.5.4', '< 1.0.0'
+ s.add_runtime_dependency 'logstash-mixin-deprecation_logger_support', '~>1.0'
 
  s.add_development_dependency 'logstash-devutils'
  s.add_development_dependency 'rspec-wait'
@@ -3,7 +3,7 @@ require "logstash/devutils/rspec/spec_helper"
  require "logstash/inputs/kafka"
  require "rspec/wait"
  require "stud/try"
- require "faraday"
+ require "manticore"
  require "json"
 
  # Please run kafka_test_setup.sh prior to executing this integration test.
@@ -36,7 +36,15 @@ describe "inputs/kafka", :integration => true do
  end
  let(:decorate_config) do
  { 'topics' => ['logstash_integration_topic_plain'], 'codec' => 'plain', 'group_id' => group_id_3,
- 'auto_offset_reset' => 'earliest', 'decorate_events' => true }
+ 'auto_offset_reset' => 'earliest', 'decorate_events' => 'true' }
+ end
+ let(:decorate_headers_config) do
+ { 'topics' => ['logstash_integration_topic_plain_with_headers'], 'codec' => 'plain', 'group_id' => group_id_3,
+ 'auto_offset_reset' => 'earliest', 'decorate_events' => 'extended' }
+ end
+ let(:decorate_bad_headers_config) do
+ { 'topics' => ['logstash_integration_topic_plain_with_headers_badly'], 'codec' => 'plain', 'group_id' => group_id_3,
+ 'auto_offset_reset' => 'earliest', 'decorate_events' => 'extended' }
  end
  let(:manual_commit_config) do
  { 'topics' => ['logstash_integration_topic_plain'], 'codec' => 'plain', 'group_id' => group_id_5,
@@ -45,6 +53,35 @@ describe "inputs/kafka", :integration => true do
  let(:timeout_seconds) { 30 }
  let(:num_events) { 103 }
 
+ before(:all) do
+ # Prepare message with headers with valid UTF-8 chars
+ header = org.apache.kafka.common.header.internals.RecordHeader.new("name", "John ανδρεα €".to_java_bytes)
+ record = org.apache.kafka.clients.producer.ProducerRecord.new(
+ "logstash_integration_topic_plain_with_headers", 0, "key", "value", [header])
+ send_message(record)
+
+ # Prepare message with headers with invalid UTF-8 chars
+ invalid = "日本".encode('Shift_JIS').force_encoding(Encoding::UTF_8).to_java_bytes
+ header = org.apache.kafka.common.header.internals.RecordHeader.new("name", invalid)
+ record = org.apache.kafka.clients.producer.ProducerRecord.new(
+ "logstash_integration_topic_plain_with_headers_badly", 0, "key", "value", [header])
+
+ send_message(record)
+ end
+
+ def send_message(record)
+ props = java.util.Properties.new
+ kafka = org.apache.kafka.clients.producer.ProducerConfig
+ props.put(kafka::BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
+ props.put(kafka::KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
+ props.put(kafka::VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
+
+ producer = org.apache.kafka.clients.producer.KafkaProducer.new(props)
+
+ producer.send(record)
+ producer.close
+ end
+
  describe "#kafka-topics" do
 
  it "should consume all messages from plain 3-partition topic" do
@@ -74,7 +111,7 @@ describe "inputs/kafka", :integration => true do
 
  context "#kafka-topics-pattern" do
  it "should consume all messages from all 3 topics" do
- total_events = num_events * 3
+ total_events = num_events * 3 + 2
  queue = consume_messages(pattern_config, timeout: timeout_seconds, event_count: total_events)
  expect(queue.length).to eq(total_events)
  end
@@ -91,6 +128,31 @@ describe "inputs/kafka", :integration => true do
  expect(event.get("[@metadata][kafka][timestamp]")).to be >= start
  end
  end
+
+ it "should show the right topic and group name in and kafka headers decorated kafka section" do
+ start = LogStash::Timestamp.now.time.to_i
+ consume_messages(decorate_headers_config, timeout: timeout_seconds, event_count: 1) do |queue, _|
+ expect(queue.length).to eq(1)
+ event = queue.shift
+ expect(event.get("[@metadata][kafka][topic]")).to eq("logstash_integration_topic_plain_with_headers")
+ expect(event.get("[@metadata][kafka][consumer_group]")).to eq(group_id_3)
+ expect(event.get("[@metadata][kafka][timestamp]")).to be >= start
+ expect(event.get("[@metadata][kafka][headers][name]")).to eq("John ανδρεα €")
+ end
+ end
+
+ it "should skip headers not encoded in UTF-8" do
+ start = LogStash::Timestamp.now.time.to_i
+ consume_messages(decorate_bad_headers_config, timeout: timeout_seconds, event_count: 1) do |queue, _|
+ expect(queue.length).to eq(1)
+ event = queue.shift
+ expect(event.get("[@metadata][kafka][topic]")).to eq("logstash_integration_topic_plain_with_headers_badly")
+ expect(event.get("[@metadata][kafka][consumer_group]")).to eq(group_id_3)
+ expect(event.get("[@metadata][kafka][timestamp]")).to be >= start
+
+ expect(event.include?("[@metadata][kafka][headers][name]")).to eq(false)
+ end
+ end
  end
 
  context "#kafka-offset-commit" do
@@ -129,6 +191,7 @@ private
 
  def consume_messages(config, queue: Queue.new, timeout:, event_count:)
  kafka_input = LogStash::Inputs::Kafka.new(config)
+ kafka_input.register
  t = Thread.new { kafka_input.run(queue) }
  begin
  t.run
@@ -164,11 +227,11 @@ describe "schema registry connection options" do
 
  before(:each) do
  response = save_avro_schema_to_schema_registry(File.join(Dir.pwd, "spec", "unit", "inputs", "avro_schema_fixture_payment.asvc"), SUBJECT_NAME)
- expect( response.status ).to be(200)
+ expect( response.code ).to be(200)
  end
 
  after(:each) do
- schema_registry_client = Faraday.new('http://localhost:8081')
+ schema_registry_client = Manticore::Client.new
  delete_remote_schema(schema_registry_client, SUBJECT_NAME)
  end
 
@@ -187,26 +250,26 @@ end
  def save_avro_schema_to_schema_registry(schema_file, subject_name)
  raw_schema = File.readlines(schema_file).map(&:chomp).join
  raw_schema_quoted = raw_schema.gsub('"', '\"')
- response = Faraday.post("http://localhost:8081/subjects/#{subject_name}/versions",
- '{"schema": "' + raw_schema_quoted + '"}',
- "Content-Type" => "application/vnd.schemaregistry.v1+json")
+ response = Manticore.post("http://localhost:8081/subjects/#{subject_name}/versions",
+ body: '{"schema": "' + raw_schema_quoted + '"}',
+ headers: {"Content-Type" => "application/vnd.schemaregistry.v1+json"})
  response
  end
 
  def delete_remote_schema(schema_registry_client, subject_name)
- expect(schema_registry_client.delete("/subjects/#{subject_name}").status ).to be(200)
- expect(schema_registry_client.delete("/subjects/#{subject_name}?permanent=true").status ).to be(200)
+ expect(schema_registry_client.delete("http://localhost:8081/subjects/#{subject_name}").code ).to be(200)
+ expect(schema_registry_client.delete("http://localhost:8081/subjects/#{subject_name}?permanent=true").code ).to be(200)
  end
 
  # AdminClientConfig = org.alpache.kafka.clients.admin.AdminClientConfig
 
  describe "Schema registry API", :integration => true do
 
- let(:schema_registry) { Faraday.new('http://localhost:8081') }
+ let(:schema_registry) { Manticore::Client.new }
 
  context 'listing subject on clean instance' do
  it "should return an empty set" do
- subjects = JSON.parse schema_registry.get('/subjects').body
+ subjects = JSON.parse schema_registry.get('http://localhost:8081/subjects').body
  expect( subjects ).to be_empty
  end
  end
@@ -214,30 +277,30 @@ describe "Schema registry API", :integration => true do
  context 'send a schema definition' do
  it "save the definition" do
  response = save_avro_schema_to_schema_registry(File.join(Dir.pwd, "spec", "unit", "inputs", "avro_schema_fixture_payment.asvc"), "schema_test_1")
- expect( response.status ).to be(200)
+ expect( response.code ).to be(200)
  delete_remote_schema(schema_registry, "schema_test_1")
  end
 
  it "delete the schema just added" do
  response = save_avro_schema_to_schema_registry(File.join(Dir.pwd, "spec", "unit", "inputs", "avro_schema_fixture_payment.asvc"), "schema_test_1")
- expect( response.status ).to be(200)
+ expect( response.code ).to be(200)
 
- expect( schema_registry.delete('/subjects/schema_test_1?permanent=false').status ).to be(200)
+ expect( schema_registry.delete('http://localhost:8081/subjects/schema_test_1?permanent=false').code ).to be(200)
  sleep(1)
- subjects = JSON.parse schema_registry.get('/subjects').body
+ subjects = JSON.parse schema_registry.get('http://localhost:8081/subjects').body
  expect( subjects ).to be_empty
  end
  end
 
  context 'use the schema to serialize' do
  after(:each) do
- expect( schema_registry.delete('/subjects/topic_avro-value').status ).to be(200)
+ expect( schema_registry.delete('http://localhost:8081/subjects/topic_avro-value').code ).to be(200)
  sleep 1
- expect( schema_registry.delete('/subjects/topic_avro-value?permanent=true').status ).to be(200)
+ expect( schema_registry.delete('http://localhost:8081/subjects/topic_avro-value?permanent=true').code ).to be(200)
 
  Stud.try(3.times, [StandardError, RSpec::Expectations::ExpectationNotMetError]) do
  wait(10).for do
- subjects = JSON.parse schema_registry.get('/subjects').body
+ subjects = JSON.parse schema_registry.get('http://localhost:8081/subjects').body
  subjects.empty?
  end.to be_truthy
  end
@@ -299,7 +362,7 @@ describe "Schema registry API", :integration => true do
  delete_topic_if_exists avro_topic_name
  write_some_data_to avro_topic_name
 
- subjects = JSON.parse schema_registry.get('/subjects').body
+ subjects = JSON.parse schema_registry.get('http://localhost:8081/subjects').body
  expect( subjects ).to contain_exactly("topic_avro-value")
 
  num_events = 1
@@ -38,23 +38,40 @@ describe LogStash::Inputs::Kafka do
  end
 
  context "register parameter verification" do
- let(:config) do
- { 'schema_registry_url' => 'http://localhost:8081', 'topics' => ['logstash'], 'consumer_threads' => 4 }
- end
+ context "schema_registry_url" do
+ let(:config) do
+ { 'schema_registry_url' => 'http://localhost:8081', 'topics' => ['logstash'], 'consumer_threads' => 4 }
+ end
 
- it "schema_registry_url conflict with value_deserializer_class should fail" do
- config['value_deserializer_class'] = 'my.fantasy.Deserializer'
- expect { subject.register }.to raise_error LogStash::ConfigurationError, /Option schema_registry_url prohibit the customization of value_deserializer_class/
+ it "conflict with value_deserializer_class should fail" do
+ config['value_deserializer_class'] = 'my.fantasy.Deserializer'
+ expect { subject.register }.to raise_error LogStash::ConfigurationError, /Option schema_registry_url prohibit the customization of value_deserializer_class/
+ end
+
+ it "conflict with topics_pattern should fail" do
+ config['topics_pattern'] = 'topic_.*'
+ expect { subject.register }.to raise_error LogStash::ConfigurationError, /Option schema_registry_url prohibit the customization of topics_pattern/
+ end
  end
 
- it "schema_registry_url conflict with topics_pattern should fail" do
- config['topics_pattern'] = 'topic_.*'
- expect { subject.register }.to raise_error LogStash::ConfigurationError, /Option schema_registry_url prohibit the customization of topics_pattern/
+ context "decorate_events" do
+ let(:config) { { 'decorate_events' => 'extended'} }
+
+ it "should raise error for invalid value" do
+ config['decorate_events'] = 'avoid'
+ expect { subject.register }.to raise_error LogStash::ConfigurationError, /Something is wrong with your configuration./
+ end
+
+ it "should map old true boolean value to :record_props mode" do
+ config['decorate_events'] = "true"
+ subject.register
+ expect(subject.metadata_mode).to include(:record_props)
+ end
  end
  end
 
  context 'with client_rack' do
- let(:config) { super.merge('client_rack' => 'EU-R1') }
+ let(:config) { super().merge('client_rack' => 'EU-R1') }
 
  it "sets broker rack parameter" do
  expect(org.apache.kafka.clients.consumer.KafkaConsumer).
@@ -66,7 +83,7 @@ describe LogStash::Inputs::Kafka do
  end
 
  context 'string integer config' do
- let(:config) { super.merge('session_timeout_ms' => '25000', 'max_poll_interval_ms' => '345000') }
+ let(:config) { super().merge('session_timeout_ms' => '25000', 'max_poll_interval_ms' => '345000') }
 
  it "sets integer values" do
  expect(org.apache.kafka.clients.consumer.KafkaConsumer).
@@ -78,7 +95,7 @@ describe LogStash::Inputs::Kafka do
  end
 
  context 'integer config' do
- let(:config) { super.merge('session_timeout_ms' => 25200, 'max_poll_interval_ms' => 123_000) }
+ let(:config) { super().merge('session_timeout_ms' => 25200, 'max_poll_interval_ms' => 123_000) }
 
  it "sets integer values" do
  expect(org.apache.kafka.clients.consumer.KafkaConsumer).
@@ -90,7 +107,7 @@ describe LogStash::Inputs::Kafka do
  end
 
  context 'string boolean config' do
- let(:config) { super.merge('enable_auto_commit' => 'false', 'check_crcs' => 'true') }
+ let(:config) { super().merge('enable_auto_commit' => 'false', 'check_crcs' => 'true') }
 
  it "sets parameters" do
  expect(org.apache.kafka.clients.consumer.KafkaConsumer).
@@ -103,7 +120,7 @@ describe LogStash::Inputs::Kafka do
  end
 
  context 'boolean config' do
- let(:config) { super.merge('enable_auto_commit' => true, 'check_crcs' => false) }
+ let(:config) { super().merge('enable_auto_commit' => true, 'check_crcs' => false) }
 
  it "sets parameters" do
  expect(org.apache.kafka.clients.consumer.KafkaConsumer).
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: logstash-integration-kafka
  version: !ruby/object:Gem::Version
- version: 10.6.0
+ version: 10.7.4
  platform: java
  authors:
  - Elastic
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2020-10-28 00:00:00.000000000 Z
+ date: 2021-04-14 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  requirement: !ruby/object:Gem::Requirement
@@ -106,6 +106,40 @@ dependencies:
  - - "<"
  - !ruby/object:Gem::Version
  version: 0.1.0
+ - !ruby/object:Gem::Dependency
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: 0.5.4
+ - - "<"
+ - !ruby/object:Gem::Version
+ version: 1.0.0
+ name: manticore
+ prerelease: false
+ type: :runtime
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: 0.5.4
+ - - "<"
+ - !ruby/object:Gem::Version
+ version: 1.0.0
+ - !ruby/object:Gem::Dependency
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: '1.0'
+ name: logstash-mixin-deprecation_logger_support
+ prerelease: false
+ type: :runtime
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: '1.0'
  - !ruby/object:Gem::Dependency
  requirement: !ruby/object:Gem::Requirement
  requirements:
@@ -202,7 +236,7 @@ files:
  - vendor/jar-dependencies/org/apache/avro/avro/1.9.2/avro-1.9.2.jar
  - vendor/jar-dependencies/org/apache/kafka/kafka-clients/2.5.1/kafka-clients-2.5.1.jar
  - vendor/jar-dependencies/org/apache/kafka/kafka_2.12/2.5.1/kafka_2.12-2.5.1.jar
- - vendor/jar-dependencies/org/glassfish/jersey/core/jersey-common/2.30/jersey-common-2.30.jar
+ - vendor/jar-dependencies/org/glassfish/jersey/core/jersey-common/2.33/jersey-common-2.33.jar
  - vendor/jar-dependencies/org/lz4/lz4-java/1.7.1/lz4-java-1.7.1.jar
  - vendor/jar-dependencies/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar
  - vendor/jar-dependencies/org/xerial/snappy/snappy-java/1.1.7.3/snappy-java-1.1.7.3.jar