logstash-input-elasticsearch 4.9.0 → 4.10.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 4091feeb0b3bf292cfb9afcde7496a72f95168eb570b21d29064ade174bb1352
-  data.tar.gz: cfd02af050bb495dceea16b4ffe5daa6828088d2bd7994639ee08374f87da69d
+  metadata.gz: 84fb0092d51b303586c275c6cfaef3222391d7aabe910450a989291b691930c0
+  data.tar.gz: 955e27896dff25ec6568685f91cc7e34252bc54fbb5182133db1e4dd273efffe
 SHA512:
-  metadata.gz: 873680bea22204e65d519d310f3952f26fd672ce336504e45e99b13988e2da2794b7c05f83b3e1b7a87f317eb51c079759f9df515dc99236577417ecfd379aa1
-  data.tar.gz: 85c66a665eb7a3b503ab7cec64ea2f24b0e16829c4b78a75560d69c6272b1861583dad41e04ea1157ce58435714a86eaa6f4d71b5a4a724edfb9604f77150b99
+  metadata.gz: 2be653a935384617b32905f441060326f2b45eadcaacbf727d1ebe5c82711e3e9f0f3a038b9ca39aad6377e67bb5caf78c8b83a625d82aa8fca267b738dc9cab
+  data.tar.gz: a19916b987c434dfeb396379fa9f552fd001901bc5b37ce60407e4b7df69f5bd3ff76d4122b2f71d27c5d37d983a1dd283dcdf3aa1693730e93c2875f2459a1a
data/CHANGELOG.md CHANGED
@@ -1,3 +1,19 @@
+## 4.10.0
+- Feat: added ecs_compatibility + event_factory support [#149](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/149)
+
+## 4.9.3
+- Fixed SSL handshake hanging indefinitely with proxy setup [#156](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/156)
+
+## 4.9.2
+- Fix: a regression (in LS 7.14.0) where, due to the elasticsearch client update (from 5.0.5 to 7.5.0), the `Authorization`
+  header wasn't passed, leaving the plugin unable to leverage `user`/`password` credentials set by the user.
+  [#153](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/153)
+
+
+## 4.9.1
+- [DOC] Replaced hard-coded links with shared attributes [#143](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/143)
+- [DOC] Added missing quote to docinfo_fields example [#145](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/145)
+
 ## 4.9.0
 - Added `target` option, allowing the hit's source to target a specific field instead of being expanded at the root of the event. This allows the input to play nicer with the Elastic Common Schema when the input does not follow the schema. [#117](https://github.com/logstash-plugins/logstash-input-elasticsearch/issues/117)
 
data/README.md CHANGED
@@ -1,7 +1,7 @@
 # Logstash Plugin
 
 [![Gem Version](https://badge.fury.io/rb/logstash-input-elasticsearch.svg)](https://badge.fury.io/rb/logstash-input-elasticsearch)
-[![Travis Build Status](https://travis-ci.org/logstash-plugins/logstash-input-elasticsearch.svg)](https://travis-ci.org/logstash-plugins/logstash-input-elasticsearch)
+[![Travis Build Status](https://travis-ci.com/logstash-plugins/logstash-input-elasticsearch.svg)](https://travis-ci.com/logstash-plugins/logstash-input-elasticsearch)
 
 This is a plugin for [Logstash](https://github.com/elastic/logstash).
 
data/docs/index.asciidoc CHANGED
@@ -83,8 +83,18 @@ Authentication to a secure Elasticsearch cluster is possible using _one_ of the
 Authorization to a secure Elasticsearch cluster requires `read` permission at index level and `monitoring` permissions at cluster level.
 The `monitoring` permission at cluster level is necessary to perform periodic connectivity checks.
 
+[id="plugins-{type}s-{plugin}-ecs"]
+==== Compatibility with the Elastic Common Schema (ECS)
+
+When ECS compatibility is disabled, `docinfo_target` uses the `"@metadata"` field as its default. With ECS enabled, the plugin
+uses the naming convention `"[@metadata][input][elasticsearch]"` as the default target for placing document information.
+
+The plugin logs a warning when ECS is enabled and `target` isn't set.
+
+TIP: Set the `target` option to avoid potential schema conflicts.
+
 [id="plugins-{type}s-{plugin}-options"]
-==== Elasticsearch Input Configuration Options
+==== Elasticsearch Input configuration options
 
 This plugin supports the following configuration options plus the <<plugins-{type}s-{plugin}-common-options>> described later.
 
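The default-target behavior described above can be sketched in plain Ruby. This is illustrative only, not the plugin's actual `ecs_select` implementation; `default_docinfo_target` is a made-up helper name:

```ruby
# Illustrative sketch of the docinfo_target defaulting described above;
# `default_docinfo_target` is a hypothetical helper, not plugin API.
def default_docinfo_target(ecs_compatibility)
  case ecs_compatibility
  when :disabled then '@metadata'
  when :v1, :v8  then '[@metadata][input][elasticsearch]'
  else raise ArgumentError, "unsupported ecs_compatibility: #{ecs_compatibility}"
  end
end

puts default_docinfo_target(:disabled)  # @metadata
puts default_docinfo_target(:v1)        # [@metadata][input][elasticsearch]
```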
@@ -99,6 +109,7 @@ This plugin supports the following configuration options plus the <<plugins-{typ
 | <<plugins-{type}s-{plugin}-docinfo>> |<<boolean,boolean>>|No
 | <<plugins-{type}s-{plugin}-docinfo_fields>> |<<array,array>>|No
 | <<plugins-{type}s-{plugin}-docinfo_target>> |<<string,string>>|No
+| <<plugins-{type}s-{plugin}-ecs_compatibility>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-hosts>> |<<array,array>>|No
 | <<plugins-{type}s-{plugin}-index>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-password>> |<<password,password>>|No
@@ -111,7 +122,7 @@ This plugin supports the following configuration options plus the <<plugins-{typ
 | <<plugins-{type}s-{plugin}-slices>> |<<number,number>>|No
 | <<plugins-{type}s-{plugin}-ssl>> |<<boolean,boolean>>|No
 | <<plugins-{type}s-{plugin}-socket_timeout_seconds>> | <<number,number>>|No
-| <<plugins-{type}s-{plugin}-target>> | https://www.elastic.co/guide/en/logstash/master/field-references-deepdive.html[field reference] | No
+| <<plugins-{type}s-{plugin}-target>> | {logstash-ref}/field-references-deepdive.html[field reference] | No
 | <<plugins-{type}s-{plugin}-user>> |<<string,string>>|No
 |=======================================================================
 
@@ -130,7 +141,7 @@ Authenticate using Elasticsearch API key. Note that this option also requires en
 
 Format is `id:api_key` where `id` and `api_key` are as returned by the
 Elasticsearch
-https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html[Create
+{ref}/security-api-create-api-key.html[Create
 API key API].
 
 [id="plugins-{type}s-{plugin}-ca_file"]
@@ -150,8 +161,7 @@ SSL Certificate Authority file in PEM encoded format, must also include any chai
 Cloud authentication string ("<username>:<password>" format) is an alternative for the `user`/`password` pair.
 
 For more info, check out the
-https://www.elastic.co/guide/en/logstash/current/connecting-to-cloud.html[Logstash-to-Cloud
-documentation]
+{logstash-ref}/connecting-to-cloud.html[Logstash-to-Cloud documentation].
 
 [id="plugins-{type}s-{plugin}-cloud_id"]
 ===== `cloud_id`
@@ -162,8 +172,7 @@ documentation]
 Cloud ID, from the Elastic Cloud web console. If set `hosts` should not be used.
 
 For more info, check out the
-https://www.elastic.co/guide/en/logstash/current/connecting-to-cloud.html[Logstash-to-Cloud
-documentation]
+{logstash-ref}/connecting-to-cloud.html[Logstash-to-Cloud documentation].
 
 [id="plugins-{type}s-{plugin}-connect_timeout_seconds"]
 ===== `connect_timeout_seconds`
@@ -199,13 +208,14 @@ Example
     size => 500
     scroll => "5m"
     docinfo => true
+    docinfo_target => "[@metadata][doc]"
   }
 }
 output {
   elasticsearch {
-    index => "copy-of-production.%{[@metadata][_index]}"
-    document_type => "%{[@metadata][_type]}"
-    document_id => "%{[@metadata][_id]}"
+    index => "copy-of-production.%{[@metadata][doc][_index]}"
+    document_type => "%{[@metadata][doc][_type]}"
+    document_id => "%{[@metadata][doc][_id]}"
   }
 }
 
@@ -216,8 +226,9 @@ Example
 input {
   elasticsearch {
     docinfo => true
+    docinfo_target => "[@metadata][doc]"
     add_field => {
-      identifier => %{[@metadata][_index]}:%{[@metadata][_type]}:%{[@metadata][_id]}"
+      identifier => "%{[@metadata][doc][_index]}:%{[@metadata][doc][_type]}:%{[@metadata][doc][_id]}"
     }
   }
 }
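Under the hood, the `docinfo` feature copies the listed metadata keys from each hit into the field named by `docinfo_target`. A toy Ruby illustration (the sample hit hash is made up, shaped like a typical search response):

```ruby
# Toy illustration of docinfo handling: copy selected metadata keys from a
# search hit into a hash that would be stored at docinfo_target.
# The sample hit below is made up for illustration.
hit = {
  '_index'  => 'production',
  '_type'   => '_doc',
  '_id'     => 'C5b2xLQwTZa76jBmHIbwaQ',
  '_source' => { 'message' => 'hello' }
}
docinfo_fields = %w[_index _type _id] # the plugin's default docinfo_fields
docinfo = docinfo_fields.each_with_object({}) { |field, acc| acc[field] = hit[field] }

puts docinfo.inspect
```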
@@ -238,11 +249,25 @@ more information.
 ===== `docinfo_target`
 
 * Value type is <<string,string>>
-* Default value is `"@metadata"`
+* Default value depends on whether <<plugins-{type}s-{plugin}-ecs_compatibility>> is enabled:
+** ECS Compatibility disabled: `"@metadata"`
+** ECS Compatibility enabled: `"[@metadata][input][elasticsearch]"`
+
+If document metadata storage is requested by enabling the `docinfo` option,
+this option names the field under which to store the metadata fields as subfields.
+
+[id="plugins-{type}s-{plugin}-ecs_compatibility"]
+===== `ecs_compatibility`
+
+* Value type is <<string,string>>
+* Supported values are:
+** `disabled`: non-ECS behavior (document information defaults to the `"@metadata"` field)
+** `v1`,`v8`: Elastic Common Schema compliant behavior
+* Default value depends on which version of Logstash is running:
+** When Logstash provides a `pipeline.ecs_compatibility` setting, its value is used as the default
+** Otherwise, the default value is `disabled`
 
-If document metadata storage is requested by enabling the `docinfo`
-option, this option names the field under which to store the metadata
-fields as subfields.
+Controls this plugin's compatibility with the {ecs-ref}[Elastic Common Schema (ECS)].
 
 [id="plugins-{type}s-{plugin}-hosts"]
 ===== `hosts`
@@ -260,10 +285,9 @@ can be either IP, HOST, IP:port, or HOST:port. The port defaults to
 * Value type is <<string,string>>
 * Default value is `"logstash-*"`
 
-The index or alias to search. See
-https://www.elastic.co/guide/en/elasticsearch/reference/current/multi-index.html[Multi Indices documentation]
-in the Elasticsearch documentation for more information on how to reference
-multiple indices.
+The index or alias to search. See {ref}/multi-index.html[Multi Indices
+documentation] in the Elasticsearch documentation for more information on how to
+reference multiple indices.
 
 
 [id="plugins-{type}s-{plugin}-password"]
@@ -292,9 +316,8 @@ environment variables e.g. `proxy => '${LS_PROXY:}'`.
 * Value type is <<string,string>>
 * Default value is `'{ "sort": [ "_doc" ] }'`
 
-The query to be executed. Read the
-https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html[Elasticsearch query DSL documentation]
-for more information.
+The query to be executed. Read the {ref}/query-dsl.html[Elasticsearch query DSL
+documentation] for more information.
 
 [id="plugins-{type}s-{plugin}-request_timeout_seconds"]
 ===== `request_timeout_seconds`
@@ -345,7 +368,7 @@ This allows you to set the maximum number of hits returned per scroll.
 
 In some cases, it is possible to improve overall throughput by consuming multiple
 distinct slices of a query simultaneously using
-https://www.elastic.co/guide/en/elasticsearch/reference/current/paginate-search-results.html#slice-scroll[sliced scrolls],
+{ref}/paginate-search-results.html#slice-scroll[sliced scrolls],
 especially if the pipeline is spending significant time waiting on Elasticsearch
 to provide results.
 
@@ -382,7 +405,7 @@ Socket timeouts usually occur while waiting for the first byte of a response, su
 [id="plugins-{type}s-{plugin}-target"]
 ===== `target`
 
-* Value type is https://www.elastic.co/guide/en/logstash/master/field-references-deepdive.html[field reference]
+* Value type is {logstash-ref}/field-references-deepdive.html[field reference]
 * There is no default value for this setting.
 
 Without a `target`, events are created from each hit's `_source` at the root level.
@@ -406,4 +429,4 @@ empty string authentication will be disabled.
 [id="plugins-{type}s-{plugin}-common-options"]
 include::{include_path}/{type}.asciidoc[]
 
-:default_codec!:
+:no_codec!:
@@ -4,9 +4,15 @@ require "logstash/namespace"
 require "logstash/json"
 require "logstash/util/safe_uri"
 require 'logstash/plugin_mixins/validator_support/field_reference_validation_adapter'
+require 'logstash/plugin_mixins/event_support/event_factory_adapter'
+require 'logstash/plugin_mixins/ecs_compatibility_support'
+require 'logstash/plugin_mixins/ecs_compatibility_support/target_check'
 require "base64"
-require_relative "patch"
 
+require "elasticsearch"
+require "elasticsearch/transport/transport/http/manticore"
+require_relative "elasticsearch/patches/_elasticsearch_transport_http_manticore"
+require_relative "elasticsearch/patches/_elasticsearch_transport_connections_selector"
 
 # .Compatibility Note
 # [NOTE]
@@ -63,12 +69,16 @@ require_relative "patch"
 #
 #
 class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
+
+  include LogStash::PluginMixins::ECSCompatibilitySupport(:disabled, :v1, :v8 => :v1)
+  include LogStash::PluginMixins::ECSCompatibilitySupport::TargetCheck
+
+  include LogStash::PluginMixins::EventSupport::EventFactoryAdapter
+
   extend LogStash::PluginMixins::ValidatorSupport::FieldReferenceValidationAdapter
 
   config_name "elasticsearch"
 
-  default :codec, "json"
-
   # List of elasticsearch hosts to use for querying.
   # Each host can be either IP, HOST, IP:port or HOST:port.
   # Port defaults to 9200
@@ -125,8 +135,9 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
   #
   config :docinfo, :validate => :boolean, :default => false
 
-  # Where to move the Elasticsearch document information. By default we use the @metadata field.
-  config :docinfo_target, :validate=> :string, :default => LogStash::Event::METADATA
+  # Where to move the Elasticsearch document information.
+  # default: [@metadata][input][elasticsearch] in ECS mode, @metadata field otherwise
+  config :docinfo_target, :validate=> :field_reference
 
   # List of document metadata to move to the `docinfo_target` field.
   # To learn more about Elasticsearch metadata fields read
@@ -181,10 +192,16 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
   # If set, the _source of each hit will be added nested under the target instead of at the top-level
   config :target, :validate => :field_reference
 
+  def initialize(params={})
+    super(params)
+
+    if docinfo_target.nil?
+      @docinfo_target = ecs_select[disabled: '@metadata', v1: '[@metadata][input][elasticsearch]']
+    end
+  end
+
   def register
-    require "elasticsearch"
     require "rufus/scheduler"
-    require "elasticsearch/transport/transport/http/manticore"
 
     @options = {
       :index => @index,
@@ -225,7 +242,6 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
   end
 
 
-
   def run(output_queue)
     if @schedule
       @scheduler = Rufus::Scheduler.new(:max_work_threads => 1)
@@ -267,7 +283,6 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
 
     logger.info("Slice starting", slice_id: slice_id, slices: @slices) unless slice_id.nil?
 
-    scroll_id = nil
     begin
       r = search_request(slice_options)
 
@@ -298,47 +313,41 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
     [r['hits']['hits'].any?, r['_scroll_id']]
   rescue => e
     # this will typically be triggered by a scroll timeout
-    logger.error("Scroll request error, aborting scroll", error: e.inspect)
+    logger.error("Scroll request error, aborting scroll", message: e.message, exception: e.class)
     # return no hits and original scroll_id so we can try to clear it
     [false, scroll_id]
   end
 
   def push_hit(hit, output_queue)
-    if @target.nil?
-      event = LogStash::Event.new(hit['_source'])
-    else
-      event = LogStash::Event.new
-      event.set(@target, hit['_source'])
-    end
-
-    if @docinfo
-      # do not assume event[@docinfo_target] to be in-place updatable. first get it, update it, then at the end set it in the event.
-      docinfo_target = event.get(@docinfo_target) || {}
+    event = targeted_event_factory.new_event hit['_source']
+    set_docinfo_fields(hit, event) if @docinfo
+    decorate(event)
+    output_queue << event
+  end
 
-      unless docinfo_target.is_a?(Hash)
-        @logger.error("Elasticsearch Input: Incompatible Event, incompatible type for the docinfo_target=#{@docinfo_target} field in the `_source` document, expected a hash got:", :docinfo_target_type => docinfo_target.class, :event => event)
+  def set_docinfo_fields(hit, event)
+    # do not assume event[@docinfo_target] to be in-place updatable. first get it, update it, then at the end set it in the event.
+    docinfo_target = event.get(@docinfo_target) || {}
 
-        # TODO: (colin) I am not sure raising is a good strategy here?
-        raise Exception.new("Elasticsearch input: incompatible event")
-      end
+    unless docinfo_target.is_a?(Hash)
+      @logger.error("Incompatible Event, incompatible type for the docinfo_target=#{@docinfo_target} field in the `_source` document, expected a hash got:", :docinfo_target_type => docinfo_target.class, :event => event.to_hash_with_metadata)
 
-      @docinfo_fields.each do |field|
-        docinfo_target[field] = hit[field]
-      end
-
-      event.set(@docinfo_target, docinfo_target)
+      # TODO: (colin) I am not sure raising is a good strategy here?
+      raise Exception.new("Elasticsearch input: incompatible event")
     end
 
-    decorate(event)
+    @docinfo_fields.each do |field|
+      docinfo_target[field] = hit[field]
+    end
 
-    output_queue << event
+    event.set(@docinfo_target, docinfo_target)
   end
 
   def clear_scroll(scroll_id)
     @client.clear_scroll(scroll_id: scroll_id) if scroll_id
   rescue => e
     # ignore & log any clear_scroll errors
-    logger.warn("Ignoring clear_scroll exception", message: e.message)
+    logger.warn("Ignoring clear_scroll exception", message: e.message, exception: e.class)
   end
 
   def scroll_request scroll_id
@@ -388,14 +397,14 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
     return {} unless user && password && password.value
 
     token = ::Base64.strict_encode64("#{user}:#{password.value}")
-    { Authorization: "Basic #{token}" }
+    { 'Authorization' => "Basic #{token}" }
   end
 
   def setup_api_key(api_key)
     return {} unless (api_key && api_key.value)
 
     token = ::Base64.strict_encode64(api_key.value)
-    { Authorization: "ApiKey #{token}" }
+    { 'Authorization' => "ApiKey #{token}" }
   end
 
   def fill_user_password_from_cloud_auth
  def fill_user_password_from_cloud_auth
@@ -448,6 +457,9 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
448
457
  [ cloud_auth.username, cloud_auth.password ]
449
458
  end
450
459
 
460
+ # @private used by unit specs
461
+ attr_reader :client
462
+
451
463
  module URIOrEmptyValidator
452
464
  ##
453
465
  # @override to provide :uri_or_empty validator
@@ -1,10 +1,13 @@
-if Gem.loaded_specs['elasticsearch-transport'].version >= Gem::Version.new("7.2.0")
+require 'elasticsearch'
+require 'elasticsearch/transport/transport/connections/selector'
+
+if Gem.loaded_specs['elasticsearch-transport'].version < Gem::Version.new("7.2.0")
   # elasticsearch-transport versions prior to 7.2.0 suffered of a race condition on accessing
-  # the connection pool. This issue was fixed with https://github.com/elastic/elasticsearch-ruby/commit/15f9d78591a6e8823948494d94b15b0ca38819d1
-  # This plugin, at the moment, is forced to use v5.x so we have to monkey patch the gem. When this requirement
-  # ceases, this patch could be removed.
-  puts "WARN remove the patch code into logstash-input-elasticsearch plugin"
-else
+  # the connection pool. This issue was fixed (in 7.2.0) with
+  # https://github.com/elastic/elasticsearch-ruby/commit/15f9d78591a6e8823948494d94b15b0ca38819d1
+  #
+  # This plugin, at the moment, is using elasticsearch >= 5.0.5
+  # When this requirement ceases, this patch could be removed.
   module Elasticsearch
     module Transport
       module Transport
@@ -0,0 +1,33 @@
+# encoding: utf-8
+require "elasticsearch"
+require "elasticsearch/transport/transport/http/manticore"
+
+es_client_version = Gem.loaded_specs['elasticsearch-transport'].version
+if es_client_version >= Gem::Version.new('7.2') && es_client_version < Gem::Version.new('7.16')
+  # elasticsearch-transport 7.2.0 - 7.14.0 had a bug where setting http headers
+  #   ES::Client.new ..., transport_options: { headers: { 'Authorization' => ... } }
+  # would be lost https://github.com/elastic/elasticsearch-ruby/issues/1428
+  #
+  # NOTE: needs to be idempotent as filter ES plugin might apply the same patch!
+  #
+  # @private
+  module Elasticsearch
+    module Transport
+      module Transport
+        module HTTP
+          class Manticore
+
+            def apply_headers(request_options, options)
+              headers = (options && options[:headers]) || {}
+              headers[CONTENT_TYPE_STR] = find_value(headers, CONTENT_TYPE_REGEX) || DEFAULT_CONTENT_TYPE
+              headers[USER_AGENT_STR] = find_value(headers, USER_AGENT_REGEX) || user_agent_header
+              headers[ACCEPT_ENCODING] = GZIP if use_compression?
+              (request_options[:headers] ||= {}).merge!(headers) # this line was changed
+            end
+
+          end
+        end
+      end
+    end
+  end
+end
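The version gate above relies on `Gem::Version`'s semantic comparison, so only the affected client releases get patched. A minimal standalone sketch; `needs_headers_patch?` is a made-up name for illustration:

```ruby
require "rubygems"

# Minimal sketch of the version gate used by the patch above: only
# elasticsearch-transport >= 7.2 and < 7.16 needs the headers fix.
# `needs_headers_patch?` is a hypothetical name for illustration.
def needs_headers_patch?(version_string)
  v = Gem::Version.new(version_string)
  v >= Gem::Version.new('7.2') && v < Gem::Version.new('7.16')
end

%w[5.0.5 7.1.1 7.2.0 7.5.0 7.16.0].each do |v|
  puts "#{v}: #{needs_headers_patch?(v)}"
end
```

`Gem::Version` handles multi-component versions correctly (e.g. `"7.10" > "7.2"`), which a plain string comparison would get wrong.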
@@ -1,7 +1,7 @@
 Gem::Specification.new do |s|
 
   s.name = 'logstash-input-elasticsearch'
-  s.version = '4.9.0'
+  s.version = '4.10.0'
   s.licenses = ['Apache License (2.0)']
   s.summary = "Reads query results from an Elasticsearch cluster"
   s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program"
@@ -20,20 +20,20 @@ Gem::Specification.new do |s|
   s.metadata = { "logstash_plugin" => "true", "logstash_group" => "input" }
 
   # Gem dependencies
-  s.add_runtime_dependency "logstash-mixin-validator_support", '~> 1.0'
   s.add_runtime_dependency "logstash-core-plugin-api", ">= 1.60", "<= 2.99"
+  s.add_runtime_dependency 'logstash-mixin-ecs_compatibility_support', '~> 1.3'
+  s.add_runtime_dependency 'logstash-mixin-event_support', '~> 1.0'
+  s.add_runtime_dependency "logstash-mixin-validator_support", '~> 1.0'
 
-  s.add_runtime_dependency 'elasticsearch', '>= 5.0.3'
+  s.add_runtime_dependency 'elasticsearch', '>= 5.0.5' # LS >= 6.7 and < 7.14 all used version 5.0.5
 
-  s.add_runtime_dependency 'logstash-codec-json'
-  s.add_runtime_dependency 'logstash-codec-plain'
-  s.add_runtime_dependency 'sequel'
   s.add_runtime_dependency 'tzinfo'
   s.add_runtime_dependency 'tzinfo-data'
   s.add_runtime_dependency 'rufus-scheduler'
-  s.add_runtime_dependency 'manticore', "~> 0.6"
-  s.add_runtime_dependency 'faraday', "~> 0.15.4"
+  s.add_runtime_dependency 'manticore', ">= 0.7.1"
 
+  s.add_development_dependency 'logstash-codec-plain'
+  s.add_development_dependency 'faraday', "~> 0.15.4"
   s.add_development_dependency 'logstash-devutils'
   s.add_development_dependency 'timecop'
 end