logstash-input-elasticsearch 4.9.0 → 4.12.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 4091feeb0b3bf292cfb9afcde7496a72f95168eb570b21d29064ade174bb1352
- data.tar.gz: cfd02af050bb495dceea16b4ffe5daa6828088d2bd7994639ee08374f87da69d
+ metadata.gz: e2427b28640265b075a0e21240cf410b2e2252d7516ac7bd0955d48087317f7f
+ data.tar.gz: dbc7d84f18348e7fa2292d2b68a5db59a5ffda724eb090ff17503e10e3ff130d
  SHA512:
- metadata.gz: 873680bea22204e65d519d310f3952f26fd672ce336504e45e99b13988e2da2794b7c05f83b3e1b7a87f317eb51c079759f9df515dc99236577417ecfd379aa1
- data.tar.gz: 85c66a665eb7a3b503ab7cec64ea2f24b0e16829c4b78a75560d69c6272b1861583dad41e04ea1157ce58435714a86eaa6f4d71b5a4a724edfb9604f77150b99
+ metadata.gz: dd3f9693c355505fbe5a971a46899ba285e057406db5807d4226b4ace61449f41f20409926d49a5a69c19338020069de13e4a4da7028db9b8f7db6d3ce4e0e6c
+ data.tar.gz: 0c170d69801feac7d0df3a79ef4351a6237fa30d5ad3b69e0c8af660981f40a1d814e17d60cebd0e555cdee6613de93c7e407d2cba28f0092f85f5da7446b966
data/CHANGELOG.md CHANGED
@@ -1,3 +1,28 @@
+ ## 4.12.1
+ - Fixed too_long_frame_exception by passing scroll_id in the body [#159](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/159)
+
+ ## 4.12.0
+ - Feat: Update Elasticsearch client to 7.14.0 [#157](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/157)
+
+ ## 4.11.0
+ - Feat: add user-agent header passed to the Elasticsearch HTTP connection [#158](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/158)
+
+ ## 4.10.0
+ - Feat: added ecs_compatibility + event_factory support [#149](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/149)
+
+ ## 4.9.3
+ - Fixed SSL handshake hanging indefinitely with proxy setup [#156](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/156)
+
+ ## 4.9.2
+ - Fix: a regression (in LS 7.14.0) where, due to the elasticsearch client update (from 5.0.5 to 7.5.0), the `Authorization`
+ header wasn't passed; this prevented the plugin from leveraging the `user`/`password` credentials set by the user.
+ [#153](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/153)
+
+
+ ## 4.9.1
+ - [DOC] Replaced hard-coded links with shared attributes [#143](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/143)
+ - [DOC] Added missing quote to docinfo_fields example [#145](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/145)
+
  ## 4.9.0
  - Added `target` option, allowing the hit's source to target a specific field instead of being expanded at the root of the event. This allows the input to play nicer with the Elastic Common Schema when the input does not follow the schema. [#117](https://github.com/logstash-plugins/logstash-input-elasticsearch/issues/117)
 
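The 4.12.1 entry's fix (sending `scroll_id` in the request body rather than on the URL) can be sketched in plain Ruby; `clear_scroll_request` is an illustrative helper for this changelog note, not the plugin's actual API:

```ruby
# Long scroll IDs on the URL can exceed the server's HTTP initial-line
# limit and trigger too_long_frame_exception; a request body has no
# such limit, so the ID is moved there.
def clear_scroll_request(scroll_id)
  # before the fix the ID was passed as a top-level param (URL-encoded);
  # after the fix it rides in the JSON body:
  { :body => { :scroll_id => scroll_id } }
end
```

With the elasticsearch-ruby client, a hash of this shape is what gets handed to `clear_scroll`.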
data/README.md CHANGED
@@ -1,7 +1,7 @@
  # Logstash Plugin
 
  [![Gem Version](https://badge.fury.io/rb/logstash-input-elasticsearch.svg)](https://badge.fury.io/rb/logstash-input-elasticsearch)
- [![Travis Build Status](https://travis-ci.org/logstash-plugins/logstash-input-elasticsearch.svg)](https://travis-ci.org/logstash-plugins/logstash-input-elasticsearch)
+ [![Travis Build Status](https://travis-ci.com/logstash-plugins/logstash-input-elasticsearch.svg)](https://travis-ci.com/logstash-plugins/logstash-input-elasticsearch)
 
  This is a plugin for [Logstash](https://github.com/elastic/logstash).
 
data/docs/index.asciidoc CHANGED
@@ -83,8 +83,18 @@ Authentication to a secure Elasticsearch cluster is possible using _one_ of the
  Authorization to a secure Elasticsearch cluster requires `read` permission at index level and `monitoring` permissions at cluster level.
  The `monitoring` permission at cluster level is necessary to perform periodic connectivity checks.
 
+ [id="plugins-{type}s-{plugin}-ecs"]
+ ==== Compatibility with the Elastic Common Schema (ECS)
+
+ When ECS compatibility is disabled, `docinfo_target` defaults to the `"@metadata"` field; with ECS enabled, the plugin
+ defaults to the naming convention `"[@metadata][input][elasticsearch]"` as the target for placing document information.
+
+ The plugin logs a warning when ECS is enabled and `target` isn't set.
+
+ TIP: Set the `target` option to avoid potential schema conflicts.
+
  [id="plugins-{type}s-{plugin}-options"]
- ==== Elasticsearch Input Configuration Options
+ ==== Elasticsearch Input configuration options
 
  This plugin supports the following configuration options plus the <<plugins-{type}s-{plugin}-common-options>> described later.
 
@@ -99,6 +109,7 @@ This plugin supports the following configuration options plus the <<plugins-{typ
  | <<plugins-{type}s-{plugin}-docinfo>> |<<boolean,boolean>>|No
  | <<plugins-{type}s-{plugin}-docinfo_fields>> |<<array,array>>|No
  | <<plugins-{type}s-{plugin}-docinfo_target>> |<<string,string>>|No
+ | <<plugins-{type}s-{plugin}-ecs_compatibility>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-hosts>> |<<array,array>>|No
  | <<plugins-{type}s-{plugin}-index>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-password>> |<<password,password>>|No
@@ -111,7 +122,7 @@ This plugin supports the following configuration options plus the <<plugins-{typ
  | <<plugins-{type}s-{plugin}-slices>> |<<number,number>>|No
  | <<plugins-{type}s-{plugin}-ssl>> |<<boolean,boolean>>|No
  | <<plugins-{type}s-{plugin}-socket_timeout_seconds>> | <<number,number>>|No
- | <<plugins-{type}s-{plugin}-target>> | https://www.elastic.co/guide/en/logstash/master/field-references-deepdive.html[field reference] | No
+ | <<plugins-{type}s-{plugin}-target>> | {logstash-ref}/field-references-deepdive.html[field reference] | No
  | <<plugins-{type}s-{plugin}-user>> |<<string,string>>|No
  |=======================================================================
 
@@ -130,7 +141,7 @@ Authenticate using Elasticsearch API key. Note that this option also requires en
 
  Format is `id:api_key` where `id` and `api_key` are as returned by the
  Elasticsearch
- https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html[Create
+ {ref}/security-api-create-api-key.html[Create
  API key API].
 
  [id="plugins-{type}s-{plugin}-ca_file"]
@@ -150,8 +161,7 @@ SSL Certificate Authority file in PEM encoded format, must also include any chai
  Cloud authentication string ("<username>:<password>" format) is an alternative for the `user`/`password` pair.
 
  For more info, check out the
- https://www.elastic.co/guide/en/logstash/current/connecting-to-cloud.html[Logstash-to-Cloud
- documentation]
+ {logstash-ref}/connecting-to-cloud.html[Logstash-to-Cloud documentation].
 
  [id="plugins-{type}s-{plugin}-cloud_id"]
  ===== `cloud_id`
@@ -162,8 +172,7 @@ documentation]
  Cloud ID, from the Elastic Cloud web console. If set `hosts` should not be used.
 
  For more info, check out the
- https://www.elastic.co/guide/en/logstash/current/connecting-to-cloud.html[Logstash-to-Cloud
- documentation]
+ {logstash-ref}/connecting-to-cloud.html[Logstash-to-Cloud documentation].
 
  [id="plugins-{type}s-{plugin}-connect_timeout_seconds"]
  ===== `connect_timeout_seconds`
@@ -199,13 +208,14 @@ Example
  size => 500
  scroll => "5m"
  docinfo => true
+ docinfo_target => "[@metadata][doc]"
  }
  }
  output {
  elasticsearch {
- index => "copy-of-production.%{[@metadata][_index]}"
- document_type => "%{[@metadata][_type]}"
- document_id => "%{[@metadata][_id]}"
+ index => "copy-of-production.%{[@metadata][doc][_index]}"
+ document_type => "%{[@metadata][doc][_type]}"
+ document_id => "%{[@metadata][doc][_id]}"
  }
  }
 
@@ -216,8 +226,9 @@ Example
  input {
  elasticsearch {
  docinfo => true
+ docinfo_target => "[@metadata][doc]"
  add_field => {
- identifier => %{[@metadata][_index]}:%{[@metadata][_type]}:%{[@metadata][_id]}"
+ identifier => "%{[@metadata][doc][_index]}:%{[@metadata][doc][_type]}:%{[@metadata][doc][_id]}"
  }
  }
  }
@@ -238,11 +249,25 @@ more information.
  ===== `docinfo_target`
 
  * Value type is <<string,string>>
- * Default value is `"@metadata"`
+ * Default value depends on whether <<plugins-{type}s-{plugin}-ecs_compatibility>> is enabled:
+ ** ECS Compatibility disabled: `"@metadata"`
+ ** ECS Compatibility enabled: `"[@metadata][input][elasticsearch]"`
+
+ If document metadata storage is requested by enabling the `docinfo` option,
+ this option names the field under which to store the metadata fields as subfields.
+
+ [id="plugins-{type}s-{plugin}-ecs_compatibility"]
+ ===== `ecs_compatibility`
+
+ * Value type is <<string,string>>
+ * Supported values are:
+ ** `disabled`: document metadata added under `@metadata`
+ ** `v1`,`v8`: Elastic Common Schema compliant behavior
+ * Default value depends on which version of Logstash is running:
+ ** When Logstash provides a `pipeline.ecs_compatibility` setting, its value is used as the default
+ ** Otherwise, the default value is `disabled`
 
- If document metadata storage is requested by enabling the `docinfo`
- option, this option names the field under which to store the metadata
- fields as subfields.
+ Controls this plugin's compatibility with the {ecs-ref}[Elastic Common Schema (ECS)].
 
  [id="plugins-{type}s-{plugin}-hosts"]
  ===== `hosts`
@@ -260,10 +285,9 @@ can be either IP, HOST, IP:port, or HOST:port. The port defaults to
  * Value type is <<string,string>>
  * Default value is `"logstash-*"`
 
- The index or alias to search. See
- https://www.elastic.co/guide/en/elasticsearch/reference/current/multi-index.html[Multi Indices documentation]
- in the Elasticsearch documentation for more information on how to reference
- multiple indices.
+ The index or alias to search. See {ref}/multi-index.html[Multi Indices
+ documentation] in the Elasticsearch documentation for more information on how to
+ reference multiple indices.
 
 
  [id="plugins-{type}s-{plugin}-password"]
@@ -292,9 +316,8 @@ environment variables e.g. `proxy => '${LS_PROXY:}'`.
  * Value type is <<string,string>>
  * Default value is `'{ "sort": [ "_doc" ] }'`
 
- The query to be executed. Read the
- https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html[Elasticsearch query DSL documentation]
- for more information.
+ The query to be executed. Read the {ref}/query-dsl.html[Elasticsearch query DSL
+ documentation] for more information.
 
  [id="plugins-{type}s-{plugin}-request_timeout_seconds"]
  ===== `request_timeout_seconds`
@@ -345,7 +368,7 @@ This allows you to set the maximum number of hits returned per scroll.
 
  In some cases, it is possible to improve overall throughput by consuming multiple
  distinct slices of a query simultaneously using
- https://www.elastic.co/guide/en/elasticsearch/reference/current/paginate-search-results.html#slice-scroll[sliced scrolls],
+ {ref}/paginate-search-results.html#slice-scroll[sliced scrolls],
  especially if the pipeline is spending significant time waiting on Elasticsearch
  to provide results.
 
@@ -382,7 +405,7 @@ Socket timeouts usually occur while waiting for the first byte of a response, su
  [id="plugins-{type}s-{plugin}-target"]
  ===== `target`
 
- * Value type is https://www.elastic.co/guide/en/logstash/master/field-references-deepdive.html[field reference]
+ * Value type is {logstash-ref}/field-references-deepdive.html[field reference]
  * There is no default value for this setting.
 
  Without a `target`, events are created from each hit's `_source` at the root level.
@@ -406,4 +429,4 @@ empty string authentication will be disabled.
  [id="plugins-{type}s-{plugin}-common-options"]
  include::{include_path}/{type}.asciidoc[]
 
- :default_codec!:
+ :no_codec!:
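The ECS-dependent `docinfo_target` default documented above can be sketched in plain Ruby; `default_docinfo_target` is an illustrative stand-in for the plugin's `ecs_select` lookup, not its real API:

```ruby
# In disabled mode document info lands under @metadata; in ECS modes
# (v1/v8) it lands under the namespaced ECS-style field instead.
def default_docinfo_target(ecs_compatibility)
  case ecs_compatibility
  when :disabled then '@metadata'
  when :v1, :v8  then '[@metadata][input][elasticsearch]'
  else raise ArgumentError, "unsupported mode: #{ecs_compatibility.inspect}"
  end
end
```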
@@ -1,10 +1,13 @@
- if Gem.loaded_specs['elasticsearch-transport'].version >= Gem::Version.new("7.2.0")
+ require 'elasticsearch'
+ require 'elasticsearch/transport/transport/connections/selector'
+
+ if Gem.loaded_specs['elasticsearch-transport'].version < Gem::Version.new("7.2.0")
  # elasticsearch-transport versions prior to 7.2.0 suffered of a race condition on accessing
- # the connection pool. This issue was fixed with https://github.com/elastic/elasticsearch-ruby/commit/15f9d78591a6e8823948494d94b15b0ca38819d1
- # This plugin, at the moment, is forced to use v5.x so we have to monkey patch the gem. When this requirement
- # ceases, this patch could be removed.
- puts "WARN remove the patch code into logstash-input-elasticsearch plugin"
- else
+ # the connection pool. This issue was fixed (in 7.2.0) with
+ # https://github.com/elastic/elasticsearch-ruby/commit/15f9d78591a6e8823948494d94b15b0ca38819d1
+ #
+ # This plugin, at the moment, is using elasticsearch >= 5.0.5
+ # When this requirement ceases, this patch could be removed.
  module Elasticsearch
  module Transport
  module Transport
@@ -0,0 +1,43 @@
+ # encoding: utf-8
+ require "elasticsearch"
+ require "elasticsearch/transport/transport/http/manticore"
+
+ es_client_version = Gem.loaded_specs['elasticsearch-transport'].version
+ if es_client_version >= Gem::Version.new('7.2') && es_client_version < Gem::Version.new('7.16')
+ # elasticsearch-transport 7.2.0 - 7.14.0 had a bug where setting http headers
+ # ES::Client.new ..., transport_options: { headers: { 'Authorization' => ... } }
+ # would be lost https://github.com/elastic/elasticsearch-ruby/issues/1428
+ #
+ # NOTE: needs to be idempotent as filter ES plugin might apply the same patch!
+ #
+ # @private
+ module Elasticsearch
+ module Transport
+ module Transport
+ module HTTP
+ class Manticore
+
+ def apply_headers(request_options, options)
+ headers = (options && options[:headers]) || {}
+ headers[CONTENT_TYPE_STR] = find_value(headers, CONTENT_TYPE_REGEX) || DEFAULT_CONTENT_TYPE
+
+ # this code is necessary to grab the correct user-agent header
+ # when this method is invoked with apply_headers(@request_options, options)
+ # from https://github.com/elastic/elasticsearch-ruby/blob/v7.14.0/elasticsearch-transport/lib/elasticsearch/transport/transport/http/manticore.rb#L113-L114
+ transport_user_agent = nil
+ if (options && options[:transport_options] && options[:transport_options][:headers])
+ transport_headers = options[:transport_options][:headers]
+ transport_user_agent = find_value(transport_headers, USER_AGENT_REGEX)
+ end
+
+ headers[USER_AGENT_STR] = transport_user_agent || find_value(headers, USER_AGENT_REGEX) || user_agent_header
+ headers[ACCEPT_ENCODING] = GZIP if use_compression?
+ (request_options[:headers] ||= {}).merge!(headers) # this line was changed
+ end
+
+ end
+ end
+ end
+ end
+ end
+ end
@@ -4,9 +4,15 @@ require "logstash/namespace"
  require "logstash/json"
  require "logstash/util/safe_uri"
  require 'logstash/plugin_mixins/validator_support/field_reference_validation_adapter'
+ require 'logstash/plugin_mixins/event_support/event_factory_adapter'
+ require 'logstash/plugin_mixins/ecs_compatibility_support'
+ require 'logstash/plugin_mixins/ecs_compatibility_support/target_check'
  require "base64"
- require_relative "patch"
 
+ require "elasticsearch"
+ require "elasticsearch/transport/transport/http/manticore"
+ require_relative "elasticsearch/patches/_elasticsearch_transport_http_manticore"
+ require_relative "elasticsearch/patches/_elasticsearch_transport_connections_selector"
 
  # .Compatibility Note
  # [NOTE]
@@ -63,12 +69,16 @@ require_relative "patch"
  #
  #
  class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
+
+ include LogStash::PluginMixins::ECSCompatibilitySupport(:disabled, :v1, :v8 => :v1)
+ include LogStash::PluginMixins::ECSCompatibilitySupport::TargetCheck
+
+ include LogStash::PluginMixins::EventSupport::EventFactoryAdapter
+
  extend LogStash::PluginMixins::ValidatorSupport::FieldReferenceValidationAdapter
 
  config_name "elasticsearch"
 
- default :codec, "json"
-
  # List of elasticsearch hosts to use for querying.
  # Each host can be either IP, HOST, IP:port or HOST:port.
  # Port defaults to 9200
@@ -125,8 +135,9 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
  #
  config :docinfo, :validate => :boolean, :default => false
 
- # Where to move the Elasticsearch document information. By default we use the @metadata field.
- config :docinfo_target, :validate=> :string, :default => LogStash::Event::METADATA
+ # Where to move the Elasticsearch document information.
+ # default: [@metadata][input][elasticsearch] in ECS mode, @metadata field otherwise
+ config :docinfo_target, :validate=> :field_reference
 
  # List of document metadata to move to the `docinfo_target` field.
  # To learn more about Elasticsearch metadata fields read
@@ -181,10 +192,16 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
  # If set, the _source of each hit will be added nested under the target instead of at the top-level
  config :target, :validate => :field_reference
 
+ def initialize(params={})
+ super(params)
+
+ if docinfo_target.nil?
+ @docinfo_target = ecs_select[disabled: '@metadata', v1: '[@metadata][input][elasticsearch]']
+ end
+ end
+
  def register
- require "elasticsearch"
  require "rufus/scheduler"
- require "elasticsearch/transport/transport/http/manticore"
 
  @options = {
  :index => @index,
@@ -205,6 +222,7 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
  transport_options = {:headers => {}}
  transport_options[:headers].merge!(setup_basic_auth(user, password))
  transport_options[:headers].merge!(setup_api_key(api_key))
+ transport_options[:headers].merge!({'user-agent' => prepare_user_agent()})
  transport_options[:request_timeout] = @request_timeout_seconds unless @request_timeout_seconds.nil?
  transport_options[:connect_timeout] = @connect_timeout_seconds unless @connect_timeout_seconds.nil?
  transport_options[:socket_timeout] = @socket_timeout_seconds unless @socket_timeout_seconds.nil?
@@ -222,10 +240,11 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
  :transport_class => ::Elasticsearch::Transport::Transport::HTTP::Manticore,
  :ssl => ssl_options
  )
+ test_connection!
+ @client
  end
 
 
-
  def run(output_queue)
  if @schedule
  @scheduler = Rufus::Scheduler.new(:max_work_threads => 1)
@@ -267,7 +286,6 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
 
  logger.info("Slice starting", slice_id: slice_id, slices: @slices) unless slice_id.nil?
 
- scroll_id = nil
  begin
  r = search_request(slice_options)
 
@@ -298,47 +316,41 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
  [r['hits']['hits'].any?, r['_scroll_id']]
  rescue => e
  # this will typically be triggered by a scroll timeout
- logger.error("Scroll request error, aborting scroll", error: e.inspect)
+ logger.error("Scroll request error, aborting scroll", message: e.message, exception: e.class)
  # return no hits and original scroll_id so we can try to clear it
  [false, scroll_id]
  end
 
  def push_hit(hit, output_queue)
- if @target.nil?
- event = LogStash::Event.new(hit['_source'])
- else
- event = LogStash::Event.new
- event.set(@target, hit['_source'])
- end
-
- if @docinfo
- # do not assume event[@docinfo_target] to be in-place updatable. first get it, update it, then at the end set it in the event.
- docinfo_target = event.get(@docinfo_target) || {}
-
- unless docinfo_target.is_a?(Hash)
- @logger.error("Elasticsearch Input: Incompatible Event, incompatible type for the docinfo_target=#{@docinfo_target} field in the `_source` document, expected a hash got:", :docinfo_target_type => docinfo_target.class, :event => event)
+ event = targeted_event_factory.new_event hit['_source']
+ set_docinfo_fields(hit, event) if @docinfo
+ decorate(event)
+ output_queue << event
+ end
 
- # TODO: (colin) I am not sure raising is a good strategy here?
- raise Exception.new("Elasticsearch input: incompatible event")
- end
+ def set_docinfo_fields(hit, event)
+ # do not assume event[@docinfo_target] to be in-place updatable. first get it, update it, then at the end set it in the event.
+ docinfo_target = event.get(@docinfo_target) || {}
 
- @docinfo_fields.each do |field|
- docinfo_target[field] = hit[field]
- end
+ unless docinfo_target.is_a?(Hash)
+ @logger.error("Incompatible Event, incompatible type for the docinfo_target=#{@docinfo_target} field in the `_source` document, expected a hash got:", :docinfo_target_type => docinfo_target.class, :event => event.to_hash_with_metadata)
 
- event.set(@docinfo_target, docinfo_target)
+ # TODO: (colin) I am not sure raising is a good strategy here?
+ raise Exception.new("Elasticsearch input: incompatible event")
  end
 
- decorate(event)
+ @docinfo_fields.each do |field|
+ docinfo_target[field] = hit[field]
+ end
 
- output_queue << event
+ event.set(@docinfo_target, docinfo_target)
  end
 
  def clear_scroll(scroll_id)
- @client.clear_scroll(scroll_id: scroll_id) if scroll_id
+ @client.clear_scroll(:body => { :scroll_id => scroll_id }) if scroll_id
  rescue => e
  # ignore & log any clear_scroll errors
- logger.warn("Ignoring clear_scroll exception", message: e.message)
+ logger.warn("Ignoring clear_scroll exception", message: e.message, exception: e.class)
  end
 
  def scroll_request scroll_id
@@ -388,14 +400,26 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
  return {} unless user && password && password.value
 
  token = ::Base64.strict_encode64("#{user}:#{password.value}")
- { Authorization: "Basic #{token}" }
+ { 'Authorization' => "Basic #{token}" }
  end
 
  def setup_api_key(api_key)
  return {} unless (api_key && api_key.value)
 
  token = ::Base64.strict_encode64(api_key.value)
- { Authorization: "ApiKey #{token}" }
+ { 'Authorization' => "ApiKey #{token}" }
+ end
+
+ def prepare_user_agent
+ os_name = java.lang.System.getProperty('os.name')
+ os_version = java.lang.System.getProperty('os.version')
+ os_arch = java.lang.System.getProperty('os.arch')
+ jvm_vendor = java.lang.System.getProperty('java.vendor')
+ jvm_version = java.lang.System.getProperty('java.version')
+
+ plugin_version = Gem.loaded_specs["logstash-input-elasticsearch"].version
+ # example: logstash/7.14.1 (OS=Linux-5.4.0-84-generic-amd64; JVM=AdoptOpenJDK-11.0.11) logstash-input-elasticsearch/4.10.0
+ "logstash/#{LOGSTASH_VERSION} (OS=#{os_name}-#{os_version}-#{os_arch}; JVM=#{jvm_vendor}-#{jvm_version}) logstash-#{@plugin_type}-#{config_name}/#{plugin_version}"
  end
 
  def fill_user_password_from_cloud_auth
@@ -448,6 +472,15 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
  [ cloud_auth.username, cloud_auth.password ]
  end
 
+ # @private used by unit specs
+ attr_reader :client
+
+ def test_connection!
+ @client.ping
+ rescue Elasticsearch::UnsupportedProductError
+ raise LogStash::ConfigurationError, "Could not connect to a compatible version of Elasticsearch"
+ end
+
  module URIOrEmptyValidator
  ##
  # @override to provide :uri_or_empty validator
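The `prepare_user_agent` method added in this diff reads OS/JVM details via `java.lang.System` (the plugin runs on JRuby). A runnable approximation in plain Ruby, with the environment injected as parameters (`build_user_agent` is an illustrative name, not the plugin's API):

```ruby
# Builds the same header shape as the diff's example comment:
#   logstash/<ls-version> (OS=<os>; JVM=<jvm>) logstash-input-elasticsearch/<plugin-version>
def build_user_agent(logstash_version, plugin_version, os:, jvm:)
  "logstash/#{logstash_version} (OS=#{os}; JVM=#{jvm}) " \
    "logstash-input-elasticsearch/#{plugin_version}"
end
```

In the real method, `os` is assembled from the `os.name`/`os.version`/`os.arch` system properties and `jvm` from `java.vendor`/`java.version`.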
@@ -1,7 +1,7 @@
  Gem::Specification.new do |s|
 
  s.name = 'logstash-input-elasticsearch'
- s.version = '4.9.0'
+ s.version = '4.12.1'
  s.licenses = ['Apache License (2.0)']
  s.summary = "Reads query results from an Elasticsearch cluster"
  s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program"
@@ -20,20 +20,22 @@ Gem::Specification.new do |s|
  s.metadata = { "logstash_plugin" => "true", "logstash_group" => "input" }
 
  # Gem dependencies
- s.add_runtime_dependency "logstash-mixin-validator_support", '~> 1.0'
  s.add_runtime_dependency "logstash-core-plugin-api", ">= 1.60", "<= 2.99"
+ s.add_runtime_dependency 'logstash-mixin-ecs_compatibility_support', '~> 1.3'
+ s.add_runtime_dependency 'logstash-mixin-event_support', '~> 1.0'
+ s.add_runtime_dependency "logstash-mixin-validator_support", '~> 1.0'
 
- s.add_runtime_dependency 'elasticsearch', '>= 5.0.3'
+ s.add_runtime_dependency 'elasticsearch', '>= 7.14.0' # LS >= 6.7 and < 7.14 all used version 5.0.5
 
- s.add_runtime_dependency 'logstash-codec-json'
- s.add_runtime_dependency 'logstash-codec-plain'
- s.add_runtime_dependency 'sequel'
  s.add_runtime_dependency 'tzinfo'
  s.add_runtime_dependency 'tzinfo-data'
  s.add_runtime_dependency 'rufus-scheduler'
- s.add_runtime_dependency 'manticore', "~> 0.6"
- s.add_runtime_dependency 'faraday', "~> 0.15.4"
+ s.add_runtime_dependency 'manticore', ">= 0.7.1"
 
+ s.add_development_dependency 'logstash-codec-plain'
+ s.add_development_dependency 'faraday', "~> 1"
  s.add_development_dependency 'logstash-devutils'
  s.add_development_dependency 'timecop'
+ s.add_development_dependency 'cabin', ['~> 0.6']
+ s.add_development_dependency 'webrick'
  end
data/spec/es_helper.rb CHANGED
@@ -1,30 +1,31 @@
  module ESHelper
  def self.get_host_port
- return "elasticsearch:9200" if ENV["INTEGRATION"] == "true" || ENV["SECURE_INTEGRATION"] == "true"
- raise "This setting is only used for integration tests"
+ if ENV["INTEGRATION"] == "true" || ENV["SECURE_INTEGRATION"] == "true"
+ "elasticsearch:9200"
+ else
+ "localhost:9200" # for running integration specs locally, outside docker
+ end
  end
 
- def self.get_client(options = {})
- ssl_options = {}
- hosts = [get_host_port]
+ def self.get_client(options)
+ require 'elasticsearch/transport/transport/http/faraday' # supports user/password options
+ host, port = get_host_port.split(':')
+ host_opts = { host: host, port: port, scheme: 'http' }
+ ssl_opts = {}
 
  if options[:ca_file]
- ssl_options = { :ssl => true, :ca_file => options[:ca_file] }
- hosts.map! do |h|
- host, port = h.split(":")
- { :host => host, :scheme => 'https', :port => port }
- end
+ ssl_opts = { ca_file: options[:ca_file], version: 'TLSv1.2', verify: false }
+ host_opts[:scheme] = 'https'
  end
 
- transport_options = {}
-
  if options[:user] && options[:password]
- token = Base64.strict_encode64("#{options[:user]}:#{options[:password]}")
- transport_options[:headers] = { :Authorization => "Basic #{token}" }
+ host_opts[:user] = options[:user]
+ host_opts[:password] = options[:password]
  end
 
- @client = Elasticsearch::Client.new(:hosts => hosts, :transport_options => transport_options, :ssl => ssl_options,
- :transport_class => ::Elasticsearch::Transport::Transport::HTTP::Manticore)
+ Elasticsearch::Client.new(hosts: [host_opts],
+ transport_options: { ssl: ssl_opts },
+ transport_class: Elasticsearch::Transport::Transport::HTTP::Faraday)
  end
 
  def self.doc_type
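Both the plugin's `setup_basic_auth` and the old spec helper build the `Authorization` header the same way; a self-contained sketch of that encoding (`basic_auth_header` is an illustrative name):

```ruby
require 'base64'

# Returns a string-keyed header, matching the 4.9.2 fix: symbol-keyed
# hashes like { Authorization: "Basic ..." } were dropped by newer
# elasticsearch-transport versions.
def basic_auth_header(user, password)
  token = Base64.strict_encode64("#{user}:#{password}")
  { 'Authorization' => "Basic #{token}" }
end
```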
@@ -0,0 +1,20 @@
+ -----BEGIN CERTIFICATE-----
+ MIIDSTCCAjGgAwIBAgIUUcAg9c8B8jiliCkOEJyqoAHrmccwDQYJKoZIhvcNAQEL
+ BQAwNDEyMDAGA1UEAxMpRWxhc3RpYyBDZXJ0aWZpY2F0ZSBUb29sIEF1dG9nZW5l
+ cmF0ZWQgQ0EwHhcNMjEwODEyMDUxNDU1WhcNMjQwODExMDUxNDU1WjA0MTIwMAYD
+ VQQDEylFbGFzdGljIENlcnRpZmljYXRlIFRvb2wgQXV0b2dlbmVyYXRlZCBDQTCC
+ ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK1HuusRuGNsztd4EQvqwcMr
+ 8XvnNNaalerpMOorCGySEFrNf0HxDIVMGMCrOv1F8SvlcGq3XANs2MJ4F2xhhLZr
+ PpqVHx+QnSZ66lu5R89QVSuMh/dCMxhNBlOA/dDlvy+EJBl9H791UGy/ChhSgaBd
+ OKVyGkhjErRTeMIq7rR7UG6GL/fV+JGy41UiLrm1KQP7/XVD9UzZfGq/hylFkTPe
+ oox5BUxdxUdDZ2creOID+agtIYuJVIkelKPQ+ljBY3kWBRexqJQsvyNUs1gZpjpz
+ YUCzuVcXDRuJXYQXGqWXhsBPfJv+ZcSyMIBUfWT/G13cWU1iwufPy0NjajowPZsC
+ AwEAAaNTMFEwHQYDVR0OBBYEFMgkye5+2l+TE0I6RsXRHjGBwpBGMB8GA1UdIwQY
+ MBaAFMgkye5+2l+TE0I6RsXRHjGBwpBGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZI
+ hvcNAQELBQADggEBAIgtJW8sy5lBpzPRHkmWSS/SCZIPsABW+cHqQ3e0udrI3CLB
+ G9n7yqAPWOBTbdqC2GM8dvAS/Twx4Bub/lWr84dFCu+t0mQq4l5kpJMVRS0KKXPL
+ DwJbUN3oPNYy4uPn5Xi+XY3BYFce5vwJUsqIxeAbIOxVTNx++k5DFnB0ESAM23QL
+ sgUZl7xl3/DkdO4oHj30gmTRW9bjCJ6umnHIiO3JoJatrprurUIt80vHC4Ndft36
+ NBQ9mZpequ4RYjpSZNLcVsxyFAYwEY4g8MvH0MoMo2RRLfehmMCzXnI/Wh2qEyYz
+ emHprBii/5y1HieKXlX9CZRb5qEPHckDVXW3znw=
+ -----END CERTIFICATE-----
@@ -0,0 +1,27 @@
+ -----BEGIN RSA PRIVATE KEY-----
+ MIIEowIBAAKCAQEArUe66xG4Y2zO13gRC+rBwyvxe+c01pqV6ukw6isIbJIQWs1/
+ QfEMhUwYwKs6/UXxK+VwardcA2zYwngXbGGEtms+mpUfH5CdJnrqW7lHz1BVK4yH
+ 90IzGE0GU4D90OW/L4QkGX0fv3VQbL8KGFKBoF04pXIaSGMStFN4wirutHtQboYv
+ 99X4kbLjVSIuubUpA/v9dUP1TNl8ar+HKUWRM96ijHkFTF3FR0NnZyt44gP5qC0h
+ i4lUiR6Uo9D6WMFjeRYFF7GolCy/I1SzWBmmOnNhQLO5VxcNG4ldhBcapZeGwE98
+ m/5lxLIwgFR9ZP8bXdxZTWLC58/LQ2NqOjA9mwIDAQABAoIBABmBC0P6Ebegljkk
+ lO26GdbOKvbfqulDS3mN5QMyXkUMopea03YzMnKUJriE+2O33a1mUcuDPWnLpYPK
+ BTiQieYHlulNtY0Bzf+R69igRq9+1WpZftGnzrlu7NVxkOokRqWJv3546ilV7QZ0
+ f9ngmu+tiN7hEnlBC8m613VMuGGb3czwbCizEVZxlZX0Dk2GExbH7Yf3NNs/aOP/
+ 8x6CqgL+rhrtOQ80xwRrOlEF8oSSjXCzypa3nFv21YO3J2lVo4BoIwnHgOzyz46A
+ b37gekqXXajIYQ0HAB+NDgVoCRFFJ7Xe16mgB3DpyUpUJzwiMedJkeQ0TprIownQ
+ +1mPe9ECgYEA/K4jc0trr3sk8KtcZjOYdpvwrhEqSSGEPeGfFujZaKOb8PZ8PX6j
+ MbCTV12nEgm8FEhZQ3azxLnO17gbJ2A+Ksm/IIwnTWlqvvMZD5qTQ7L3qZuCtbWQ
+ +EGC/H1SDjhiwvjHcXP61/tYL/peApBSoj0L4kC+U/VaNyvicudKk08CgYEAr46J
+ 4VJBJfZ4ZaUBRy53+fy+mknOfaj2wo8MnD3u+/x4YWTapqvDOPN2nJVtKlIsxbS4
+ qCO+fzUV17YHlsQmGULNbtFuXWJkP/RcLVbe8VYg/6tmk0dJwNAe90flagX2KJov
+ 8eDX129nNpuUqrNNWsfeLmPmH6vUzpKlga+1zfUCgYBrbUHHJ96dmbZn2AMNtIvy
+ iXP3HXcj5msJwB3aKJ8eHMkU1kaWAnwxiQfrkfaQ9bCP0v6YbyQY1IJ7NlvdDs7/
+ dAydMtkW0WW/zyztdGN92d3vrx0QUiRTV87vt/wl7ZUXnZt1wcB5CPRCWaiUYHWx
+ YlDmHW6N1XdIk5DQF0OegwKBgEt7S8k3Zo9+A5IgegYy8p7njsQjy8a3qTFJ9DAR
+ aPmrOc8WX/SdkVihRXRZwxAZOOrgoyyYAcYL+xI+T9EBESh3UoC9R2ibb2MYG7Ha
+ 0gyN7a4/8eCNHCbs1QOZRAhr+8TFVqv28pbMbWJLToZ+hVns6Zikl0MyzFLtNoAm
+ HlMpAoGBAIOkqnwwuRKhWprL59sdcJfWY26os9nvuDV4LoKFNEFLJhj2AA2/3UlV
+ v85gqNSxnMNlHLZC9l2HZ3mKv/mfx1aikmFvyhJAnk5u0f9KkexmCPLjQzS5q3ba
+ yFuxK2DXwN4x46RgQPFlLjOTCX0BG6rkEu4JdonF8ETSjoCtGEU8
+ -----END RSA PRIVATE KEY-----