logstash-output-elasticsearch 12.0.1-java → 12.0.2-java

Files changed (27)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG.md +3 -0
  3. data/docs/index.asciidoc +18 -8
  4. data/lib/logstash/outputs/elasticsearch/data_stream_support.rb +0 -1
  5. data/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb +1 -4
  6. data/lib/logstash/outputs/elasticsearch/http_client/pool.rb +12 -19
  7. data/lib/logstash/outputs/elasticsearch/http_client.rb +25 -24
  8. data/lib/logstash/outputs/elasticsearch/ilm.rb +1 -11
  9. data/lib/logstash/outputs/elasticsearch/template_manager.rb +1 -1
  10. data/lib/logstash/outputs/elasticsearch.rb +10 -45
  11. data/logstash-output-elasticsearch.gemspec +1 -1
  12. data/spec/es_spec_helper.rb +1 -5
  13. data/spec/integration/outputs/compressed_indexing_spec.rb +5 -5
  14. data/spec/integration/outputs/index_spec.rb +7 -7
  15. data/spec/integration/outputs/no_es_on_startup_spec.rb +1 -1
  16. data/spec/integration/outputs/parent_spec.rb +2 -3
  17. data/spec/integration/outputs/retry_spec.rb +2 -10
  18. data/spec/integration/outputs/sniffer_spec.rb +5 -40
  19. data/spec/unit/outputs/elasticsearch/data_stream_support_spec.rb +0 -23
  20. data/spec/unit/outputs/elasticsearch/http_client/pool_spec.rb +12 -54
  21. data/spec/unit/outputs/elasticsearch/template_manager_spec.rb +3 -8
  22. data/spec/unit/outputs/elasticsearch_spec.rb +15 -17
  23. metadata +2 -8
  24. data/lib/logstash/outputs/elasticsearch/templates/ecs-disabled/elasticsearch-6x.json +0 -45
  25. data/lib/logstash/outputs/elasticsearch/templates/ecs-v1/elasticsearch-6x.json +0 -3695
  26. data/spec/fixtures/_nodes/6x.json +0 -81
  27. data/spec/fixtures/template-with-policy-es6x.json +0 -48
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 5e4412a465fe0bd0f91b4c6ac11e4fdaf30ead8eff0888f22e0b030af9b70207
- data.tar.gz: 36da192cd2b436f94a58c4e413e388db129c55eaa3def6b0a4840df79fe1a35b
+ metadata.gz: 54628286a20d2e1aef727ec2bb59ff32b0e56458451743ac36b1e30cc072328c
+ data.tar.gz: a9e90e4192ef149fa85fce4caa3aa720b6f9b01df1f29885bdeb3883619c7ca0
  SHA512:
- metadata.gz: 7e71766366f3d934883f47df25b3632a460cac2c868f0bc5994f18acec09426b5e30769fe3332722a85940421b9309fdad45dfd259ae5fa725766112f2c2d75d
- data.tar.gz: 27a0ad1475ad90efb62ebd3dd657ec9a6681eb8404b4891bb4bb0bcef6b8de48d6c906ce7f8faed42f511c1167f8833a65c4e4b8b82d0b7015c2c0bb1a2ee7f7
+ metadata.gz: e063a4a3796e8955a1d6ee1dc17f599e5fd3142f61e35956371ae43a2b47496180a687fb232e3abccaeab8fde646c67ba07a4191b90cd10bc119f9c7c9f35fe6
+ data.tar.gz: d664044b3ef0c9132c65a3af99bf55f84c1b3e59785078a4c46476e9b42bda542f3d94c0c4fb73b1106cae1f90cce144296e98abd86f8edc61de710a4a1dab9c
data/CHANGELOG.md CHANGED
@@ -1,3 +1,6 @@
+ ## 12.0.2
+ - Properly handle http code 413 (Payload Too Large) [#1199](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1199)
+
  ## 12.0.1
  - Remove irrelevant log warning about elastic stack version [#1200](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1200)
 
data/docs/index.asciidoc CHANGED
@@ -196,7 +196,23 @@ This plugin uses the Elasticsearch bulk API to optimize its imports into Elastic
  either partial or total failures. The bulk API sends batches of requests to an HTTP endpoint. Error codes for the HTTP
  request are handled differently than error codes for individual documents.
 
- HTTP requests to the bulk API are expected to return a 200 response code. All other response codes are retried indefinitely.
+
+ HTTP requests to the bulk API are expected to return a 200 response code. All other response codes are retried indefinitely,
+ including 413 (Payload Too Large) responses.
+
+ If you want to handle large payloads differently, you can configure 413 responses to go to the Dead Letter Queue instead:
+
+ [source,ruby]
+ -----
+ output {
+   elasticsearch {
+     hosts => ["localhost:9200"]
+     dlq_custom_codes => [413] # Send 413 errors to DLQ instead of retrying
+   }
+ }
+ -----
+
+ This will capture oversized payloads in the DLQ for analysis rather than retrying them.
 
  The following document errors are handled as follows:
 
@@ -612,8 +627,7 @@ Elasticsearch with the same ID.
 
  NOTE: This option is deprecated due to the
  https://www.elastic.co/guide/en/elasticsearch/reference/6.0/removal-of-types.html[removal
- of types in Elasticsearch 6.0]. It will be removed in the next major version of
- Logstash.
+ of types in Elasticsearch 6.0].
 
  NOTE: This value is ignored and has no effect for Elasticsearch clusters `8.x`.
 
@@ -622,9 +636,7 @@ similar events to the same 'type'. String expansion `%{foo}` works here.
  If you don't set a value for this option:
 
  - for elasticsearch clusters 8.x: no value will be used;
- - for elasticsearch clusters 7.x: the value of '_doc' will be used;
- - for elasticsearch clusters 6.x: the value of 'doc' will be used;
- - for elasticsearch clusters 5.x and below: the event's 'type' field will be used, if the field is not present the value of 'doc' will be used.
+ - for elasticsearch clusters 7.x: the value of '_doc' will be used.
 
  [id="plugins-{type}s-{plugin}-ecs_compatibility"]
  ===== `ecs_compatibility`
@@ -1039,8 +1051,6 @@ NOTE: Deprecates <<plugins-{type}s-{plugin}-failure_type_logging_whitelist>>.
 
  This setting asks Elasticsearch for the list of all cluster nodes and adds them
  to the hosts list.
- For Elasticsearch 5.x and 6.x any nodes with `http.enabled` (on by default) will
- be added to the hosts list, excluding master-only nodes.
 
  [id="plugins-{type}s-{plugin}-sniffing_delay"]
  ===== `sniffing_delay`
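As background for the DLQ option documented above: events routed to the dead letter queue by `dlq_custom_codes` can later be inspected or replayed with Logstash's `dead_letter_queue` input plugin. A minimal sketch of such a replay pipeline; the `path` shown is an assumption (the DLQ lives under Logstash's `path.data` directory, which varies by install), so adjust it to your deployment:

```ruby
# Hypothetical replay pipeline: reads entries the elasticsearch output
# sent to the DLQ (e.g. 413s routed there via dlq_custom_codes).
input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue" # assumed path.data location
    commit_offsets => true                        # remember read position across restarts
  }
}
output {
  stdout { codec => rubydebug } # inspect the oversized events before re-sending
}
```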
data/lib/logstash/outputs/elasticsearch/data_stream_support.rb CHANGED
@@ -127,7 +127,6 @@ module LogStash module Outputs class ElasticSearch
  value.to_s == 'true'
  when 'manage_template'
  value.to_s == 'false'
- when 'ecs_compatibility' then true # required for LS <= 6.x
  else
  name.start_with?('data_stream_') ||
  shared_params.include?(name) ||
data/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb CHANGED
@@ -76,11 +76,8 @@ module LogStash; module Outputs; class ElasticSearch; class HttpClient;
  raise ::LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError.new(e, request_uri_as_string)
  end
 
- # 404s are excluded because they are valid codes in the case of
- # template installation. We might need a better story around this later
- # but for our current purposes this is correct
  code = resp.code
- if code < 200 || code > 299 && code != 404
+ if code < 200 || code > 299 # assume anything not 2xx is an error that the layer above needs to interpret
  raise ::LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError.new(code, request_uri, body, resp.body)
  end
 
data/lib/logstash/outputs/elasticsearch/http_client/pool.rb CHANGED
@@ -52,7 +52,6 @@ module LogStash; module Outputs; class ElasticSearch; class HttpClient;
  ROOT_URI_PATH = '/'.freeze
  LICENSE_PATH = '/_license'.freeze
 
- VERSION_6_TO_7 = ::Gem::Requirement.new([">= 6.0.0", "< 7.0.0"])
  VERSION_7_TO_7_14 = ::Gem::Requirement.new([">= 7.0.0", "< 7.14.0"])
 
  DEFAULT_OPTIONS = {
@@ -253,13 +252,11 @@ module LogStash; module Outputs; class ElasticSearch; class HttpClient;
  def health_check_request(url)
  logger.debug("Running health check to see if an Elasticsearch connection is working",
  :healthcheck_url => url.sanitized.to_s, :path => @healthcheck_path)
- begin
- response = perform_request_to_url(url, :head, @healthcheck_path)
- return response, nil
- rescue ::LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError => e
- logger.warn("Health check failed", code: e.response_code, url: e.url, message: e.message)
- return nil, e
- end
+ response = perform_request_to_url(url, :head, @healthcheck_path)
+ return response, nil
+ rescue ::LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError => e
+ logger.warn("Health check failed", code: e.response_code, url: e.url, message: e.message)
+ return nil, e
  end
 
  def healthcheck!(register_phase = true)
@@ -312,13 +309,11 @@ module LogStash; module Outputs; class ElasticSearch; class HttpClient;
  end
 
  def get_root_path(url, params={})
- begin
- resp = perform_request_to_url(url, :get, ROOT_URI_PATH, params)
- return resp, nil
- rescue ::LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError => e
- logger.warn("Elasticsearch main endpoint returns #{e.response_code}", message: e.message, body: e.response_body)
- return nil, e
- end
+ resp = perform_request_to_url(url, :get, ROOT_URI_PATH, params)
+ return resp, nil
+ rescue ::LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError => e
+ logger.warn("Elasticsearch main endpoint returns #{e.response_code}", message: e.message, body: e.response_body)
+ return nil, e
  end
 
  def test_serverless_connection(url, root_response)
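The pool.rb cleanups above all apply the same Ruby idiom: a method body is an implicit `begin` block, so `rescue` clauses can hang directly off `def` without an extra `begin`/`end` level. A standalone sketch of the pattern (class and method names here are illustrative, not the plugin's):

```ruby
# A def body acts as an implicit begin block, so rescue attaches
# directly to the method -- the same shape health_check_request and
# get_root_path now use.
class FakeClient
  class BadResponse < StandardError; end

  # Returns [response, error], mirroring the (response, nil) / (nil, e)
  # convention used by the plugin's helpers.
  def health_check(fail_request)
    raise BadResponse, "simulated non-2xx" if fail_request
    return :ok, nil
  rescue BadResponse => e # no surrounding begin/end needed
    return nil, e
  end
end

client = FakeClient.new
client.health_check(false) # => [:ok, nil]
resp, err = client.health_check(true)
# resp is nil, err is a FakeClient::BadResponse
```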
@@ -550,11 +545,9 @@ module LogStash; module Outputs; class ElasticSearch; class HttpClient;
  return false if version_info['version'].nil?
 
  version = ::Gem::Version.new(version_info["version"]['number'])
- return false if version < ::Gem::Version.new('6.0.0')
+ return false if version < ::Gem::Version.new('7.0.0')
 
- if VERSION_6_TO_7.satisfied_by?(version)
- return valid_tagline?(version_info)
- elsif VERSION_7_TO_7_14.satisfied_by?(version)
+ if VERSION_7_TO_7_14.satisfied_by?(version)
  build_flavor = version_info["version"]['build_flavor']
  return false if build_flavor.nil? || build_flavor != 'default' || !valid_tagline?(version_info)
  else
data/lib/logstash/outputs/elasticsearch/http_client.rb CHANGED
@@ -182,22 +182,20 @@ module LogStash; module Outputs; class ElasticSearch;
  def bulk_send(body_stream, batch_actions)
  params = compression_level? ? {:headers => {"Content-Encoding" => "gzip"}} : {}
 
- response = @pool.post(@bulk_path, params, body_stream.string)
-
- @bulk_response_metrics.increment(response.code.to_s)
-
- case response.code
- when 200 # OK
- LogStash::Json.load(response.body)
- when 413 # Payload Too Large
+ begin
+ response = @pool.post(@bulk_path, params, body_stream.string)
+ @bulk_response_metrics.increment(response.code.to_s)
+ rescue ::LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError => e
+ @bulk_response_metrics.increment(e.response_code.to_s)
+ raise e unless e.response_code == 413
+ # special handling for 413, treat it as a document level issue
  logger.warn("Bulk request rejected: `413 Payload Too Large`", :action_count => batch_actions.size, :content_length => body_stream.size)
- emulate_batch_error_response(batch_actions, response.code, 'payload_too_large')
- else
- url = ::LogStash::Util::SafeURI.new(response.final_url)
- raise ::LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError.new(
- response.code, url, body_stream.to_s, response.body
- )
+ return emulate_batch_error_response(batch_actions, 413, 'payload_too_large')
+ rescue => e # it may be a network issue instead, re-raise
+ raise e
  end
+
+ LogStash::Json.load(response.body)
  end
 
  def emulate_batch_error_response(actions, http_code, reason)
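With the manticore adapter now raising `BadResponseCodeError` for every non-2xx code, `bulk_send` catches that error, re-raises everything except 413, and converts 413 into an emulated per-document bulk response so the normal retry machinery can process it. A simplified, self-contained model of that control flow (these are stand-in definitions, not the plugin's actual classes):

```ruby
# Stand-in for the plugin's Pool::BadResponseCodeError.
class BadResponseCodeError < StandardError
  attr_reader :response_code
  def initialize(code)
    @response_code = code
    super("HTTP #{code}")
  end
end

# Mirrors the new bulk_send shape: the block plays the role of
# @pool.post; only 413 is downgraded to a document-level error.
def bulk_send(batch_actions)
  response = yield
rescue BadResponseCodeError => e
  raise unless e.response_code == 413
  emulate_batch_error_response(batch_actions, 413, 'payload_too_large')
else
  response
end

# Builds a bulk-API-shaped body that marks every action as failed.
def emulate_batch_error_response(actions, http_code, reason)
  { "errors" => true,
    "items"  => actions.map { |a| { a => { "status" => http_code, "error" => { "type" => reason } } } } }
end

bulk_send([:index]) { { "errors" => false } } # => {"errors"=>false}
bulk_send([:index]) { raise BadResponseCodeError.new(413) }
# => emulated body with "status" => 413 for the one action; 500 would re-raise
```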
@@ -411,6 +409,9 @@ module LogStash; module Outputs; class ElasticSearch;
  def exists?(path, use_get=false)
  response = use_get ? @pool.get(path) : @pool.head(path)
  response.code >= 200 && response.code <= 299
+ rescue ::LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError => e
+ return false if e.response_code == 404
+ raise e
  end
 
  def template_exists?(template_endpoint, name)
@@ -421,6 +422,8 @@ module LogStash; module Outputs; class ElasticSearch;
  path = "#{template_endpoint}/#{name}"
  logger.info("Installing Elasticsearch template", name: name)
  @pool.put(path, nil, LogStash::Json.dump(template))
+ rescue ::LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError => e
+ raise e unless e.response_code == 404
  end
 
  # ILM methods
@@ -432,17 +435,15 @@ module LogStash; module Outputs; class ElasticSearch;
 
  # Create a new rollover alias
  def rollover_alias_put(alias_name, alias_definition)
- begin
- @pool.put(CGI::escape(alias_name), nil, LogStash::Json.dump(alias_definition))
- logger.info("Created rollover alias", name: alias_name)
- # If the rollover alias already exists, ignore the error that comes back from Elasticsearch
- rescue ::LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError => e
- if e.response_code == 400
- logger.info("Rollover alias already exists, skipping", name: alias_name)
- return
- end
- raise e
+ @pool.put(CGI::escape(alias_name), nil, LogStash::Json.dump(alias_definition))
+ logger.info("Created rollover alias", name: alias_name)
+ # If the rollover alias already exists, ignore the error that comes back from Elasticsearch
+ rescue ::LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError => e
+ if e.response_code == 400
+ logger.info("Rollover alias already exists, skipping", name: alias_name)
+ return
  end
+ raise e
  end
 
  def get_xpack_info
data/lib/logstash/outputs/elasticsearch/ilm.rb CHANGED
@@ -21,13 +21,7 @@ module LogStash; module Outputs; class ElasticSearch
  "Serverless Elasticsearch cluster does not support Index Lifecycle Management.") if @ilm_enabled == 'auto'
  false
  elsif @ilm_enabled == 'auto'
- if ilm_on_by_default?
- ilm_alias_set?
- else
- @logger.info("ILM auto configuration (`ilm_enabled => auto` or unset) resolved to `false`."\
- " Elasticsearch cluster is before 7.0.0, which is the minimum version required to automatically run Index Lifecycle Management")
- false
- end
+ ilm_alias_set?
  elsif @ilm_enabled.to_s == 'true'
  ilm_alias_set?
  else
@@ -42,10 +36,6 @@ module LogStash; module Outputs; class ElasticSearch
  default_index?(@index) || !default_rollover_alias?(@ilm_rollover_alias)
  end
 
- def ilm_on_by_default?
- maximum_seen_major_version >= 7
- end
-
  def default_index?(index)
  index == @default_index
  end
data/lib/logstash/outputs/elasticsearch/template_manager.rb CHANGED
@@ -47,7 +47,7 @@ module LogStash; module Outputs; class ElasticSearch
  def self.add_ilm_settings_to_template(plugin, template)
  # Overwrite any index patterns, and use the rollover alias. Use 'index_patterns' rather than 'template' for pattern
  # definition - remove any existing definition of 'template'
- template.delete('template') if template.include?('template') if plugin.maximum_seen_major_version < 8
+ template.delete('template') if template.include?('template') if plugin.maximum_seen_major_version == 7
  template['index_patterns'] = "#{plugin.ilm_rollover_alias}-*"
  settings = resolve_template_settings(plugin, template)
  if settings && (settings['index.lifecycle.name'] || settings['index.lifecycle.rollover_alias'])
data/lib/logstash/outputs/elasticsearch.rb CHANGED
@@ -14,39 +14,16 @@ require "set"
  # .Compatibility Note
  # [NOTE]
  # ================================================================================
- # Starting with Elasticsearch 5.3, there's an {ref}modules-http.html[HTTP setting]
- # called `http.content_type.required`. If this option is set to `true`, and you
- # are using Logstash 2.4 through 5.2, you need to update the Elasticsearch output
- # plugin to version 6.2.5 or higher.
- #
- # ================================================================================
  #
  # This plugin is the recommended method of storing logs in Elasticsearch.
  # If you plan on using the Kibana web interface, you'll want to use this output.
  #
- # This output only speaks the HTTP protocol. HTTP is the preferred protocol for interacting with Elasticsearch as of Logstash 2.0.
- # We strongly encourage the use of HTTP over the node protocol for a number of reasons. HTTP is only marginally slower,
- # yet far easier to administer and work with. When using the HTTP protocol one may upgrade Elasticsearch versions without having
- # to upgrade Logstash in lock-step.
+ # This output only speaks the HTTP protocol.
  #
  # You can learn more about Elasticsearch at <https://www.elastic.co/products/elasticsearch>
  #
- # ==== Template management for Elasticsearch 5.x
- # Index template for this version (Logstash 5.0) has been changed to reflect Elasticsearch's mapping changes in version 5.0.
- # Most importantly, the subfield for string multi-fields has changed from `.raw` to `.keyword` to match ES default
- # behavior.
- #
- # ** Users installing ES 5.x and LS 5.x **
- # This change will not affect you and you will continue to use the ES defaults.
- #
- # ** Users upgrading from LS 2.x to LS 5.x with ES 5.x **
- # LS will not force upgrade the template, if `logstash` template already exists. This means you will still use
- # `.raw` for sub-fields coming from 2.x. If you choose to use the new template, you will have to reindex your data after
- # the new template is installed.
- #
  # ==== Retry Policy
  #
- # The retry policy has changed significantly in the 2.2.0 release.
  # This plugin uses the Elasticsearch bulk API to optimize its imports into Elasticsearch. These requests may experience
  # either partial or total failures.
  #
@@ -129,8 +106,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  # - delete: deletes a document by id (An id is required for this action)
  # - create: indexes a document, fails if a document by that id already exists in the index.
  # - update: updates a document by id. Update has a special case where you can upsert -- update a
- # document if not already present. See the `upsert` option. NOTE: This does not work and is not supported
- # in Elasticsearch 1.x. Please upgrade to ES 2.x or greater to use this feature with Logstash!
+ # document if not already present. See the `upsert` option.
  # - A sprintf style string to change the action based on the content of the event. The value `%{[foo]}`
  # would use the foo field for the action
  #
@@ -148,7 +124,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
 
  config :document_type,
  :validate => :string,
- :deprecated => "Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature"
+ :deprecated => "Document types were deprecated in Elasticsearch 7.0, and no longer configurable since 8.0. You should avoid this feature."
 
  # From Logstash 1.3 onwards, a template is applied to Elasticsearch during
  # Logstash's startup if one with the name `template_name` does not already exist.
@@ -483,7 +459,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  join_value = event.get(@join_field)
  parent_value = event.sprintf(@parent)
  event.set(@join_field, { "name" => join_value, "parent" => parent_value })
- params[routing_field_name] = event.sprintf(@parent)
+ params[:routing] = event.sprintf(@parent)
  else
  params[:parent] = event.sprintf(@parent)
  end
@@ -495,7 +471,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  if action == 'update'
  params[:_upsert] = LogStash::Json.load(event.sprintf(@upsert)) if @upsert != ""
  params[:_script] = event.sprintf(@script) if @script != ""
- params[retry_on_conflict_action_name] = @retry_on_conflict
+ params[:retry_on_conflict] = @retry_on_conflict
  end
 
  event_control = event.get("[@metadata][_ingest_document]")
@@ -552,7 +528,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  params = {
  :_id => resolve_document_id(event, event_id),
  :_index => resolve_index!(event, event_index),
- routing_field_name => resolve_routing(event, event_routing)
+ :routing => resolve_routing(event, event_routing)
  }
 
  target_pipeline = resolve_pipeline(event, event_pipeline)
@@ -615,16 +591,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  require "logstash/outputs/elasticsearch/#{name}"
  end
 
- def retry_on_conflict_action_name
- maximum_seen_major_version >= 7 ? :retry_on_conflict : :_retry_on_conflict
- end
-
- def routing_field_name
- :routing
- end
-
  # Determine the correct value for the 'type' field for the given event
- DEFAULT_EVENT_TYPE_ES6 = "doc".freeze
  DEFAULT_EVENT_TYPE_ES7 = "_doc".freeze
 
  def get_event_type(event)
@@ -633,9 +600,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  event.sprintf(@document_type)
  else
  major_version = maximum_seen_major_version
- if major_version == 6
- DEFAULT_EVENT_TYPE_ES6
- elsif major_version == 7
+ if major_version == 7
  DEFAULT_EVENT_TYPE_ES7
  else
  nil
@@ -653,9 +618,9 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  # @param noop_required_client [nil]: required `nil` for legacy reasons.
  # @return [Boolean]
  def use_event_type?(noop_required_client)
- # always set type for ES 6
- # for ES 7 only set it if the user defined it
- (maximum_seen_major_version < 7) || (maximum_seen_major_version == 7 && @document_type)
+ # never use event type unless
+ # ES is 7.x and the user defined it
+ maximum_seen_major_version == 7 && @document_type
  end
 
  def install_template
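With the 6.x branches gone, the event-type logic reduces to a single rule: a `_type` is only emitted for 7.x clusters, and only when the user explicitly set `document_type`. A toy model of the combined `use_event_type?`/`get_event_type` decision (the function name here is illustrative, not the plugin's):

```ruby
# Simplified model of post-6.x type resolution: nil means "omit _type".
def resolved_event_type(major_version, document_type = nil)
  # 8.x and later have no mapping types; 7.x only honors a user-set type
  return nil unless major_version == 7 && document_type
  document_type
end

resolved_event_type(7, "logs") # => "logs"
resolved_event_type(7)         # => nil (no _type unless the user set one)
resolved_event_type(8, "logs") # => nil
```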
data/logstash-output-elasticsearch.gemspec CHANGED
@@ -1,6 +1,6 @@
  Gem::Specification.new do |s|
  s.name = 'logstash-output-elasticsearch'
- s.version = '12.0.1'
+ s.version = '12.0.2'
  s.licenses = ['apache-2.0']
  s.summary = "Stores logs in Elasticsearch"
  s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program"
data/spec/es_spec_helper.rb CHANGED
@@ -30,8 +30,6 @@ module ESHelper
  nil
  elsif ESHelper.es_version_satisfies?(">=7")
  "_doc"
- else
- "doc"
  end
  end
 
@@ -70,7 +68,7 @@ module ESHelper
  end
 
  RSpec::Matchers.define :have_hits do |expected|
- hits_count_path = ESHelper.es_version_satisfies?(">=7") ? %w(hits total value) : %w(hits total)
+ hits_count_path = %w(hits total value)
 
  match do |actual|
  @actual_hits_count = actual&.dig(*hits_count_path)
@@ -214,8 +212,6 @@ module ESHelper
  template['template']['mappings']
  elsif ESHelper.es_version_satisfies?(">=7")
  template['mappings']
- else
- template['mappings']["_default_"]
  end
  end
  end
data/spec/integration/outputs/compressed_indexing_spec.rb CHANGED
@@ -14,7 +14,7 @@ end
  let(:event_with_invalid_utf_8_bytes) { LogStash::Event.new("message" => "Message from spacecraft which contains \xAC invalid \xD7 byte sequences.", "type" => type) }
 
  let(:index) { 10.times.collect { rand(10).to_s }.join("") }
- let(:type) { ESHelper.es_version_satisfies?("< 7") ? "doc" : "_doc" }
+ let(:type) { "_doc" }
  let(:event_count) { 10000 + rand(500) }
  # mix the events with valid and invalid UTF-8 payloads
  let(:events) { event_count.times.map { |i| i%3 == 0 ? event : event_with_invalid_utf_8_bytes }.to_a }
@@ -59,10 +59,10 @@ end
  response = http_client.get("#{index_url}/_search?q=*&size=1000")
  result = LogStash::Json.load(response.body)
  result["hits"]["hits"].each do |doc|
- if ESHelper.es_version_satisfies?("< 8")
- expect(doc["_type"]).to eq(type)
- else
+ if ESHelper.es_version_satisfies?(">= 8")
  expect(doc).not_to include("_type")
+ else
+ expect(doc["_type"]).to eq(type)
  end
  expect(doc["_index"]).to eq(index)
  end
@@ -78,4 +78,4 @@ end
 
  it_behaves_like("an indexer")
  end
- end
+ end
data/spec/integration/outputs/index_spec.rb CHANGED
@@ -13,7 +13,7 @@ describe "TARGET_BULK_BYTES", :integration => true do
  }
  }
  let(:index) { 10.times.collect { rand(10).to_s }.join("") }
- let(:type) { ESHelper.es_version_satisfies?("< 7") ? "doc" : "_doc" }
+ let(:type) { "_doc" }
 
  subject { LogStash::Outputs::ElasticSearch.new(config) }
 
@@ -82,7 +82,7 @@ describe "indexing with sprintf resolution", :integration => true do
  let(:message) { "Hello from #{__FILE__}" }
  let(:event) { LogStash::Event.new("message" => message, "type" => type) }
  let (:index) { "%{[index_name]}_dynamic" }
- let(:type) { ESHelper.es_version_satisfies?("< 7") ? "doc" : "_doc" }
+ let(:type) { "_doc" }
  let(:event_count) { 1 }
  let(:user) { "simpleuser" }
  let(:password) { "abc123" }
@@ -151,7 +151,7 @@ describe "indexing" do
  let(:message) { "Hello from #{__FILE__}" }
  let(:event) { LogStash::Event.new("message" => message, "type" => type) }
  let(:index) { 10.times.collect { rand(10).to_s }.join("") }
- let(:type) { ESHelper.es_version_satisfies?("< 7") ? "doc" : "_doc" }
+ let(:type) { "_doc" }
  let(:event_count) { 1 + rand(2) }
  let(:config) { "not implemented" }
  let(:events) { event_count.times.map { event }.to_a }
@@ -204,10 +204,10 @@ describe "indexing" do
  result["hits"]["hits"].each do |doc|
  expect(doc["_source"]["message"]).to eq(message)
 
- if ESHelper.es_version_satisfies?("< 8")
- expect(doc["_type"]).to eq(type)
- else
+ if ESHelper.es_version_satisfies?(">= 8")
  expect(doc).not_to include("_type")
+ else
+ expect(doc["_type"]).to eq(type)
  end
  expect(doc["_index"]).to eq(index)
  end
@@ -346,7 +346,7 @@ describe "indexing" do
  end
 
  describe "an indexer with no type value set (default to doc)", :integration => true do
- let(:type) { ESHelper.es_version_satisfies?("< 7") ? "doc" : "_doc" }
+ let(:type) { "_doc" }
  let(:config) {
  {
  "hosts" => get_host_port,
data/spec/integration/outputs/no_es_on_startup_spec.rb CHANGED
@@ -74,5 +74,5 @@ describe "elasticsearch is down on startup", :integration => true do
  expect(r).to have_hits(2)
  expect(subject.plugin_metadata.get(:cluster_uuid)).not_to be_empty
  expect(subject.plugin_metadata.get(:cluster_uuid)).not_to eq("_na_")
- end if ESHelper.es_version_satisfies?(">=7")
+ end
  end
data/spec/integration/outputs/parent_spec.rb CHANGED
@@ -6,7 +6,7 @@ describe "join type field", :integration => true do
  shared_examples "a join field based parent indexer" do
  let(:index) { 10.times.collect { rand(10).to_s }.join("") }
 
- let(:type) { ESHelper.es_version_satisfies?("< 7") ? "doc" : "_doc" }
+ let(:type) { "_doc" }
 
  let(:event_count) { 10000 + rand(500) }
  let(:parent) { "not_implemented" }
@@ -33,8 +33,7 @@ describe "join type field", :integration => true do
  }
  }
 
- mapping = ESHelper.es_version_satisfies?('<7') ? { "mappings" => { type => properties } }
- : { "mappings" => properties}
+ mapping = { "mappings" => properties}
 
  Manticore.put("#{index_url}", {:body => mapping.to_json, :headers => default_headers}).call
  pdoc = { "message" => "ohayo", join_field => parent_relation }
data/spec/integration/outputs/retry_spec.rb CHANGED
@@ -5,19 +5,11 @@ describe "failures in bulk class expected behavior", :integration => true do
  let(:template) { '{"template" : "not important, will be updated by :index"}' }
  let(:event1) { LogStash::Event.new("somevalue" => 100, "@timestamp" => "2014-11-17T20:37:17.223Z", "@metadata" => {"retry_count" => 0}) }
  let(:action1) do
- if ESHelper.es_version_satisfies?("< 7")
- ESHelper.action_for_version(["index", {:_id=>nil, routing_field_name =>nil, :_index=>"logstash-2014.11.17", :_type=> doc_type }, event1.to_hash])
- else
- ESHelper.action_for_version(["index", {:_id=>nil, routing_field_name =>nil, :_index=>"logstash-2014.11.17" }, event1.to_hash])
- end
+ ESHelper.action_for_version(["index", {:_id=>nil, routing_field_name =>nil, :_index=>"logstash-2014.11.17" }, event1.to_hash])
  end
  let(:event2) { LogStash::Event.new("geoip" => { "location" => [ 0.0, 0.0] }, "@timestamp" => "2014-11-17T20:37:17.223Z", "@metadata" => {"retry_count" => 0}) }
  let(:action2) do
- if ESHelper.es_version_satisfies?("< 7")
- ESHelper.action_for_version(["index", {:_id=>nil, routing_field_name =>nil, :_index=>"logstash-2014.11.17", :_type=> doc_type }, event2.to_hash])
- else
- ESHelper.action_for_version(["index", {:_id=>nil, routing_field_name =>nil, :_index=>"logstash-2014.11.17" }, event2.to_hash])
- end
+ ESHelper.action_for_version(["index", {:_id=>nil, routing_field_name =>nil, :_index=>"logstash-2014.11.17" }, event2.to_hash])
  end
  let(:invalid_event) { LogStash::Event.new("geoip" => { "location" => "notlatlon" }, "@timestamp" => "2014-11-17T20:37:17.223Z") }
 
data/spec/integration/outputs/sniffer_spec.rb CHANGED
@@ -33,48 +33,13 @@ describe "pool sniffer", :integration => true do
 
  expect(uris.size).to eq(1)
  end
-
- it "should return the correct sniff URL" do
- if ESHelper.es_version_satisfies?("<7")
- # We do a more thorough check on these versions because we can more reliably guess the ip
- uris = subject.check_sniff
-
- expect(uris).to include(::LogStash::Util::SafeURI.new("//#{es_ip}:#{es_port}"))
- else
- # ES 1.x (and ES 7.x) returned the public hostname by default. This is hard to approximate
- # so for ES1.x and 7.x we don't check the *exact* hostname
- skip
- end
- end
  end
  end
 
- if ESHelper.es_version_satisfies?(">= 7")
- describe("Complex sniff parsing ES 7x") do
- before(:each) do
- response_double = double("_nodes/http", body: File.read("spec/fixtures/_nodes/7x.json"))
- allow(subject).to receive(:perform_request).and_return([nil, { version: "7.0" }, response_double])
- subject.start
- end
-
- context "with mixed master-only, data-only, and data + master nodes" do
- it "should execute a sniff without error" do
- expect { subject.check_sniff }.not_to raise_error
- end
-
- it "should return the correct sniff URLs" do
- # ie. with the master-only node, and with the node name correctly set.
- uris = subject.check_sniff
-
- expect(uris).to include(::LogStash::Util::SafeURI.new("//dev-masterdata:9201"), ::LogStash::Util::SafeURI.new("//dev-data:9202"))
- end
- end
- end
- end
- describe("Complex sniff parsing ES")
+ describe("Complex sniff parsing") do
  before(:each) do
- response_double = double("_nodes/http", body: File.read("spec/fixtures/_nodes/6x.json"))
- allow(subject).to receive(:perform_request).and_return([nil, { version: "6.8" }, response_double])
+ response_double = double("_nodes/http", body: File.read("spec/fixtures/_nodes/7x.json"))
+ allow(subject).to receive(:perform_request).and_return([nil, { version: "7.0" }, response_double])
  subject.start
  end
 
@@ -84,10 +49,10 @@ describe "pool sniffer", :integration => true do
  end
 
  it "should return the correct sniff URLs" do
- # ie. without the master-only node
+ # ie. with the master-only node, and with the node name correctly set.
  uris = subject.check_sniff
 
- expect(uris).to include(::LogStash::Util::SafeURI.new("//127.0.0.1:9201"), ::LogStash::Util::SafeURI.new("//127.0.0.1:9202"), ::LogStash::Util::SafeURI.new("//127.0.0.1:9203"))
+ expect(uris).to include(::LogStash::Util::SafeURI.new("//dev-masterdata:9201"), ::LogStash::Util::SafeURI.new("//dev-data:9202"))
  end
  end
  end