logstash-output-elasticsearch 10.8.6-java → 11.0.0-java

Files changed (31)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG.md +7 -0
  3. data/docs/index.asciidoc +132 -22
  4. data/lib/logstash/outputs/elasticsearch.rb +122 -64
  5. data/lib/logstash/outputs/elasticsearch/data_stream_support.rb +233 -0
  6. data/lib/logstash/outputs/elasticsearch/http_client.rb +9 -7
  7. data/lib/logstash/outputs/elasticsearch/http_client/pool.rb +47 -34
  8. data/lib/logstash/outputs/elasticsearch/ilm.rb +11 -12
  9. data/lib/logstash/outputs/elasticsearch/license_checker.rb +19 -22
  10. data/lib/logstash/outputs/elasticsearch/template_manager.rb +3 -5
  11. data/lib/logstash/plugin_mixins/elasticsearch/api_configs.rb +157 -153
  12. data/lib/logstash/plugin_mixins/elasticsearch/common.rb +70 -58
  13. data/logstash-output-elasticsearch.gemspec +2 -2
  14. data/spec/es_spec_helper.rb +3 -6
  15. data/spec/integration/outputs/data_stream_spec.rb +61 -0
  16. data/spec/integration/outputs/ilm_spec.rb +6 -2
  17. data/spec/integration/outputs/ingest_pipeline_spec.rb +4 -2
  18. data/spec/integration/outputs/retry_spec.rb +4 -4
  19. data/spec/integration/outputs/sniffer_spec.rb +0 -1
  20. data/spec/spec_helper.rb +14 -0
  21. data/spec/unit/outputs/elasticsearch/data_stream_support_spec.rb +542 -0
  22. data/spec/unit/outputs/elasticsearch/http_client/manticore_adapter_spec.rb +1 -0
  23. data/spec/unit/outputs/elasticsearch/http_client/pool_spec.rb +24 -10
  24. data/spec/unit/outputs/elasticsearch/http_client_spec.rb +2 -3
  25. data/spec/unit/outputs/elasticsearch/template_manager_spec.rb +1 -3
  26. data/spec/unit/outputs/elasticsearch_proxy_spec.rb +1 -2
  27. data/spec/unit/outputs/elasticsearch_spec.rb +122 -23
  28. data/spec/unit/outputs/elasticsearch_ssl_spec.rb +1 -2
  29. data/spec/unit/outputs/error_whitelist_spec.rb +3 -2
  30. data/spec/unit/outputs/license_check_spec.rb +0 -16
  31. metadata +23 -16
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 69557d21ffe4079cabafcf86949f41d85cb6781f8898cebdc54b354117333b6b
- data.tar.gz: a65b40a961335837f9ccff55472c0aeef033c5248cdcc579ffa98c6560fa377c
+ metadata.gz: dd0e368ed484aa214da94fcdc6978f919b3139cf90f8db462aba17f9c1e86670
+ data.tar.gz: 67b475fdd703d50d7bbb806adebd46c3b6657ead3019e76c7517e3c2428335be
  SHA512:
- metadata.gz: e8be38c81c89f8dca5dad83c79106180967cb5ed6806ed4a0ce97db1296a15bd8a462da80ef4a663807648164ac410d3d57fc46b2412ef497b1f9d0a4d7b57c6
- data.tar.gz: 1843e98054e65374fe4b72c5938b0d808fecca799294783893d75c955977ec0d34020cd688ec7a16e4e22bf8c6c2b9e53343e9bfd4dd0cbccee10e601d0b2e0f
+ metadata.gz: f1e85fea62c9173d0ffdbf739487d8bcbbcfb41f304893e37333d22a8f42b0f527506d926c9f983480ef1c76eee43869ded32b84df98e19fdd999b9d5a26baa9
+ data.tar.gz: 0d1df95d541fa6e1d1161763051a3b4dc8fed10f62d878e7b4aa556d00c17bf6daf44a2fdc0c356c14c9c5431457112da601942dcf34dbf5386b05235933146e
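
To verify these digests locally, the following is a rough sketch (the gem filename is assumed from the version in the title; a RubyGems package is a plain tar archive containing `metadata.gz` and `data.tar.gz`):

[source,sh]
-----
# unpack the two members from the downloaded gem and compare the output
# against the SHA256/SHA512 entries in checksums.yaml above
tar -xf logstash-output-elasticsearch-11.0.0-java.gem metadata.gz data.tar.gz
sha256sum metadata.gz data.tar.gz
sha512sum metadata.gz data.tar.gz
-----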
data/CHANGELOG.md CHANGED
@@ -1,3 +1,10 @@
+ ## 11.0.0
+ - Feat: Data stream support [#988](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/988)
+ - Refactor: reviewed logging format + restored ES (initial) setup error logging
+ - Feat: always check ES license [#1005](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1005)
+
+ Since Elasticsearch no longer provides an OSS artifact, the plugin will no longer skip the license check on OSS Logstash.
+
  ## 10.8.6
  - Fixed an issue where a single over-size event being rejected by Elasticsearch would cause the entire batch to be retried indefinitely. The oversize event will still be retried on its own and logging has been improved to include payload sizes in this situation [#972](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/972)
  - Fixed an issue with `http_compression => true` where a well-compressed payload could fit under our outbound 20MB limit but expand beyond Elasticsearch's 100MB limit, causing bulk failures. Bulk grouping is now determined entirely by the decompressed payload size [#823](https://github.com/logstash-plugins/logstash-output-elasticsearch/issues/823)
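
The headline change in 11.0.0 is data stream support. The minimal configuration, taken from the docs changes further down in this diff (`hostname` is a placeholder for your Elasticsearch host), looks like this:

[source,sh]
-----
output {
  elasticsearch {
    hosts => "hostname"
    data_stream => "true"
  }
}
-----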
data/docs/index.asciidoc CHANGED
@@ -21,17 +21,9 @@ include::{include_path}/plugin_header.asciidoc[]
 
  ==== Description
 
- If you plan to use the Kibana web interface to analyze data transformed by
- Logstash, use the Elasticsearch output plugin to get your data into
- Elasticsearch.
-
- This output only speaks the HTTP protocol as it is the preferred protocol for
- interacting with Elasticsearch. In previous versions it was possible to
- communicate with Elasticsearch through the transport protocol, which is now
- reserved for internal cluster communication between nodes
- {ref}/modules-transport.html[communication between nodes].
- Using the transport protocol to communicate with the cluster has been deprecated
- in Elasticsearch 7.0.0 and will be removed in 8.0.0
+ Elasticsearch provides near real-time search and analytics for all types of
+ data. The Elasticsearch output plugin can store both time series datasets (such
+ as logs, events, and metrics) and non-time series data in Elasticsearch.
 
  You can https://www.elastic.co/elasticsearch/[learn more about Elasticsearch] on
  the website landing page or in the {ref}[Elasticsearch documentation].
@@ -70,6 +62,59 @@ By having an ECS-compatible template in place, we can ensure that Elasticsearch
  is prepared to create and index fields in a way that is compatible with ECS,
  and will correctly reject events with fields that conflict and cannot be coerced.
 
+ [id="plugins-{type}s-{plugin}-data-streams"]
+ ==== Data streams
+
+ The {es} output plugin can store both time series datasets (such
+ as logs, events, and metrics) and non-time series data in Elasticsearch.
+
+ The data stream options are recommended for indexing time series datasets (such
+ as logs, metrics, and events) into {es}:
+
+ * <<plugins-{type}s-{plugin}-data_stream>>
+ * <<plugins-{type}s-{plugin}-data_stream_auto_routing>>
+ * <<plugins-{type}s-{plugin}-data_stream_dataset>>
+ * <<plugins-{type}s-{plugin}-data_stream_namespace>>
+ * <<plugins-{type}s-{plugin}-data_stream_sync_fields>>
+ * <<plugins-{type}s-{plugin}-data_stream_type>>
+
+ [id="plugins-{type}s-{plugin}-ds-examples"]
+ ===== Data stream configuration examples
+
+ **Example: Basic default configuration**
+
+ [source,sh]
+ -----
+ output {
+   elasticsearch {
+     hosts => "hostname"
+     data_stream => "true"
+   }
+ }
+ -----
+
+ This example shows the minimal settings for processing data streams. Events
+ with `data_stream.*` fields are routed to the appropriate data streams. If the
+ fields are missing, routing defaults to `logs-generic-logstash`.
+
+ **Example: Customize data stream name**
+
+ [source,sh]
+ -----
+ output {
+   elasticsearch {
+     hosts => "hostname"
+     data_stream => "true"
+     data_stream_type => "metrics"
+     data_stream_dataset => "foo"
+     data_stream_namespace => "bar"
+   }
+ }
+ -----
+
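
The basic example above relies on the plugin's automatic routing: the data stream name is derived from the event's `data_stream.*` fields, falling back to the `data_stream_type`, `data_stream_dataset`, and `data_stream_namespace` settings (`logs`, `generic`, `default` by default). As a rough illustration only (the `nginx` dataset value and the mutate filter are placeholders, not part of this diff), events tagged like this would be routed to the `logs-nginx-default` data stream:

[source,sh]
-----
filter {
  mutate {
    # illustrative only: set the dataset on the event so auto-routing
    # builds the name logs-nginx-default (type-dataset-namespace)
    add_field => { "[data_stream][dataset]" => "nginx" }
  }
}
output {
  elasticsearch {
    hosts => "hostname"
    data_stream => "true"
  }
}
-----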
  ==== Writing to different indices: best practices
 
  [NOTE]
@@ -274,6 +319,12 @@ This plugin supports the following configuration options plus the
  | <<plugins-{type}s-{plugin}-cloud_auth>> |<<password,password>>|No
  | <<plugins-{type}s-{plugin}-cloud_id>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-custom_headers>> |<<hash,hash>>|No
+ | <<plugins-{type}s-{plugin}-data_stream>> |<<string,string>>, one of `["true", "false", "auto"]`|No
+ | <<plugins-{type}s-{plugin}-data_stream_auto_routing>> |<<boolean,boolean>>|No
+ | <<plugins-{type}s-{plugin}-data_stream_dataset>> |<<string,string>>|No
+ | <<plugins-{type}s-{plugin}-data_stream_namespace>> |<<string,string>>|No
+ | <<plugins-{type}s-{plugin}-data_stream_sync_fields>> |<<boolean,boolean>>|No
+ | <<plugins-{type}s-{plugin}-data_stream_type>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-doc_as_upsert>> |<<boolean,boolean>>|No
  | <<plugins-{type}s-{plugin}-document_id>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-document_type>> |<<string,string>>|No
@@ -335,23 +386,20 @@ output plugins.
  ===== `action`
 
  * Value type is <<string,string>>
- * Default value is `"index"`
+ * Default value is `create` for data streams, and `index` for non-time series data.
 
- Protocol agnostic (i.e. non-http, non-java specific) configs go here
- Protocol agnostic methods
  The Elasticsearch action to perform. Valid actions are:
 
- - index: indexes a document (an event from Logstash).
- - delete: deletes a document by id (An id is required for this action)
- - create: indexes a document, fails if a document by that id already exists in the index.
- - update: updates a document by id. Update has a special case where you can upsert -- update a
+ - `index`: indexes a document (an event from Logstash).
+ - `delete`: deletes a document by id (An id is required for this action)
+ - `create`: indexes a document, fails if a document by that id already exists in the index.
+ - `update`: updates a document by id. Update has a special case where you can upsert -- update a
  document if not already present. See the `doc_as_upsert` option. NOTE: This does not work and is not supported
  in Elasticsearch 1.x. Please upgrade to ES 2.x or greater to use this feature with Logstash!
  - A sprintf style string to change the action based on the content of the event. The value `%{[foo]}`
  would use the foo field for the action
 
- For more details on actions, check out the {ref}/docs-bulk.html[Elasticsearch
- bulk API documentation].
+ For more details on actions, check out the {ref}/docs-bulk.html[Elasticsearch bulk API documentation].
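
As a sketch of the sprintf-style form described above (the `[@metadata][es_action]` field name is only a placeholder, not part of this plugin's API), the bulk action can be chosen per event from a field:

[source,sh]
-----
output {
  elasticsearch {
    hosts  => "hostname"
    # illustrative only: each event's [@metadata][es_action] field
    # (e.g. "index", "update" or "delete") selects the bulk action
    action => "%{[@metadata][es_action]}"
  }
}
-----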
 
  [id="plugins-{type}s-{plugin}-api_key"]
  ===== `api_key`
@@ -405,6 +453,69 @@ Cloud ID, from the Elastic Cloud web console. If set `hosts` should not be used.
  For more details, check out the
  {logstash-ref}/connecting-to-cloud.html[Logstash-to-Cloud documentation].
 
+ [id="plugins-{type}s-{plugin}-data_stream"]
+ ===== `data_stream`
+
+ * Value can be any of: `true`, `false` and `auto`
+ * Default is `false` in Logstash 7.x and `auto` starting in Logstash 8.0.
+
+ Defines whether data will be indexed into an Elasticsearch data stream.
+ The other `data_stream_*` settings will be used only if this setting is enabled.
+
+ Logstash handles the output as a data stream when the supplied configuration
+ is compatible with data streams and this value is set to `auto`.
+
+ [id="plugins-{type}s-{plugin}-data_stream_auto_routing"]
+ ===== `data_stream_auto_routing`
+
+ * Value type is <<boolean,boolean>>
+ * Default value is `true`.
+
+ Automatically routes events by deriving the data stream name using specific event
+ fields with the `%{[data_stream][type]}-%{[data_stream][dataset]}-%{[data_stream][namespace]}` format.
+
+ If enabled, the `data_stream.*` event fields will take precedence over the
+ `data_stream_type`, `data_stream_dataset`, and `data_stream_namespace` settings,
+ but will fall back to them if any of the fields are missing from the event.
+
+ [id="plugins-{type}s-{plugin}-data_stream_dataset"]
+ ===== `data_stream_dataset`
+
+ * Value type is <<string,string>>
+ * Default value is `generic`.
+
+ The data stream dataset used to construct the data stream at index time.
+
+ [id="plugins-{type}s-{plugin}-data_stream_namespace"]
+ ===== `data_stream_namespace`
+
+ * Value type is <<string,string>>
+ * Default value is `default`.
+
+ The data stream namespace used to construct the data stream at index time.
+
+ [id="plugins-{type}s-{plugin}-data_stream_sync_fields"]
+ ===== `data_stream_sync_fields`
+
+ * Value type is <<boolean,boolean>>
+ * Default value is `true`
+
+ Automatically adds and syncs the `data_stream.*` event fields if they are missing from the
+ event. This ensures that fields match the name of the data stream that is receiving events.
+
+ NOTE: If existing `data_stream.*` event fields do not match the data stream name
+ and `data_stream_auto_routing` is disabled, the event fields will be
+ overwritten with a warning.
+
+ [id="plugins-{type}s-{plugin}-data_stream_type"]
+ ===== `data_stream_type`
+
+ * Value type is <<string,string>>
+ * Default value is `logs`.
+
+ The data stream type used to construct the data stream at index time.
+ Currently, only `logs` and `metrics` are supported.
+
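
As a sketch of how these settings combine (assuming Logstash 7.x, where `data_stream` defaults to `false`), `auto` lets the plugin enable data stream behaviour only when the rest of the configuration is compatible with it:

[source,sh]
-----
output {
  elasticsearch {
    hosts       => "hostname"
    # "auto" turns on data stream handling only if the supplied
    # configuration is compatible with data streams
    data_stream => "auto"
  }
}
-----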
  [id="plugins-{type}s-{plugin}-doc_as_upsert"]
  ===== `doc_as_upsert`
 
@@ -457,8 +568,7 @@ If you don't set a value for this option:
  ** When Logstash provides a `pipeline.ecs_compatibility` setting, its value is used as the default
  ** Otherwise, the default value is `disabled`.
 
- Controls this plugin's compatibility with the
- https://www.elastic.co/guide/en/ecs/current/index.html[Elastic Common Schema
+ Controls this plugin's compatibility with the {ecs-ref}[Elastic Common Schema
  (ECS)], including the installation of ECS-compatible index templates. The value
  of this setting affects the _default_ values of:
 
data/lib/logstash/outputs/elasticsearch.rb CHANGED
@@ -3,8 +3,8 @@ require "logstash/namespace"
  require "logstash/environment"
  require "logstash/outputs/base"
  require "logstash/json"
- require "concurrent"
- require "stud/buffer"
+ require "concurrent/atomic/atomic_boolean"
+ require "stud/interval"
  require "socket" # for Socket.gethostname
  require "thread" # for safe queueing
  require "uri" # for escaping user input
@@ -92,6 +92,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  require "logstash/plugin_mixins/elasticsearch/api_configs"
  require "logstash/plugin_mixins/elasticsearch/common"
  require "logstash/outputs/elasticsearch/ilm"
+ require "logstash/outputs/elasticsearch/data_stream_support"
  require 'logstash/plugin_mixins/ecs_compatibility_support'
 
  # Protocol agnostic methods
@@ -106,6 +107,9 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  # Generic/API config options that any document indexer output needs
  include(LogStash::PluginMixins::ElasticSearch::APIConfigs)
 
+ # DS support
+ include(LogStash::Outputs::ElasticSearch::DataStreamSupport)
+
  DEFAULT_POLICY = "logstash-policy"
 
  config_name "elasticsearch"
@@ -122,7 +126,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  # would use the foo field for the action
  #
  # For more details on actions, check out the http://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html[Elasticsearch bulk API documentation]
- config :action, :validate => :string, :default => "index"
+ config :action, :validate => :string # :default => "index" unless data_stream
 
  # The index to write events to. This can be dynamic using the `%{foo}` syntax.
  # The default value will partition your indices by day so you can more easily
@@ -247,6 +251,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  # ILM policy to use, if undefined the default policy will be used.
  config :ilm_policy, :validate => :string, :default => DEFAULT_POLICY
 
+ attr_reader :client
  attr_reader :default_index
  attr_reader :default_ilm_rollover_alias
  attr_reader :default_template_name
@@ -257,26 +262,53 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  end
 
  def register
-   @template_installed = Concurrent::AtomicBoolean.new(false)
+   @after_successful_connection_done = Concurrent::AtomicBoolean.new(false)
    @stopping = Concurrent::AtomicBoolean.new(false)
-   # To support BWC, we check if DLQ exists in core (< 5.4). If it doesn't, we use nil to resort to previous behavior.
-   @dlq_writer = dlq_enabled? ? execution_context.dlq_writer : nil
 
    check_action_validity
 
+   @logger.info("New Elasticsearch output", :class => self.class.name, :hosts => @hosts.map(&:sanitized).map(&:to_s))
+
    # the license_checking behaviour in the Pool class is externalized in the LogStash::ElasticSearchOutputLicenseChecker
    # class defined in license_check.rb. This license checking is specific to the elasticsearch output here and passed
    # to build_client down to the Pool class.
-   build_client(LicenseChecker.new(@logger))
+   @client = build_client(LicenseChecker.new(@logger))
+
+   @after_successful_connection_thread = after_successful_connection do
+     begin
+       finish_register
+       true # thread.value
+     rescue => e
+       # we do not want to halt the thread with an exception as that has consequences for LS
+       e # thread.value
+     ensure
+       @after_successful_connection_done.make_true
+     end
+   end
 
-   @template_installer = setup_after_successful_connection do
-     discover_cluster_uuid
-     install_template
-     setup_ilm if ilm_in_use?
+   # To support BWC, we check if DLQ exists in core (< 5.4). If it doesn't, we use nil to resort to previous behavior.
+   @dlq_writer = dlq_enabled? ? execution_context.dlq_writer : nil
+
+   if data_stream_config?
+     @event_mapper = -> (e) { data_stream_event_action_tuple(e) }
+     @event_target = -> (e) { data_stream_name(e) }
+     @index = "#{data_stream_type}-#{data_stream_dataset}-#{data_stream_namespace}".freeze # default name
+   else
+     @event_mapper = -> (e) { event_action_tuple(e) }
+     @event_target = -> (e) { e.sprintf(@index) }
    end
+
    @bulk_request_metrics = metric.namespace(:bulk_requests)
    @document_level_metrics = metric.namespace(:documents)
-   @logger.info("New Elasticsearch output", :class => self.class.name, :hosts => @hosts.map(&:sanitized).map(&:to_s))
+ end
+
+ # @override post-register when ES connection established
+ def finish_register
+   assert_es_version_supports_data_streams if data_stream_config?
+   discover_cluster_uuid
+   install_template
+   setup_ilm if ilm_in_use?
+   super
  end
 
  # @override to handle proxy => '' as if none was set
@@ -297,46 +329,47 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
 
  # Receive an array of events and immediately attempt to index them (no buffering)
  def multi_receive(events)
-   until @template_installed.true?
-     sleep 1
+   wait_for_successful_connection if @after_successful_connection_done
+   retrying_submit map_events(events)
+ end
+
+ def map_events(events)
+   events.map(&@event_mapper)
+ end
+
+ def wait_for_successful_connection
+   after_successful_connection_done = @after_successful_connection_done
+   return unless after_successful_connection_done
+   stoppable_sleep 1 until after_successful_connection_done.true?
+
+   status = @after_successful_connection_thread && @after_successful_connection_thread.value
+   if status.is_a?(Exception) # check if thread 'halted' with an error
+     # keep logging that something isn't right (from every #multi_receive)
+     @logger.error "Elasticsearch setup did not complete normally, please review previously logged errors",
+                   message: status.message, exception: status.class
+   else
+     @after_successful_connection_done = nil # do not execute __method__ again if all went well
    end
-   retrying_submit(events.map {|e| event_action_tuple(e)})
  end
+ private :wait_for_successful_connection
 
  def close
    @stopping.make_true if @stopping
-   stop_template_installer
+   stop_after_successful_connection_thread
    @client.close if @client
  end
 
- # not private because used by ILM specs
- def stop_template_installer
-   @template_installer.join unless @template_installer.nil?
+ private
+
+ def stop_after_successful_connection_thread
+   @after_successful_connection_thread.join unless @after_successful_connection_thread.nil?
  end
 
- # not private for elasticsearch_spec.rb
- # Convert the event into a 3-tuple of action, params, and event
+ # Convert the event into a 3-tuple of action, params and event hash
  def event_action_tuple(event)
-   action = event.sprintf(@action)
-
-   params = {
-     :_id => @document_id ? event.sprintf(@document_id) : nil,
-     :_index => event.sprintf(@index),
-     routing_field_name => @routing ? event.sprintf(@routing) : nil
-   }
-
+   params = common_event_params(event)
    params[:_type] = get_event_type(event) if use_event_type?(nil)
 
-   if @pipeline
-     value = event.sprintf(@pipeline)
-     # convention: empty string equates to not using a pipeline
-     # this is useful when using a field reference in the pipeline setting, e.g.
-     # elasticsearch {
-     #   pipeline => "%{[@metadata][pipeline]}"
-     # }
-     params[:pipeline] = value unless value.empty?
-   end
-
    if @parent
      if @join_field
        join_value = event.get(@join_field)
@@ -348,26 +381,40 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
      end
    end
 
+   action = event.sprintf(@action || 'index')
+
    if action == 'update'
      params[:_upsert] = LogStash::Json.load(event.sprintf(@upsert)) if @upsert != ""
      params[:_script] = event.sprintf(@script) if @script != ""
      params[retry_on_conflict_action_name] = @retry_on_conflict
    end
 
-   if @version
-     params[:version] = event.sprintf(@version)
-   end
-
-   if @version_type
-     params[:version_type] = event.sprintf(@version_type)
-   end
+   params[:version] = event.sprintf(@version) if @version
+   params[:version_type] = event.sprintf(@version_type) if @version_type
 
-   [action, params, event]
+   [action, params, event.to_hash]
  end
 
- # not private for elasticsearch_spec.rb
- def retry_on_conflict_action_name
-   maximum_seen_major_version >= 7 ? :retry_on_conflict : :_retry_on_conflict
+ # @return Hash (initial) parameters for given event
+ # @private shared event params factory between index and data_stream mode
+ def common_event_params(event)
+   params = {
+     :_id => @document_id ? event.sprintf(@document_id) : nil,
+     :_index => @event_target.call(event),
+     routing_field_name => @routing ? event.sprintf(@routing) : nil
+   }
+
+   if @pipeline
+     value = event.sprintf(@pipeline)
+     # convention: empty string equates to not using a pipeline
+     # this is useful when using a field reference in the pipeline setting, e.g.
+     # elasticsearch {
+     #   pipeline => "%{[@metadata][pipeline]}"
+     # }
+     params[:pipeline] = value unless value.empty?
+   end
+
+   params
  end
 
  @@plugins = Gem::Specification.find_all{|spec| spec.name =~ /logstash-output-elasticsearch-/ }
@@ -377,38 +424,47 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
    require "logstash/outputs/elasticsearch/#{name}"
  end
 
- private
+ def retry_on_conflict_action_name
+   maximum_seen_major_version >= 7 ? :retry_on_conflict : :_retry_on_conflict
+ end
 
  def routing_field_name
    maximum_seen_major_version >= 6 ? :routing : :_routing
  end
 
  # Determine the correct value for the 'type' field for the given event
- DEFAULT_EVENT_TYPE_ES6="doc".freeze
- DEFAULT_EVENT_TYPE_ES7="_doc".freeze
+ DEFAULT_EVENT_TYPE_ES6 = "doc".freeze
+ DEFAULT_EVENT_TYPE_ES7 = "_doc".freeze
+
  def get_event_type(event)
    # Set the 'type' value for the index.
    type = if @document_type
      event.sprintf(@document_type)
    else
-     if maximum_seen_major_version < 6
-       event.get("type") || DEFAULT_EVENT_TYPE_ES6
-     elsif maximum_seen_major_version == 6
+     major_version = maximum_seen_major_version
+     if major_version < 6
+       es5_event_type(event)
+     elsif major_version == 6
        DEFAULT_EVENT_TYPE_ES6
-     elsif maximum_seen_major_version == 7
+     elsif major_version == 7
        DEFAULT_EVENT_TYPE_ES7
      else
        nil
      end
    end
 
-   if !(type.is_a?(String) || type.is_a?(Numeric))
-     @logger.warn("Bad event type! Non-string/integer type value set!", :type_class => type.class, :type_value => type.to_s, :event => event)
-   end
-
    type.to_s
  end
 
+ def es5_event_type(event)
+   type = event.get('type')
+   return DEFAULT_EVENT_TYPE_ES6 unless type
+   if !type.is_a?(String) && !type.is_a?(Numeric)
+     @logger.warn("Bad event type (non-string/integer type value set)", :type_class => type.class, :type_value => type, :event => event.to_hash)
+   end
+   type
+ end
+
  ##
  # WARNING: This method is overridden in a subclass in Logstash Core 7.7-7.8's monitoring,
  # where a `client` argument is both required and ignored. In later versions of
@@ -424,7 +480,8 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
 
  def install_template
    TemplateManager.install_template(self)
-   @template_installed.make_true
+ rescue => e
+   @logger.error("Failed to install template", message: e.message, exception: e.class, backtrace: e.backtrace)
  end
 
  def setup_ecs_compatibility_related_defaults
@@ -447,13 +504,14 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  end
 
  # To be overidden by the -java version
- VALID_HTTP_ACTIONS=["index", "delete", "create", "update"]
+ VALID_HTTP_ACTIONS = ["index", "delete", "create", "update"]
  def valid_actions
    VALID_HTTP_ACTIONS
  end
 
  def check_action_validity
-   raise LogStash::ConfigurationError, "No action specified!" unless @action
+   return if @action.nil? # not set
+   raise LogStash::ConfigurationError, "No action specified!" if @action.empty?
 
    # If we're using string interpolation, we're good!
    return if @action =~ /%{.+}/