logstash-output-elasticsearch 10.8.6-java → 11.0.3-java
- checksums.yaml +4 -4
- data/CHANGELOG.md +17 -0
- data/docs/index.asciidoc +132 -22
- data/lib/logstash/outputs/elasticsearch.rb +125 -64
- data/lib/logstash/outputs/elasticsearch/data_stream_support.rb +233 -0
- data/lib/logstash/outputs/elasticsearch/http_client.rb +9 -7
- data/lib/logstash/outputs/elasticsearch/http_client/pool.rb +49 -62
- data/lib/logstash/outputs/elasticsearch/ilm.rb +13 -45
- data/lib/logstash/outputs/elasticsearch/license_checker.rb +26 -23
- data/lib/logstash/outputs/elasticsearch/template_manager.rb +4 -6
- data/lib/logstash/outputs/elasticsearch/templates/ecs-v1/elasticsearch-8x.json +1 -0
- data/lib/logstash/plugin_mixins/elasticsearch/api_configs.rb +157 -153
- data/lib/logstash/plugin_mixins/elasticsearch/common.rb +71 -58
- data/logstash-output-elasticsearch.gemspec +3 -3
- data/spec/es_spec_helper.rb +7 -12
- data/spec/fixtures/_nodes/{5x_6x.json → 6x.json} +5 -5
- data/spec/integration/outputs/compressed_indexing_spec.rb +47 -46
- data/spec/integration/outputs/data_stream_spec.rb +61 -0
- data/spec/integration/outputs/delete_spec.rb +49 -51
- data/spec/integration/outputs/ilm_spec.rb +236 -248
- data/spec/integration/outputs/index_spec.rb +5 -2
- data/spec/integration/outputs/index_version_spec.rb +78 -82
- data/spec/integration/outputs/ingest_pipeline_spec.rb +58 -58
- data/spec/integration/outputs/painless_update_spec.rb +74 -164
- data/spec/integration/outputs/parent_spec.rb +67 -75
- data/spec/integration/outputs/retry_spec.rb +6 -6
- data/spec/integration/outputs/sniffer_spec.rb +15 -54
- data/spec/integration/outputs/templates_spec.rb +79 -81
- data/spec/integration/outputs/update_spec.rb +99 -101
- data/spec/spec_helper.rb +10 -0
- data/spec/unit/outputs/elasticsearch/data_stream_support_spec.rb +528 -0
- data/spec/unit/outputs/elasticsearch/http_client/manticore_adapter_spec.rb +1 -0
- data/spec/unit/outputs/elasticsearch/http_client/pool_spec.rb +36 -29
- data/spec/unit/outputs/elasticsearch/http_client_spec.rb +2 -3
- data/spec/unit/outputs/elasticsearch/template_manager_spec.rb +10 -12
- data/spec/unit/outputs/elasticsearch_proxy_spec.rb +1 -2
- data/spec/unit/outputs/elasticsearch_spec.rb +176 -41
- data/spec/unit/outputs/elasticsearch_ssl_spec.rb +1 -2
- data/spec/unit/outputs/error_whitelist_spec.rb +3 -2
- data/spec/unit/outputs/license_check_spec.rb +0 -16
- metadata +29 -36
- data/lib/logstash/outputs/elasticsearch/templates/ecs-disabled/elasticsearch-2x.json +0 -95
- data/lib/logstash/outputs/elasticsearch/templates/ecs-disabled/elasticsearch-5x.json +0 -46
- data/spec/fixtures/_nodes/2x_1x.json +0 -27
- data/spec/fixtures/scripts/groovy/scripted_update.groovy +0 -2
- data/spec/fixtures/scripts/groovy/scripted_update_nested.groovy +0 -2
- data/spec/fixtures/scripts/groovy/scripted_upsert.groovy +0 -2
- data/spec/integration/outputs/groovy_update_spec.rb +0 -150
- data/spec/integration/outputs/templates_5x_spec.rb +0 -98
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 143b78ff484990a35c4d9a1b54ccb040bdba7e6aa2fd69928b0553544883fabd
+  data.tar.gz: a14e65b2b499d65d9a8f3389d0c0168f0447e1358338d867dd5b05a78ca28c13
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: fd6495287f9d5dff77d6006c5495013335eb869b7a9678973567358b9ce6ed772822cda5b38bfbeff1831e72406a4e550125cfafd9cbaeab20b588ccf584a7b6
+  data.tar.gz: d6eda43b3338ad401b889ff20f7bd246b3a44f921b0f006989572cb95145a518cb7103a1c23fd930018c2552316af33d356d4fecc8210ca95f0a6259a2b1037f
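The checksums above are the SHA256/SHA512 digests of the two files packed inside the `.gem` archive (`metadata.gz` and `data.tar.gz`). A minimal sketch of verifying one of them, assuming you have extracted the component file from the archive (the path is illustrative):

```ruby
require "digest"

# Compare a file's digest against the expected hex value from
# checksums.yaml. The default algorithm matches the SHA256 section;
# pass Digest::SHA512 for the SHA512 section.
def checksum_matches?(path, expected_hex, algo: Digest::SHA256)
  algo.file(path).hexdigest == expected_hex
end
```

For example, `checksum_matches?("metadata.gz", "143b78ff…")` should return `true` for the 11.0.3 artifact if the file is intact.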
data/CHANGELOG.md
CHANGED
@@ -1,3 +1,20 @@
+## 11.0.3
+- Fixed SSL handshake hang indefinitely with proxy setup [#1032](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1032)
+
+## 11.0.2
+- Validate that required functionality in Elasticsearch is available upon initial connection [#1015](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1015)
+
+## 11.0.1
+- Fix: DLQ regression shipped in 11.0.0 [#1012](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1012)
+- [DOC] Fixed broken link in list item [#1011](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1011)
+
+## 11.0.0
+- Feat: Data stream support [#988](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/988)
+- Refactor: reviewed logging format + restored ES (initial) setup error logging
+- Feat: always check ES license [#1005](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1005)
+
+  Since Elasticsearch no longer provides an OSS artifact the plugin will no longer skip the license check on OSS Logstash.
+
 ## 10.8.6
 - Fixed an issue where a single over-size event being rejected by Elasticsearch would cause the entire batch to be retried indefinitely. The oversize event will still be retried on its own and logging has been improved to include payload sizes in this situation [#972](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/972)
 - Fixed an issue with `http_compression => true` where a well-compressed payload could fit under our outbound 20MB limit but expand beyond Elasticsearch's 100MB limit, causing bulk failures. Bulk grouping is now determined entirely by the decompressed payload size [#823](https://github.com/logstash-plugins/logstash-output-elasticsearch/issues/823)
data/docs/index.asciidoc
CHANGED
@@ -21,17 +21,9 @@ include::{include_path}/plugin_header.asciidoc[]
 
 ==== Description
 
-
-
-Elasticsearch.
-
-This output only speaks the HTTP protocol as it is the preferred protocol for
-interacting with Elasticsearch. In previous versions it was possible to
-communicate with Elasticsearch through the transport protocol, which is now
-reserved for internal cluster communication between nodes
-{ref}/modules-transport.html[communication between nodes].
-Using the transport protocol to communicate with the cluster has been deprecated
-in Elasticsearch 7.0.0 and will be removed in 8.0.0
+Elasticsearch provides near real-time search and analytics for all types of
+data. The Elasticsearch output plugin can store both time series datasets (such
+as logs, events, and metrics) and non-time series data in Elasticsearch.
 
 You can https://www.elastic.co/elasticsearch/[learn more about Elasticsearch] on
 the website landing page or in the {ref}[Elasticsearch documentation].
@@ -70,6 +62,59 @@ By having an ECS-compatible template in place, we can ensure that Elasticsearch
 is prepared to create and index fields in a way that is compatible with ECS,
 and will correctly reject events with fields that conflict and cannot be coerced.
 
+[id="plugins-{type}s-{plugin}-data-streams"]
+==== Data streams
+
+The {es} output plugin can store both time series datasets (such
+as logs, events, and metrics) and non-time series data in Elasticsearch.
+
+The data stream options are recommended for indexing time series datasets (such
+as logs, metrics, and events) into {es}:
+
+* <<plugins-{type}s-{plugin}-data_stream>>
+* <<plugins-{type}s-{plugin}-data_stream_auto_routing>>
+* <<plugins-{type}s-{plugin}-data_stream_dataset>>
+* <<plugins-{type}s-{plugin}-data_stream_namespace>>
+* <<plugins-{type}s-{plugin}-data_stream_sync_fields>>
+* <<plugins-{type}s-{plugin}-data_stream_type>>
+
+[id="plugins-{type}s-{plugin}-ds-examples"]
+===== Data stream configuration examples
+
+**Example: Basic default configuration**
+
+[source,sh]
+-----
+output {
+    elasticsearch {
+        hosts => "hostname"
+        data_stream => "true"
+    }
+}
+-----
+
+This example shows the minimal settings for processing data streams. Events
+with `data_stream.*` fields are routed to the appropriate data streams. If the
+fields are missing, routing defaults to `logs-generic-logstash`.
+
+**Example: Customize data stream name**
+
+[source,sh]
+-----
+output {
+    elasticsearch {
+        hosts => "hostname"
+        data_stream => "true"
+        data_stream_type => "metrics"
+        data_stream_dataset => "foo"
+        data_stream_namespace => "bar"
+    }
+}
+-----
+
+
+
+
 ==== Writing to different indices: best practices
 
 [NOTE]
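The routing behaviour in the examples above can be sketched in plain Ruby (an illustration of the documented precedence, not the plugin's actual code): with auto routing, `data_stream.*` event fields win, and the configured `data_stream_*` settings serve as the fallback. Using the "Customize data stream name" example's settings:

```ruby
# Compose the target stream name "<type>-<dataset>-<namespace>",
# preferring the event's data_stream.* fields over the settings.
def route(event_ds, settings)
  type      = event_ds["type"]      || settings[:data_stream_type]
  dataset   = event_ds["dataset"]   || settings[:data_stream_dataset]
  namespace = event_ds["namespace"] || settings[:data_stream_namespace]
  "#{type}-#{dataset}-#{namespace}"
end

settings = { data_stream_type: "metrics", data_stream_dataset: "foo",
             data_stream_namespace: "bar" }
```

An event with no `data_stream.*` fields would be routed to `metrics-foo-bar`, while an event carrying `data_stream.dataset => "nginx"` would go to `metrics-nginx-bar`.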
@@ -274,6 +319,12 @@ This plugin supports the following configuration options plus the
 | <<plugins-{type}s-{plugin}-cloud_auth>> |<<password,password>>|No
 | <<plugins-{type}s-{plugin}-cloud_id>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-custom_headers>> |<<hash,hash>>|No
+| <<plugins-{type}s-{plugin}-data_stream>> |<<string,string>>, one of `["true", "false", "auto"]`|No
+| <<plugins-{type}s-{plugin}-data_stream_auto_routing>> |<<boolean,boolean>>|No
+| <<plugins-{type}s-{plugin}-data_stream_dataset>> |<<string,string>>|No
+| <<plugins-{type}s-{plugin}-data_stream_namespace>> |<<string,string>>|No
+| <<plugins-{type}s-{plugin}-data_stream_sync_fields>> |<<boolean,boolean>>|No
+| <<plugins-{type}s-{plugin}-data_stream_type>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-doc_as_upsert>> |<<boolean,boolean>>|No
 | <<plugins-{type}s-{plugin}-document_id>> |<<string,string>>|No
 | <<plugins-{type}s-{plugin}-document_type>> |<<string,string>>|No
@@ -335,23 +386,20 @@ output plugins.
 ===== `action`
 
 * Value type is <<string,string>>
-* Default value is `
+* Default value is `create` for data streams, and `index` for non-time series data.
 
-Protocol agnostic (i.e. non-http, non-java specific) configs go here
-Protocol agnostic methods
 The Elasticsearch action to perform. Valid actions are:
 
-- index
-- delete
-- create
-- update
+- `index`: indexes a document (an event from Logstash).
+- `delete`: deletes a document by id (An id is required for this action)
+- `create`: indexes a document, fails if a document by that id already exists in the index.
+- `update`: updates a document by id. Update has a special case where you can upsert -- update a
   document if not already present. See the `doc_as_upsert` option. NOTE: This does not work and is not supported
   in Elasticsearch 1.x. Please upgrade to ES 2.x or greater to use this feature with Logstash!
 - A sprintf style string to change the action based on the content of the event. The value `%{[foo]}`
   would use the foo field for the action
 
-For more details on actions, check out the {ref}/docs-bulk.html[Elasticsearch
-bulk API documentation].
+For more details on actions, check out the {ref}/docs-bulk.html[Elasticsearch bulk API documentation].
 
 [id="plugins-{type}s-{plugin}-api_key"]
 ===== `api_key`
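The per-event action resolution described above can be sketched as follows (an illustration, not the plugin's code; the single-field `%{[foo]}` interpolation below is a simplified stand-in for Logstash's full sprintf support, and the `index` fallback applies to the non-data-stream case):

```ruby
# Resolve the effective action for one event: an unset action falls
# back to "index"; a sprintf-style value is filled from event fields.
def resolve_action(configured, event_fields)
  action = configured || "index"
  action.gsub(/%\{\[(\w+)\]\}/) { event_fields[Regexp.last_match(1)] }
end
```

So `action => "%{[foo]}"` with an event whose `foo` field is `update` performs an update, while leaving `action` unset indexes the event.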
@@ -405,6 +453,69 @@ Cloud ID, from the Elastic Cloud web console. If set `hosts` should not be used.
 For more details, check out the
 {logstash-ref}/connecting-to-cloud.html[Logstash-to-Cloud documentation].
 
+[id="plugins-{type}s-{plugin}-data_stream"]
+===== `data_stream`
+
+* Value can be any of: `true`, `false` and `auto`
+* Default is `false` in Logstash 7.x and `auto` starting in Logstash 8.0.
+
+Defines whether data will be indexed into an Elasticsearch data stream.
+The other `data_stream_*` settings will be used only if this setting is enabled.
+
+Logstash handles the output as a data stream when the supplied configuration
+is compatible with data streams and this value is set to `auto`.
+
+[id="plugins-{type}s-{plugin}-data_stream_auto_routing"]
+===== `data_stream_auto_routing`
+
+* Value type is <<boolean,boolean>>
+* Default value is `true`.
+
+Automatically routes events by deriving the data stream name using specific event
+fields with the `%{[data_stream][type]}-%{[data_stream][dataset]}-%{[data_stream][namespace]}` format.
+
+If enabled, the `data_stream.*` event fields will take precedence over the
+`data_stream_type`, `data_stream_dataset`, and `data_stream_namespace` settings,
+but will fall back to them if any of the fields are missing from the event.
+
+[id="plugins-{type}s-{plugin}-data_stream_dataset"]
+===== `data_stream_dataset`
+
+* Value type is <<string,string>>
+* Default value is `generic`.
+
+The data stream dataset used to construct the data stream at index time.
+
+[id="plugins-{type}s-{plugin}-data_stream_namespace"]
+===== `data_stream_namespace`
+
+* Value type is <<string,string>>
+* Default value is `default`.
+
+The data stream namespace used to construct the data stream at index time.
+
+[id="plugins-{type}s-{plugin}-data_stream_sync_fields"]
+===== `data_stream_sync_fields`
+
+* Value type is <<boolean,boolean>>
+* Default value is `true`
+
+Automatically adds and syncs the `data_stream.*` event fields if they are missing from the
+event. This ensures that fields match the name of the data stream that is receiving events.
+
+NOTE: If existing `data_stream.*` event fields do not match the data stream name
+and `data_stream_auto_routing` is disabled, the event fields will be
+overwritten with a warning.
+
+[id="plugins-{type}s-{plugin}-data_stream_type"]
+===== `data_stream_type`
+
+* Value type is <<string,string>>
+* Default value is `logs`.
+
+The data stream type used to construct the data stream at index time.
+Currently, only `logs`, `metrics` and `synthetics` are supported.
+
 [id="plugins-{type}s-{plugin}-doc_as_upsert"]
 ===== `doc_as_upsert`
 
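The `data_stream_sync_fields` behaviour documented above can be sketched as follows (a simplified illustration, not the plugin's code): missing `data_stream.*` fields are added to the event so they match the stream receiving it, using the documented defaults (`logs` / `generic` / `default`).

```ruby
# Add any missing data_stream.* fields to an event (in-place), leaving
# fields the event already carries untouched.
def sync_data_stream_fields!(event, type: "logs", dataset: "generic", namespace: "default")
  ds = (event["data_stream"] ||= {})
  ds["type"]      ||= type
  ds["dataset"]   ||= dataset
  ds["namespace"] ||= namespace
  event
end
```

An empty event would gain `data_stream => { type => logs, dataset => generic, namespace => default }`, while an event that already sets `data_stream.dataset` keeps its own value.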
@@ -457,8 +568,7 @@ If you don't set a value for this option:
 ** When Logstash provides a `pipeline.ecs_compatibility` setting, its value is used as the default
 ** Otherwise, the default value is `disabled`.
 
-Controls this plugin's compatibility with the
-https://www.elastic.co/guide/en/ecs/current/index.html[Elastic Common Schema
+Controls this plugin's compatibility with the {ecs-ref}[Elastic Common Schema
 (ECS)], including the installation of ECS-compatible index templates. The value
 of this setting affects the _default_ values of:
 
data/lib/logstash/outputs/elasticsearch.rb
CHANGED
@@ -3,8 +3,8 @@ require "logstash/namespace"
 require "logstash/environment"
 require "logstash/outputs/base"
 require "logstash/json"
-require "concurrent"
-require "stud/
+require "concurrent/atomic/atomic_boolean"
+require "stud/interval"
 require "socket" # for Socket.gethostname
 require "thread" # for safe queueing
 require "uri" # for escaping user input
@@ -92,6 +92,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
   require "logstash/plugin_mixins/elasticsearch/api_configs"
   require "logstash/plugin_mixins/elasticsearch/common"
   require "logstash/outputs/elasticsearch/ilm"
+  require "logstash/outputs/elasticsearch/data_stream_support"
   require 'logstash/plugin_mixins/ecs_compatibility_support'
 
   # Protocol agnostic methods
@@ -106,6 +107,9 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
   # Generic/API config options that any document indexer output needs
   include(LogStash::PluginMixins::ElasticSearch::APIConfigs)
 
+  # DS support
+  include(LogStash::Outputs::ElasticSearch::DataStreamSupport)
+
   DEFAULT_POLICY = "logstash-policy"
 
   config_name "elasticsearch"
@@ -122,7 +126,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
   # would use the foo field for the action
   #
   # For more details on actions, check out the http://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html[Elasticsearch bulk API documentation]
-  config :action, :validate => :string
+  config :action, :validate => :string # :default => "index" unless data_stream
 
   # The index to write events to. This can be dynamic using the `%{foo}` syntax.
   # The default value will partition your indices by day so you can more easily
@@ -247,6 +251,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
   # ILM policy to use, if undefined the default policy will be used.
   config :ilm_policy, :validate => :string, :default => DEFAULT_POLICY
 
+  attr_reader :client
   attr_reader :default_index
   attr_reader :default_ilm_rollover_alias
   attr_reader :default_template_name
@@ -257,26 +262,53 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
   end
 
   def register
-    @
+    @after_successful_connection_done = Concurrent::AtomicBoolean.new(false)
     @stopping = Concurrent::AtomicBoolean.new(false)
-    # To support BWC, we check if DLQ exists in core (< 5.4). If it doesn't, we use nil to resort to previous behavior.
-    @dlq_writer = dlq_enabled? ? execution_context.dlq_writer : nil
 
     check_action_validity
 
+    @logger.info("New Elasticsearch output", :class => self.class.name, :hosts => @hosts.map(&:sanitized).map(&:to_s))
+
     # the license_checking behaviour in the Pool class is externalized in the LogStash::ElasticSearchOutputLicenseChecker
     # class defined in license_check.rb. This license checking is specific to the elasticsearch output here and passed
     # to build_client down to the Pool class.
-    build_client(LicenseChecker.new(@logger))
+    @client = build_client(LicenseChecker.new(@logger))
+
+    @after_successful_connection_thread = after_successful_connection do
+      begin
+        finish_register
+        true # thread.value
+      rescue => e
+        # we do not want to halt the thread with an exception as that has consequences for LS
+        e # thread.value
+      ensure
+        @after_successful_connection_done.make_true
+      end
+    end
 
-
-
-
-
+    # To support BWC, we check if DLQ exists in core (< 5.4). If it doesn't, we use nil to resort to previous behavior.
+    @dlq_writer = dlq_enabled? ? execution_context.dlq_writer : nil
+
+    if data_stream_config?
+      @event_mapper = -> (e) { data_stream_event_action_tuple(e) }
+      @event_target = -> (e) { data_stream_name(e) }
+      @index = "#{data_stream_type}-#{data_stream_dataset}-#{data_stream_namespace}".freeze # default name
+    else
+      @event_mapper = -> (e) { event_action_tuple(e) }
+      @event_target = -> (e) { e.sprintf(@index) }
     end
+
     @bulk_request_metrics = metric.namespace(:bulk_requests)
     @document_level_metrics = metric.namespace(:documents)
-
+  end
+
+  # @override post-register when ES connection established
+  def finish_register
+    assert_es_version_supports_data_streams if data_stream_config?
+    discover_cluster_uuid
+    install_template
+    setup_ilm if ilm_in_use?
+    super
   end
 
   # @override to handle proxy => '' as if none was set
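The connection gate introduced in `register` above can be sketched with stdlib threads (an illustration of the pattern, substituting a `Queue` for `Concurrent::AtomicBoolean`, and not the plugin's actual code): setup runs on a background thread, any error becomes the thread's value instead of killing it, and the "done" signal always fires so waiters can proceed and inspect the outcome.

```ruby
require "thread"

done = Queue.new  # stands in for the @after_successful_connection_done flag

setup_thread = Thread.new do
  begin
    # finish_register would install templates / set up ILM here
    :ok  # becomes thread.value on success
  rescue => e
    e    # expose the failure as the thread value rather than halting
  ensure
    done << true  # always signal completion, success or failure
  end
end

done.pop                     # a writer blocks here until setup completed
status = setup_thread.value  # an Exception value means setup failed
warn "setup failed: #{status.message}" if status.is_a?(Exception)
```

This mirrors why `multi_receive` can log "setup did not complete normally" on every batch: the thread value persists the original failure for later inspection.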
@@ -297,46 +329,47 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
 
   # Receive an array of events and immediately attempt to index them (no buffering)
   def multi_receive(events)
-
-
+    wait_for_successful_connection if @after_successful_connection_done
+    retrying_submit map_events(events)
+  end
+
+  def map_events(events)
+    events.map(&@event_mapper)
+  end
+
+  def wait_for_successful_connection
+    after_successful_connection_done = @after_successful_connection_done
+    return unless after_successful_connection_done
+    stoppable_sleep 1 until after_successful_connection_done.true?
+
+    status = @after_successful_connection_thread && @after_successful_connection_thread.value
+    if status.is_a?(Exception) # check if thread 'halted' with an error
+      # keep logging that something isn't right (from every #multi_receive)
+      @logger.error "Elasticsearch setup did not complete normally, please review previously logged errors",
+                    message: status.message, exception: status.class
+    else
+      @after_successful_connection_done = nil # do not execute __method__ again if all went well
     end
-    retrying_submit(events.map {|e| event_action_tuple(e)})
   end
+  private :wait_for_successful_connection
 
   def close
     @stopping.make_true if @stopping
-
+    stop_after_successful_connection_thread
     @client.close if @client
   end
 
-
-
-
+  private
+
+  def stop_after_successful_connection_thread
+    @after_successful_connection_thread.join unless @after_successful_connection_thread.nil?
   end
 
-  #
-  # Convert the event into a 3-tuple of action, params, and event
+  # Convert the event into a 3-tuple of action, params and event hash
   def event_action_tuple(event)
-
-
-    params = {
-    :_id => @document_id ? event.sprintf(@document_id) : nil,
-    :_index => event.sprintf(@index),
-    routing_field_name => @routing ? event.sprintf(@routing) : nil
-    }
-
+    params = common_event_params(event)
     params[:_type] = get_event_type(event) if use_event_type?(nil)
 
-    if @pipeline
-      value = event.sprintf(@pipeline)
-      # convention: empty string equates to not using a pipeline
-      # this is useful when using a field reference in the pipeline setting, e.g.
-      # elasticsearch {
-      #   pipeline => "%{[@metadata][pipeline]}"
-      # }
-      params[:pipeline] = value unless value.empty?
-    end
-
     if @parent
       if @join_field
         join_value = event.get(@join_field)
@@ -348,26 +381,54 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
       end
     end
 
+    action = event.sprintf(@action || 'index')
+
     if action == 'update'
       params[:_upsert] = LogStash::Json.load(event.sprintf(@upsert)) if @upsert != ""
       params[:_script] = event.sprintf(@script) if @script != ""
       params[retry_on_conflict_action_name] = @retry_on_conflict
     end
 
-    if @version
-
-
+    params[:version] = event.sprintf(@version) if @version
+    params[:version_type] = event.sprintf(@version_type) if @version_type
+
+    EventActionTuple.new(action, params, event)
+  end
 
-
-
+  class EventActionTuple < Array # TODO: acting as an array for compatibility
+
+    def initialize(action, params, event, event_data = nil)
+      super(3)
+      self[0] = action
+      self[1] = params
+      self[2] = event_data || event.to_hash
+      @event = event
     end
 
-
+    attr_reader :event
+
   end
 
-  #
-
-
+  # @return Hash (initial) parameters for given event
+  # @private shared event params factory between index and data_stream mode
+  def common_event_params(event)
+    params = {
+      :_id => @document_id ? event.sprintf(@document_id) : nil,
+      :_index => @event_target.call(event),
+      routing_field_name => @routing ? event.sprintf(@routing) : nil
+    }
+
+    if @pipeline
+      value = event.sprintf(@pipeline)
+      # convention: empty string equates to not using a pipeline
+      # this is useful when using a field reference in the pipeline setting, e.g.
+      # elasticsearch {
+      #   pipeline => "%{[@metadata][pipeline]}"
+      # }
+      params[:pipeline] = value unless value.empty?
+    end
+
+    params
   end
 
   @@plugins = Gem::Specification.find_all{|spec| spec.name =~ /logstash-output-elasticsearch-/ }
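The `EventActionTuple` added here subclasses `Array` so existing callers that destructure the tuple keep working. A standalone re-creation of the idea (mirroring the diff, not importing the plugin itself; the sample values are illustrative):

```ruby
# An Array subclass keeps `action, params, data = tuple` destructuring
# working while also exposing the originating event object.
class EventActionTuple < Array
  attr_reader :event

  def initialize(action, params, event, event_data = nil)
    super(3)               # pre-size to three slots
    self[0] = action
    self[1] = params
    self[2] = event_data || event.to_hash  # a plain Hash responds to to_hash
    @event = event
  end
end

tuple = EventActionTuple.new("index", { :_id => nil }, { "message" => "hi" })
action, params, data = tuple  # old-style 3-tuple destructuring still works
```

This is why the refactor could thread the original event through to the DLQ path without changing every call site that treats the tuple as a plain array.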
@@ -377,35 +438,33 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
     require "logstash/outputs/elasticsearch/#{name}"
   end
 
-
+  def retry_on_conflict_action_name
+    maximum_seen_major_version >= 7 ? :retry_on_conflict : :_retry_on_conflict
+  end
 
   def routing_field_name
-
+    :routing
   end
 
   # Determine the correct value for the 'type' field for the given event
-  DEFAULT_EVENT_TYPE_ES6="doc".freeze
-  DEFAULT_EVENT_TYPE_ES7="_doc".freeze
+  DEFAULT_EVENT_TYPE_ES6 = "doc".freeze
+  DEFAULT_EVENT_TYPE_ES7 = "_doc".freeze
+
   def get_event_type(event)
     # Set the 'type' value for the index.
     type = if @document_type
       event.sprintf(@document_type)
     else
-
-
-      elsif maximum_seen_major_version == 6
+      major_version = maximum_seen_major_version
+      if major_version == 6
         DEFAULT_EVENT_TYPE_ES6
-      elsif
+      elsif major_version == 7
         DEFAULT_EVENT_TYPE_ES7
       else
         nil
       end
     end
 
-    if !(type.is_a?(String) || type.is_a?(Numeric))
-      @logger.warn("Bad event type! Non-string/integer type value set!", :type_class => type.class, :type_value => type.to_s, :event => event)
-    end
-
     type.to_s
   end
 
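The version-dependent type defaults in `get_event_type` above can be restated in isolation (a sketch, not the plugin's code): ES 6 documents default to type `doc`, ES 7 to `_doc`, and ES 8+ drops the type entirely.

```ruby
# Map the cluster's major version to the default document type,
# or nil when mapping types no longer exist.
def default_event_type(major_version)
  case major_version
  when 6 then "doc"
  when 7 then "_doc"
  end  # implicitly nil for 8 and newer
end
```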
@@ -417,14 +476,15 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
   # @param noop_required_client [nil]: required `nil` for legacy reasons.
   # @return [Boolean]
   def use_event_type?(noop_required_client)
-    # always set type for ES
+    # always set type for ES 6
     # for ES 7 only set it if the user defined it
     (maximum_seen_major_version < 7) || (maximum_seen_major_version == 7 && @document_type)
   end
 
   def install_template
     TemplateManager.install_template(self)
-
+  rescue => e
+    @logger.error("Failed to install template", message: e.message, exception: e.class, backtrace: e.backtrace)
   end
 
   def setup_ecs_compatibility_related_defaults
@@ -447,13 +507,14 @@
   end
 
   # To be overidden by the -java version
-  VALID_HTTP_ACTIONS=["index", "delete", "create", "update"]
+  VALID_HTTP_ACTIONS = ["index", "delete", "create", "update"]
   def valid_actions
     VALID_HTTP_ACTIONS
   end
 
   def check_action_validity
-
+    return if @action.nil? # not set
+    raise LogStash::ConfigurationError, "No action specified!" if @action.empty?
 
     # If we're using string interpolation, we're good!
     return if @action =~ /%{.+}/
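The validity check above distinguishes three cases, which can be sketched in isolation (simplified, not the plugin's code): `nil` means "use the default", an empty string is a configuration error, and a sprintf-style action can only be validated per event at runtime.

```ruby
VALID_HTTP_ACTIONS = ["index", "delete", "create", "update"]

# Return true when the configured action is acceptable at startup;
# raise for the explicit misconfiguration of an empty action.
def action_valid?(action)
  return true if action.nil?                  # not set: default applies
  raise ArgumentError, "No action specified!" if action.empty?
  return true if action =~ /%{.+}/            # interpolated per event later
  VALID_HTTP_ACTIONS.include?(action)
end
```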