logstash-output-elasticsearch 7.4.3-java → 8.0.0-java
- checksums.yaml +5 -5
- data/CHANGELOG.md +5 -18
- data/docs/index.asciidoc +13 -50
- data/lib/logstash/outputs/elasticsearch/common.rb +39 -43
- data/lib/logstash/outputs/elasticsearch/common_configs.rb +2 -11
- data/lib/logstash/outputs/elasticsearch/http_client.rb +22 -27
- data/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb +2 -2
- data/lib/logstash/outputs/elasticsearch/http_client/pool.rb +12 -31
- data/lib/logstash/outputs/elasticsearch/template_manager.rb +6 -4
- data/logstash-output-elasticsearch.gemspec +1 -1
- data/spec/es_spec_helper.rb +0 -6
- data/spec/integration/outputs/compressed_indexing_spec.rb +44 -46
- data/spec/integration/outputs/delete_spec.rb +49 -51
- data/spec/integration/outputs/groovy_update_spec.rb +129 -131
- data/spec/integration/outputs/index_version_spec.rb +81 -82
- data/spec/integration/outputs/ingest_pipeline_spec.rb +49 -51
- data/spec/integration/outputs/painless_update_spec.rb +130 -170
- data/spec/integration/outputs/parent_spec.rb +55 -149
- data/spec/integration/outputs/sniffer_spec.rb +2 -5
- data/spec/integration/outputs/templates_5x_spec.rb +82 -81
- data/spec/integration/outputs/templates_spec.rb +81 -81
- data/spec/integration/outputs/update_spec.rb +99 -101
- data/spec/unit/outputs/elasticsearch/http_client/manticore_adapter_spec.rb +5 -30
- data/spec/unit/outputs/elasticsearch/http_client/pool_spec.rb +0 -3
- data/spec/unit/outputs/elasticsearch/http_client_spec.rb +12 -11
- data/spec/unit/outputs/elasticsearch/template_manager_spec.rb +25 -13
- data/spec/unit/outputs/elasticsearch_spec.rb +1 -10
- metadata +4 -6
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
-SHA1:
-  metadata.gz:
-  data.tar.gz:
+SHA1:
+  metadata.gz: 54e5ec0cdf1cd6e3331fa31ed9794128debfc581
+  data.tar.gz: 37e763dae42fb2b01fdbd601356847fd5515bf4a
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 43493de2f061e43a2d94cb38b0047307fa8d6713f8bef930f8754fd4e312a1959124b8f3ba9a5946bfa1774ba619bc1e91de1c0022a054c297e669b3fd703794
+  data.tar.gz: 32ce21a37acfefdcfa46230143ab4c84fee33d7866d1e47eaca0f955be77b2a256c22e29dbbffdb68b816e6f82c2e63d55c206e3ac1be2d9c733ac086ba11c5c
data/CHANGELOG.md
CHANGED
@@ -1,21 +1,8 @@
-##
-
-
-
-
-  - Use `#response_body` instead of `#body` when debugging response from the server #679
-  - Docs: Add DLQ policy section
-  - Fix passing of String instead of SafeURI to BadURIError
-
-## 7.4.1
-  - Properly detect if DLQ is supported and enabled
-
-## 7.4.0
-  - Retry all non-200 responses of the bulk API indefinitely
-  - Improve documentation on retry codes
-
-## 7.3.8
-  - Fix bug where java class names were logged rather than actual host names in various scenarios
+## 8.0.0
+  - Breaking: make deprecated options :flush_size and :idle_flush_time obsolete
+  - Remove obsolete options :max_retries and :retry_max_items
+  - Fix: handling of initial single big event
+  - Fix: typo was enabling http compression by default this returns it back to false
 
 ## 7.3.7
   - Properly support characters needing escaping in users / passwords across multiple SafeURI implementions (pre/post LS 5.5.1)
data/docs/index.asciidoc
CHANGED
@@ -23,7 +23,7 @@ include::{include_path}/plugin_header.asciidoc[]
 .Compatibility Note
 [NOTE]
 ================================================================================
-Starting with Elasticsearch 5.3, there's an {ref}
+Starting with Elasticsearch 5.3, there's an {ref}modules-http.html[HTTP setting]
 called `http.content_type.required`. If this option is set to `true`, and you
 are using Logstash 2.4 through 5.2, you need to update the Elasticsearch output
 plugin to version 6.2.5 or higher.
@@ -41,52 +41,34 @@ to upgrade Logstash in lock-step.
 You can learn more about Elasticsearch at <https://www.elastic.co/products/elasticsearch>
 
 ==== Template management for Elasticsearch 5.x
-
 Index template for this version (Logstash 5.0) has been changed to reflect Elasticsearch's mapping changes in version 5.0.
 Most importantly, the subfield for string multi-fields has changed from `.raw` to `.keyword` to match ES default
 behavior.
 
-**Users installing ES 5.x and LS 5.x**
-
+** Users installing ES 5.x and LS 5.x **
 This change will not affect you and you will continue to use the ES defaults.
 
-**Users upgrading from LS 2.x to LS 5.x with ES 5.x**
-
+** Users upgrading from LS 2.x to LS 5.x with ES 5.x **
 LS will not force upgrade the template, if `logstash` template already exists. This means you will still use
 `.raw` for sub-fields coming from 2.x. If you choose to use the new template, you will have to reindex your data after
 the new template is installed.
 
 ==== Retry Policy
 
-The retry policy has changed significantly in the
+The retry policy has changed significantly in the 2.2.0 release.
 This plugin uses the Elasticsearch bulk API to optimize its imports into Elasticsearch. These requests may experience
-either partial or total failures.
-request are handled differently than error codes for individual documents.
+either partial or total failures.
 
-
+The following errors are retried infinitely:
 
-
+- Network errors (inability to connect)
+- 429 (Too many requests) and
+- 503 (Service unavailable) errors
 
-
-* 409 errors (conflict) are logged as a warning and dropped.
-
-Note that 409 exceptions are no longer retried. Please set a higher `retry_on_conflict` value if you experience 409 exceptions.
+NOTE: 409 exceptions are no longer retried. Please set a higher `retry_on_conflict` value if you experience 409 exceptions.
 It is more performant for Elasticsearch to retry these exceptions than this plugin.
 
-
-==== DLQ Policy
-
-Mapping (404) errors from Elasticsearch can lead to data loss. Unfortunately
-mapping errors cannot be handled without human intervention and without looking
-at the field that caused the mapping mismatch. If the DLQ is enabled, the
-original events causing the mapping errors are stored in a file that can be
-processed at a later time. Often times, the offending field can be removed and
-re-indexed to Elasticsearch. If the DLQ is not enabled, and a mapping error
-happens, the problem is logged as a warning, and the event is dropped. See
-<<dead-letter-queues>> for more information about processing events in the DLQ.
-
-==== Batch Sizes
-
+==== Batch Sizes ====
 This plugin attempts to send batches of events as a single request. However, if
 a request exceeds 20MB we will break it up until multiple batch requests. If a single document exceeds 20MB it will be sent as a single request.
 
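The retry-policy note above recommends delegating 409 handling to Elasticsearch via `retry_on_conflict`. A hypothetical pipeline snippet (the host, the `%{id}` field, and the value 5 are all illustrative, though the option names are this plugin's real settings) might look like:

```
output {
  elasticsearch {
    hosts             => ["127.0.0.1:9200"]
    action            => "update"
    document_id       => "%{id}"
    retry_on_conflict => 5
  }
}
```

With this, Elasticsearch itself re-attempts the update on version conflicts instead of the plugin re-sending the whole request.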
@@ -252,15 +234,6 @@ Set the Elasticsearch errors in the whitelist that you don't want to log.
 A useful example is when you want to skip all 409 errors
 which are `document_already_exists_exception`.
 
-[id="plugins-{type}s-{plugin}-flush_size"]
-===== `flush_size` (DEPRECATED)
-
-* DEPRECATED WARNING: This configuration item is deprecated and may not be available in future versions.
-* Value type is <<number,number>>
-* There is no default value for this setting.
-
-
-
 [id="plugins-{type}s-{plugin}-healthcheck_path"]
 ===== `healthcheck_path`
 
@@ -298,15 +271,6 @@ Any special characters present in the URLs here MUST be URL escaped! This means
 
 Enable gzip compression on requests. Note that response compression is on by default for Elasticsearch v5.0 and beyond
 
-[id="plugins-{type}s-{plugin}-idle_flush_time"]
-===== `idle_flush_time` (DEPRECATED)
-
-* DEPRECATED WARNING: This configuration item is deprecated and may not be available in future versions.
-* Value type is <<number,number>>
-* Default value is `1`
-
-
-
 [id="plugins-{type}s-{plugin}-index"]
 ===== `index`
 
@@ -495,8 +459,7 @@ Set script name for scripted update mode
 * Value type is <<string,string>>
 * Default value is `"painless"`
 
-Set the language of the used script. If not set, this defaults to painless in ES 5.0
-When using indexed (stored) scripts on Elasticsearch 6 and higher, you must set this parameter to `""` (empty string).
+Set the language of the used script. If not set, this defaults to painless in ES 5.0
 
 [id="plugins-{type}s-{plugin}-script_type"]
 ===== `script_type`
|
|
696
659
|
|
697
660
|
|
698
661
|
[id="plugins-{type}s-{plugin}-common-options"]
|
699
|
-
include::{include_path}/{type}.asciidoc[]
|
662
|
+
include::{include_path}/{type}.asciidoc[]
|
data/lib/logstash/outputs/elasticsearch/common.rb
CHANGED
@@ -4,10 +4,14 @@ module LogStash; module Outputs; class ElasticSearch;
   module Common
     attr_reader :client, :hosts
 
-    # These codes
-
-
-
+    # These are codes for temporary recoverable conditions
+    # 429 just means that ES has too much traffic ATM
+    # 503 means it , or a proxy is temporarily unavailable
+    RETRYABLE_CODES = [429, 503]
+
+    DLQ_CODES = [400, 404]
+    SUCCESS_CODES = [200, 201]
+    CONFLICT_CODE = 409
 
     # When you use external versioning, you are communicating that you want
     # to ignore conflicts. More obviously, since an external version is a
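Grouping the status codes into named constants lets the bulk-response loop read as policy. A standalone sketch of how those groups map to actions; `classify` and the returned symbols are illustrative, not the plugin's API:

```ruby
# Standalone sketch: route a bulk-response status code the way the
# constant groups above do. `classify` is a stand-in name.
RETRYABLE_CODES = [429, 503]
DLQ_CODES       = [400, 404]
SUCCESS_CODES   = [200, 201]
CONFLICT_CODE   = 409

def classify(status)
  if SUCCESS_CODES.include?(status)
    :ok                 # indexed fine, move on
  elsif CONFLICT_CODE == status
    :drop_and_warn      # 409: log a warning and drop the event
  elsif DLQ_CODES.include?(status)
    :dead_letter_queue  # mapping errors need human intervention
  else
    :retry              # everything else (incl. 429/503) is retried
  end
end

p [201, 409, 404, 503].map { |s| classify(s) }
# => [:ok, :drop_and_warn, :dead_letter_queue, :retry]
```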
@@ -18,7 +22,7 @@ module LogStash; module Outputs; class ElasticSearch;
     def register
       @stopping = Concurrent::AtomicBoolean.new(false)
       # To support BWC, we check if DLQ exists in core (< 5.4). If it doesn't, we use nil to resort to previous behavior.
-      @dlq_writer =
+      @dlq_writer = supports_dlq? ? execution_context.dlq_writer : nil
 
       setup_hosts # properly sets @hosts
       build_client
@@ -30,13 +34,7 @@ module LogStash; module Outputs; class ElasticSearch;
 
     # Receive an array of events and immediately attempt to index them (no buffering)
     def multi_receive(events)
-
-        events.each_slice(@flush_size) do |slice|
-          retrying_submit(slice.map {|e| event_action_tuple(e) })
-        end
-      else
-        retrying_submit(events.map {|e| event_action_tuple(e)})
-      end
+      retrying_submit(events.map {|e| event_action_tuple(e)})
     end
 
     # Convert the event into a 3-tuple of action, params, and event
@@ -136,12 +134,12 @@ module LogStash; module Outputs; class ElasticSearch;
       # - For 409, we log and drop. there is nothing we can do
       # - For a mapping error, we send to dead letter queue for a human to intervene at a later point.
       # - For everything else there's mastercard. Yep, and we retry indefinitely. This should fix #572 and other transient network issues
-      if
+      if SUCCESS_CODES.include?(status)
         next
-      elsif
+      elsif CONFLICT_CODE == status
         @logger.warn "Failed action.", status: status, action: action, response: response if !failure_type_logging_whitelist.include?(failure["type"])
         next
-      elsif
+      elsif DLQ_CODES.include?(status)
         action_event = action[2]
         # To support bwc, we check if DLQ exists. otherwise we log and drop event (previous behavior)
         if @dlq_writer
@@ -176,15 +174,8 @@ module LogStash; module Outputs; class ElasticSearch;
         params[:pipeline] = event.sprintf(@pipeline)
       end
 
-
-
-        join_value = event.get(@join_field)
-        parent_value = event.sprintf(@parent)
-        event.set(@join_field, { "name" => join_value, "parent" => parent_value })
-        params[:_routing] = event.sprintf(@parent)
-      else
-        params[:parent] = event.sprintf(@parent)
-      end
+      if @parent
+        params[:parent] = event.sprintf(@parent)
       end
 
       if @action == 'update'
@@ -253,22 +244,29 @@ module LogStash; module Outputs; class ElasticSearch;
         sleep_interval = next_sleep_interval(sleep_interval)
         retry unless @stopping.true?
       rescue ::LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError => e
-
-
-
-
-
-
-
-
-
-
+        if RETRYABLE_CODES.include?(e.response_code)
+          log_hash = {:code => e.response_code, :url => e.url.sanitized.to_s}
+          log_hash[:body] = e.body if @logger.debug? # Generally this is too verbose
+          message = "Encountered a retryable error. Will Retry with exponential backoff "
+
+          # We treat 429s as a special case because these really aren't errors, but
+          # rather just ES telling us to back off a bit, which we do.
+          # The other retryable code is 503, which are true errors
+          # Even though we retry the user should be made aware of these
+          if e.response_code == 429
+            logger.debug(message, log_hash)
+          else
+            logger.error(message, log_hash)
+          end
+
+          sleep_interval = sleep_for_interval(sleep_interval)
+          retry
         else
-
+          log_hash = {:code => e.response_code,
+                      :response_body => e.response_body}
+          log_hash[:request_body] = e.request_body if @logger.debug?
+          @logger.error("Got a bad response code from server, but this code is not considered retryable. Request will be dropped", log_hash)
         end
-
-        sleep_interval = sleep_for_interval(sleep_interval)
-        retry
       rescue => e
         # Stuff that should never happen
         # For all other errors print out full connection issues
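The retryable branch above waits via `sleep_for_interval`/`next_sleep_interval`, i.e. an exponential backoff. A minimal sketch, assuming a doubling interval capped at `retry_max_interval` (default 64 in this plugin's config); the constant name and starting value of 2 here are illustrative stand-ins:

```ruby
# Sketch of exponential backoff with a cap: double the wait after each
# failure, never exceeding RETRY_MAX_INTERVAL (an assumed stand-in for
# the plugin's retry_max_interval setting, default 64).
RETRY_MAX_INTERVAL = 64

def next_sleep_interval(current)
  [current * 2, RETRY_MAX_INTERVAL].min
end

intervals = []
interval = 2
7.times do
  intervals << interval
  interval = next_sleep_interval(interval)
end
p intervals  # => [2, 4, 8, 16, 32, 64, 64]
```

The cap keeps a long outage from growing the wait unboundedly while still backing off quickly at first.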
@@ -281,16 +279,14 @@ module LogStash; module Outputs; class ElasticSearch;
 
       @logger.debug("Failed actions for last bad bulk request!", :actions => actions)
 
+      # We retry until there are no errors! Errors should all go to the retry queue
       sleep_interval = sleep_for_interval(sleep_interval)
       retry unless @stopping.true?
       end
     end
 
-    def
-
-      # See more in: https://github.com/elastic/logstash/issues/8064
-      respond_to?(:execution_context) && execution_context.respond_to?(:dlq_writer) &&
-        !execution_context.dlq_writer.inner_writer.is_a?(::LogStash::Util::DummyDeadLetterQueueWriter)
+    def supports_dlq?
+      respond_to?(:execution_context) && execution_context.respond_to?(:dlq_writer)
     end
   end
 end; end; end
data/lib/logstash/outputs/elasticsearch/common_configs.rb
CHANGED
@@ -78,9 +78,6 @@ module LogStash; module Outputs; class ElasticSearch
       # This can be dynamic using the `%{foo}` syntax.
       mod.config :parent, :validate => :string, :default => nil
 
-      # For child documents, name of the join field
-      mod.config :join_field, :validate => :string, :default => nil
-
       # Sets the host(s) of the remote instance. If given an array it will load balance requests across the hosts specified in the `hosts` parameter.
       # Remember the `http` protocol uses the http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-http.html#modules-http[http] address (eg. 9200, not 9300).
       # `"127.0.0.1"`
@@ -94,9 +91,9 @@ module LogStash; module Outputs; class ElasticSearch
       # Any special characters present in the URLs here MUST be URL escaped! This means `#` should be put in as `%23` for instance.
       mod.config :hosts, :validate => :uri, :default => [::LogStash::Util::SafeURI.new("//127.0.0.1")], :list => true
 
-      mod.config :flush_size, :validate => :number, :
+      mod.config :flush_size, :validate => :number, :obsolete => "This setting is no longer available as we now try to restrict bulk requests to sane sizes. See the 'Batch Sizes' section of the docs. If you think you still need to restrict payloads based on the number, not size, of events, please open a ticket."
 
-      mod.config :idle_flush_time, :validate => :number, :
+      mod.config :idle_flush_time, :validate => :number, :obsolete => "This settings is no longer valid. This was a no-op now as every pipeline batch is flushed synchronously obviating the need for this option."
 
       # Set upsert content for update mode.s
       # Create a new document with this parameter as json string if `document_id` doesn't exists
@@ -106,9 +103,6 @@ module LogStash; module Outputs; class ElasticSearch
       # Create a new document with source if `document_id` doesn't exist in Elasticsearch
       mod.config :doc_as_upsert, :validate => :boolean, :default => false
 
-      #Obsolete since 4.1.0
-      mod.config :max_retries, :obsolete => "This setting no longer does anything. Please remove it from your config"
-
       # Set script name for scripted update mode
       mod.config :script, :validate => :string, :default => ""
 
@@ -133,9 +127,6 @@ module LogStash; module Outputs; class ElasticSearch
       # Set max interval in seconds between bulk retries.
       mod.config :retry_max_interval, :validate => :number, :default => 64
 
-      #Obsolete since 4.1.0
-      mod.config :retry_max_items, :obsolete => "This setting no longer does anything. Please remove it from your config"
-
       # The number of times Elasticsearch should internally retry an update/upserted document
      # See the https://www.elastic.co/guide/en/elasticsearch/guide/current/partial-updates.html[partial updates]
       # for more info
data/lib/logstash/outputs/elasticsearch/http_client.rb
CHANGED
@@ -50,11 +50,11 @@ module LogStash; module Outputs; class ElasticSearch;
     # through a special http path, such as using mod_rewrite.
     def initialize(options={})
       @logger = options[:logger]
-
+
       # Again, in case we use DEFAULT_OPTIONS in the future, uncomment this.
       # @options = DEFAULT_OPTIONS.merge(options)
       @options = options
-
+
       @url_template = build_url_template
 
       @pool = build_pool(@options)
@@ -62,7 +62,7 @@ module LogStash; module Outputs; class ElasticSearch;
       # connection pool at the same time
       @bulk_path = @options[:bulk_path]
     end
-
+
     def build_url_template
       {
         :scheme => self.scheme,
@@ -82,8 +82,9 @@ module LogStash; module Outputs; class ElasticSearch;
       template_put(name, template)
     end
 
-    def
-      @pool.
+    def get_version
+      url, response = @pool.get("")
+      LogStash::Json.load(response.body)["version"]
     end
 
     def bulk(actions)
@@ -106,7 +107,7 @@ module LogStash; module Outputs; class ElasticSearch;
       if http_compression
         body_stream.set_encoding "BINARY"
         stream_writer = Zlib::GzipWriter.new(body_stream, Zlib::DEFAULT_COMPRESSION, Zlib::DEFAULT_STRATEGY)
-      else
+      else
         stream_writer = body_stream
       end
       bulk_responses = []
@@ -115,8 +116,8 @@ module LogStash; module Outputs; class ElasticSearch;
           action.map {|line| LogStash::Json.dump(line)}.join("\n") :
           LogStash::Json.dump(action)
         as_json << "\n"
-        if
-          bulk_responses << bulk_send(body_stream)
+        if (body_stream.size + as_json.bytesize) > TARGET_BULK_BYTES
+          bulk_responses << bulk_send(body_stream) unless body_stream.size == 0
         end
         stream_writer.write(as_json)
       end
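When `http_compression` is on, the serialized actions are streamed through `Zlib::GzipWriter` into the same `StringIO` that is later posted with a `Content-Encoding: gzip` header. A self-contained round-trip sketch of that mechanism (the header handling itself is not shown):

```ruby
require 'zlib'
require 'stringio'

# Round-trip sketch of the compression path: bulk lines are written
# through a GzipWriter wrapping the request-body StringIO.
body_stream = StringIO.new
body_stream.set_encoding "BINARY"
stream_writer = Zlib::GzipWriter.new(body_stream, Zlib::DEFAULT_COMPRESSION, Zlib::DEFAULT_STRATEGY)
stream_writer.write(%Q({"index":{}}\n{"message":"hello"}\n))
stream_writer.finish  # write the gzip trailer without closing body_stream

compressed = body_stream.string
decoded = Zlib::GzipReader.new(StringIO.new(compressed)).read
puts decoded.include?("hello")  # => true
```

Using `finish` rather than `close` leaves the underlying `StringIO` open, which matches the plugin's `!body_stream.closed?` reset logic in `bulk_send`.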
@@ -135,29 +136,22 @@ module LogStash; module Outputs; class ElasticSearch;
 
     def bulk_send(body_stream)
       params = http_compression ? {:headers => {"Content-Encoding" => "gzip"}} : {}
-      # Discard the URL
-      response = @pool.post(@bulk_path, params, body_stream.string)
+      # Discard the URL
+      _, response = @pool.post(@bulk_path, params, body_stream.string)
       if !body_stream.closed?
         body_stream.truncate(0)
         body_stream.seek(0)
       end
-
-      if response.code != 200
-        raise ::LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError.new(
-          response.code, @bulk_path, body_stream.to_s, response.body
-        )
-      end
-
       LogStash::Json.load(response.body)
     end
 
     def get(path)
-      response = @pool.get(path, nil)
+      url, response = @pool.get(path, nil)
       LogStash::Json.load(response.body)
     end
 
     def post(path, params = {}, body_string)
-      response = @pool.post(path, params, body_string)
+      url, response = @pool.post(path, params, body_string)
       LogStash::Json.load(response.body)
     end
 
@@ -208,7 +202,7 @@ module LogStash; module Outputs; class ElasticSearch;
       else
         nil
       end
-
+
       calculated_scheme = calculate_property(uris, :scheme, explicit_scheme, sniffing)
 
       if calculated_scheme && calculated_scheme !~ /https?/
@@ -228,7 +222,7 @@ module LogStash; module Outputs; class ElasticSearch;
       # Enter things like foo:123, bar and wind up with foo:123, bar:9200
       calculate_property(uris, :port, nil, sniffing) || 9200
     end
-
+
     def uris
       @options[:hosts]
     end
@@ -247,7 +241,7 @@ module LogStash; module Outputs; class ElasticSearch;
 
     def build_adapter(options)
       timeout = options[:timeout] || 0
-
+
       adapter_options = {
         :socket_timeout => timeout,
         :request_timeout => timeout,
@@ -268,11 +262,11 @@ module LogStash; module Outputs; class ElasticSearch;
       end
 
       adapter_options[:ssl] = ssl_options if self.scheme == 'https'
-
+
       adapter_class = ::LogStash::Outputs::ElasticSearch::HttpClient::ManticoreAdapter
       adapter = adapter_class.new(@logger, adapter_options)
     end
-
+
     def build_pool(options)
       adapter = build_adapter(options)
 
@@ -321,21 +315,22 @@ module LogStash; module Outputs; class ElasticSearch;
         h.query
       end
       prefixed_raw_query = raw_query && !raw_query.empty? ? "?#{raw_query}" : nil
-
+
       raw_url = "#{raw_scheme}://#{postfixed_userinfo}#{raw_host}:#{raw_port}#{prefixed_raw_path}#{prefixed_raw_query}"
 
       ::LogStash::Util::SafeURI.new(raw_url)
     end
 
     def template_exists?(name)
-      response = @pool.head("/_template/#{name}")
+      url, response = @pool.head("/_template/#{name}")
       response.code >= 200 && response.code <= 299
     end
 
     def template_put(name, template)
       path = "_template/#{name}"
       logger.info("Installing elasticsearch template to #{path}")
-      @pool.put(path, nil, LogStash::Json.dump(template))
+      url, response = @pool.put(path, nil, LogStash::Json.dump(template))
+      response
     end
 
     # Build a bulk item for an elasticsearch update action
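The pool's HTTP helpers now return a `[url, response]` pair, which is why the hunks above destructure with `url, response = ...`, and `template_exists?` treats any 2xx as "present". A standalone sketch of both patterns; `FakePool` and `Response` are stand-ins, not the plugin's classes:

```ruby
# Standalone sketch of the [url, response] pair the pool methods return,
# and the 2xx check used by template_exists?. All names here are fakes.
Response = Struct.new(:code, :body)

class FakePool
  def head(path)
    # the real pool also reports which backend URL served the request
    ["http://127.0.0.1:9200#{path}", Response.new(200, "")]
  end
end

def template_exists?(pool, name)
  _url, response = pool.head("/_template/#{name}")
  response.code >= 200 && response.code <= 299
end

puts template_exists?(FakePool.new, "logstash")  # => true
```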
|