logstash-output-elasticsearch 2.0.0.pre.beta-java → 2.1.0-java

This diff shows the content of publicly available package versions as released to one of the supported registries. It is provided for informational purposes only and reflects the changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: ff0491347a2d439e0bf928efb828a6fd9e9d652b
- data.tar.gz: c7e9d5ad2322c92ac571894937a84888306555d2
+ metadata.gz: 3b46615e7a20412351d62189c561aff137efed71
+ data.tar.gz: ff02b2887b46616fadf1ae075a6512cd4012d745
  SHA512:
- metadata.gz: 461aea49e5262b4fc469c6c2652793e5f1acdee720e268c68efebc36d0f718fbd14f4d9ba7a6277061bdaa465df6d1515b72c4f8d7dcf60d9c5773a577d80a6a
- data.tar.gz: 0f634800a20a4e437accd28c7b7694352a603fac90cf707d9b9214304adf5cc3582c6eced384a17eb5d0ff3ca5a2f6bf5cc8551a3ed83f6358905cca26ae7e83
+ metadata.gz: 0b1a582068d0f6fe453b48566c1b97a829d32501c15c4ba8eb241f9365b63b6f5cf9fd7be305698c4ba0dbe7b9b72aa1cbb517225809ce341475947d924018a9
+ data.tar.gz: 123a5e7173cd0984aeb750419021da26bd55644ed7f5bf607be20f697069eca492a669836a1663c9d7c24cbea7957a6e9f68d48abc99fa9ad5813896ed10c4d9
data/CHANGELOG.md CHANGED
@@ -1,3 +1,23 @@
+ ## 2.1.0
+ - New setting: timeout. This lets you control the behavior of a slow/stuck
+   request to Elasticsearch that could be caused by, for example, network,
+   firewall, or load balancer issues.
+
+ ## 2.0.0
+ - Plugins were updated to follow the new shutdown semantics. This mainly allows Logstash to instruct input plugins to terminate gracefully,
+   instead of using Thread.raise on the plugins' threads. Ref: https://github.com/elastic/logstash/pull/3895
+ - Dependency on logstash-core updated to 2.0
+
+ ## 2.0.0-beta2
+ - Massive internal refactor of client handling
+ - Background HTTP sniffing support
+ - Reduced bulk request size from 5000 to 500 (better memory utilization)
+ - Removed 'host' config option. Now use 'hosts'
+
+ ## 2.0.0-beta
+ - Only support HTTP protocol
+ - Removed support for node and transport protocols (now in logstash-output-elasticsearch_java)
+
  ## 1.0.7
  - Add update API support
 
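To illustrate the headline 2.1.0 change, a minimal output block using the new `timeout` setting might look like the following (the host names and the 30-second value are illustrative, not taken from this diff):

```
output {
  elasticsearch {
    hosts => ["es1.example.com:9200", "es2.example.com:9200"]
    # Abort and retry any request that stalls for more than 30 seconds,
    # e.g. behind a misbehaving firewall or load balancer.
    timeout => 30
  }
}
```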
@@ -10,44 +10,33 @@ require "thread" # for safe queueing
  require "uri" # for escaping user input
  require "logstash/outputs/elasticsearch/http_client"
 
- # This output lets you store logs in Elasticsearch and is the most recommended
- # output for Logstash. If you plan on using the Kibana web interface, you'll
- # want to use this output.
+ # This plugin is the recommended method of storing logs in Elasticsearch.
+ # If you plan on using the Kibana web interface, you'll want to use this output.
  #
- # This output only speaks the HTTP, which is the preferred protocol for interacting with Elasticsearch. By default
- # Elasticsearch exposes HTTP on port 9200.
- #
- # We strongly encourage the use of HTTP over the node protocol. It is just as
- # fast and far easier to administer. For those wishing to use the java protocol please see the 'elasticsearch_java' gem.
+ # This output only speaks the HTTP protocol. HTTP is the preferred protocol for interacting with Elasticsearch as of Logstash 2.0.
+ # We strongly encourage the use of HTTP over the node protocol for a number of reasons. HTTP is only marginally slower,
+ # yet far easier to administer and work with. When using the HTTP protocol one may upgrade Elasticsearch versions without having
+ # to upgrade Logstash in lock-step. For those still wishing to use the node or transport protocols please see
+ # the https://www.elastic.co/guide/en/logstash/2.0/plugins-outputs-elasticsearch_java.html[logstash-output-elasticsearch_java] plugin.
  #
  # You can learn more about Elasticsearch at <https://www.elastic.co/products/elasticsearch>
  #
  # ==== Retry Policy
  #
- # By default all bulk requests to ES are synchronous. Not all events in the bulk requests
- # always make it successfully. For example, there could be events which are not formatted
- # correctly for the index they are targeting (type mismatch in mapping). So that we minimize loss of
- # events, we have a specific retry policy in place. We retry all events which fail to be reached by
- # Elasticsearch for network related issues. We retry specific events which exhibit errors under a separate
- # policy described below. Events of this nature are ones which experience ES error codes described as
- # retryable errors.
- #
- # *Retryable Errors:*
+ # This plugin uses the Elasticsearch bulk API to optimize its imports into Elasticsearch. These requests may experience
+ # either partial or total failures. Events are retried if they fail due to either a network error or the status codes
+ # 429 (the server is busy), 409 (Version Conflict), or 503 (temporary overloading/maintenance).
  #
- # - 429, Too Many Requests (RFC6585)
- # - 503, The server is currently unable to handle the request due to a temporary overloading or maintenance of the server.
- #
- # Here are the rules of what is retried when:
+ # The retry policy's logic can be described as follows:
  #
- # - Block and retry all events in bulk response that experiences transient network exceptions until
+ # - Block and retry all events in the bulk response that experience transient network exceptions until
  #   a successful submission is received by Elasticsearch.
- # - Retry subset of sent events which resulted in ES errors of a retryable nature which can be found
- #   in RETRYABLE_CODES
- # - For events which returned retryable error codes, they will be pushed onto a separate queue for
- #   retrying events. events in this queue will be retried a maximum of 5 times by default (configurable through :max_retries). The size of
- #   this queue is capped by the value set in :retry_max_items.
- # - Events from the retry queue are submitted again either when the queue reaches its max size or when
- #   the max interval time is reached, which is set in :retry_max_interval.
+ # - Retry the subset of sent events which resulted in ES errors of a retryable nature.
+ # - Events which returned retryable error codes will be pushed onto a separate queue for
+ #   retrying events. Events in this queue will be retried a maximum of 5 times by default (configurable through :max_retries).
+ #   The size of this queue is capped by the value set in :retry_max_items.
+ # - Events from the retry queue are submitted again when the queue reaches its max size or when
+ #   the max interval time is reached. The max interval time is configurable via :retry_max_interval.
  # - Events which are not retryable or have reached their max retry count are logged to stderr.
  class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  attr_reader :client
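The per-status decision described in the retry-policy comment above can be sketched in plain Ruby. This is a simplified illustration, not the plugin's actual code; the constant values mirror the codes named in the documentation (429, 409, 503 retryable):

```ruby
# Status codes the doc comment above names as retryable / successful.
RETRYABLE_CODES = [409, 429, 503]
SUCCESS_CODES   = [200, 201]

# Classify one bulk-item status: accept it, retry it, or drop it.
def classify(status)
  return :ok    if SUCCESS_CODES.include?(status)
  return :retry if RETRYABLE_CODES.include?(status)
  :drop # not retryable: logged and dropped once max_retries is exhausted
end
```

For example, a 429 (server busy) classifies as `:retry`, while a 400 mapping error classifies as `:drop`.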
@@ -67,13 +56,13 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
 
  # The index type to write events to. Generally you should try to write only
  # similar events to the same 'type'. String expansion `%{foo}` works here.
- # 
+ #
  # Deprecated in favor of `document_type` field.
  config :index_type, :validate => :string, :deprecated => "Please use the 'document_type' setting instead. It has the same effect, but is more appropriately named."
 
  # The document type to write events to. Generally you should try to write only
  # similar events to the same 'type'. String expansion `%{foo}` works here.
- # Unless you set 'document_type', the event 'type' will be used if it exists 
+ # Unless you set 'document_type', the event 'type' will be used if it exists
  # otherwise the document type will be assigned the value of 'logs'
  config :document_type, :validate => :string
 
@@ -113,24 +102,27 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  # This can be dynamic using the `%{foo}` syntax.
  config :routing, :validate => :string
 
- # Sets the host(s) of the remote instance. If given an array it will load balance requests across the hosts specified in the `host` parameter.
+ # Sets the host(s) of the remote instance. If given an array it will load balance requests across the hosts specified in the `hosts` parameter.
  # Remember the `http` protocol uses the http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-http.html#modules-http[http] address (eg. 9200, not 9300).
  # `"127.0.0.1"`
  # `["127.0.0.1:9200","127.0.0.2:9200"]`
- # It is important to exclude http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html[dedicated master nodes] from the `host` list
- # to prevent LS from sending bulk requests to the master nodes. So this parameter should only reference either data or client nodes.
+ # It is important to exclude http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html[dedicated master nodes] from the `hosts` list
+ # to prevent LS from sending bulk requests to the master nodes. So this parameter should only reference either data or client nodes in Elasticsearch.
 
  config :hosts, :validate => :array
 
- # You can set the remote port as part of the host, or explicitly here as well
- config :port, :validate => :string, :default => 9200
+ # The port setting is obsolete. Please use the 'hosts' setting instead.
+ # Hosts entries can be in "host:port" format.
+ config :port, :obsolete => "Please use the 'hosts' setting instead. Hosts entries can be in 'host:port' format."
 
- # This plugin uses the bulk index api for improved indexing performance.
- # To make efficient bulk api calls, we will buffer a certain number of
+ # This plugin uses the bulk index API for improved indexing performance.
+ # To make efficient bulk API calls, we will buffer a certain number of
  # events before flushing that out to Elasticsearch. This setting
  # controls how many events will be buffered before sending a batch
- # of events.
- config :flush_size, :validate => :number, :default => 5000
+ # of events. Increasing the `flush_size` has an effect on Logstash's heap size.
+ # Remember to also increase the heap size using `LS_HEAP_SIZE` if you are sending big documents
+ # or have increased the `flush_size` to a higher value.
+ config :flush_size, :validate => :number, :default => 500
 
  # The amount of time since last flush before a flush is forced.
  #
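The interaction between `flush_size` and `idle_flush_time` can be sketched as a single predicate. This is a simplified illustration of the buffering rule, not the plugin's actual Stud::Buffer code; the defaults mirror the settings above:

```ruby
# A buffer is flushed when it holds flush_size events, or when
# idle_flush_time seconds have elapsed since the last flush,
# whichever comes first.
def flush_due?(buffered_events, seconds_since_flush,
               flush_size: 500, idle_flush_time: 1)
  buffered_events >= flush_size || seconds_since_flush >= idle_flush_time
end
```

So a full buffer flushes immediately, while a trickle of events is still written out roughly once per second.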
@@ -143,64 +135,61 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  # near-real-time.
  config :idle_flush_time, :validate => :number, :default => 1
 
- # The Elasticsearch action to perform. Valid actions are: `index`, `delete`.
- #
- # Use of this setting *REQUIRES* you also configure the `document_id` setting
- # because `delete` actions all require a document id.
- #
- # What does each action do?
+ # The Elasticsearch action to perform. Valid actions are:
  #
  # - index: indexes a document (an event from Logstash).
- # - delete: deletes a document by id
+ # - delete: deletes a document by id (An id is required for this action)
  # - create: indexes a document, fails if a document by that id already exists in the index.
- # - update: updates a document by id
- # following action is not supported by HTTP protocol
+ # - update: updates a document by id. Update has a special case where you can upsert -- update a
+ #   document if not already present. See the `upsert` option
  #
- # For more details on actions, check out the http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-bulk.html[Elasticsearch bulk API documentation]
+ # For more details on actions, check out the http://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html[Elasticsearch bulk API documentation]
  config :action, :validate => %w(index delete create update), :default => "index"
 
- # Username and password (only valid when protocol is HTTP; this setting works with HTTP or HTTPS auth)
+ # Username to authenticate to a secure Elasticsearch cluster
  config :user, :validate => :string
+ # Password to authenticate to a secure Elasticsearch cluster
  config :password, :validate => :password
 
- # HTTP Path at which the Elasticsearch server lives. Use this if you must run ES behind a proxy that remaps
- # the root path for the Elasticsearch HTTP API lives. This option is ignored for non-HTTP transports.
+ # HTTP Path at which the Elasticsearch server lives. Use this if you must run Elasticsearch behind a proxy that remaps
+ # the root path at which the Elasticsearch HTTP API lives.
  config :path, :validate => :string, :default => "/"
 
- # SSL Configurations (only valid when protocol is HTTP)
- #
- # Enable SSL
+ # Enable SSL/TLS secured communication to the Elasticsearch cluster
  config :ssl, :validate => :boolean, :default => false
 
- # Validate the server's certificate
- # Disabling this severely compromises security
- # For more information read https://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf
+ # Option to validate the server's certificate. Disabling this severely compromises security.
+ # For more information on disabling certificate verification please read
+ # https://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf
  config :ssl_certificate_verification, :validate => :boolean, :default => true
 
  # The .cer or .pem file to validate the server's certificate
  config :cacert, :validate => :path
 
- # The JKS truststore to validate the server's certificate
+ # The JKS truststore to validate the server's certificate.
  # Use either `:truststore` or `:cacert`
  config :truststore, :validate => :path
 
  # Set the truststore password
  config :truststore_password, :validate => :password
 
- # The keystore used to present a certificate to the server
+ # The keystore used to present a certificate to the server.
  # It can be either .jks or .p12
  config :keystore, :validate => :path
 
  # Set the truststore password
  config :keystore_password, :validate => :password
 
- # Enable cluster sniffing
- # Asks host for the list of all cluster nodes and adds them to the hosts list
- # Will return ALL nodes with HTTP enabled (including master nodes!). If you use
+ # This setting asks Elasticsearch for the list of all cluster nodes and adds them to the hosts list.
+ # Note: This will return ALL nodes with HTTP enabled (including master nodes!). If you use
  # this with master nodes, you probably want to disable HTTP on them by setting
- # `http.enabled` to false in their elasticsearch.yml.
+ # `http.enabled` to false in their elasticsearch.yml. You can either use the `sniffing` option or
+ # manually enter multiple Elasticsearch hosts using the `hosts` parameter.
  config :sniffing, :validate => :boolean, :default => false
 
+ # How long to wait, in seconds, between sniffing attempts
+ config :sniffing_delay, :validate => :number, :default => 5
+
  # Set max retry for each event
  config :max_retries, :validate => :number, :default => 3
 
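Putting the security and sniffing options above together, a minimal hedged example configuration (all host names, paths, and values are illustrative) could be:

```
output {
  elasticsearch {
    hosts          => ["es-data-1:9200", "es-data-2:9200"]
    ssl            => true
    cacert         => "/etc/logstash/es-ca.pem"   # PEM CA used to verify the server
    sniffing       => true
    sniffing_delay => 10                          # seconds between sniffing attempts
  }
}
```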
@@ -210,45 +199,54 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  # Set max interval between bulk retries
  config :retry_max_interval, :validate => :number, :default => 5
 
- # Set the address of a forward HTTP proxy. Must be used with the 'http' protocol
- # Can be either a string, such as 'http://localhost:123' or a hash in the form
- # {host: 'proxy.org' port: 80 scheme: 'http'}
+ # Set the address of a forward HTTP proxy.
+ # Can be either a string, such as `http://localhost:123` or a hash in the form
+ # of `{host: 'proxy.org' port: 80 scheme: 'http'}`.
  # Note, this is NOT a SOCKS proxy, but a plain HTTP proxy
  config :proxy
 
- # Enable doc_as_upsert for update mode
- # create a new document with source if document_id doesn't exists
+ # Enable `doc_as_upsert` for update mode.
+ # Create a new document with source if `document_id` doesn't exist in Elasticsearch
  config :doc_as_upsert, :validate => :boolean, :default => false
 
- # Set upsert content for update mode
- # create a new document with this parameter as json string if document_id doesn't exists
+ # Set upsert content for update mode.
+ # Create a new document with this parameter as json string if `document_id` doesn't exist
  config :upsert, :validate => :string, :default => ""
 
+ # Set the timeout for network operations and requests sent to Elasticsearch. If
+ # a timeout occurs, the request will be retried.
+ config :timeout, :validate => :number
+
  public
  def register
  @hosts = Array(@hosts)
  # retry-specific variables
  @retry_flush_mutex = Mutex.new
- @retry_teardown_requested = Concurrent::AtomicBoolean.new(false)
+ @retry_close_requested = Concurrent::AtomicBoolean.new(false)
  # needs flushing when interval
  @retry_queue_needs_flushing = ConditionVariable.new
  @retry_queue_not_full = ConditionVariable.new
  @retry_queue = Queue.new
+ @submit_mutex = Mutex.new
 
  client_settings = {}
- common_options = {:client_settings => client_settings}
+ common_options = {
+ :client_settings => client_settings,
+ :sniffing => @sniffing,
+ :sniffing_delay => @sniffing_delay
+ }
 
+ common_options[:timeout] = @timeout if @timeout
  client_settings[:path] = "/#{@path}/".gsub(/\/+/, "/") # Normalize slashes
  @logger.debug? && @logger.debug("Normalizing http path", :path => @path, :normalized => client_settings[:path])
 
- if @host.nil?
+ if @hosts.nil? || @hosts.empty?
  @logger.info("No 'host' set in elasticsearch output. Defaulting to localhost")
- @host = ["localhost"]
+ @hosts = ["localhost"]
  end
 
  client_settings.merge! setup_ssl()
  client_settings.merge! setup_proxy()
- client_settings.merge! setup_sniffing()
  common_options.merge! setup_basic_auth()
 
  # Update API setup
@@ -258,13 +256,8 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  }
  common_options.merge! update_options if @action == 'update'
 
- option_hosts = @hosts.map do |host|
- host_name, port = host.split(":")
- { :host => host_name, :port => (port || @port).to_i }
- end
-
  @client = LogStash::Outputs::Elasticsearch::HttpClient.new(
- common_options.merge(:hosts => @hosts)
+ common_options.merge(:hosts => @hosts, :logger => @logger)
  )
 
  if @manage_template
@@ -276,7 +269,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  end
  end
 
- @logger.info("New Elasticsearch output", :hosts => @hosts, :port => @port)
+ @logger.info("New Elasticsearch output", :hosts => @hosts)
 
  @client_idx = 0
 
@@ -294,7 +287,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  end
 
  @retry_thread = Thread.new do
- while @retry_teardown_requested.false?
+ while @retry_close_requested.false?
  @retry_flush_mutex.synchronize { @retry_queue_needs_flushing.wait(@retry_flush_mutex) }
  retry_flush
  end
@@ -317,9 +310,8 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
 
  public
  def receive(event)
- return unless output?(event)
 
- # block until we have not maxed out our 
+ # block until we have not maxed out our
  # retry queue. This is applying back-pressure
  # to slow down the receive-rate
  @retry_flush_mutex.synchronize {
@@ -343,7 +335,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  :_type => type,
  :_routing => @routing ? event.sprintf(@routing) : nil
  }
- 
+
  params[:_upsert] = LogStash::Json.load(event.sprintf(@upsert)) if @action == 'update' && @upsert != ""
 
  buffer_receive([event.sprintf(@action), params, event])
@@ -353,21 +345,29 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  # The submit method can be called from both the
  # Stud::Buffer flush thread and from our own retry thread.
  def submit(actions)
- es_actions = actions.map { |a, doc, event| [a, doc, event.to_hash] }
+ @submit_mutex.synchronize do
+ es_actions = actions.map { |a, doc, event| [a, doc, event.to_hash] }
+
+ bulk_response = @client.bulk(es_actions)
 
- bulk_response = @client.bulk(es_actions)
+ next unless bulk_response["errors"]
 
- if bulk_response["errors"]
- actions_with_responses = actions.zip(bulk_response['statuses'])
  actions_to_retry = []
- actions_with_responses.each do |action, resp_code|
- if RETRYABLE_CODES.include?(resp_code)
- @logger.warn "retrying failed action with response code: #{resp_code}"
+
+ bulk_response["items"].each_with_index do |resp,idx|
+ action_type, action_props = resp.first
+
+ status = action_props["status"]
+ action = actions[idx]
+
+ if RETRYABLE_CODES.include?(status)
+ @logger.warn "retrying failed action with response code: #{status}"
  actions_to_retry << action
- elsif not SUCCESS_CODES.include?(resp_code)
- @logger.warn "failed action with response of #{resp_code}, dropping action: #{action}"
+ elsif not SUCCESS_CODES.include?(status)
+ @logger.warn "Failed action. ", status: status, action: action, response: resp
  end
  end
+
  retry_push(actions_to_retry) unless actions_to_retry.empty?
  end
  end
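The new per-item handling in `submit` above can be illustrated with a standalone sketch of how an Elasticsearch bulk response is walked. The hash shapes follow the bulk API's `items` array; `RETRYABLE_CODES` is assumed to hold the codes documented earlier in this file, and the function is a simplification of the plugin's logic:

```ruby
RETRYABLE_CODES = [409, 429, 503]

# Given a parsed bulk response and the actions that produced it, return
# the subset of actions whose per-item status code is retryable.
def retryable_actions(bulk_response, actions)
  return [] unless bulk_response["errors"]
  to_retry = []
  bulk_response["items"].each_with_index do |resp, idx|
    # Each item is a one-entry hash, e.g. {"index" => {"status" => 429, ...}}
    _action_type, action_props = resp.first
    to_retry << actions[idx] if RETRYABLE_CODES.include?(action_props["status"])
  end
  to_retry
end
```

Note the positional pairing: the nth entry of `items` always corresponds to the nth submitted action, which is why `each_with_index` suffices.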
@@ -375,7 +375,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  # When there are exceptions raised upon submission, we raise an exception so that
  # Stud::Buffer will retry to flush
  public
- def flush(actions, teardown = false)
+ def flush(actions, close = false)
  begin
  submit(actions)
  rescue Manticore::SocketException => e
@@ -407,26 +407,24 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  end # def flush
 
  public
- def teardown
- if @cacert # remove temporary jks store created from the cacert
- File.delete(@truststore)
- end
+ def close
+ @client.stop_sniffing!
 
- @retry_teardown_requested.make_true
+ @retry_close_requested.make_true
  # First, make sure retry_timer_thread is stopped
- # to ensure we do not signal a retry based on 
+ # to ensure we do not signal a retry based on
  # the retry interval.
  Thread.kill(@retry_timer_thread)
  @retry_timer_thread.join
- # Signal flushing in the case that #retry_flush is in 
+ # Signal flushing in the case that #retry_flush is in
  # the process of waiting for a signal.
  @retry_flush_mutex.synchronize { @retry_queue_needs_flushing.signal }
- # Now, #retry_flush is ensured to not be in a state of 
+ # Now, #retry_flush is ensured to not be in a state of
  # waiting and can be safely joined into the main thread
  # for further final execution of an in-process remaining call.
  @retry_thread.join
 
- # execute any final actions along with a proceeding retry for any 
+ # execute any final actions along with a proceeding retry for any
  # final actions that did not succeed.
  buffer_flush(:final => true)
  retry_flush
@@ -448,11 +446,6 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  return {:proxy => proxy}
  end
 
- private
- def setup_sniffing
- { :reload_connections => @reload_connections }
- end
-
  private
  def setup_ssl
  return {} unless @ssl
@@ -460,12 +453,15 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  if @cacert && @truststore
  raise(LogStash::ConfigurationError, "Use either \"cacert\" or \"truststore\" when configuring the CA certificate") if @truststore
  end
+
  ssl_options = {}
- if @cacert then
- @truststore, ssl_options[:truststore_password] = generate_jks @cacert
+
+ if @cacert
+ ssl_options[:ca_file] = @cacert
  elsif @truststore
  ssl_options[:truststore_password] = @truststore_password.value if @truststore_password
  end
+
  ssl_options[:truststore] = @truststore if @truststore
  if @keystore
  ssl_options[:keystore] = @keystore
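The CA-handling change above (a PEM/CER file now maps directly to `:ca_file` instead of being converted into a temporary JKS truststore) can be mirrored in a small standalone sketch. This is a simplified illustration of the branch, not the plugin's actual `setup_ssl` method:

```ruby
# Build the CA-related SSL options: a PEM/CER ca file maps to :ca_file,
# while a JKS truststore maps to :truststore (plus optional password).
# The two are mutually exclusive, matching the ConfigurationError above.
def ssl_ca_options(cacert: nil, truststore: nil, truststore_password: nil)
  raise ArgumentError, "Use either cacert or truststore" if cacert && truststore
  if cacert
    { ca_file: cacert }
  elsif truststore
    opts = { truststore: truststore }
    opts[:truststore_password] = truststore_password if truststore_password
    opts
  else
    {}
  end
end
```

Dropping the JKS conversion also removes the temporary-file cleanup that the old `teardown` method had to perform.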
@@ -493,35 +489,12 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  end
 
  private
- def generate_jks cert_path
-
- require 'securerandom'
- require 'tempfile'
- require 'java'
- import java.io.FileInputStream
- import java.io.FileOutputStream
- import java.security.KeyStore
- import java.security.cert.CertificateFactory
-
- jks = java.io.File.createTempFile("cert", ".jks")
-
- ks = KeyStore.getInstance "JKS"
- ks.load nil, nil
- cf = CertificateFactory.getInstance "X.509"
- cert = cf.generateCertificate FileInputStream.new(cert_path)
- ks.setCertificateEntry "cacert", cert
- pwd = SecureRandom.urlsafe_base64(9)
- ks.store FileOutputStream.new(jks), pwd.to_java.toCharArray
- [jks.path, pwd]
- end
-
- private
- # in charge of submitting any actions in @retry_queue that need to be
+ # in charge of submitting any actions in @retry_queue that need to be
  # retried
  #
  # This method is not called concurrently. It is only called by @retry_thread
- # and once that thread is ended during the teardown process, a final call
- # to this method is done upon teardown in the main thread.
+ # and once that thread is ended during the close process, a final call
+ # to this method is done upon close in the main thread.
  def retry_flush()
  unless @retry_queue.empty?
  buffer = @retry_queue.size.times.map do