logstash-output-elasticsearch 0.1.9-java → 0.1.10-java
- checksums.yaml +4 -4
- data/CONTRIBUTORS +31 -0
- data/LICENSE +1 -1
- data/README.md +95 -0
- data/lib/logstash/outputs/elasticsearch.rb +227 -105
- data/lib/logstash/outputs/elasticsearch/protocol.rb +30 -3
- data/logstash-output-elasticsearch.gemspec +3 -2
- data/spec/outputs/elasticsearch_spec.rb +234 -4
- metadata +57 -41
checksums.yaml CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 90b616b39de31794c9b9047c5eccf544908752a0
+  data.tar.gz: b28c3292d533055b0c540d16bf0390e90fba212f
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: e8a86c367834828819f246adc6e6a19d896bef9d6f2570c68aaeb578830357725639c27561e73dd5f85410740dc6cfd21aaf1622daabe2e0ed7469690338ffc5
+  data.tar.gz: edb1fd8e7d447a68a9ac4261398771a641b7dc9c8b8022fddf486bb2328764b1415de58c0c6ce81368d1703c80d593ea56e51bcd5febf098b51a253b38c5425c
data/CONTRIBUTORS ADDED

@@ -0,0 +1,31 @@
+The following is a list of people who have contributed ideas, code, bug
+reports, or in general have helped logstash along its way.
+
+Contributors:
+* Aaron Mildenstein (untergeek)
+* Bob Corsaro (dokipen)
+* Colin Surprenant (colinsurprenant)
+* Dmitry Koprov (dkoprov)
+* Graham Bleach (bleach)
+* Hao Chen (haoch)
+* James Turnbull (jamtur01)
+* John E. Vincent (lusis)
+* Jordan Sissel (jordansissel)
+* João Duarte (jsvd)
+* Kurt Hurtado (kurtado)
+* Miah Johnson (miah)
+* Pere Urbón (purbon)
+* Pete Fritchman (fetep)
+* Pier-Hugues Pellerin (ph)
+* Raymond Feng (raymondfeng)
+* Richard Pijnenburg (electrical)
+* Spenser Jones (SpenserJ)
+* Suyog Rao (suyograo)
+* Tal Levy (talevy)
+* Tom Hodder (tolland)
+* jimmyjones2
+
+Note: If you've sent us patches, bug reports, or otherwise contributed to
+Logstash, and you aren't on the list above and want to be, please let us know
+and we'll make sure you're here. Contributions from folks like you are what make
+open source awesome.
data/LICENSE CHANGED

@@ -1,4 +1,4 @@
-Copyright (c) 2012-
+Copyright (c) 2012-2015 Elasticsearch <http://www.elasticsearch.org>
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
data/README.md ADDED

@@ -0,0 +1,95 @@
+# Logstash Plugin
+
+This is a plugin for [Logstash](https://github.com/elasticsearch/logstash).
+
+It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.
+
+## Documentation
+
+Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation so any comments in the source code will be first converted into asciidoc and then into html. All plugin documentation is placed under one [central location](http://www.elasticsearch.org/guide/en/logstash/current/).
+
+- For formatting code or config examples, you can use the asciidoc `[source,ruby]` directive
+- For more asciidoc formatting tips, see the excellent reference here https://github.com/elasticsearch/docs#asciidoc-guide
+
+## Need Help?
+
+Need help? Try #logstash on freenode IRC or the logstash-users@googlegroups.com mailing list.
+
+## Developing
+
+### 1. Plugin Development and Testing
+
+#### Code
+- To get started, you'll need JRuby with the Bundler gem installed.
+
+- Create a new plugin or clone an existing one from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization.
+
+- Install dependencies
+```sh
+bundle install
+```
+
+#### Test
+
+```sh
+bundle exec rspec
+```
+
+The Logstash code required to run the tests/specs is specified in the `Gemfile` by a line similar to:
+```ruby
+gem "logstash", :github => "elasticsearch/logstash", :branch => "1.5"
+```
+To test against another version or a local Logstash, edit the `Gemfile` to specify an alternative location, for example:
+```ruby
+gem "logstash", :github => "elasticsearch/logstash", :ref => "master"
+```
+```ruby
+gem "logstash", :path => "/your/local/logstash"
+```
+
+Then update your dependencies and run your tests:
+
+```sh
+bundle install
+bundle exec rspec
+```
+
+### 2. Running your unpublished Plugin in Logstash
+
+#### 2.1 Run in a local Logstash clone
+
+- Edit Logstash `tools/Gemfile` and add the local plugin path, for example:
+```ruby
+gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome"
+```
+- Update Logstash dependencies
+```sh
+rake vendor:gems
+```
+- Run Logstash with your plugin
+```sh
+bin/logstash -e 'filter {awesome {}}'
+```
+At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.
+
+#### 2.2 Run in an installed Logstash
+
+- Build your plugin gem
+```sh
+gem build logstash-filter-awesome.gemspec
+```
+- Install the plugin from the Logstash home
+```sh
+bin/plugin install /your/local/plugin/logstash-filter-awesome.gem
+```
+- Start Logstash and proceed to test the plugin
+
+## Contributing
+
+All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.
+
+Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.
+
+It is more important to me that you are able to contribute.
+
+For more information about contributing, see the [CONTRIBUTING](https://github.com/elasticsearch/logstash/blob/master/CONTRIBUTING.md) file.
data/lib/logstash/outputs/elasticsearch.rb CHANGED

@@ -3,8 +3,10 @@ require "logstash/namespace"
 require "logstash/environment"
 require "logstash/outputs/base"
 require "logstash/json"
+require "concurrent_ruby"
 require "stud/buffer"
 require "socket" # for Socket.gethostname
+require "thread" # for safe queueing
 require "uri" # for escaping user input
 require 'logstash-output-elasticsearch_jars.rb'
 
@@ -35,7 +37,6 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
   include Stud::Buffer
 
   config_name "elasticsearch"
-  milestone 3
 
   # The index to write events to. This can be dynamic using the `%{foo}` syntax.
   # The default value will partition your indices by day so you can more easily
@@ -88,6 +89,11 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
   # This is only required if the normal multicast/cluster discovery stuff won't
   # work in your environment.
   #
+  # The plugin will use multicast discovery to connect to Elasticsearch
+  # when using `protocol => node` without setting a host.
+  #
+  # http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-zen.html#multicast[Multicast Discovery Docs]
+  #
   # `"127.0.0.1"`
   # `["127.0.0.1:9300","127.0.0.2:9300"]`
   config :host, :validate => :array
@@ -177,6 +183,9 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
   #
   # - index: indexes a document (an event from Logstash).
   # - delete: deletes a document by id
+  # - create: indexes a document, fails if a document by that id already exists in the index.
+  # The following action is not supported by the HTTP protocol:
+  # - create_unless_exists: creates a document, fails if no id is provided
   #
   # For more details on actions, check out the http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-bulk.html[Elasticsearch bulk API documentation]
   config :action, :validate => :string, :default => "index"
@@ -200,18 +209,33 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
   # Set the truststore password
   config :truststore_password, :validate => :password
 
-  #
-  #
-
-
-
-
-
+  # Enable cluster sniffing (transport only)
+  # Asks host for the list of all cluster nodes and adds them to the hosts list
+  config :sniffing, :validate => :boolean, :default => false
+
+  # Set max retry for each event
+  config :max_retries, :validate => :number, :default => 3
+
+  # Set retry policy for events that failed to send
+  config :retry_max_items, :validate => :number, :default => 5000
+
+  # Set max interval between bulk retries
+  config :retry_max_interval, :validate => :number, :default => 5
 
   public
   def register
+    @submit_mutex = Mutex.new
+    # retry-specific variables
+    @retry_flush_mutex = Mutex.new
+    @retry_teardown_requested = Concurrent::AtomicBoolean.new(false)
+    # needs flushing when interval
+    @retry_queue_needs_flushing = ConditionVariable.new
+    @retry_queue_not_full = ConditionVariable.new
+    @retry_queue = Queue.new
+
     client_settings = {}
 
+
     if @protocol.nil?
       @protocol = LogStash::Environment.jruby? ? "node" : "http"
     end
@@ -223,6 +247,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
     client_settings["cluster.name"] = @cluster if @cluster
     client_settings["network.host"] = @bind_host if @bind_host
     client_settings["transport.tcp.port"] = @bind_port if @bind_port
+    client_settings["client.transport.sniff"] = @sniffing
 
     if @node_name
       client_settings["node.name"] = @node_name
@@ -283,33 +308,27 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
 
     @client = Array.new
 
-    if protocol == "node"
-      options = {
-
-        :port => @port,
-      }.merge(common_options)
-      @client << client_class.new(options)
+    if protocol == "node" || @host.nil? # if @protocol is "node" or @host is not set
+      options = { :host => @host, :port => @port }.merge(common_options)
+      @client = [client_class.new(options)]
     else # if @protocol in ["transport","http"]
-      @host.
-
-
-
-
-
-      @logger.info "Create client to elasticsearch server on #{_host}:#{_port}"
-      @client << client_class.new(options)
-    end # @host.each
+      @client = @host.map do |host|
+        (_host,_port) = host.split ":"
+        options = { :host => _host, :port => _port || @port }.merge(common_options)
+        @logger.info "Create client to elasticsearch server on #{_host}:#{_port}"
+        client_class.new(options)
+      end # @host.map
     end
 
     if @manage_template
       for client in @client
-
-
-
-
-
-
-
+        begin
+          @logger.info("Automatic template management enabled", :manage_template => @manage_template.to_s)
+          client.template_install(@template_name, get_template, @template_overwrite)
+          break
+        rescue => e
+          @logger.error("Failed to install template: #{e.message}")
+        end
       end # for @client loop
     end # if @manage_templates
 
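The new `@host.map` branch above splits each configured `host[:port]` entry and falls back to the plugin-wide port when none is given. A minimal standalone sketch of just that parsing step (the variable names and sample values here are illustrative, not the plugin's API):

```ruby
# Parse "host[:port]" entries the way the @host.map branch does:
# split on ":" and fall back to a default port when none is given.
default_port = 9300
hosts = ["127.0.0.1:9300", "127.0.0.2"]

clients = hosts.map do |host|
  _host, _port = host.split ":"
  { :host => _host, :port => _port || default_port }
end
# Note: an explicitly given port stays a String ("9300"), while the
# fallback stays an Integer (9300).
```
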
@@ -325,8 +344,147 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
       :max_interval => @idle_flush_time,
       :logger => @logger
     )
+
+    @retry_timer_thread = Thread.new do
+      loop do
+        sleep(@retry_max_interval)
+        @retry_flush_mutex.synchronize { @retry_queue_needs_flushing.signal }
+      end
+    end
+
+    @retry_thread = Thread.new do
+      while @retry_teardown_requested.false?
+        @retry_flush_mutex.synchronize { @retry_queue_needs_flushing.wait(@retry_flush_mutex) }
+        retry_flush
+      end
+    end
   end # def register
 
+
+  public
+  def get_template
+    if @template.nil?
+      @template = ::File.expand_path('elasticsearch/elasticsearch-template.json', ::File.dirname(__FILE__))
+      if !File.exists?(@template)
+        raise "You must specify 'template => ...' in your elasticsearch output (I looked for '#{@template}')"
+      end
+    end
+    template_json = IO.read(@template).gsub(/\n/,'')
+    template = LogStash::Json.load(template_json)
+    template['template'] = wildcard_substitute(@index)
+    @logger.info("Using mapping template", :template => template)
+    return template
+  end # def get_template
+
+  public
+  def receive(event)
+    return unless output?(event)
+
+    # block until we have not maxed out our
+    # retry queue. This is applying back-pressure
+    # to slow down the receive-rate
+    @retry_flush_mutex.synchronize {
+      @retry_queue_not_full.wait(@retry_flush_mutex) while @retry_queue.size > @retry_max_items
+    }
+
+    event['@metadata']['retry_count'] = 0
+
+    # Set the 'type' value for the index.
+    type = @index_type ? event.sprintf(@index_type) : (event["type"] || "logs")
+
+    index = event.sprintf(@index)
+
+    document_id = @document_id ? event.sprintf(@document_id) : nil
+    buffer_receive([event.sprintf(@action), { :_id => document_id, :_index => index, :_type => type }, event])
+  end # def receive
+
+  public
+  # synchronize the @current_client.bulk call to avoid concurrency/thread safety issues with the
+  # client libraries which might not be thread safe. the submit method can be called from both the
+  # Stud::Buffer flush thread and from our own retry thread.
+  def submit(actions)
+    es_actions = actions.map { |a, doc, event| [a, doc, event.to_hash] }
+    @submit_mutex.lock
+    begin
+      bulk_response = @current_client.bulk(es_actions)
+    ensure
+      @submit_mutex.unlock
+    end
+    if bulk_response["errors"]
+      failed_actions = actions.select.with_index {|_,i| [429, 503].include?(bulk_response['statuses'][i]) }
+      unless failed_actions.empty?
+        @logger.debug "#{failed_actions.size}/#{actions.size} events were unsuccessful in sending"
+        retry_push(failed_actions)
+      end
+    end
+  end
+
+  # Does not raise an exception to prevent Stud::Buffer from
+  # attempting to resubmit successful actions within the bulk
+  # request.
+  public
+  def flush(actions, teardown = false)
+    begin
+      @logger.debug? and @logger.debug "Sending bulk of actions to client[#{@client_idx}]: #{@host[@client_idx]}"
+      submit(actions)
+    rescue => e
+      @logger.error "Got error to send bulk of actions to elasticsearch server at #{@host[@client_idx]} : #{e.message}"
+    ensure
+      unless @protocol == "node"
+        @logger.debug? and @logger.debug "Shifting current elasticsearch client"
+        shift_client
+      end
+    end
+  end # def flush
+
+  public
+  def teardown
+    if @cacert # remove temporary jks store created from the cacert
+      File.delete(@truststore)
+    end
+
+    @retry_teardown_requested.make_true
+    # First, make sure retry_timer_thread is stopped
+    # to ensure we do not signal a retry based on
+    # the retry interval.
+    Thread.kill(@retry_timer_thread)
+    @retry_timer_thread.join
+    # Signal flushing in the case that #retry_flush is in
+    # the process of waiting for a signal.
+    @retry_flush_mutex.synchronize { @retry_queue_needs_flushing.signal }
+    # Now, #retry_flush is ensured to not be in a state of
+    # waiting and can be safely joined into the main thread
+    # for further final execution of an in-process remaining call.
+    @retry_thread.join
+
+    # execute any final actions along with a proceeding retry for any
+    # final actions that did not succeed.
+    buffer_flush(:final => true)
+    retry_flush
+  end
+
+  @@plugins = Gem::Specification.find_all{|spec| spec.name =~ /logstash-output-elasticsearch-/ }
+
+  @@plugins.each do |plugin|
+    name = plugin.name.split('-')[-1]
+    require "logstash/outputs/elasticsearch/#{name}"
+  end
+
+  protected
+  def start_local_elasticsearch
+    @logger.info("Starting embedded Elasticsearch local node.")
+    builder = org.elasticsearch.node.NodeBuilder.nodeBuilder
+    # Disable 'local only' - LOGSTASH-277
+    #builder.local(true)
+    builder.settings.put("cluster.name", @cluster) if @cluster
+    builder.settings.put("node.name", @node_name) if @node_name
+    builder.settings.put("network.host", @bind_host) if @bind_host
+    builder.settings.put("http.port", @embedded_http_port)
+
+    @embedded_elasticsearch = builder.node
+    @embedded_elasticsearch.start
+  end # def start_local_elasticsearch
+
   protected
   def shift_client
     @client_idx = (@client_idx+1) % @client.length
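The retry machinery registered above (a mutex, two condition variables, and a `Queue`) implements simple back-pressure: `receive` blocks while the retry queue holds more than `retry_max_items` entries, and the retry thread wakes producers once it drains. A standalone sketch of that pattern (the class and method names here are hypothetical, not part of the plugin):

```ruby
require "thread"

# Bounded retry queue: producers block while the queue is over capacity;
# the drainer signals them once it empties back below the limit.
class BoundedRetryQueue
  def initialize(max_items)
    @max_items = max_items
    @queue = Queue.new
    @mutex = Mutex.new
    @not_full = ConditionVariable.new
  end

  # Producer side (the receive path): wait for room, then enqueue.
  def push(action)
    @mutex.synchronize do
      @not_full.wait(@mutex) while @queue.size > @max_items
    end
    @queue << action
  end

  # Consumer side (the retry thread): drain everything, then wake producers.
  def drain
    items = []
    items << @queue.pop until @queue.empty?
    @mutex.synchronize { @not_full.signal if @queue.size < @max_items }
    items
  end
end
```

In the plugin, the same mutex also guards a second condition variable, `@retry_queue_needs_flushing`, which the timer thread signals every `retry_max_interval` seconds to trigger a flush.
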
@@ -368,36 +526,6 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
     end
   end
 
-  public
-  def get_template
-    if @template.nil?
-      @template = ::File.expand_path('elasticsearch/elasticsearch-template.json', ::File.dirname(__FILE__))
-      if !File.exists?(@template)
-        raise "You must specify 'template => ...' in your elasticsearch output (I looked for '#{@template}')"
-      end
-    end
-    template_json = IO.read(@template).gsub(/\n/,'')
-    template = LogStash::Json.load(template_json)
-    template['template'] = wildcard_substitute(@index)
-    @logger.info("Using mapping template", :template => template)
-    return template
-  end # def get_template
-
-  protected
-  def start_local_elasticsearch
-    @logger.info("Starting embedded Elasticsearch local node.")
-    builder = org.elasticsearch.node.NodeBuilder.nodeBuilder
-    # Disable 'local only' - LOGSTASH-277
-    #builder.local(true)
-    builder.settings.put("cluster.name", @cluster) if @cluster
-    builder.settings.put("node.name", @node_name) if @node_name
-    builder.settings.put("network.host", @bind_host) if @bind_host
-    builder.settings.put("http.port", @embedded_http_port)
-
-    @embedded_elasticsearch = builder.node
-    @embedded_elasticsearch.start
-  end # def start_local_elasticsearch
-
   private
   def generate_jks cert_path
 
@@ -421,55 +549,49 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
     [jks.path, pwd]
   end
 
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-      @logger.error "Got error to send bulk of actions to elasticsearch server at #{@host[@client_idx]} : #{e.message}"
-      raise e
-    ensure
-      unless @protocol == "node"
-        @logger.debug? and @logger.debug "Shifting current elasticsearch client"
-        shift_client
-      end
+  private
+  # in charge of submitting any actions in @retry_queue that need to be
+  # retried
+  #
+  # This method is not called concurrently. It is only called by @retry_thread
+  # and once that thread is ended during the teardown process, a final call
+  # to this method is done upon teardown in the main thread.
+  def retry_flush()
+    unless @retry_queue.empty?
+      buffer = @retry_queue.size.times.map do
+        next_action, next_doc, next_event = @retry_queue.pop
+        next_event['@metadata']['retry_count'] += 1
+
+        if next_event['@metadata']['retry_count'] > @max_retries
+          @logger.error "too many attempts at sending event. dropping: #{next_event}"
+          nil
+        else
+          [next_action, next_doc, next_event]
+        end
+      end.compact
+
+      submit(buffer) unless buffer.empty?
     end
-    # TODO(sissel): Handle errors. Since bulk requests could mostly succeed
-    # (aka partially fail), we need to figure out what documents need to be
-    # retried.
-    #
-    # In the worst case, a failing flush (exception) will incur a retry from Stud::Buffer.
-  end # def flush
 
-
-
-
-    end
-    buffer_flush(:final => true)
+    @retry_flush_mutex.synchronize {
+      @retry_queue_not_full.signal if @retry_queue.size < @retry_max_items
+    }
   end
 
-
-
-
-
-
+  private
+  def retry_push(actions)
+    Array(actions).each{|action| @retry_queue << action}
+    @retry_flush_mutex.synchronize {
+      @retry_queue_needs_flushing.signal if @retry_queue.size >= @retry_max_items
+    }
+  end
 
+  # helper function to replace placeholders
+  # in index names to wildcards
+  # example:
+  # "logs-%{YYYY}" -> "logs-*"
+  private
+  def wildcard_substitute(name)
+    name.gsub(/%\{[^}]+\}/, "*")
+  end
 end # class LogStash::Outputs::Elasticsearch
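The `wildcard_substitute` helper added above is what lets the template's `template` field match every concrete index the configured pattern can produce; applied to the plugin's default daily index pattern:

```ruby
# Replace sprintf-style %{...} placeholders in an index name with "*",
# so an index template matches every index the pattern can generate.
def wildcard_substitute(name)
  name.gsub(/%\{[^}]+\}/, "*")
end

wildcard_substitute("logstash-%{+YYYY.MM.dd}")  # => "logstash-*"
```
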
data/lib/logstash/outputs/elasticsearch/protocol.rb CHANGED

@@ -79,13 +79,18 @@ module LogStash::Outputs::Elasticsearch
     end
 
     def bulk(actions)
-      @client.bulk(:body => actions.collect do |action, args, source|
+      bulk_response = @client.bulk(:body => actions.collect do |action, args, source|
         if source
           next [ { action => args }, source ]
         else
           next { action => args }
         end
       end.flatten)
+      if bulk_response["errors"]
+        return {"errors" => true, "statuses" => bulk_response["items"].map { |i| i["status"] }}
+      else
+        return {"errors" => false}
+      end
     end # def bulk
 
     def template_exists?(name)
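With the HTTP adapter now returning `{"errors" => true/false, "statuses" => [...]}` (and the node adapter normalized to the same shape), the output's `submit` method can pick retryable actions purely by per-item status. A sketch of that selection, using a hypothetical normalized response:

```ruby
# Hypothetical normalized bulk response, in the shape the adapters return.
bulk_response = {"errors" => true, "statuses" => [200, 429, 201, 503]}
actions = [:a, :b, :c, :d]

# Keep only the actions whose per-item status marks them as retryable
# (429 throttled, 503 unavailable), mirroring submit's filter.
failed = actions.select.with_index { |_, i| [429, 503].include?(bulk_response["statuses"][i]) }
# failed == [:b, :d]
```
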
@@ -190,7 +195,14 @@ module LogStash::Outputs::Elasticsearch
       end
       response = prep.execute.actionGet()
 
-
+      if response.has_failures()
+        return {"errors" => true,
+                "statuses" => response.map { |i| (i.is_failed && i.get_failure.get_status.get_status) || 200 }}
+      else
+        return {"errors" => false}
+      end
+      # returns 200 for all successful actions, represents 201 & 200
+      # TODO(talevy): parse item response objects to retrieve correct 200 (OK) or 201(created) status codes
     end # def bulk
 
     def build_request(action, args, source)
@@ -202,8 +214,23 @@ module LogStash::Outputs::Elasticsearch
       when "delete"
         request = org.elasticsearch.action.delete.DeleteRequest.new(args[:_index])
         request.id(args[:_id])
+      when "create"
+        request = org.elasticsearch.action.index.IndexRequest.new(args[:_index])
+        request.id(args[:_id]) if args[:_id]
+        request.source(source)
+        request.opType("create")
+      when "create_unless_exists"
+        unless args[:_id].nil?
+          request = org.elasticsearch.action.index.IndexRequest.new(args[:_index])
+          request.id(args[:_id])
+          request.source(source)
+          request.opType("create")
+        else
+          raise(LogStash::ConfigurationError, "Specifying action => 'create_unless_exists' without a document '_id' is not supported.")
+        end
+      else
+        raise(LogStash::ConfigurationError, "action => '#{action_name}' is not currently supported.")
       #when "update"
-      #when "create"
       end # case action
 
       request.type(args[:_type]) if args[:_type]
data/logstash-output-elasticsearch.gemspec CHANGED

@@ -1,8 +1,8 @@
 Gem::Specification.new do |s|
 
   s.name = 'logstash-output-elasticsearch'
-  s.version = '0.1.
-  s.licenses = ['
+  s.version = '0.1.10'
+  s.licenses = ['apache-2.0']
   s.summary = "Logstash Output to Elasticsearch"
   s.description = "Output events to elasticsearch"
   s.authors = ["Elasticsearch"]
@@ -23,6 +23,7 @@ Gem::Specification.new do |s|
   s.requirements << "jar 'org.elasticsearch:elasticsearch', '1.4.0'"
 
   # Gem dependencies
+  s.add_runtime_dependency 'concurrent-ruby'
   s.add_runtime_dependency 'elasticsearch', ['>= 1.0.6', '~> 1.0']
   s.add_runtime_dependency 'stud', ['>= 0.0.17', '~> 0.0']
   s.add_runtime_dependency 'cabin', ['~> 0.6']
data/spec/outputs/elasticsearch_spec.rb CHANGED

@@ -2,6 +2,7 @@ require "logstash/devutils/rspec/spec_helper"
 require "ftw"
 require "logstash/plugin"
 require "logstash/json"
+require "stud/try"
 
 describe "outputs/elasticsearch" do
 
@@ -72,6 +73,103 @@ describe "outputs/elasticsearch" do
     end
   end
 
+  describe "node client create actions", :elasticsearch => true do
+    require "logstash/outputs/elasticsearch"
+    require "elasticsearch"
+    let(:es) { Elasticsearch::Client.new }
+
+    def get_es_output(action, id = nil)
+      settings = {
+        "manage_template" => true,
+        "index" => "logstash-create",
+        "template_overwrite" => true,
+        "protocol" => "node",
+        "host" => "localhost",
+        "action" => action
+      }
+      settings['document_id'] = id unless id.nil?
+      LogStash::Outputs::ElasticSearch.new(settings)
+    end
+
+    before :each do
+      # Delete all templates first.
+      # Clean ES of data before we start.
+      es.indices.delete_template(:name => "*")
+      # This can fail if there are no indexes, ignore failure.
+      es.indices.delete(:index => "*") rescue nil
+    end
+
+    context "when action => create" do
+      it "should create new documents with or without id" do
+        subject = get_es_output("create", "id123")
+        subject.register
+        subject.receive(LogStash::Event.new("message" => "sample message here"))
+        subject.buffer_flush(:final => true)
+        es.indices.refresh
+        # Wait or fail until everything's indexed.
+        Stud::try(3.times) do
+          r = es.search
+          insist { r["hits"]["total"] } == 1
+        end
+      end
+
+      it "should create new documents without id" do
+        subject = get_es_output("create")
+        subject.register
+        subject.receive(LogStash::Event.new("message" => "sample message here"))
+        subject.buffer_flush(:final => true)
+        es.indices.refresh
+        # Wait or fail until everything's indexed.
+        Stud::try(3.times) do
+          r = es.search
+          insist { r["hits"]["total"] } == 1
+        end
+      end
+    end
+
+    context "when action => create_unless_exists" do
+      it "should create new documents when specific id is specified" do
+        subject = get_es_output("create_unless_exists", "id123")
+        subject.register
+        subject.receive(LogStash::Event.new("message" => "sample message here"))
+        subject.buffer_flush(:final => true)
+        es.indices.refresh
+        # Wait or fail until everything's indexed.
+        Stud::try(3.times) do
+          r = es.search
+          insist { r["hits"]["total"] } == 1
+        end
+      end
+
+      it "should fail to create a document when no id is specified" do
+        subject = get_es_output("create_unless_exists")
+        subject.register
+        subject.receive(LogStash::Event.new("message" => "sample message here"))
+        subject.buffer_flush(:final => true)
+        es.indices.refresh
+        # Wait or fail until everything's indexed.
+        Stud::try(3.times) do
+          r = es.search
+          insist { r["hits"]["total"] } == 0
+        end
+      end
+
+      it "should unsuccessfully submit two records with the same document id" do
+        subject = get_es_output("create_unless_exists", "id123")
+        subject.register
+        subject.receive(LogStash::Event.new("message" => "sample message here"))
+        subject.receive(LogStash::Event.new("message" => "sample message here")) # 400 status failure (same id)
+        subject.buffer_flush(:final => true)
+        es.indices.refresh
+        # Wait or fail until everything's indexed.
+        Stud::try(3.times) do
+          r = es.search
+          insist { r["hits"]["total"] } == 1
+        end
+      end
+    end
+  end
+
   describe "testing index_type", :elasticsearch => true do
     describe "no type value" do
       # Generate a random index name
@@ -257,7 +355,7 @@ describe "outputs/elasticsearch" do
     end
   end
 
-  describe "wildcard substitution in index templates", :
+  describe "wildcard substitution in index templates", :elasticsearch => true do
     require "logstash/outputs/elasticsearch"
 
     let(:template) { '{"template" : "not important, will be updated by :index"}' }
@@ -345,13 +443,18 @@ describe "outputs/elasticsearch" do
       reject { values }.include?(1)
     end
 
-    it "
+    it "does not create .raw field for the message field" do
       results = @es.search(:q => "message.raw:\"sample message here\"")
+      insist { results["hits"]["total"] } == 0
+    end
+
+    it "creates .raw field from any string field which is not_analyzed" do
+      results = @es.search(:q => "country.raw:\"us\"")
       insist { results["hits"]["total"] } == 1
-      insist { results["hits"]["hits"][0]["_source"]["
+      insist { results["hits"]["hits"][0]["_source"]["country"] } == "us"
 
       # partial or terms should not work.
-      results = @es.search(:q => "
+      results = @es.search(:q => "country.raw:\"u\"")
       insist { results["hits"]["total"] } == 0
     end
 
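The two specs above hinge on the `.raw` sub-field being `not_analyzed`: it is stored as a single exact term, so only whole-value queries match while partial terms do not. A toy pure-Ruby illustration of that analyzed-vs-not_analyzed distinction (this is a sketch of the concept, not Elasticsearch itself):

```ruby
# Toy model: an analyzed field is indexed as lowercased word terms,
# a not_analyzed field as one unbroken term.
analyzed_terms     = "sample message here".downcase.split # ["sample", "message", "here"]
not_analyzed_terms = ["us"]

# A term query matches only if the queried term is literally in the index.
def term_match?(terms, query)
  terms.include?(query)
end

term_match?(not_analyzed_terms, "us") # => true  (exact value)
term_match?(not_analyzed_terms, "u")  # => false (partial term, as the spec expects)
```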
@@ -374,6 +477,105 @@ describe "outputs/elasticsearch" do
     end
   end
 
+  describe "failures in bulk class expected behavior", :elasticsearch => true do
+    let(:template) { '{"template" : "not important, will be updated by :index"}' }
+    let(:event1) { LogStash::Event.new("somevalue" => 100, "@timestamp" => "2014-11-17T20:37:17.223Z", "@metadata" => {"retry_count" => 0}) }
+    let(:action1) { ["index", {:_id=>nil, :_index=>"logstash-2014.11.17", :_type=>"logs"}, event1] }
+    let(:event2) { LogStash::Event.new("geoip" => { "location" => [ 0.0, 0.0] }, "@timestamp" => "2014-11-17T20:37:17.223Z", "@metadata" => {"retry_count" => 0}) }
+    let(:action2) { ["index", {:_id=>nil, :_index=>"logstash-2014.11.17", :_type=>"logs"}, event2] }
+    let(:max_retries) { 3 }
+
+    def mock_actions_with_response(*resp)
+      LogStash::Outputs::Elasticsearch::Protocols::HTTPClient
+        .any_instance.stub(:bulk).and_return(*resp)
+      LogStash::Outputs::Elasticsearch::Protocols::NodeClient
+        .any_instance.stub(:bulk).and_return(*resp)
+      LogStash::Outputs::Elasticsearch::Protocols::TransportClient
+        .any_instance.stub(:bulk).and_return(*resp)
+    end
+
+    ["node", "transport", "http"].each do |protocol|
+      context "with protocol => #{protocol}" do
+        subject do
+          require "logstash/outputs/elasticsearch"
+          settings = {
+            "manage_template" => true,
+            "index" => "logstash-2014.11.17",
+            "template_overwrite" => true,
+            "protocol" => protocol,
+            "host" => "localhost",
+            "retry_max_items" => 10,
+            "retry_max_interval" => 1,
+            "max_retries" => max_retries
+          }
+          next LogStash::Outputs::ElasticSearch.new(settings)
+        end
+
+        before :each do
+          # Delete all templates first.
+          require "elasticsearch"
+
+          # Clean ES of data before we start.
+          @es = Elasticsearch::Client.new
+          @es.indices.delete_template(:name => "*")
+          @es.indices.delete(:index => "*")
+          @es.indices.refresh
+        end
+
+        it "should return no errors if all bulk actions are successful" do
+          mock_actions_with_response({"errors" => false})
+          expect(subject).to receive(:submit).with([action1, action2]).once.and_call_original
+          subject.register
+          subject.receive(event1)
+          subject.receive(event2)
+          subject.buffer_flush(:final => true)
+          sleep(2)
+        end
+
+        it "should retry actions with response status of 503" do
+          mock_actions_with_response({"errors" => true, "statuses" => [200, 200, 503, 503]},
+                                     {"errors" => true, "statuses" => [200, 503]},
+                                     {"errors" => false})
+          expect(subject).to receive(:submit).with([action1, action1, action1, action2]).ordered.once.and_call_original
+          expect(subject).to receive(:submit).with([action1, action2]).ordered.once.and_call_original
+          expect(subject).to receive(:submit).with([action2]).ordered.once.and_call_original
+
+          subject.register
+          subject.receive(event1)
+          subject.receive(event1)
+          subject.receive(event1)
+          subject.receive(event2)
+          subject.buffer_flush(:final => true)
+          sleep(3)
+        end
+
+        it "should retry actions with response status of 429" do
+          mock_actions_with_response({"errors" => true, "statuses" => [429]},
+                                     {"errors" => false})
+          expect(subject).to receive(:submit).with([action1]).twice.and_call_original
+          subject.register
+          subject.receive(event1)
+          subject.buffer_flush(:final => true)
+          sleep(3)
+        end
+
+        it "should retry an event until max_retries reached" do
+          mock_actions_with_response({"errors" => true, "statuses" => [429]},
+                                     {"errors" => true, "statuses" => [429]},
+                                     {"errors" => true, "statuses" => [429]},
+                                     {"errors" => true, "statuses" => [429]},
+                                     {"errors" => true, "statuses" => [429]},
+                                     {"errors" => true, "statuses" => [429]})
+          expect(subject).to receive(:submit).with([action1]).exactly(max_retries).times.and_call_original
+          subject.register
+          subject.receive(event1)
+          subject.buffer_flush(:final => true)
+          sleep(3)
+        end
+      end
+    end
+  end
+
   describe "elasticsearch protocol", :elasticsearch => true do
     # ElasticSearch related jars
     #LogStash::Environment.load_elasticsearch_jars!
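The retry specs above exercise partial-bulk retry: when a bulk response reports per-action statuses, only the actions with a retryable status (429 or 503) are resubmitted, which is why the second `submit` in the 503 spec receives `[action1, action2]`. A minimal pure-Ruby sketch of that selection step (`select_retryable` and `RETRYABLE_STATUSES` are illustrative names, not the plugin's actual code):

```ruby
# Hypothetical sketch of status-based retry filtering, as the specs
# above exercise it; not the plugin's real implementation.
RETRYABLE_STATUSES = [429, 503]

# Pair each submitted action with its per-action status from the bulk
# response and keep only the ones worth retrying, in order.
def select_retryable(actions, statuses)
  actions.zip(statuses)
         .select { |_action, status| RETRYABLE_STATUSES.include?(status) }
         .map { |action, _status| action }
end

actions  = [:action1, :action1, :action1, :action2]
statuses = [200, 200, 503, 503]
select_retryable(actions, statuses) # => [:action1, :action2]
```

With the spec's first mocked response (`[200, 200, 503, 503]`) this leaves one `action1` and one `action2` for the next submit, matching the `ordered` expectations.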
@@ -515,3 +717,31 @@ describe "outputs/elasticsearch" do
     end
   end
 end
+
+describe "outputs/elasticsearch", :elasticsearch => true do
+  require 'elasticsearch'
+
+  it "set sniffing in transport mode" do
+
+    config = %q[
+      output {
+        elasticsearch {
+          host => "localhost"
+          protocol => "transport"
+          sniffing => true
+        }
+      }
+    ]
+
+
+    settings_class = org.elasticsearch.common.settings.ImmutableSettings
+    settings = settings_class.settingsBuilder
+
+    expect(settings_class).to receive(:settingsBuilder).and_return(settings)
+
+    pipeline = LogStash::Pipeline.new(config)
+    pipeline.run
+
+    expect(settings.build.getAsMap["client.transport.sniff"]).to eq("true")
+  end
+end
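Taken together, the options these specs exercise map onto an ordinary `elasticsearch` output block. An illustrative config fragment (values copied from the specs above, not recommendations):

```
output {
  elasticsearch {
    host               => "localhost"
    protocol           => "transport"
    sniffing           => true
    retry_max_items    => 10
    retry_max_interval => 1
    max_retries        => 3
  }
}
```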
metadata
CHANGED
@@ -1,25 +1,30 @@
 --- !ruby/object:Gem::Specification
 name: logstash-output-elasticsearch
 version: !ruby/object:Gem::Version
-  version: 0.1.
+  version: 0.1.10
 platform: java
 authors:
 - Elasticsearch
 autorequire:
 bindir: bin
 cert_chain: []
-date:
+date: 2015-01-27 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
-
-  version_requirements: !ruby/object:Gem::Requirement
+  requirement: !ruby/object:Gem::Requirement
     requirements:
     - - '>='
       - !ruby/object:Gem::Version
-        version:
-
+        version: '0'
+  name: concurrent-ruby
+  prerelease: false
+  type: :runtime
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - '>='
       - !ruby/object:Gem::Version
-        version: '
+        version: '0'
+- !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - '>='
@@ -28,18 +33,18 @@ dependencies:
     - - ~>
       - !ruby/object:Gem::Version
         version: '1.0'
+  name: elasticsearch
   prerelease: false
   type: :runtime
-- !ruby/object:Gem::Dependency
-  name: stud
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - '>='
       - !ruby/object:Gem::Version
-        version:
+        version: 1.0.6
     - - ~>
       - !ruby/object:Gem::Version
-        version: '
+        version: '1.0'
+- !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - '>='
@@ -48,32 +53,32 @@ dependencies:
     - - ~>
       - !ruby/object:Gem::Version
         version: '0.0'
+  name: stud
   prerelease: false
   type: :runtime
-- !ruby/object:Gem::Dependency
-  name: cabin
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
+    - - '>='
+      - !ruby/object:Gem::Version
+        version: 0.0.17
     - - ~>
       - !ruby/object:Gem::Version
-        version: '0.
+        version: '0.0'
+- !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - ~>
       - !ruby/object:Gem::Version
         version: '0.6'
+  name: cabin
   prerelease: false
   type: :runtime
-- !ruby/object:Gem::Dependency
-  name: logstash
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - -
-      - !ruby/object:Gem::Version
-        version: 1.4.0
-    - - <
+    - - ~>
       - !ruby/object:Gem::Version
-        version:
+        version: '0.6'
+- !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - '>='
@@ -82,32 +87,32 @@ dependencies:
     - - <
       - !ruby/object:Gem::Version
         version: 2.0.0
+  name: logstash
   prerelease: false
   type: :runtime
-- !ruby/object:Gem::Dependency
-  name: jar-dependencies
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - '>='
       - !ruby/object:Gem::Version
-        version:
+        version: 1.4.0
+    - - <
+      - !ruby/object:Gem::Version
+        version: 2.0.0
+- !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - '>='
       - !ruby/object:Gem::Version
         version: '0'
+  name: jar-dependencies
   prerelease: false
   type: :runtime
-- !ruby/object:Gem::Dependency
-  name: ftw
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - '>='
-      - !ruby/object:Gem::Version
-        version: 0.0.40
-    - - ~>
       - !ruby/object:Gem::Version
         version: '0'
+- !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - '>='
@@ -116,50 +121,59 @@ dependencies:
     - - ~>
       - !ruby/object:Gem::Version
         version: '0'
+  name: ftw
   prerelease: false
   type: :development
-- !ruby/object:Gem::Dependency
-  name: logstash-input-generator
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - '>='
+      - !ruby/object:Gem::Version
+        version: 0.0.40
+    - - ~>
       - !ruby/object:Gem::Version
         version: '0'
+- !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - '>='
       - !ruby/object:Gem::Version
         version: '0'
+  name: logstash-input-generator
   prerelease: false
   type: :development
-- !ruby/object:Gem::Dependency
-  name: manticore
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - -
+    - - '>='
       - !ruby/object:Gem::Version
-        version: '0
+        version: '0'
+- !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - ~>
       - !ruby/object:Gem::Version
         version: '0.3'
+  name: manticore
   prerelease: false
   type: :runtime
-- !ruby/object:Gem::Dependency
-  name: logstash-devutils
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - -
+    - - ~>
       - !ruby/object:Gem::Version
-        version: '0'
+        version: '0.3'
+- !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - '>='
       - !ruby/object:Gem::Version
         version: '0'
+  name: logstash-devutils
   prerelease: false
   type: :development
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - '>='
+      - !ruby/object:Gem::Version
+        version: '0'
 description: Output events to elasticsearch
 email: info@elasticsearch.com
 executables: []
@@ -167,8 +181,10 @@ extensions: []
 extra_rdoc_files: []
 files:
 - .gitignore
+- CONTRIBUTORS
 - Gemfile
 - LICENSE
+- README.md
 - Rakefile
 - lib/logstash/outputs/elasticsearch.rb
 - lib/logstash/outputs/elasticsearch/elasticsearch-template.json
@@ -177,7 +193,7 @@ files:
 - spec/outputs/elasticsearch_spec.rb
 homepage: http://logstash.net/
 licenses:
--
+- apache-2.0
 metadata:
   logstash_plugin: 'true'
   logstash_group: output
@@ -198,7 +214,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
 requirements:
 - jar 'org.elasticsearch:elasticsearch', '1.4.0'
 rubyforge_project:
-rubygems_version: 2.
+rubygems_version: 2.1.9
 signing_key:
 specification_version: 4
 summary: Logstash Output to Elasticsearch