fluent-plugin-elasticsearchfork 0.5.1

checksums.yaml ADDED
@@ -0,0 +1,7 @@
1
+ ---
2
+ SHA1:
3
+ metadata.gz: 31e26aa8b25a1e3d2b7033f8eb5f858c91a2f6eb
4
+ data.tar.gz: 83720974bf6ea9b459b22d8ab52cffa89740a756
5
+ SHA512:
6
+ metadata.gz: 1e490e718047330518b2bf9f04d66f81b9c8fd57fcb4c38222cd05fa8619f07e5b26e4284a3172921770791bba38bc83427951dfa72b3162e35b7b84afab3ab3
7
+ data.tar.gz: 1352faf2850fb701e6279004907258c9b860f40f0f7a1a25194ea0c27aff3687eb9917ef2ba231356577c2ecb356f4446340900ed6e74b1c9bfba4f8d389e9ce
data/.gitignore ADDED
@@ -0,0 +1,17 @@
1
+ *.gem
2
+ *.rbc
3
+ .bundle
4
+ .config
5
+ .yardoc
6
+ Gemfile.lock
7
+ InstalledFiles
8
+ _yardoc
9
+ coverage
10
+ doc/
11
+ lib/bundler/man
12
+ pkg
13
+ rdoc
14
+ spec/reports
15
+ test/tmp
16
+ test/version_tmp
17
+ tmp
data/.travis.yml ADDED
@@ -0,0 +1,8 @@
1
+ language: ruby
2
+
3
+ rvm:
4
+ - 1.9.3
5
+ - 2.0.0
6
+ - 2.1.1
7
+
8
+ script: bundle exec ruby -S -Itest test/plugin/test_out_elasticsearch.rb
data/Gemfile ADDED
@@ -0,0 +1,6 @@
1
+ source 'https://rubygems.org'
2
+
3
+ # Specify your gem's dependencies in fluent-plugin-elasticsearch.gemspec
4
+ gemspec
5
+
6
+ gem 'coveralls', require: false
data/History.md ADDED
@@ -0,0 +1,56 @@
1
+ ## Changelog
2
+
3
+ ### Future
4
+ - added `reload_on_failure` and `reload_connections` flags (#78)
5
+
6
+ ### 0.5.1
7
+
8
+ - fix legacy hosts option, port should be optional (#75)
9
+
10
+ ### 0.5.0
11
+
12
+ - add full connection URI support (#65)
13
+ - use `@timestamp` for index (#41)
14
+ - add support for elasticsearch gem version 1 (#71)
15
+ - fix connection reset & retry when connection is lost (#67)
16
+
17
+ ### 0.4.0
18
+
19
+ - add `request_timeout` config (#59)
20
+ - fix lockup when non-hash values are sent (#52)
21
+
22
+ ### 0.3.1
23
+
24
+ - force using patron (#46)
25
+ - do not generate @timestamp if already part of message (#35)
26
+
27
+ ### 0.3.0
28
+
29
+ - add `parent_key` option (#28)
30
+ - have travis-ci build on multiple rubies (#30)
31
+ - add `utc_index` and `hosts` options, switch to using `elasticsearch` gem (#26, #29)
32
+
33
+ ### 0.2.0
34
+
35
+ - fix encoding issues with JSON conversion and again when sending to elasticsearch (#19, #21)
36
+ - add logstash_dateformat option (#20)
37
+
38
+ ### 0.1.4
39
+
40
+ - add logstash_prefix option
41
+
42
+ ### 0.1.3
43
+
44
+ - raise an exception on a non-success response from elasticsearch
45
+
46
+ ### 0.1.2
47
+
48
+ - add id_key option
49
+
50
+ ### 0.1.1
51
+
52
+ - fix timezone in logstash key
53
+
54
+ ### 0.1.0
55
+
56
+ - Initial gem release.
data/LICENSE.txt ADDED
@@ -0,0 +1,22 @@
1
+ Copyright (c) 2012 Uken Games
2
+
3
+ MIT License
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining
6
+ a copy of this software and associated documentation files (the
7
+ "Software"), to deal in the Software without restriction, including
8
+ without limitation the rights to use, copy, modify, merge, publish,
9
+ distribute, sublicense, and/or sell copies of the Software, and to
10
+ permit persons to whom the Software is furnished to do so, subject to
11
+ the following conditions:
12
+
13
+ The above copyright notice and this permission notice shall be
14
+ included in all copies or substantial portions of the Software.
15
+
16
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
17
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
18
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
19
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
20
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
21
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
22
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,187 @@
1
+ # Fluent::Plugin::Elasticsearch, a plugin for [Fluentd](http://fluentd.org)
2
+
3
+ [![Gem Version](https://badge.fury.io/rb/fluent-plugin-elasticsearch.png)](http://badge.fury.io/rb/fluent-plugin-elasticsearch)
4
+ [![Dependency Status](https://gemnasium.com/uken/fluent-plugin-elasticsearch.png)](https://gemnasium.com/uken/fluent-plugin-elasticsearch)
5
+ [![Build Status](https://travis-ci.org/uken/fluent-plugin-elasticsearch.png?branch=master)](https://travis-ci.org/uken/fluent-plugin-elasticsearch)
6
+ [![Coverage Status](https://coveralls.io/repos/uken/fluent-plugin-elasticsearch/badge.png)](https://coveralls.io/r/uken/fluent-plugin-elasticsearch)
7
+ [![Code Climate](https://codeclimate.com/github/uken/fluent-plugin-elasticsearch.png)](https://codeclimate.com/github/uken/fluent-plugin-elasticsearch)
8
+
9
+ I wrote this so you can search logs routed through Fluentd.
10
+
11
+ ## Installation
12
+
13
+ $ gem install fluent-plugin-elasticsearch
14
+
15
+ * Prerequisite: you need to install [libcurl (libcurl-devel)](http://curl.haxx.se/libcurl/) for the `patron` HTTP adapter to work.
16
+
17
+ ## Usage
18
+
19
+ In your Fluentd configuration, use `type elasticsearch`. Additional configuration is optional; the default values look like this:
20
+
21
+ ```
22
+ host localhost
23
+ port 9200
24
+ index_name fluentd
25
+ type_name fluentd
26
+ ```
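+
+ For example, a complete match section built from these options might look like the sketch below; the `my.logs` tag pattern and the values shown are illustrative, not required:
+
+ ```
+ <match my.logs>
+ type elasticsearch
+ host localhost
+ port 9200
+ index_name fluentd
+ type_name fluentd
+ </match>
+ ```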
27
+
28
+ **Index templates**
29
+
30
+ This plugin creates Elasticsearch indices by merely writing to them. Consider using [Index Templates](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-templates.html) to gain control of what gets indexed and how. See [this example](https://github.com/uken/fluent-plugin-elasticsearch/issues/33#issuecomment-38693282) for a good starting point.
31
+
32
+ **More options:**
33
+
34
+ ```
35
+ hosts host1:port1,host2:port2,host3:port3
36
+ ```
37
+
38
+ or
39
+
40
+ ```
41
+ hosts https://customhost.com:443/path,https://username:password@host-failover.com:443
42
+ ```
43
+
44
+ You can specify multiple Elasticsearch hosts, separated by commas.
45
+
46
+ If you specify multiple hosts, this plugin will load balance updates to Elasticsearch. This is an [elasticsearch-ruby](https://github.com/elasticsearch/elasticsearch-ruby) feature; the default strategy is round-robin.
47
+
48
+ If you specify this option, the `host` and `port` options are ignored.
49
+
50
+ ```
51
+ user demo
52
+ password secret
53
+ path /elastic_search/
54
+ scheme https
55
+ ```
56
+
57
+ You can specify a user and password for HTTP Basic authentication. When used together with a `hosts` list, these values act as defaults, i.e. they apply to any host in the list that does not specify its own credentials.
58
+
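+ As a sketch (the hostnames and credentials below are placeholders), a hosts list combined with default credentials might look like:
+
+ ```
+ hosts host1:9200,host2:9200
+ user default_user
+ password default_password
+ path /es/
+ ```
+
+ Since neither host specifies its own credentials, the default `user`, `password` and `path` are applied to both.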
59
+
60
+ ```
61
+ logstash_format true # defaults to false
62
+ ```
63
+
64
+ This is meant to make the data written to Elasticsearch compatible with what Logstash writes, so that you can take advantage of [Kibana](http://kibana.org/).
65
+
66
+ ```
67
+ logstash_prefix mylogs # defaults to "logstash"
68
+ ```
69
+
70
+ By default, records are inserted into an index named `logstash-YYYY.MM.DD`. This option lets you change the prefix, e.g. `mylogs-YYYY.MM.DD`.
71
+
72
+ ```
73
+ logstash_dateformat %Y.%m. # defaults to "%Y.%m.%d"
74
+ ```
75
+
76
+ By default, when `logstash_format` is enabled, records go into an index named `logstash-YYYY.MM.DD`. This option lets you change the date portion of the index name, for example to produce a monthly index such as `logstash-YYYY.MM`.
77
+
78
+ By default, `@timestamp` is set dynamically to the time of log ingestion. If you'd like to use a custom time, include an `@timestamp` field in your record:
79
+
80
+ ```
81
+ {"@timestamp":"2014-04-07T00:00:00-00:00"}
82
+ ```
83
+
84
+ ```
85
+ utc_index true
86
+ ```
87
+
88
+ By default, the index date is computed in UTC (Coordinated Universal Time). Set `utc_index` to false to use local time instead.
89
+
90
+ ```
91
+ request_timeout 15s # defaults to 5s
92
+ ```
93
+
94
+ You can specify the HTTP request timeout.
95
+
96
+ This is useful when Elasticsearch cannot respond to a bulk request within the default of 5 seconds.
97
+
98
+ ```
99
+ reload_connections false # defaults to true
100
+ ```
101
+
102
+ You can tune how the elasticsearch-transport host reloading feature works. By default it will reload the host list from the server
103
+ every 10,000th request to spread the load. This can be an issue if your Elasticsearch cluster is behind a reverse proxy,
104
+ as the fluentd process may not have direct network access to the Elasticsearch nodes.
105
+
106
+ ```
107
+ reload_on_failure true # defaults to false
108
+ ```
109
+
110
+ Indicates that the elasticsearch-transport will try to reload the node addresses if there is a failure while making a
111
+ request; this can be useful to quickly remove a dead node from the list of addresses.
112
+
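+ For an Elasticsearch cluster behind a reverse proxy, one plausible combination (a sketch, not a recommendation) is to disable periodic reloading while still reloading after failures:
+
+ ```
+ reload_connections false
+ reload_on_failure true
+ request_timeout 15s
+ ```
+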
113
+ ---
114
+
115
+ ```
116
+ include_tag_key true # defaults to false
117
+ tag_key tag # defaults to tag
118
+ ```
119
+
120
+ This will add the fluentd tag to the JSON record. For instance, if you have a config like this:
121
+
122
+ ```
123
+ <match my.logs>
124
+ type elasticsearch
125
+ include_tag_key true
126
+ tag_key _key
127
+ </match>
128
+ ```
129
+
130
+ The record inserted into Elasticsearch would be:
131
+
132
+ ```
133
+ {"_key":"my.logs", "name":"Johnny Doeie"}
134
+ ```
135
+
136
+ ---
137
+
138
+ ```
139
+ id_key request_id # use "request_id" field as a record id in ES
140
+ ```
141
+
142
+ By default, all records inserted into Elasticsearch get a random `_id`. This option lets you use a field in the record as the identifier.
143
+
144
+ The following record `{"name":"Johnny","request_id":"87d89af7daffad6"}` will trigger the following Elasticsearch bulk command:
145
+
146
+ ```
147
+ { "index" : { "_index" : "logstash-2013.01.01, "_type" : "fluentd", "_id" : "87d89af7daffad6" } }
148
+ { "name": "Johnny", "request_id": "87d89af7daffad6" }
149
+ ```
150
+
151
+ ---
152
+
153
+ fluent-plugin-elasticsearch is a buffered output that uses Elasticsearch's bulk API. Additional buffer configuration (shown here with its default values) would be:
154
+
155
+ ```
156
+ buffer_type memory
157
+ flush_interval 60
158
+ retry_limit 17
159
+ retry_wait 1.0
160
+ num_threads 1
161
+ ```
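+
+ As a sketch, buffering to disk instead of memory could look like the following; the buffer path is hypothetical and must be writable by the fluentd process:
+
+ ```
+ buffer_type file
+ buffer_path /var/log/fluentd/buffer/elasticsearch
+ flush_interval 60
+ retry_limit 17
+ retry_wait 1.0
+ ```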
162
+
163
+ ---
164
+
165
+ Please consider using [fluent-plugin-forest](https://github.com/tagomoris/fluent-plugin-forest) to route logs from multiple tags to multiple Elasticsearch indices:
166
+
167
+ ```
168
+ <match my.logs.*>
169
+ type forest
170
+ subtype elasticsearch
171
+ remove_prefix my.logs
172
+ <template>
173
+ logstash_prefix ${tag}
174
+ # ...
175
+ </template>
176
+ </match>
177
+ ```
178
+
179
+ ## Contributing
180
+
181
+ 1. Fork it
182
+ 2. Create your feature branch (`git checkout -b my-new-feature`)
183
+ 3. Commit your changes (`git commit -am 'Add some feature'`)
184
+ 4. Push to the branch (`git push origin my-new-feature`)
185
+ 5. Create a new Pull Request
186
+
187
+ If you have a question, [open an Issue](https://github.com/uken/fluent-plugin-elasticsearch/issues).
data/Rakefile ADDED
@@ -0,0 +1,11 @@
1
+ require "bundler/gem_tasks"
2
+ require 'rake/testtask'
3
+
4
+ Rake::TestTask.new(:test) do |test|
5
+ test.libs << 'lib' << 'test'
6
+ test.pattern = 'test/**/test_*.rb'
7
+ test.verbose = true
8
+ end
9
+
10
+ task :default => :test
11
+
@@ -0,0 +1,25 @@
1
+ # -*- encoding: utf-8 -*-
2
+ $:.push File.expand_path('../lib', __FILE__)
3
+
4
+ Gem::Specification.new do |s|
5
+ s.name = 'fluent-plugin-elasticsearchfork'
6
+ s.version = '0.5.1'
7
+ s.authors = ['diogo', 'pitr']
8
+ s.email = ['tomodian@gmail.com']
9
+ s.description = %q{ElasticSearch output plugin for Fluent event collector}
10
+ s.summary = s.description
11
+ s.homepage = 'https://github.com/tomodian/fluent-plugin-elasticsearch'
12
+ s.license = 'MIT'
13
+
14
+ s.files = `git ls-files`.split($/)
15
+ s.executables = s.files.grep(%r{^bin/}).map{ |f| File.basename(f) }
16
+ s.test_files = s.files.grep(%r{^(test|spec|features)/})
17
+ s.require_paths = ['lib']
18
+
19
+ s.add_runtime_dependency 'fluentd', '~> 0'
20
+ s.add_runtime_dependency 'patron', '~> 0'
21
+ s.add_runtime_dependency 'elasticsearch', '>= 0'
22
+
23
+ s.add_development_dependency 'rake', '~> 0'
24
+ s.add_development_dependency 'webmock', '~> 1'
25
+ end
@@ -0,0 +1,176 @@
1
+ # encoding: UTF-8
2
+ require 'date'
3
+ require 'patron'
4
+ require 'elasticsearch'
5
+ require 'uri'
6
+
7
+ class Fluent::ElasticsearchOutput < Fluent::BufferedOutput
8
+ class ConnectionFailure < StandardError; end
9
+
10
+ Fluent::Plugin.register_output('elasticsearch', self)
11
+
12
+ config_param :host, :string, :default => 'localhost'
13
+ config_param :port, :integer, :default => 9200
14
+ config_param :user, :string, :default => nil
15
+ config_param :password, :string, :default => nil
16
+ config_param :path, :string, :default => nil
17
+ config_param :scheme, :string, :default => 'http'
18
+ config_param :hosts, :string, :default => nil
19
+ config_param :logstash_format, :bool, :default => false
20
+ config_param :logstash_prefix, :string, :default => "logstash"
21
+ config_param :logstash_dateformat, :string, :default => "%Y.%m.%d"
22
+ config_param :utc_index, :bool, :default => true
23
+ config_param :type_name, :string, :default => "fluentd"
24
+ config_param :index_name, :string, :default => "fluentd"
25
+ config_param :id_key, :string, :default => nil
26
+ config_param :parent_key, :string, :default => nil
27
+ config_param :request_timeout, :time, :default => 5
28
+ config_param :reload_connections, :bool, :default => true
29
+ config_param :reload_on_failure, :bool, :default => false
30
+
31
+ include Fluent::SetTagKeyMixin
32
+ config_set_default :include_tag_key, false
33
+
34
+ def initialize
35
+ super
36
+ end
37
+
38
+ def configure(conf)
39
+ super
40
+ end
41
+
42
+ def start
43
+ super
44
+ end
45
+
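+ # Lazily build the Elasticsearch client. The transport is constructed
+ # explicitly so the patron adapter can be forced and the reload, retry and
+ # timeout options are passed through to elasticsearch-transport.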
46
+ def client
47
+ @_es ||= begin
48
+ adapter_conf = lambda {|f| f.adapter :patron }
49
+ transport = Elasticsearch::Transport::Transport::HTTP::Faraday.new(get_connection_options.merge(
50
+ options: {
51
+ reload_connections: @reload_connections,
52
+ reload_on_failure: @reload_on_failure,
53
+ retry_on_failure: 5,
54
+ transport_options: {
55
+ request: { timeout: @request_timeout }
56
+ }
57
+ }), &adapter_conf)
58
+ es = Elasticsearch::Client.new transport: transport
59
+
60
+ begin
61
+ raise ConnectionFailure, "Can not reach Elasticsearch cluster (#{connection_options_description})!" unless es.ping
62
+ rescue Faraday::ConnectionFailed => e
63
+ raise ConnectionFailure, "Can not reach Elasticsearch cluster (#{connection_options_description})! #{e.message}"
64
+ end
65
+
66
+ log.info "Connection opened to Elasticsearch cluster => #{connection_options_description}"
67
+ es
68
+ end
69
+ end
70
+
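+ # Build the hosts array handed to elasticsearch-ruby. Both the legacy
+ # host:port list and full connection URIs are supported; plugin-level
+ # user, password and path act as per-host defaults.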
71
+ def get_connection_options
72
+ raise "`password` must be present if `user` is present" if @user && !@password
73
+
74
+ hosts = if @hosts
75
+ @hosts.split(',').map do |host_str|
76
+ # Support legacy hosts format host:port,host:port,host:port...
77
+ if host_str.match(%r{^[^:]+(\:\d+)?$})
78
+ {
79
+ host: host_str.split(':')[0],
80
+ port: (host_str.split(':')[1] || @port).to_i,
81
+ scheme: @scheme
82
+ }
83
+ else
84
+ # New hosts format expects URLs such as http://logs.foo.com,https://john:pass@logs2.foo.com/elastic
85
+ uri = URI(host_str)
86
+ %w(user password path).inject(host: uri.host, port: uri.port, scheme: uri.scheme) do |hash, key|
87
+ hash[key.to_sym] = uri.public_send(key) unless uri.public_send(key).nil? || uri.public_send(key) == ''
88
+ hash
89
+ end
90
+ end
91
+ end.compact
92
+ else
93
+ [{host: @host, port: @port, scheme: @scheme}]
94
+ end.each do |host|
95
+ host.merge!(user: @user, password: @password) if !host[:user] && @user
96
+ host.merge!(path: @path) if !host[:path] && @path
97
+ end
98
+
99
+ {
100
+ hosts: hosts
101
+ }
102
+ end
103
+
104
+ def connection_options_description
105
+ get_connection_options[:hosts].map do |host_info|
106
+ attributes = host_info.dup
107
+ attributes[:password] = 'obfuscated' if attributes.has_key?(:password)
108
+ attributes.inspect
109
+ end.join(', ')
110
+ end
111
+
112
+ def format(tag, time, record)
113
+ [tag, time, record].to_msgpack
114
+ end
115
+
116
+ def shutdown
117
+ super
118
+ end
119
+
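+ # Turn the buffered chunk into an Elasticsearch bulk payload: each record is
+ # emitted as an action/metadata line ("index") followed by the document source.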
120
+ def write(chunk)
121
+ bulk_message = []
122
+
123
+ chunk.msgpack_each do |tag, time, record|
124
+ next unless record.is_a? Hash
125
+ if @logstash_format
126
+ if record.has_key?("@timestamp")
127
+ time = Time.parse record["@timestamp"]
128
+ else
129
+ record.merge!({"@timestamp" => Time.at(time).to_datetime.to_s})
130
+ end
131
+ if @utc_index
132
+ target_index = "#{@logstash_prefix}-#{Time.at(time).getutc.strftime("#{@logstash_dateformat}")}"
133
+ else
134
+ target_index = "#{@logstash_prefix}-#{Time.at(time).strftime("#{@logstash_dateformat}")}"
135
+ end
136
+ else
137
+ target_index = Date.today.strftime @index_name
138
+ end
139
+
140
+ if @include_tag_key
141
+ record.merge!(@tag_key => tag)
142
+ end
143
+
144
+ meta = { "index" => {"_index" => target_index, "_type" => type_name} }
145
+ if @id_key && record[@id_key]
146
+ meta['index']['_id'] = record[@id_key]
147
+ end
148
+
149
+ if @parent_key && record[@parent_key]
150
+ meta['index']['_parent'] = record[@parent_key]
151
+ end
152
+
153
+ bulk_message << meta
154
+ bulk_message << record
155
+ end
156
+
157
+ send(bulk_message) unless bulk_message.empty?
158
+ bulk_message.clear
159
+ end
160
+
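+ # Issue the bulk request. On connection failures the client is reset and the
+ # request retried with a short backoff (2s, then 4s) before giving up.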
161
+ def send(data)
162
+ retries = 0
163
+ begin
164
+ client.bulk body: data
165
+ rescue Faraday::ConnectionFailed, Faraday::TimeoutError => e
166
+ if retries < 2
167
+ retries += 1
168
+ @_es = nil
169
+ log.warn "Could not push logs to Elasticsearch, resetting connection and trying again. #{e.message}"
170
+ sleep 2**retries
171
+ retry
172
+ end
173
+ raise ConnectionFailure, "Could not push logs to Elasticsearch after #{retries} retries. #{e.message}"
174
+ end
175
+ end
176
+ end
data/test/helper.rb ADDED
@@ -0,0 +1 @@
1
+ require 'minitest/pride'
@@ -0,0 +1,440 @@
1
+ require 'test/unit'
2
+
3
+ require 'fluent/test'
4
+ require 'fluent/plugin/out_elasticsearch'
5
+
6
+ require 'webmock/test_unit'
7
+ require 'date'
8
+
9
+ $:.push File.expand_path("../..", __FILE__)
10
+ $:.push File.dirname(__FILE__)
11
+
12
+ require 'helper'
13
+
14
+ WebMock.disable_net_connect!
15
+
16
+ class ElasticsearchOutput < Test::Unit::TestCase
17
+ attr_accessor :index_cmds, :index_command_counts
18
+
19
+ def setup
20
+ Fluent::Test.setup
21
+ @driver = nil
22
+ end
23
+
24
+ def driver(tag='test', conf='')
25
+ @driver ||= Fluent::Test::BufferedOutputTestDriver.new(Fluent::ElasticsearchOutput, tag).configure(conf)
26
+ end
27
+
28
+ def sample_record
29
+ {'age' => 26, 'request_id' => '42', 'parent_id' => 'parent'}
30
+ end
31
+
32
+ def stub_elastic_ping(url="http://localhost:9200")
33
+ stub_request(:head, url).with.to_return(:status => 200, :body => "", :headers => {})
34
+ end
35
+
36
+ def stub_elastic(url="http://localhost:9200/_bulk")
37
+ stub_request(:post, url).with do |req|
38
+ @index_cmds = req.body.split("\n").map {|r| JSON.parse(r) }
39
+ end
40
+ end
41
+
42
+ def stub_elastic_unavailable(url="http://localhost:9200/_bulk")
43
+ stub_request(:post, url).to_return(:status => [503, "Service Unavailable"])
44
+ end
45
+
46
+ def stub_elastic_with_store_index_command_counts(url="http://localhost:9200/_bulk")
47
+ if @index_command_counts == nil
48
+ @index_command_counts = {}
49
+ @index_command_counts.default = 0
50
+ end
51
+
52
+ stub_request(:post, url).with do |req|
53
+ index_cmds = req.body.split("\n").map {|r| JSON.parse(r) }
54
+ @index_command_counts[url] += index_cmds.size
55
+ end
56
+ end
57
+
58
+ def test_configure
59
+ config = %{
60
+ host logs.google.com
61
+ port 777
62
+ scheme https
63
+ path /es/
64
+ user john
65
+ password doe
66
+ }
67
+ instance = driver('test', config).instance
68
+
69
+ assert_equal 'logs.google.com', instance.host
70
+ assert_equal 777, instance.port
71
+ assert_equal 'https', instance.scheme
72
+ assert_equal '/es/', instance.path
73
+ assert_equal 'john', instance.user
74
+ assert_equal 'doe', instance.password
75
+ end
76
+
77
+ def test_legacy_hosts_list
78
+ config = %{
79
+ hosts host1:50,host2:100,host3
80
+ scheme https
81
+ path /es/
82
+ port 123
83
+ }
84
+ instance = driver('test', config).instance
85
+
86
+ assert_equal 3, instance.get_connection_options[:hosts].length
87
+ host1, host2, host3 = instance.get_connection_options[:hosts]
88
+
89
+ assert_equal 'host1', host1[:host]
90
+ assert_equal 50, host1[:port]
91
+ assert_equal 'https', host1[:scheme]
92
+ assert_equal '/es/', host2[:path]
93
+ assert_equal 'host3', host3[:host]
94
+ assert_equal 123, host3[:port]
95
+ assert_equal 'https', host3[:scheme]
96
+ assert_equal '/es/', host3[:path]
97
+ end
98
+
99
+ def test_hosts_list
100
+ config = %{
101
+ hosts https://john:password@host1:443/elastic/,http://host2
102
+ path /default_path
103
+ user default_user
104
+ password default_password
105
+ }
106
+ instance = driver('test', config).instance
107
+
108
+ assert_equal 2, instance.get_connection_options[:hosts].length
109
+ host1, host2 = instance.get_connection_options[:hosts]
110
+
111
+ assert_equal 'host1', host1[:host]
112
+ assert_equal 443, host1[:port]
113
+ assert_equal 'https', host1[:scheme]
114
+ assert_equal 'john', host1[:user]
115
+ assert_equal 'password', host1[:password]
116
+ assert_equal '/elastic/', host1[:path]
117
+
118
+ assert_equal 'host2', host2[:host]
119
+ assert_equal 'http', host2[:scheme]
120
+ assert_equal 'default_user', host2[:user]
121
+ assert_equal 'default_password', host2[:password]
122
+ assert_equal '/default_path', host2[:path]
123
+ end
124
+
125
+ def test_single_host_params_and_defaults
126
+ config = %{
127
+ host logs.google.com
128
+ user john
129
+ password doe
130
+ }
131
+ instance = driver('test', config).instance
132
+
133
+ assert_equal 1, instance.get_connection_options[:hosts].length
134
+ host1 = instance.get_connection_options[:hosts][0]
135
+
136
+ assert_equal 'logs.google.com', host1[:host]
137
+ assert_equal 9200, host1[:port]
138
+ assert_equal 'http', host1[:scheme]
139
+ assert_equal 'john', host1[:user]
140
+ assert_equal 'doe', host1[:password]
141
+ assert_equal nil, host1[:path]
142
+ end
143
+
144
+ def test_writes_to_default_index
145
+ stub_elastic_ping
146
+ stub_elastic
147
+ driver.emit(sample_record)
148
+ driver.run
149
+ assert_equal('fluentd', index_cmds.first['index']['_index'])
150
+ end
151
+
152
+ def test_writes_to_default_type
153
+ stub_elastic_ping
154
+ stub_elastic
155
+ driver.emit(sample_record)
156
+ driver.run
157
+ assert_equal('fluentd', index_cmds.first['index']['_type'])
158
+ end
159
+
160
+ def test_writes_to_specified_index
161
+ driver.configure("index_name myindex\n")
162
+ stub_elastic_ping
163
+ stub_elastic
164
+ driver.emit(sample_record)
165
+ driver.run
166
+ assert_equal('myindex', index_cmds.first['index']['_index'])
167
+ end
168
+
169
+ def test_writes_to_specified_type
170
+ driver.configure("type_name mytype\n")
171
+ stub_elastic_ping
172
+ stub_elastic
173
+ driver.emit(sample_record)
174
+ driver.run
175
+ assert_equal('mytype', index_cmds.first['index']['_type'])
176
+ end
177
+
178
+ def test_writes_to_specified_host
179
+ driver.configure("host 192.168.33.50\n")
180
+ stub_elastic_ping("http://192.168.33.50:9200")
181
+ elastic_request = stub_elastic("http://192.168.33.50:9200/_bulk")
182
+ driver.emit(sample_record)
183
+ driver.run
184
+ assert_requested(elastic_request)
185
+ end
186
+
187
+ def test_writes_to_specified_port
188
+ driver.configure("port 9201\n")
189
+ stub_elastic_ping("http://localhost:9201")
190
+ elastic_request = stub_elastic("http://localhost:9201/_bulk")
191
+ driver.emit(sample_record)
192
+ driver.run
193
+ assert_requested(elastic_request)
194
+ end
195
+
196
+ def test_writes_to_multi_hosts
197
+ hosts = [['192.168.33.50', 9201], ['192.168.33.51', 9201], ['192.168.33.52', 9201]]
198
+ hosts_string = hosts.map {|x| "#{x[0]}:#{x[1]}"}.compact.join(',')
199
+
200
+ driver.configure("hosts #{hosts_string}")
201
+
202
+ hosts.each do |host_info|
203
+ host, port = host_info
204
+ stub_elastic_ping("http://#{host}:#{port}")
205
+ stub_elastic_with_store_index_command_counts("http://#{host}:#{port}/_bulk")
206
+ end
207
+
208
+ 1000.times do
209
+ driver.emit(sample_record.merge('age'=>rand(100)))
210
+ end
211
+
212
+ driver.run
213
+
214
+ # @note: we cannot make multi chunks with options (flush_interval, buffer_chunk_limit)
215
+ # it's Fluentd test driver's constraint
216
+ # so @index_command_counts.size is always 1
217
+
218
+ assert(@index_command_counts.size > 0, "not working with hosts options")
219
+
220
+ total = 0
221
+ @index_command_counts.each do |url, count|
222
+ total += count
223
+ end
224
+ assert_equal(2000, total)
225
+ end
226
+
227
+ def test_makes_bulk_request
228
+ stub_elastic_ping
229
+ stub_elastic
230
+ driver.emit(sample_record)
231
+ driver.emit(sample_record.merge('age' => 27))
232
+ driver.run
233
+ assert_equal(4, index_cmds.count)
234
+ end
235
+
236
+ def test_all_records_are_preserved_in_bulk
237
+ stub_elastic_ping
238
+ stub_elastic
239
+ driver.emit(sample_record)
240
+ driver.emit(sample_record.merge('age' => 27))
241
+ driver.run
242
+ assert_equal(26, index_cmds[1]['age'])
243
+ assert_equal(27, index_cmds[3]['age'])
244
+ end
245
+
246
+ def test_writes_to_logstash_index
247
+ driver.configure("logstash_format true\n")
248
+ time = Time.parse Date.today.to_s
249
+ logstash_index = "logstash-#{time.getutc.strftime("%Y.%m.%d")}"
250
+ stub_elastic_ping
251
+ stub_elastic
252
+ driver.emit(sample_record, time)
253
+ driver.run
254
+ assert_equal(logstash_index, index_cmds.first['index']['_index'])
255
+ end
256
+
257
+ def test_writes_to_logstash_utc_index
258
+ driver.configure("logstash_format true
259
+ utc_index false")
260
+ time = Time.parse Date.today.to_s
261
+ utc_index = "logstash-#{time.strftime("%Y.%m.%d")}"
262
+ stub_elastic_ping
263
+ stub_elastic
264
+ driver.emit(sample_record, time)
265
+ driver.run
266
+ assert_equal(utc_index, index_cmds.first['index']['_index'])
267
+ end
268
+
269
+ def test_writes_to_logstash_index_with_specified_prefix
270
+ driver.configure("logstash_format true
271
+ logstash_prefix myprefix")
272
+ time = Time.parse Date.today.to_s
273
+ logstash_index = "myprefix-#{time.getutc.strftime("%Y.%m.%d")}"
274
+ stub_elastic_ping
275
+ stub_elastic
276
+ driver.emit(sample_record, time)
277
+ driver.run
278
+ assert_equal(logstash_index, index_cmds.first['index']['_index'])
279
+ end
280
+
281
+ def test_writes_to_logstash_index_with_specified_dateformat
282
+ driver.configure("logstash_format true
283
+ logstash_dateformat %Y.%m")
284
+ time = Time.parse Date.today.to_s
285
+ logstash_index = "logstash-#{time.getutc.strftime("%Y.%m")}"
286
+ stub_elastic_ping
287
+ stub_elastic
288
+ driver.emit(sample_record, time)
289
+ driver.run
290
+ assert_equal(logstash_index, index_cmds.first['index']['_index'])
291
+ end
292
+
293
+ def test_writes_to_logstash_index_with_specified_prefix_and_dateformat
294
+ driver.configure("logstash_format true
295
+ logstash_prefix myprefix
296
+ logstash_dateformat %Y.%m")
297
+ time = Time.parse Date.today.to_s
298
+ logstash_index = "myprefix-#{time.getutc.strftime("%Y.%m")}"
299
+ stub_elastic_ping
300
+ stub_elastic
301
+ driver.emit(sample_record, time)
302
+ driver.run
303
+ assert_equal(logstash_index, index_cmds.first['index']['_index'])
304
+ end
305
+
306
+ def test_doesnt_add_logstash_timestamp_by_default
307
+ stub_elastic_ping
308
+ stub_elastic
309
+ driver.emit(sample_record)
310
+ driver.run
311
+ assert_nil(index_cmds[1]['@timestamp'])
312
+ end
313
+
314
+ def test_adds_logstash_timestamp_when_configured
315
+ driver.configure("logstash_format true\n")
316
+ stub_elastic_ping
317
+ stub_elastic
318
+ ts = DateTime.now.to_s
319
+ driver.emit(sample_record)
320
+ driver.run
321
+ assert(index_cmds[1].has_key? '@timestamp')
322
+ assert_equal(index_cmds[1]['@timestamp'], ts)
323
+ end
324
+
325
+ def test_uses_custom_timestamp_when_included_in_record
326
+ driver.configure("logstash_format true\n")
327
+ stub_elastic_ping
328
+ stub_elastic
329
+ ts = DateTime.new(2001,2,3).to_s
330
+ driver.emit(sample_record.merge!('@timestamp' => ts))
331
+ driver.run
332
+ assert(index_cmds[1].has_key? '@timestamp')
333
+ assert_equal(index_cmds[1]['@timestamp'], ts)
334
+ end
335
+
336
+ def test_doesnt_add_tag_key_by_default
337
+ stub_elastic_ping
338
+ stub_elastic
339
+ driver.emit(sample_record)
340
+ driver.run
341
+ assert_nil(index_cmds[1]['tag'])
342
+ end
343
+
344
+ def test_adds_tag_key_when_configured
345
+ driver('mytag').configure("include_tag_key true\n")
346
+ stub_elastic_ping
347
+ stub_elastic
348
+ driver.emit(sample_record)
349
+ driver.run
350
+ assert(index_cmds[1].has_key?('tag'))
351
+ assert_equal(index_cmds[1]['tag'], 'mytag')
352
+ end
353
+
354
+ def test_adds_id_key_when_configured
355
+ driver.configure("id_key request_id\n")
356
+ stub_elastic_ping
357
+ stub_elastic
358
+ driver.emit(sample_record)
359
+ driver.run
360
+ assert_equal(index_cmds[0]['index']['_id'], '42')
361
+ end
362
+
363
+ def test_doesnt_add_id_key_if_missing_when_configured
364
+ driver.configure("id_key another_request_id\n")
365
+ stub_elastic_ping
366
+ stub_elastic
367
+ driver.emit(sample_record)
368
+ driver.run
369
+ assert(!index_cmds[0]['index'].has_key?('_id'))
370
+ end
371
+
372
+ def test_doesnt_add_id_key_when_not_configured
373
+ stub_elastic_ping
374
+ stub_elastic
375
+ driver.emit(sample_record)
376
+ driver.run
377
+ assert(!index_cmds[0]['index'].has_key?('_id'))
378
+ end
379
+
380
+ def test_adds_parent_key_when_configured
381
+ driver.configure("parent_key parent_id\n")
382
+ stub_elastic_ping
383
+ stub_elastic
384
+ driver.emit(sample_record)
385
+ driver.run
386
+ assert_equal(index_cmds[0]['index']['_parent'], 'parent')
387
+ end
388
+
389
+ def test_doesnt_add_parent_key_if_missing_when_configured
390
+ driver.configure("parent_key another_parent_id\n")
391
+ stub_elastic_ping
392
+ stub_elastic
393
+ driver.emit(sample_record)
394
+ driver.run
395
+ assert(!index_cmds[0]['index'].has_key?('_parent'))
396
+ end
397
+
398
+ def test_doesnt_add_parent_key_when_not_configured
399
+ stub_elastic_ping
400
+ stub_elastic
401
+ driver.emit(sample_record)
402
+ driver.run
403
+ assert(!index_cmds[0]['index'].has_key?('_parent'))
404
+ end
405
+
406
+ def test_request_error
407
+ stub_elastic_ping
408
+ stub_elastic_unavailable
409
+ driver.emit(sample_record)
410
+ assert_raise(Elasticsearch::Transport::Transport::Errors::ServiceUnavailable) {
411
+ driver.run
412
+ }
413
+ end
414
+
415
+ def test_garbage_record_error
416
+ stub_elastic_ping
417
+ stub_elastic
418
+ driver.emit("some garbage string")
419
+ driver.run
420
+ end
421
+
422
+ def test_connection_failed_retry
423
+ connection_resets = 0
424
+
425
+ stub_elastic_ping(url="http://localhost:9200").with do |req|
426
+ connection_resets += 1
427
+ end
428
+
429
+ stub_request(:post, "http://localhost:9200/_bulk").with do |req|
430
+ raise Faraday::ConnectionFailed, "Test message"
431
+ end
432
+
433
+ driver.emit(sample_record)
434
+
435
+ assert_raise(Fluent::ElasticsearchOutput::ConnectionFailure) {
436
+ driver.run
437
+ }
438
+ assert_equal(connection_resets, 3)
439
+ end
440
+ end
metadata ADDED
@@ -0,0 +1,128 @@
1
+ --- !ruby/object:Gem::Specification
2
+ name: fluent-plugin-elasticsearchfork
3
+ version: !ruby/object:Gem::Version
4
+ version: 0.5.1
5
+ platform: ruby
6
+ authors:
7
+ - diogo
8
+ - pitr
9
+ autorequire:
10
+ bindir: bin
11
+ cert_chain: []
12
+ date: 2014-10-29 00:00:00.000000000 Z
13
+ dependencies:
14
+ - !ruby/object:Gem::Dependency
15
+ name: fluentd
16
+ requirement: !ruby/object:Gem::Requirement
17
+ requirements:
18
+ - - "~>"
19
+ - !ruby/object:Gem::Version
20
+ version: '0'
21
+ type: :runtime
22
+ prerelease: false
23
+ version_requirements: !ruby/object:Gem::Requirement
24
+ requirements:
25
+ - - "~>"
26
+ - !ruby/object:Gem::Version
27
+ version: '0'
28
+ - !ruby/object:Gem::Dependency
29
+ name: patron
30
+ requirement: !ruby/object:Gem::Requirement
31
+ requirements:
32
+ - - "~>"
33
+ - !ruby/object:Gem::Version
34
+ version: '0'
35
+ type: :runtime
36
+ prerelease: false
37
+ version_requirements: !ruby/object:Gem::Requirement
38
+ requirements:
39
+ - - "~>"
40
+ - !ruby/object:Gem::Version
41
+ version: '0'
42
+ - !ruby/object:Gem::Dependency
43
+ name: elasticsearch
44
+ requirement: !ruby/object:Gem::Requirement
45
+ requirements:
46
+ - - ">="
47
+ - !ruby/object:Gem::Version
48
+ version: '0'
49
+ type: :runtime
50
+ prerelease: false
51
+ version_requirements: !ruby/object:Gem::Requirement
52
+ requirements:
53
+ - - ">="
54
+ - !ruby/object:Gem::Version
55
+ version: '0'
56
+ - !ruby/object:Gem::Dependency
57
+ name: rake
58
+ requirement: !ruby/object:Gem::Requirement
59
+ requirements:
60
+ - - "~>"
61
+ - !ruby/object:Gem::Version
62
+ version: '0'
63
+ type: :development
64
+ prerelease: false
65
+ version_requirements: !ruby/object:Gem::Requirement
66
+ requirements:
67
+ - - "~>"
68
+ - !ruby/object:Gem::Version
69
+ version: '0'
70
+ - !ruby/object:Gem::Dependency
71
+ name: webmock
72
+ requirement: !ruby/object:Gem::Requirement
73
+ requirements:
74
+ - - "~>"
75
+ - !ruby/object:Gem::Version
76
+ version: '1'
77
+ type: :development
78
+ prerelease: false
79
+ version_requirements: !ruby/object:Gem::Requirement
80
+ requirements:
81
+ - - "~>"
82
+ - !ruby/object:Gem::Version
83
+ version: '1'
84
+ description: ElasticSearch output plugin for Fluent event collector
85
+ email:
86
+ - tomodian@gmail.com
87
+ executables: []
88
+ extensions: []
89
+ extra_rdoc_files: []
90
+ files:
91
+ - ".gitignore"
92
+ - ".travis.yml"
93
+ - Gemfile
94
+ - History.md
95
+ - LICENSE.txt
96
+ - README.md
97
+ - Rakefile
98
+ - fluent-plugin-elasticsearch.gemspec
99
+ - lib/fluent/plugin/out_elasticsearch.rb
100
+ - test/helper.rb
101
+ - test/plugin/test_out_elasticsearch.rb
102
+ homepage: https://github.com/tomodian/fluent-plugin-elasticsearch
103
+ licenses:
104
+ - MIT
105
+ metadata: {}
106
+ post_install_message:
107
+ rdoc_options: []
108
+ require_paths:
109
+ - lib
110
+ required_ruby_version: !ruby/object:Gem::Requirement
111
+ requirements:
112
+ - - ">="
113
+ - !ruby/object:Gem::Version
114
+ version: '0'
115
+ required_rubygems_version: !ruby/object:Gem::Requirement
116
+ requirements:
117
+ - - ">="
118
+ - !ruby/object:Gem::Version
119
+ version: '0'
120
+ requirements: []
121
+ rubyforge_project:
122
+ rubygems_version: 2.2.2
123
+ signing_key:
124
+ specification_version: 4
125
+ summary: ElasticSearch output plugin for Fluent event collector
126
+ test_files:
127
+ - test/helper.rb
128
+ - test/plugin/test_out_elasticsearch.rb