fluent-plugin-elasticsearch-cluster 0.1.0

checksums.yaml.gz ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+ metadata.gz: 0bb8bb8cdccbb3ae69e88bdcc1e7b0f91d007414
+ data.tar.gz: 170597458a5ab7311499226291893b71afae2740
+ SHA512:
+ metadata.gz: 2354d14a2a534a0c05e37ac4e2be59b79354222c5e3c7ecd54b7ecf5b854e5ba9a4cdfb7f0fbc0348f143b9aaa6b035bb55047386c9a0411df09f9720e9cfb31
+ data.tar.gz: 202da061a293dbf1caaf8736f05d656bc321bd12d2f433e8e40707a27770cf5e1e2bb1c2ba810aafff00a247a8bb7789a36edb75570afc0c7ac8695a9b5aa144
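The SHA1/SHA512 values above are digests of the two archives packed inside the `.gem` file (`metadata.gz` and `data.tar.gz`). As an illustrative sketch (not part of this gem; the file name used below is a throwaway example), such digests can be recomputed with Ruby's standard `digest` library:

```ruby
# Sketch: recompute SHA1/SHA512 digests like the ones recorded in
# checksums.yaml.gz. 'example.bin' is a hypothetical stand-in for
# metadata.gz or data.tar.gz extracted from a .gem archive.
require 'digest'

def archive_checksums(path)
  data = File.binread(path)
  {
    'SHA1'   => Digest::SHA1.hexdigest(data),
    'SHA512' => Digest::SHA512.hexdigest(data),
  }
end

File.binwrite('example.bin', '')
puts archive_checksums('example.bin')['SHA1']
# => da39a3ee5e6b4b0d3255bfef95601890afd80709 (SHA1 of empty input)
```

A real verification would compare these values against the entries shipped in the gem's `checksums.yaml.gz`.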
data/.gitignore ADDED
@@ -0,0 +1,17 @@
+ *.gem
+ *.rbc
+ .bundle
+ .config
+ .yardoc
+ Gemfile.lock
+ InstalledFiles
+ _yardoc
+ coverage
+ doc/
+ lib/bundler/man
+ pkg
+ rdoc
+ spec/reports
+ test/tmp
+ test/version_tmp
+ tmp
data/Gemfile ADDED
@@ -0,0 +1,6 @@
+ source 'https://rubygems.org'
+
+ # Specify your gem's dependencies in fluent-plugin-elasticsearch-cluster.gemspec
+ gemspec
+
+ gem 'coveralls', require: false
data/History.md ADDED
@@ -0,0 +1,43 @@
+ Changelog
+ =========
+
+ 0.1.0
+ =====
+ - use elasticsearch-ruby (with patron)
+   - handle multiple nodes with load balancing
+ - add utc_index option
+
+ fluent-plugin-elasticsearch
+ =====
+
+ 0.2.0
+ =====
+
+ - fix encoding issues with JSON conversion and again when sending to elasticsearch (#19, #21)
+ - add logstash_dateformat option (#20)
+
+ 0.1.4
+ =====
+
+ - add logstash_prefix option
+
+ 0.1.3
+ =====
+
+ - raise an exception on a non-success response from elasticsearch
+
+ 0.1.2
+ =====
+
+ - add id_key option
+
+ 0.1.1
+ =====
+
+ - fix timezone in logstash key
+
+
+ 0.1.0
+ =====
+
+ - Initial gem release.
data/LICENSE.txt ADDED
@@ -0,0 +1,22 @@
+ Copyright (c) 2012 Uken Games
+
+ MIT License
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,130 @@
+ # Fluent::Plugin::Elasticsearch, a plugin for [Fluentd](http://fluentd.org)
+
+ [![Gem Version](https://badge.fury.io/rb/fluent-plugin-elasticsearch.png)](http://badge.fury.io/rb/fluent-plugin-elasticsearch)
+ [![Dependency Status](https://gemnasium.com/uken/guard-sidekiq.png)](https://gemnasium.com/uken/fluent-plugin-elasticsearch)
+ [![Build Status](https://travis-ci.org/uken/fluent-plugin-elasticsearch.png?branch=master)](https://travis-ci.org/uken/fluent-plugin-elasticsearch)
+ [![Coverage Status](https://coveralls.io/repos/uken/fluent-plugin-elasticsearch/badge.png)](https://coveralls.io/r/uken/fluent-plugin-elasticsearch)
+ [![Code Climate](https://codeclimate.com/github/uken/fluent-plugin-elasticsearch.png)](https://codeclimate.com/github/uken/fluent-plugin-elasticsearch)
+
+ I wrote this so you can search logs routed through Fluentd.
+
+ ## Installation
+
+     $ gem install fluent-plugin-elasticsearch
+
+ * Prerequisite: you need [libcurl](http://curl.haxx.se/libcurl/) installed.
+
+ ## Usage
+
+ In your Fluentd configuration, use `type elasticsearch`. Additional configuration is optional; the default values look like this:
+
+ ```
+ host localhost
+ port 9200
+ index_name fluentd
+ type_name fluentd
+ ```
+
+ **More options:**
+
+ ```
+ hosts host1:port1,host2:port2,host3:port3
+ ```
+
+ You can specify multiple elasticsearch hosts, separated by ",".
+
+ If you specify multiple hosts, the plugin load-balances writes across them (this is elasticsearch-ruby's feature; the default strategy is round-robin).
+
+ If you specify this option, the host and port options are ignored.
+
+ ```
+ logstash_format true # defaults to false
+ ```
+
+ This makes the data written to elasticsearch compatible with what Logstash writes, so you can take advantage of [kibana](http://kibana.org/).
+
+ ```
+ logstash_prefix mylogs # defaults to "logstash"
+ ```
+
+ By default, records are inserted into the index `logstash-YYMMDD`. This option lets you insert into a custom index like `mylogs-YYMMDD`.
+
+ ```
+ logstash_dateformat %Y.%m. # defaults to "%Y.%m.%d"
+ ```
+
+ By default, records are inserted into the index `logstash-YYMMDD`. This option lets you change the date suffix, e.g. `logstash-YYYYMM` for a monthly index.
+
+ ---
+
+ ```
+ include_tag_key true # defaults to false
+ tag_key tag # defaults to tag
+ ```
+
+ This adds the Fluentd tag to the JSON record. For instance, with a config like this:
+
+ ```
+ <match my.logs>
+   type elasticsearch
+   include_tag_key true
+   tag_key _key
+ </match>
+ ```
+
+ the record inserted into elasticsearch would be
+
+ ```
+ {"_key":"my.logs", "name":"Johnny Doeie"}
+ ```
+
+ ---
+
+ ```
+ id_key request_id # use "request_id" field as a record id in ES
+ ```
+
+ By default, all records inserted into elasticsearch get a random _id. This option lets you use a field in the record as its identifier.
+
+ The record `{"name":"Johnny","request_id":"87d89af7daffad6"}` will trigger the following elasticsearch command:
+
+ ```
+ { "index" : { "_index" : "logstash-2013.01.01", "_type" : "fluentd", "_id" : "87d89af7daffad6" } }
+ { "name": "Johnny", "request_id": "87d89af7daffad6" }
+ ```
+
+ ---
+
+ fluent-plugin-elasticsearch is a buffered output that uses elasticsearch's bulk API. Additional buffer configuration (shown with default values):
+
+ ```
+ buffer_type memory
+ flush_interval 60
+ retry_limit 17
+ retry_wait 1.0
+ num_threads 1
+ ```
+
+ ---
+
+ Consider using [fluent-plugin-forest](https://github.com/tagomoris/fluent-plugin-forest) to send logs to multiple elasticsearch indices:
+
+ ```
+ <match my.logs.*>
+   type forest
+   subtype elasticsearch
+   remove_prefix my.logs
+   <template>
+     logstash_prefix ${tag}
+     # ...
+   </template>
+ </match>
+ ```
+
+ ## Contributing
+
+ 1. Fork it
+ 2. Create your feature branch (`git checkout -b my-new-feature`)
+ 3. Commit your changes (`git commit -am 'Add some feature'`)
+ 4. Push to the branch (`git push origin my-new-feature`)
+ 5. Create a new Pull Request
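The `logstash_prefix` / `logstash_dateformat` / `utc_index` options described above combine into one index name per record. A minimal standalone sketch of that combination (this helper is illustrative, not the gem's own API):

```ruby
# Sketch: build a Logstash-style index name from a prefix, a strftime
# date format, and a UTC flag, as the README options above describe.
def target_index(time, prefix: 'logstash', dateformat: '%Y.%m.%d', utc: true)
  t = utc ? time.getutc : time
  "#{prefix}-#{t.strftime(dateformat)}"
end

t = Time.utc(2013, 1, 1, 12, 0, 0)
puts target_index(t)                                          # logstash-2013.01.01
puts target_index(t, prefix: 'mylogs', dateformat: '%Y.%m')   # mylogs-2013.01
```

With `utc_index false`, the local-time `time` is formatted instead, so records near midnight can land in a different daily index than they would under UTC.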
data/Rakefile ADDED
@@ -0,0 +1,11 @@
+ require 'bundler/gem_tasks'
+ require 'rake/testtask'
+
+ Rake::TestTask.new(:test) do |test|
+   test.libs << 'lib' << 'test'
+   test.pattern = 'test/**/test_*.rb'
+   test.verbose = true
+ end
+
+ task :default => :test
+
data/fluent-plugin-elasticsearch-cluster.gemspec ADDED
@@ -0,0 +1,25 @@
+ # -*- encoding: utf-8 -*-
+ $:.push File.expand_path("../lib", __FILE__)
+
+ Gem::Specification.new do |s|
+   s.name          = "fluent-plugin-elasticsearch-cluster"
+   s.version       = '0.1.0'
+   s.authors       = ['HeartSaVioR']
+   s.email         = ["kabhwan@gmail.com"]
+   s.description   = %q{ElasticSearch output plugin for Fluent event collector, based on fluent-plugin-elasticsearch, with cluster support}
+   s.summary       = s.description
+   s.homepage      = "https://github.com/HeartSaVioR/fluent-plugin-elasticsearch-cluster"
+   s.license       = 'MIT'
+
+   s.files         = `git ls-files`.split($/)
+   s.executables   = s.files.grep(%r{^bin/}).map{ |f| File.basename(f) }
+   s.test_files    = s.files.grep(%r{^(test|spec|features)/})
+   s.require_paths = ["lib"]
+
+   s.add_runtime_dependency "fluentd"
+   s.add_runtime_dependency "patron"
+   s.add_runtime_dependency "elasticsearch"
+
+   s.add_development_dependency "rake"
+   s.add_development_dependency "webmock"
+ end
data/lib/fluent/plugin/out_elasticsearch_cluster.rb ADDED
@@ -0,0 +1,100 @@
+ # encoding: UTF-8
+ require 'date'
+ require 'patron'
+ require 'elasticsearch'
+
+ class Fluent::ElasticsearchClusterOutput < Fluent::BufferedOutput
+   Fluent::Plugin.register_output('elasticsearch_cluster', self)
+
+   config_param :host, :string, :default => 'localhost'
+   config_param :port, :integer, :default => 9200
+   config_param :logstash_format, :bool, :default => false
+   config_param :logstash_prefix, :string, :default => "logstash"
+   config_param :logstash_dateformat, :string, :default => "%Y.%m.%d"
+   config_param :utc_index, :bool, :default => true
+   config_param :type_name, :string, :default => "fluentd"
+   config_param :index_name, :string, :default => "fluentd"
+   config_param :id_key, :string, :default => nil
+   config_param :parent_key, :string, :default => nil
+   config_param :flush_size, :integer, :default => 1000
+   config_param :hosts, :string, :default => nil
+
+   include Fluent::SetTagKeyMixin
+   config_set_default :include_tag_key, false
+
+   def initialize
+     super
+   end
+
+   def configure(conf)
+     super
+   end
+
+   def start
+     super
+     @es = Elasticsearch::Client.new :hosts => get_hosts, :reload_connections => true, :adapter => :patron, :retry_on_failure => 5
+     raise "Can not reach Elasticsearch cluster (#{get_hosts.join(',')})!" unless @es.ping
+   end
+
+   def get_hosts
+     if @hosts
+       @hosts.split(',').map {|x| x.strip}.compact
+     else
+       ["#{@host}:#{@port}"]
+     end
+   end
+
+   def format(tag, time, record)
+     [tag, time, record].to_msgpack
+   end
+
+   def shutdown
+     super
+   end
+
+   def write(chunk)
+     bulk_message = []
+
+     chunk.msgpack_each do |tag, time, record|
+       if @logstash_format
+         record.merge!({"@timestamp" => Time.at(time).to_datetime.to_s})
+         if @utc_index
+           target_index = "#{@logstash_prefix}-#{Time.at(time).getutc.strftime(@logstash_dateformat)}"
+         else
+           target_index = "#{@logstash_prefix}-#{Time.at(time).strftime(@logstash_dateformat)}"
+         end
+       else
+         target_index = @index_name
+       end
+
+       if @include_tag_key
+         record.merge!(@tag_key => tag)
+       end
+
+       meta = { "index" => {"_index" => target_index, "_type" => @type_name} }
+       if @id_key && record[@id_key]
+         meta['index']['_id'] = record[@id_key]
+       end
+
+       if @parent_key && record[@parent_key]
+         meta['index']['_parent'] = record[@parent_key]
+       end
+
+       bulk_message << meta
+       bulk_message << record
+
+       if bulk_message.size >= @flush_size
+         send(bulk_message)
+         bulk_message.clear
+       end
+     end
+
+     send(bulk_message) unless bulk_message.empty?
+     bulk_message.clear
+   end
+
+   def send(data)
+     @es.bulk body: data
+   end
+ end
+
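The batching logic in `write` above, pulled out of its Fluentd/Elasticsearch context, can be sketched as a pure function: each record becomes a meta/source pair, and the buffer is flushed whenever it reaches `flush_size` entries (this standalone helper is an illustration, not the gem's API):

```ruby
# Sketch of the bulk-batching loop: accumulate meta+record pairs and cut a
# flush whenever the buffer holds flush_size or more entries, plus a final
# flush for any remainder. Returns the list of flushed batches.
def batch_bulk(records, flush_size)
  flushes = []
  bulk_message = []
  records.each do |record|
    bulk_message << { 'index' => { '_index' => 'fluentd', '_type' => 'fluentd' } }
    bulk_message << record
    if bulk_message.size >= flush_size
      flushes << bulk_message.dup
      bulk_message.clear
    end
  end
  flushes << bulk_message.dup unless bulk_message.empty?
  flushes
end

flushes = batch_bulk((1..5).map { |i| { 'n' => i } }, 4)
# 5 records -> 10 bulk entries, flushed as batches of sizes [4, 4, 2]
```

Note that because every record contributes two entries, a `flush_size` of N flushes roughly every N/2 records.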
data/test/helper.rb ADDED
@@ -0,0 +1 @@
+ require 'minitest/pride'
data/test/plugin/test_out_elasticsearch_cluster.rb ADDED
@@ -0,0 +1,321 @@
+ $:.push File.expand_path("../lib", __FILE__)
+ $:.push File.dirname(__FILE__)
+
+ require 'test/unit'
+
+ require 'fluent/test'
+ require 'fluent/plugin/out_elasticsearch_cluster'
+
+ require 'webmock/test_unit'
+ require 'date'
+
+ require 'helper'
+
+ WebMock.disable_net_connect!
+
+ class ElasticsearchClusterOutput < Test::Unit::TestCase
+   attr_accessor :index_cmds, :index_command_counts
+
+   def setup
+     Fluent::Test.setup
+     @driver = nil
+   end
+
+   def driver(tag='test', conf='')
+     @driver ||= Fluent::Test::BufferedOutputTestDriver.new(Fluent::ElasticsearchClusterOutput, tag).configure(conf)
+   end
+
+   def sample_record
+     {'age' => 26, 'request_id' => '42', 'parent_id' => 'parent'}
+   end
+
+   def stub_elastic_ping(url="http://localhost:9200")
+     stub_request(:head, url).to_return(:status => 200, :body => "", :headers => {})
+   end
+
+   def stub_elastic(url="http://localhost:9200/_bulk")
+     stub_request(:post, url).with do |req|
+       @index_cmds = req.body.split("\n").map {|r| JSON.parse(r) }
+     end
+   end
+
+   def stub_elastic_unavailable(url="http://localhost:9200/_bulk")
+     stub_request(:post, url).to_return(:status => [503, "Service Unavailable"])
+   end
+
+   def stub_elastic_with_store_index_command_counts(url="http://localhost:9200/_bulk")
+     if @index_command_counts == nil
+       @index_command_counts = {}
+       @index_command_counts.default = 0
+     end
+
+     stub_request(:post, url).with do |req|
+       index_cmds = req.body.split("\n").map {|r| JSON.parse(r) }
+       @index_command_counts[url] += index_cmds.size
+     end
+   end
+
+   def test_writes_to_default_index
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record)
+     driver.run
+     assert_equal('fluentd', index_cmds.first['index']['_index'])
+   end
+
+   def test_writes_to_default_type
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record)
+     driver.run
+     assert_equal('fluentd', index_cmds.first['index']['_type'])
+   end
+
+   def test_writes_to_specified_index
+     driver.configure("index_name myindex\n")
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record)
+     driver.run
+     assert_equal('myindex', index_cmds.first['index']['_index'])
+   end
+
+   def test_writes_to_specified_type
+     driver.configure("type_name mytype\n")
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record)
+     driver.run
+     assert_equal('mytype', index_cmds.first['index']['_type'])
+   end
+
+   def test_writes_to_specified_host
+     driver.configure("host 192.168.33.50\n")
+     stub_elastic_ping("http://192.168.33.50:9200")
+     elastic_request = stub_elastic("http://192.168.33.50:9200/_bulk")
+     driver.emit(sample_record)
+     driver.run
+     assert_requested(elastic_request)
+   end
+
+   def test_writes_to_specified_port
+     driver.configure("port 9201\n")
+     stub_elastic_ping("http://localhost:9201")
+     elastic_request = stub_elastic("http://localhost:9201/_bulk")
+     driver.emit(sample_record)
+     driver.run
+     assert_requested(elastic_request)
+   end
+
+   def test_writes_to_multi_hosts
+     hosts = [['192.168.33.50', 9201], ['192.168.33.51', 9201], ['192.168.33.52', 9201]]
+     hosts_string = hosts.map {|x| "#{x[0]}:#{x[1]}"}.compact.join(',')
+
+     driver.configure("hosts #{hosts_string}")
+     # load balancing is performed per bulk request, so set bulk size to 1 during the test
+     driver.configure("flush_size 1\n")
+
+     hosts.each do |host_info|
+       host, port = host_info
+       stub_elastic_ping("http://#{host}:#{port}")
+       stub_elastic_with_store_index_command_counts("http://#{host}:#{port}/_bulk")
+     end
+
+     10.times { driver.emit(sample_record.merge('age' => rand(100))) }
+     driver.run
+
+     # 10 records -> 20 bulk commands (meta + source each), spread over the hosts
+     commands_per_host = 20 / hosts.size
+     assert(@index_command_counts.size == hosts.size, "some hosts are not receiving messages")
+     @index_command_counts.each do |url, count|
+       assert(count >= commands_per_host, "messages are not load balanced across hosts")
+     end
+   end
+
+   def test_makes_bulk_request
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record)
+     driver.emit(sample_record.merge('age' => 27))
+     driver.run
+     assert_equal(4, index_cmds.count)
+   end
+
+   def test_makes_bulk_request_with_specific_size
+     driver.configure("flush_size 10\n")
+     driver.configure("flush_interval 10s\n")
+     stub_elastic_ping
+     stub_elastic
+     100.times { driver.emit(sample_record.merge('age' => rand(100))) }
+     driver.run
+
+     assert_equal(10, index_cmds.count)
+   end
+
+   def test_all_records_are_preserved_in_bulk
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record)
+     driver.emit(sample_record.merge('age' => 27))
+     driver.run
+     assert_equal(26, index_cmds[1]['age'])
+     assert_equal(27, index_cmds[3]['age'])
+   end
+
+   def test_writes_to_logstash_index
+     driver.configure("logstash_format true\n")
+     time = Time.parse Date.today.to_s
+     logstash_index = "logstash-#{time.getutc.strftime("%Y.%m.%d")}"
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record, time)
+     driver.run
+     assert_equal(logstash_index, index_cmds.first['index']['_index'])
+   end
+
+   def test_writes_to_logstash_index_with_utc_index_disabled
+     driver.configure("logstash_format true\n")
+     driver.configure("utc_index false\n")
+     time = Time.parse Date.today.to_s
+     local_index = "logstash-#{time.strftime("%Y.%m.%d")}"
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record, time)
+     driver.run
+     assert_equal(local_index, index_cmds.first['index']['_index'])
+   end
+
+   def test_writes_to_logstash_index_with_specified_prefix
+     driver.configure("logstash_format true\n")
+     driver.configure("logstash_prefix myprefix\n")
+     time = Time.parse Date.today.to_s
+     logstash_index = "myprefix-#{time.getutc.strftime("%Y.%m.%d")}"
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record, time)
+     driver.run
+     assert_equal(logstash_index, index_cmds.first['index']['_index'])
+   end
+
+   def test_writes_to_logstash_index_with_specified_dateformat
+     driver.configure("logstash_format true\n")
+     driver.configure("logstash_dateformat %Y.%m\n")
+     time = Time.parse Date.today.to_s
+     logstash_index = "logstash-#{time.getutc.strftime("%Y.%m")}"
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record, time)
+     driver.run
+     assert_equal(logstash_index, index_cmds.first['index']['_index'])
+   end
+
+   def test_writes_to_logstash_index_with_specified_prefix_and_dateformat
+     driver.configure("logstash_format true\n")
+     driver.configure("logstash_prefix myprefix\n")
+     driver.configure("logstash_dateformat %Y.%m\n")
+     time = Time.parse Date.today.to_s
+     logstash_index = "myprefix-#{time.getutc.strftime("%Y.%m")}"
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record, time)
+     driver.run
+     assert_equal(logstash_index, index_cmds.first['index']['_index'])
+   end
+
+   def test_doesnt_add_logstash_timestamp_by_default
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record)
+     driver.run
+     assert_nil(index_cmds[1]['@timestamp'])
+   end
+
+   def test_adds_logstash_timestamp_when_configured
+     driver.configure("logstash_format true\n")
+     stub_elastic_ping
+     stub_elastic
+     ts = DateTime.now.to_s
+     driver.emit(sample_record)
+     driver.run
+     assert(index_cmds[1].has_key? '@timestamp')
+     assert_equal(index_cmds[1]['@timestamp'], ts)
+   end
+
+   def test_doesnt_add_tag_key_by_default
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record)
+     driver.run
+     assert_nil(index_cmds[1]['tag'])
+   end
+
+   def test_adds_tag_key_when_configured
+     driver('mytag').configure("include_tag_key true\n")
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record)
+     driver.run
+     assert(index_cmds[1].has_key?('tag'))
+     assert_equal(index_cmds[1]['tag'], 'mytag')
+   end
+
+   def test_adds_id_key_when_configured
+     driver.configure("id_key request_id\n")
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record)
+     driver.run
+     assert_equal(index_cmds[0]['index']['_id'], '42')
+   end
+
+   def test_doesnt_add_id_key_if_missing_when_configured
+     driver.configure("id_key another_request_id\n")
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record)
+     driver.run
+     assert(!index_cmds[0]['index'].has_key?('_id'))
+   end
+
+   def test_doesnt_add_id_key_when_not_configured
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record)
+     driver.run
+     assert(!index_cmds[0]['index'].has_key?('_id'))
+   end
+
+   def test_adds_parent_key_when_configured
+     driver.configure("parent_key parent_id\n")
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record)
+     driver.run
+     assert_equal(index_cmds[0]['index']['_parent'], 'parent')
+   end
+
+   def test_doesnt_add_parent_key_if_missing_when_configured
+     driver.configure("parent_key another_parent_id\n")
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record)
+     driver.run
+     assert(!index_cmds[0]['index'].has_key?('_parent'))
+   end
+
+   def test_doesnt_add_parent_key_when_not_configured
+     stub_elastic_ping
+     stub_elastic
+     driver.emit(sample_record)
+     driver.run
+     assert(!index_cmds[0]['index'].has_key?('_parent'))
+   end
+
+   def test_request_error
+     stub_elastic_ping
+     stub_elastic_unavailable
+     driver.emit(sample_record)
+     assert_raise(Elasticsearch::Transport::Transport::Errors::ServiceUnavailable) {
+       driver.run
+     }
+   end
+ end
metadata ADDED
@@ -0,0 +1,128 @@
+ --- !ruby/object:Gem::Specification
+ name: fluent-plugin-elasticsearch-cluster
+ version: !ruby/object:Gem::Version
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - HeartSaVioR
+ autorequire:
+ bindir: bin
+ cert_chain: []
+ date: 2014-03-12 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: fluentd
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: patron
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: elasticsearch
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: rake
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: webmock
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ description: ElasticSearch output plugin for Fluent event collector, based on fluent-plugin-elasticsearch,
+   with cluster support
+ email:
+ - kabhwan@gmail.com
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - ".gitignore"
+ - Gemfile
+ - History.md
+ - LICENSE.txt
+ - README.md
+ - Rakefile
+ - fluent-plugin-elasticsearch-cluster.gemspec
+ - lib/fluent/plugin/out_elasticsearch_cluster.rb
+ - test/helper.rb
+ - test/plugin/test_out_elasticsearch_cluster.rb
+ homepage: https://github.com/HeartSaVioR/fluent-plugin-elasticsearch-cluster
+ licenses:
+ - MIT
+ metadata: {}
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubyforge_project:
+ rubygems_version: 2.2.2
+ signing_key:
+ specification_version: 4
+ summary: ElasticSearch output plugin for Fluent event collector, based on fluent-plugin-elasticsearch,
+   with cluster support
+ test_files:
+ - test/helper.rb
+ - test/plugin/test_out_elasticsearch_cluster.rb