fluent-plugin-elasticsearch 1.3.0 → 1.4.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 1e5c36c889794052ff9bab9d9b6e69aa8d937eed
- data.tar.gz: a719c42134c8d53969abe517a47f47a6936d092d
+ metadata.gz: 89a28c84005c362158da08a71ea7c10036df254d
+ data.tar.gz: 87a5e6283b6e3bd057ec949ee95316460bc8f6da
  SHA512:
- metadata.gz: e75bdccd2859b063eb1fc48fdce8394e43648ff05b4d25214b66b85111482c1d8d671e24d3a5376d37f0f1093fbefb2866afc5e9882728df2b09c2147cf16a90
- data.tar.gz: 3393dce7f111c75ce16b8b4af7fff51064870cb1033bf446701180cf0c84458ffc3a8ea3f6a58503eff46d1d7e414e588073497fd5e6a20becc036971ddb40d8
+ metadata.gz: fde384e1b97967886d1446785e1cdb5eedb1bf7f98ab7841d321d34886dc84ee95f255d8d86b289c03b40e469528efce6a288b63072a6fd9f40f5451b82bd043
+ data.tar.gz: 35d51c4cc3508cf12e6c4971db4ab4c947c4688cc4f021a33c4d0bf1f325a03a013b1b1b30195be8a64296636997e757a518fdd5516234741824cec20151197b
data/.travis.yml CHANGED
@@ -5,6 +5,7 @@ rvm:
  - 2.0.0
  - 2.1
  - 2.2
+ - 2.3.0

  script: bundle exec rake test
  sudo: false
data/Gemfile CHANGED
@@ -5,3 +5,4 @@ gemspec

  gem 'simplecov', require: false
  gem 'coveralls', require: false
+ gem 'strptime', require: false if RUBY_ENGINE == "ruby" && RUBY_VERSION =~ /^2/
data/History.md CHANGED
@@ -1,5 +1,11 @@
  ## Changelog

+ ### Future
+
+ ### 1.4.0
+ - add `target_index_key` to specify target index (#153)
+ - add `time_key_format` for faster time format parsing (#154)
+
  ### 1.3.0
  - add `write_operation`

data/README.md CHANGED
@@ -20,8 +20,10 @@ Note: For Amazon Elasticsearch Service please consider using [fluent-plugin-aws-
  + [logstash_format](#logstash_format)
  + [logstash_prefix](#logstash_prefix)
  + [logstash_dateformat](#logstash_dateformat)
+ + [time_key_format](#time_key_format)
  + [time_key](#time_key)
  + [utc_index](#utc_index)
+ + [target_index_key](#target_index_key)
  + [request_timeout](#request_timeout)
  + [reload_connections](#reload_connections)
  + [reload_on_failure](#reload_on_failure)
@@ -30,6 +32,7 @@ Note: For Amazon Elasticsearch Service please consider using [fluent-plugin-aws-
  + [id_key](#id_key)
  + [write_operation](#write_operation)
  + [Client/host certificate options](#clienthost-certificate-options)
+ + [Proxy Support](#proxy-support)
  + [Buffered output options](#buffered-output-options)
  + [Not seeing a config you need?](#not-seeing-a-config-you-need)
  + [Dynamic configuration](#dynamic-configuration)
@@ -45,11 +48,11 @@ $ gem install fluent-plugin-elasticsearch

  ## Usage

- In your Fluentd configuration, use `type elasticsearch`. Additional configuration is optional, default values would look like this:
+ In your Fluentd configuration, use `@type elasticsearch`. Additional configuration is optional, default values would look like this:

  ```
  <match my.logs>
- type elasticsearch
+ @type elasticsearch
  host localhost
  port 9200
  index_name fluentd
@@ -96,7 +99,7 @@ Specify `ssl_verify false` to skip ssl verification (defaults to true)
  logstash_format true # defaults to false
  ```

- This is meant to make writing data into ElasticSearch compatible to what [Logstash](https://www.elastic.co/products/logstash) writes. By doing this, one could take advantage of [Kibana](https://www.elastic.co/products/kibana).
+ This is meant to make the index names that data is written to in ElasticSearch compatible with what [Logstash](https://www.elastic.co/products/logstash) uses. By doing this, one could take advantage of [Kibana](https://www.elastic.co/products/kibana). See `logstash_prefix` and `logstash_dateformat` to customize this index name pattern. The index name will be `#{logstash_prefix}-#{formatted_date}`.

  ### logstash_prefix

@@ -106,12 +109,24 @@ logstash_prefix mylogs # defaults to "logstash"

  ### logstash_dateformat

- By default, the records inserted into index `logstash-YYMMDD`. This option allows to insert into specified index like `mylogs-YYYYMM` for a monthly index.
+ The strftime format used to generate the target index name when `logstash_format` is set to true. By default, the records are inserted into the index `logstash-YYYY.MM.DD`. This option, along with `logstash_prefix`, lets us insert into a specified index like `mylogs-YYYYMM` for a monthly index.

  ```
  logstash_dateformat %Y.%m. # defaults to "%Y.%m.%d"
  ```

+ ### time_key_format
+
+ The format of the time stamp field (`@timestamp` or what you specify with [time_key](#time_key)). This parameter only has an effect when [logstash_format](#logstash_format) is true, as it only affects the name of the index we write to. Please see [Time#strftime](http://ruby-doc.org/core-1.9.3/Time.html#method-i-strftime) for information about the value of this format.
+
+ Setting this to a known format can vastly improve your log ingestion speed if most of your logs are in the same format. If there is an error parsing this format, the timestamp will default to the ingestion time. If you are on Ruby 2.0 or later you can get a further performance improvement by installing the "strptime" gem: `fluent-gem install strptime`.
+
+ For example, to parse ISO8601 times with sub-second precision:
+
+ ```
+ time_key_format %Y-%m-%dT%H:%M:%S.%N%z
+ ```
+
  ### time_key

  By default, when inserting records in [Logstash](https://www.elastic.co/products/logstash) format, `@timestamp` is dynamically created with the time at log ingestion. If you'd like to use a custom time, include an `@timestamp` with your record.
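
An editorial aside, not part of the diff: the `time_key_format` speed-up described above comes from compiling the format once instead of guessing it per record. A minimal Ruby sketch, assuming the optional `strptime` gem is installed:

```ruby
require 'strptime' # the optional fast C parser mentioned above (Ruby 2.x only)
require 'date'

# Compile the pattern once and reuse it for every record. DateTime.parse,
# the fallback used when no time_key_format is configured, must re-detect
# the format on each call.
parser = Strptime.new('%Y-%m-%dT%H:%M:%S.%N%z')
t = parser.exec('2016-02-18T13:14:01.673+02:00') # => Time
puts t.to_datetime.strftime('%Y.%m.%d')          # => "2016.02.18"
```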
@@ -141,7 +156,7 @@ The output will be
  ```
  {
    "title": "developer",
-   "@timstamp": "2014-12-19T08:01:03Z",
+   "@timestamp": "2014-12-19T08:01:03Z",
    "vtm": "2014-12-19T08:01:03Z"
  }
  ```
@@ -154,6 +169,39 @@ utc_index true

  By default, the records inserted into index `logstash-YYMMDD` with UTC (Coordinated Universal Time). This option allows to use local time if you describe utc_index to false.

+ ### target_index_key
+
+ Tell this plugin to find the index name to write to in the record, under this key, in preference to other mechanisms.
+
+ If the key is present in the record (and its value is truthy), that value is used as the index name to write to and the key is then removed from the record before output; if it is not found, the `logstash_format` or `index_name` settings are used as configured.
+
+ Suppose you have the following settings:
+
+ ```
+ target_index_key @target_index
+ index_name fallback
+ ```
+
+ If your input is:
+ ```
+ {
+   "title": "developer",
+   "@timestamp": "2014-12-19T08:01:03Z",
+   "@target_index": "logstash-2014.12.19"
+ }
+ ```
+
+ the output would be
+
+ ```
+ {
+   "title": "developer",
+   "@timestamp": "2014-12-19T08:01:03Z"
+ }
+ ```
+
+ and this record will be written to the specified index (`logstash-2014.12.19`) rather than `fallback`.
+
  ### request_timeout

  You can specify HTTP request timeout.
@@ -200,7 +248,7 @@ This will add the Fluentd tag in the JSON record. For instance, if you have a co

  ```
  <match my.logs>
- type elasticsearch
+ @type elasticsearch
  include_tag_key true
  tag_key _key
  </match>
@@ -229,7 +277,7 @@ This following record `{"name":"Johnny","request_id":"87d89af7daffad6"}` will tr

  ### write_operation

- The write_operation can be any of:
+ The write_operation can be any of:

  | Operation | Description |
  | ------------- | ----------- |
@@ -254,6 +302,10 @@ client_key /path/to/your/private/key
  client_key_pass password
  ```

+ ### Proxy Support
+
+ Starting with version 0.8.0, this gem uses excon, which supports proxy with environment variables - https://github.com/excon/excon#proxy-support
+
  ### Buffered output options

  `fluentd-plugin-elasticsearch` extends [Fluentd's builtin Buffered Output plugin](http://docs.fluentd.org/articles/buffer-plugin-overview). It adds the following options:
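
An editorial illustration of the excon proxy behaviour linked above, not part of the diff; the proxy URL is a placeholder:

```ruby
# With no explicit :proxy option, excon falls back to the standard proxy
# environment variables, so exporting one before starting fluentd is enough.
ENV['HTTP_PROXY'] = 'http://proxy.example.com:3128' # placeholder proxy

require 'excon'
connection = Excon.new('http://localhost:9200')
p connection.data[:proxy] # proxy settings parsed from HTTP_PROXY above
```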
@@ -276,7 +328,7 @@ Alternatively, consider using [fluent-plugin-forest](https://github.com/tagomori

  ```
  <match my.logs.*>
- type forest
+ @type forest
  subtype elasticsearch
  remove_prefix my.logs
  <template>
@@ -294,7 +346,7 @@ If you want configurations to depend on information in messages, you can use `el

  ```
  <match my.logs.*>
- type elasticsearch_dynamic
+ @type elasticsearch_dynamic
  hosts ${record['host1']}:9200,${record['host2']}:9200
  index_name my_index.${Time.at(time).getutc.strftime(@logstash_dateformat)}
  logstash_prefix ${tag_parts[3]}
@@ -311,8 +363,12 @@ If you have a question, [open an Issue](https://github.com/uken/fluent-plugin-el

  ## Contributing

+ There are usually a few feature requests, tagged [Easy](https://github.com/uken/fluent-plugin-elasticsearch/issues?q=is%3Aissue+is%3Aopen+label%3Alevel%3AEasy), [Normal](https://github.com/uken/fluent-plugin-elasticsearch/issues?q=is%3Aissue+is%3Aopen+label%3Alevel%3ANormal) and [Hard](https://github.com/uken/fluent-plugin-elasticsearch/issues?q=is%3Aissue+is%3Aopen+label%3Alevel%3AHard). Feel free to work on any one of them.
+
  Pull Requests are welcomed.

+ [![Pull Request Graph](https://graphs.waffle.io/uken/fluent-plugin-elasticsearch/throughput.svg)](https://waffle.io/uken/fluent-plugin-elasticsearch/metrics)
+
  ## Running tests

  Install dev dependencies:
data/fluent-plugin-elasticsearch.gemspec CHANGED
@@ -3,7 +3,7 @@ $:.push File.expand_path('../lib', __FILE__)

  Gem::Specification.new do |s|
  s.name = 'fluent-plugin-elasticsearch'
- s.version = '1.3.0'
+ s.version = '1.4.0'
  s.authors = ['diogo', 'pitr']
  s.email = ['pitr.vern@gmail.com', 'me@diogoterror.com']
  s.description = %q{ElasticSearch output plugin for Fluent event collector}
data/lib/fluent/plugin/out_elasticsearch.rb CHANGED
@@ -3,6 +3,10 @@ require 'date'
  require 'excon'
  require 'elasticsearch'
  require 'uri'
+ begin
+   require 'strptime'
+ rescue LoadError
+ end

  class Fluent::ElasticsearchOutput < Fluent::BufferedOutput
  class ConnectionFailure < StandardError; end
@@ -16,6 +20,8 @@ class Fluent::ElasticsearchOutput < Fluent::BufferedOutput
  config_param :path, :string, :default => nil
  config_param :scheme, :string, :default => 'http'
  config_param :hosts, :string, :default => nil
+ config_param :target_index_key, :string, :default => nil
+ config_param :time_key_format, :string, :default => nil
  config_param :logstash_format, :bool, :default => false
  config_param :logstash_prefix, :string, :default => "logstash"
  config_param :logstash_dateformat, :string, :default => "%Y.%m.%d"
@@ -41,16 +47,50 @@ class Fluent::ElasticsearchOutput < Fluent::BufferedOutput

  def initialize
  super
+ @time_parser = TimeParser.new(@time_key_format, @router)
  end

  def configure(conf)
  super
+ @time_parser = TimeParser.new(@time_key_format, @router)
  end

  def start
  super
  end

+ # once fluent v0.14 is released we might be able to use
+ # Fluent::Parser::TimeParser, but it doesn't quite do what we want - it gives
+ # [sec,nsec] whereas we want something we can call `strftime` on...
+ class TimeParser
+   def initialize(time_key_format, router)
+     @time_key_format = time_key_format
+     @router = router
+     @parser = if time_key_format
+       begin
+         # Strptime doesn't support all formats, but for those it does it's
+         # blazingly fast.
+         strptime = Strptime.new(time_key_format)
+         Proc.new { |value| strptime.exec(value).to_datetime }
+       rescue
+         # Can happen if Strptime doesn't recognize the format; or
+         # if strptime couldn't be required (because it's not installed -- it's
+         # ruby 2 only)
+         Proc.new { |value| DateTime.strptime(value, time_key_format) }
+       end
+     else
+       Proc.new { |value| DateTime.parse(value) }
+     end
+   end
+
+   def parse(value, event_time)
+     @parser.call(value)
+   rescue => e
+     @router.emit_error_event("Fluent::ElasticsearchOutput::TimeParser.error", Fluent::Engine.now, {'time' => event_time, 'format' => @time_key_format, 'value' => value}, e)
+     return Time.at(event_time).to_datetime
+   end
+ end
+
  def client
  @_es ||= begin
  excon_options = { client_key: @client_key, client_cert: @client_cert, client_key_pass: @client_key_pass }
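
An editorial sketch, not part of the diff, of exercising the `TimeParser` class added above. It assumes fluentd and this gem are installed; the router argument is only consulted on parse failures, so the happy path can pass nil:

```ruby
require 'fluent/test'                       # loads fluentd, as the plugin's own tests do
require 'fluent/plugin/out_elasticsearch'

parser = Fluent::ElasticsearchOutput::TimeParser.new('%Y-%m-%dT%H:%M:%S.%N%z', nil)
dt = parser.parse('2016-02-18T13:14:01.673+02:00', Time.now.to_i)
puts dt.strftime('%Y.%m.%d') # => "2016.02.18" (a DateTime, ready for index naming)
```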
@@ -151,20 +191,21 @@ class Fluent::ElasticsearchOutput < Fluent::BufferedOutput

  chunk.msgpack_each do |tag, time, record|
  next unless record.is_a? Hash
- if @logstash_format
+ if @target_index_key && record[@target_index_key]
+   target_index = record.delete @target_index_key
+ elsif @logstash_format
  if record.has_key?("@timestamp")
- time = Time.parse record["@timestamp"]
+ dt = @time_parser.parse(record["@timestamp"], time)
  elsif record.has_key?(@time_key)
- time = Time.parse record[@time_key]
+ dt = @time_parser.parse(record[@time_key], time)
  record['@timestamp'] = record[@time_key]
  else
- record.merge!({"@timestamp" => Time.at(time).to_datetime.to_s})
- end
- if @utc_index
- target_index = "#{@logstash_prefix}-#{Time.at(time).getutc.strftime("#{@logstash_dateformat}")}"
- else
- target_index = "#{@logstash_prefix}-#{Time.at(time).strftime("#{@logstash_dateformat}")}"
+ dt = Time.at(time).to_datetime
+ record.merge!({"@timestamp" => dt.to_s})
  end
+ dt = dt.new_offset(0) if @utc_index
+ target_index = "#{@logstash_prefix}-#{dt.strftime(@logstash_dateformat)}"
  else
  target_index = @index_name
  end
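
To summarise the new control flow, an editorial sketch (names invented, not the plugin's API) of the index-selection precedence the loop above now implements:

```ruby
require 'date'

# Precedence: per-record target_index_key, then logstash naming, then the
# static index_name. Mirrors the branch structure in the hunk above.
def choose_index(record, event_time, conf)
  if conf[:target_index_key] && record[conf[:target_index_key]]
    record.delete(conf[:target_index_key])
  elsif conf[:logstash_format]
    dt = Time.at(event_time).to_datetime
    dt = dt.new_offset(0) if conf[:utc_index]
    "#{conf[:logstash_prefix]}-#{dt.strftime(conf[:logstash_dateformat])}"
  else
    conf[:index_name]
  end
end

conf = { logstash_format: true, utc_index: true,
         logstash_prefix: 'logstash', logstash_dateformat: '%Y.%m.%d' }
puts choose_index({}, Time.now.to_i, conf) # => "logstash-<today's UTC date>"
```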
data/test/plugin/test_out_elasticsearch.rb CHANGED
@@ -8,6 +8,8 @@ class ElasticsearchOutput < Test::Unit::TestCase
  Fluent::Test.setup
  require 'fluent/plugin/out_elasticsearch'
  @driver = nil
+ log = Fluent::Engine.log
+ log.out.logs.slice!(0, log.out.logs.length)
  end

  def driver(tag='test', conf='')
@@ -155,6 +157,49 @@ class ElasticsearchOutput < Test::Unit::TestCase
  assert_equal('myindex', index_cmds.first['index']['_index'])
  end

+ def test_writes_to_target_index_key
+   driver.configure("target_index_key @target_index\n")
+   stub_elastic_ping
+   stub_elastic
+   driver.emit(sample_record.merge('@target_index' => 'local-override'))
+   driver.run
+   assert_equal('local-override', index_cmds.first['index']['_index'])
+   assert_nil(index_cmds[1]['@target_index'])
+ end
+
+ def test_writes_to_target_index_key_logstash
+   driver.configure("target_index_key @target_index\n")
+   driver.configure("logstash_format true\n")
+   time = Time.parse Date.today.to_s
+   stub_elastic_ping
+   stub_elastic
+   driver.emit(sample_record.merge('@target_index' => 'local-override'), time)
+   driver.run
+   assert_equal('local-override', index_cmds.first['index']['_index'])
+ end
+
+ def test_writes_to_target_index_key_fallback
+   driver.configure("target_index_key @target_index\n")
+   stub_elastic_ping
+   stub_elastic
+   driver.emit(sample_record)
+   driver.run
+   assert_equal('fluentd', index_cmds.first['index']['_index'])
+ end
+
+ def test_writes_to_target_index_key_fallback_logstash
+   driver.configure("target_index_key @target_index\n")
+   driver.configure("logstash_format true\n")
+   time = Time.parse Date.today.to_s
+   logstash_index = "logstash-#{time.getutc.strftime("%Y.%m.%d")}"
+   stub_elastic_ping
+   stub_elastic
+   driver.emit(sample_record, time)
+   driver.run
+   assert_equal(logstash_index, index_cmds.first['index']['_index'])
+ end
+
  def test_writes_to_speficied_type
  driver.configure("type_name mytype\n")
  stub_elastic_ping
@@ -234,25 +279,29 @@ class ElasticsearchOutput < Test::Unit::TestCase
  def test_writes_to_logstash_index
  driver.configure("logstash_format true\n")
- time = Time.parse Date.today.to_s
- logstash_index = "logstash-#{time.getutc.strftime("%Y.%m.%d")}"
+ # This is 1 second past midnight in BST, so the UTC index should be the day before
+ dt = DateTime.new(2015, 6, 1, 0, 0, 1, "+01:00")
+ logstash_index = "logstash-2015.05.31"
  stub_elastic_ping
  stub_elastic
- driver.emit(sample_record, time)
+ driver.emit(sample_record, dt.to_time)
  driver.run
  assert_equal(logstash_index, index_cmds.first['index']['_index'])
  end

- def test_writes_to_logstash_utc_index
+ def test_writes_to_logstash_non_utc_index
  driver.configure("logstash_format true
  utc_index false")
- time = Time.parse Date.today.to_s
- utc_index = "logstash-#{time.strftime("%Y.%m.%d")}"
+ # When using `utc_index false` the index time will be the local day of
+ # ingestion time
+ time = Date.today.to_time
+ index = "logstash-#{time.strftime("%Y.%m.%d")}"
  stub_elastic_ping
  stub_elastic
  driver.emit(sample_record, time)
  driver.run
- assert_equal(utc_index, index_cmds.first['index']['_index'])
+ assert_equal(index, index_cmds.first['index']['_index'])
  end

  def test_writes_to_logstash_index_with_specified_prefix
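
The rewritten tests pin down the UTC boundary behaviour; as an editorial aside, a sketch of the `DateTime#new_offset` conversion they rely on:

```ruby
require 'date'

# 1 second past midnight BST is still the previous day in UTC, so the
# utc_index setting changes which daily index the record lands in.
dt = DateTime.new(2015, 6, 1, 0, 0, 1, "+01:00")
puts dt.strftime("%Y.%m.%d")               # => "2015.06.01" (local date)
puts dt.new_offset(0).strftime("%Y.%m.%d") # => "2015.05.31" (UTC date)
```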
@@ -334,6 +383,55 @@ class ElasticsearchOutput < Test::Unit::TestCase
  assert_equal(index_cmds[1]['@timestamp'], ts)
  end

+
+ def test_uses_custom_time_key_format
+   driver.configure("logstash_format true
+ time_key_format %Y-%m-%dT%H:%M:%S.%N%z\n")
+   stub_elastic_ping
+   stub_elastic
+   ts = "2001-02-03T13:14:01.673+02:00"
+   driver.emit(sample_record.merge!('@timestamp' => ts))
+   driver.run
+   assert_equal("logstash-2001.02.03", index_cmds[0]['index']['_index'])
+   assert(index_cmds[1].has_key? '@timestamp')
+   assert_equal(index_cmds[1]['@timestamp'], ts)
+ end
+
+ def test_uses_custom_time_key_format_logs_an_error
+   driver.configure("logstash_format true
+ time_key_format %Y-%m-%dT%H:%M:%S.%N%z\n")
+   stub_elastic_ping
+   stub_elastic
+
+   ts = "2001/02/03 13:14:01,673+02:00"
+   index = "logstash-#{Date.today.strftime("%Y.%m.%d")}"
+
+   driver.emit(sample_record.merge!('@timestamp' => ts))
+   driver.run
+
+   log = driver.instance.router.emit_error_handler.log
+   errors = log.out.logs.grep(/tag="Fluent::ElasticsearchOutput::TimeParser.error"/)
+   assert_equal(1, errors.length, "Error was logged for timestamp parse failure")
+
+   assert_equal(index, index_cmds[0]['index']['_index'])
+   assert(index_cmds[1].has_key? '@timestamp')
+   assert_equal(index_cmds[1]['@timestamp'], ts)
+ end
+
+ def test_uses_custom_time_key_format_obscure_format
+   driver.configure("logstash_format true
+ time_key_format %a %b %d %H:%M:%S %Z %Y\n")
+   stub_elastic_ping
+   stub_elastic
+   ts = "Thu Nov 29 14:33:20 GMT 2001"
+   driver.emit(sample_record.merge!('@timestamp' => ts))
+   driver.run
+   assert_equal("logstash-2001.11.29", index_cmds[0]['index']['_index'])
+   assert(index_cmds[1].has_key? '@timestamp')
+   assert_equal(index_cmds[1]['@timestamp'], ts)
+ end
+
  def test_doesnt_add_tag_key_by_default
  stub_elastic_ping
  stub_elastic
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: fluent-plugin-elasticsearch
  version: !ruby/object:Gem::Version
- version: 1.3.0
+ version: 1.4.0
  platform: ruby
  authors:
  - diogo
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2016-01-05 00:00:00.000000000 Z
+ date: 2016-02-18 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: fluentd
@@ -152,7 +152,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  version: '0'
  requirements: []
  rubyforge_project:
- rubygems_version: 2.2.2
+ rubygems_version: 2.5.1
  signing_key:
  specification_version: 4
  summary: ElasticSearch output plugin for Fluent event collector