fluent-plugin-elasticsearch 3.6.1 → 3.7.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 1073f1770d5c380f496b57f111e1792812b870392c5996111465ce9cd2cfa0c6
- data.tar.gz: 1f8e70ec6516ba252d7f2b722ec7c85d3830e3eb0b616b67c909663ac5db9461
+ metadata.gz: 3edaea3928f2cc09c7a908d870afdc2cac2a6d15f2e3f36cc7aa2f27962da79f
+ data.tar.gz: '09c000b05842db4f08a546f09adb3fe725d50fcf213435d3c7fc0dfa94753551'
  SHA512:
- metadata.gz: 291e4a936373fc096718c4fa9db816874a7e53d2f950b9a8e5bfa3f7a6830d98a4d79ab4b36d16b5615cfd831e364b5e545ff91a49866f26805a5aae73670338
- data.tar.gz: ea66679ef5406d950923e9f255532116f5378619b712f063c64b01997313aecf5ec53c3e04a7c1313125cda4370e058c200d539bd741c2afd55a08015a42c26b
+ metadata.gz: 148893479963ccf22397b41be92ea10dbe49e852d54412daa10659e8eb52c8bd1f0ac9546549ec28dde9df53a40f41e6fc45746eb95c10ab8e885e8e38d60537
+ data.tar.gz: f7e31c37963276c11162d78aa45b52c71feaaf409dd548b72cfd32c652b784008ccc0d0777da0f050817f3d1803023e523c3d693d193a21725aeeb3849518fd6
data/History.md CHANGED
@@ -1,6 +1,11 @@
  ## Changelog [[tags]](https://github.com/uken/fluent-plugin-elasticsearch/tags)
 
  ### [Unreleased]
+ ### 3.7.0
+ - Tweak for cosmetic change (#671)
+ - Fix access to Elasticsearch::Transport::VERSION with explicit top level class path (#670)
+ - Implement Elasticsearch Input plugin (#669)
+
  ### 3.6.1
  - retry upsert on recoverable error. (#667)
  - Allow `_index` in chunk_keys (#665)
data/README.ElasticsearchInput.md ADDED
@@ -0,0 +1,293 @@
+ ## Index
+
+ * [Installation](#installation)
+ * [Usage](#usage)
+ * [Configuration](#configuration)
+ + [host](#host)
+ + [port](#port)
+ + [hosts](#hosts)
+ + [user, password, path, scheme, ssl_verify](#user-password-path-scheme-ssl_verify)
+ + [parse_timestamp](#parse_timestamp)
+ + [timestamp_key_format](#timestamp_key_format)
+ + [timestamp_key](#timestamp_key)
+ + [timestamp_parse_error_tag](#timestamp_parse_error_tag)
+ + [http_backend](#http_backend)
+ + [request_timeout](#request_timeout)
+ + [reload_connections](#reload_connections)
+ + [reload_on_failure](#reload_on_failure)
+ + [resurrect_after](#resurrect_after)
+ + [with_transporter_log](#with_transporter_log)
+ + [Client/host certificate options](#clienthost-certificate-options)
+ + [sniffer_class_name](#sniffer-class-name)
+ + [custom_headers](#custom_headers)
+ + [docinfo_fields](#docinfo_fields)
+ + [docinfo_target](#docinfo_target)
+ + [docinfo](#docinfo)
+ * [Advanced Usage](#advanced-usage)
+
+ ## Usage
+
+ In your Fluentd configuration, use `@type elasticsearch` in a `<source>` section and specify a `tag` for the emitted events. Additional configuration is optional; with the default values the section looks like this:
+
+ ```
+ <source>
+ @type elasticsearch
+ host localhost
+ port 9200
+ index_name fluentd
+ type_name fluentd
+ tag my.logs
+ </source>
+ ```
+
+ ## Configuration
+
+ ### host
+
+ ```
+ host user-custom-host.domain # default localhost
+ ```
+
+ You can specify the Elasticsearch host with this parameter.
+
+ ### port
+
+ ```
+ port 9201 # defaults to 9200
+ ```
+
+ You can specify the Elasticsearch port with this parameter.
+
+ ### hosts
+
+ ```
+ hosts host1:port1,host2:port2,host3:port3
+ ```
+
+ You can specify multiple Elasticsearch hosts separated by ",".
+
+ If you specify multiple hosts, this plugin will load balance requests across them. This is an [elasticsearch-ruby](https://github.com/elasticsearch/elasticsearch-ruby) feature; the default strategy is round-robin.
+
+ If you specify the `hosts` option, the `host` and `port` options are ignored.
+
+ ```
+ host user-custom-host.domain # ignored
+ port 9200 # ignored
+ hosts host1:port1,host2:port2,host3:port3
+ ```
+
+ If you specify entries in `hosts` without a port, the `port` option is used.
+
+ ```
+ port 9200
+ hosts host1:port1,host2:port2,host3 # port3 is 9200
+ ```
+
+ **Note:** If you use the https scheme, do not include "https://" in your hosts (i.e. not `hosts https://domain`); this causes the Elasticsearch cluster to be unreachable and you will receive the error "Can not reach Elasticsearch cluster".
+
+ **Note:** Up until v2.8.5, it was allowed to embed the username/password in the URL. However, this syntax is deprecated as of v2.8.6 because it was found to cause serious connection problems (see #394). Please migrate your settings to use the `user` and `password` fields (described below) instead.
+
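The legacy `host:port` list handling described above can be sketched as follows (a simplified illustration, not the plugin's exact code; the helper name `parse_hosts` is invented here):

```ruby
# Split a legacy "host1:port1,host2:port2,host3" string into host hashes,
# falling back to the `port` setting when an entry has no port.
def parse_hosts(hosts_str, default_port = 9200)
  hosts_str.split(',').map do |entry|
    host, port = entry.split(':')
    { host: host, port: (port || default_port).to_i }
  end
end

parse_hosts("host1:50,host2:100,host3")
# => [{:host=>"host1", :port=>50}, {:host=>"host2", :port=>100}, {:host=>"host3", :port=>9200}]
```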
+ ### user, password, path, scheme, ssl_verify
+
+ ```
+ user demo
+ password secret
+ path /elastic_search/
+ scheme https
+ ```
+
+ You can specify a user and password for HTTP Basic authentication.
+
+ This plugin will URL-encode characters that need escaping when the value is wrapped in a `%{}` placeholder:
+
+ ```
+ user %{demo+}
+ password %{@secret}
+ ```
+
+ Specify `ssl_verify false` to skip SSL verification (defaults to true).
+
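The `%{}` escaping described above boils down to URL-encoding the wrapped value; a minimal sketch (the helper name `escape_placeholder` is invented here):

```ruby
require 'uri'

# URL-encode a credential wrapped in a %{} placeholder; plain values
# pass through unchanged.
def escape_placeholder(value)
  if (m = value.match(/%{(?<raw>.*)}/))
    URI.encode_www_form_component(m["raw"])
  else
    value
  end
end

escape_placeholder("%{demo+}")   # => "demo%2B"
escape_placeholder("%{@secret}") # => "%40secret"
```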
+ ### parse_timestamp
+
+ ```
+ parse_timestamp true # defaults to false
+ ```
+
+ Parse the `@timestamp` field and use the parsed time as the event time.
+
+ ### timestamp_key_format
+
+ The format of the timestamp field (`@timestamp` or what you specify in Elasticsearch). This parameter only has an effect when [parse_timestamp](#parse_timestamp) is true. Please see [Time#strftime](http://ruby-doc.org/core-1.9.3/Time.html#method-i-strftime) for information about the value of this format.
+
+ Setting this to a known format can vastly improve your log ingestion speed if most of your logs are in the same format. If there is an error parsing this format, the timestamp will default to the ingestion time. If you are on Ruby 2.0 or later, you can get a further performance improvement by installing the "strptime" gem: `fluent-gem install strptime`.
+
+ For example, to parse ISO8601 times with sub-second precision:
+
+ ```
+ timestamp_key_format %Y-%m-%dT%H:%M:%S.%N%z
+ ```
+
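For illustration, parsing a timestamp with the format above using only the stdlib (the plugin uses the faster `strptime` gem when it is available):

```ruby
require 'date'
require 'time'

# Parse an ISO8601 timestamp with sub-second precision using the format
# shown above.
format = "%Y-%m-%dT%H:%M:%S.%N%z"
t = DateTime.strptime("2019-11-14T16:45:10.559841000+09:00", format).to_time
puts t.iso8601(6)
```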
+ ### timestamp_parse_error_tag
+
+ With `parse_timestamp true`, the Elasticsearch input plugin parses the timestamp field to obtain the event time. If a consumed record has an invalid timestamp value, this plugin emits an error event to the `@ERROR` label with the tag configured by `timestamp_parse_error_tag`.
+
+ Default value is `elasticsearch_plugin.input.time.error`.
+
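The fallback behavior can be sketched as follows (simplified; the real plugin also emits the error event to the `@ERROR` label, which is omitted here, and `parse_time_or_fallback` is an invented name):

```ruby
require 'date'

# Try to parse a timestamp value; on failure, fall back to the
# ingestion time instead of dropping the record.
def parse_time_or_fallback(value, fallback)
  DateTime.parse(value).to_time
rescue ArgumentError, TypeError
  fallback
end

now = Time.now
parse_time_or_fallback("not a timestamp", now) == now # => true
```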
+ ### http_backend
+
+ With `http_backend typhoeus`, the Elasticsearch plugin uses the typhoeus faraday HTTP backend.
+ Typhoeus can handle HTTP keepalive.
+
+ Default value is `excon`, which is the default http_backend of the Elasticsearch plugin.
+
+ ```
+ http_backend typhoeus
+ ```
+
+ ### request_timeout
+
+ You can specify the HTTP request timeout.
+
+ This is useful when Elasticsearch cannot return a response for a request within the default of 5 seconds.
+
+ ```
+ request_timeout 15s # defaults to 5s
+ ```
+
+ ### reload_connections
+
+ You can tune how the elasticsearch-transport host reloading feature works. By default it will reload the host list from the server every 10,000th request to spread the load. This can be an issue if your Elasticsearch cluster is behind a reverse proxy, as the Fluentd process may not have direct network access to the Elasticsearch nodes.
+
+ ```
+ reload_connections false # defaults to true
+ ```
+
+ ### reload_on_failure
+
+ Indicates that the elasticsearch-transport will try to reload the node addresses if there is a failure while making the request; this can be useful to quickly remove a dead node from the list of addresses.
+
+ ```
+ reload_on_failure true # defaults to false
+ ```
+
+ ### resurrect_after
+
+ You can set how often dead connections from the elasticsearch-transport's pool will be resurrected.
+
+ ```
+ resurrect_after 5s # defaults to 60s
+ ```
+
+ ### with_transporter_log
+
+ This is a debugging option that enables transporter-layer logging.
+ Default value is `false` for backward compatibility.
+
+ We recommend setting this to `true` when you start to debug this plugin.
+
+ ```
+ with_transporter_log true
+ ```
+
+ ### Client/host certificate options
+
+ Need to verify Elasticsearch's certificate? You can use the following parameter to specify a CA instead of using an environment variable.
+ ```
+ ca_file /path/to/your/ca/cert
+ ```
+
+ Does your Elasticsearch cluster want to verify client connections? You can specify the following parameters to use your client certificate, key, and key password for your connection.
+ ```
+ client_cert /path/to/your/client/cert
+ client_key /path/to/your/private/key
+ client_key_pass password
+ ```
+
+ If you want to configure the SSL/TLS version, you can specify the ssl\_version parameter.
+ ```
+ ssl_version TLSv1_2 # or [SSLv23, TLSv1, TLSv1_1]
+ ```
+
+ :warning: If SSL/TLS is enabled, it may be required to set ssl\_version.
+
+ ### Sniffer Class Name
+
+ The default Sniffer used by the `Elasticsearch::Transport` class works well when Fluentd has a direct connection
+ to all of the Elasticsearch servers and can make effective use of the `_nodes` API. This doesn't work well
+ when Fluentd must connect through a load balancer or proxy. The parameter `sniffer_class_name` gives you the
+ ability to provide your own Sniffer class to implement whatever connection reload logic you require. In addition,
+ there is a new `Fluent::Plugin::ElasticsearchSimpleSniffer` class which reuses the hosts given in the configuration, which
+ is typically the hostname of the load balancer or proxy. For example, a configuration like this would cause
+ connections to `logging-es` to reload every 100 operations:
+
+ ```
+ host logging-es
+ port 9200
+ reload_connections true
+ sniffer_class_name Fluent::Plugin::ElasticsearchSimpleSniffer
+ reload_after 100
+ ```
+
+ ### custom_headers
+
+ This parameter adds additional headers to requests. The default value is `{}`.
+
+ ```
+ custom_headers {"token":"secret"}
+ ```
+
+ ### docinfo_fields
+
+ This parameter specifies the document metadata keys to collect. The default values are `['_index', '_type', '_id']`.
+
+ ```
+ docinfo_fields ['_index', '_id']
+ ```
+
+ ### docinfo_target
+
+ This parameter specifies the record key under which the document metadata is stored. The default value is `@metadata`.
+
+ ```
+ docinfo_target metadata
+ ```
+
+ ### docinfo
+
+ This parameter specifies whether document metadata (docinfo) is included in the record or not. The default value is `false`.
+
+ ```
+ docinfo false
+ ```
+
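When `docinfo true` is set, the metadata merge works roughly like this (a simplified sketch of the behavior; `add_docinfo` is an invented name):

```ruby
# Merge a hit's metadata fields into the record under the docinfo target
# key, mirroring the enrichment described above.
def add_docinfo(hit, fields: ['_index', '_type', '_id'], target: '@metadata')
  event = hit["_source"]
  meta = event[target] || {}
  fields.each { |f| meta[f] = hit[f] }
  event[target] = meta
  event
end

hit = {
  "_index" => "fluentd-2019.11.14", "_type" => "_doc", "_id" => "abc",
  "_source" => { "message" => "Hi from Fluentd!" }
}
add_docinfo(hit)
# => {"message"=>"Hi from Fluentd!", "@metadata"=>{"_index"=>"fluentd-2019.11.14", "_type"=>"_doc", "_id"=>"abc"}}
```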
+ ## Advanced Usage
+
+ The Elasticsearch input plugin and the Elasticsearch output plugin can be combined to transfer records into another cluster.
+
+ ```aconf
+ <source>
+ @type elasticsearch
+ host original-cluster.local
+ port 9200
+ tag raw.elasticsearch
+ index_name logstash-*
+ docinfo true
+ # repeat false
+ # num_slices 2
+ # with_transporter_log true
+ </source>
+ <match raw.elasticsearch>
+ @type elasticsearch
+ host transferred-cluster.local
+ port 9200
+ index_name ${$.@metadata._index}
+ # type_name ${$.@metadata._type} # This parameter is optional due to the removal of mapping types since ES7.
+ id_key ${$.@metadata._id} # This parameter is needed to prevent duplicated records.
+ <buffer tag, $.@metadata._index, $.@metadata._type, $.@metadata._id>
+ @type memory # should use file buffer for preventing chunk lost
+ </buffer>
+ </match>
+ ```
data/README.md CHANGED
@@ -91,6 +91,7 @@ Current maintainers: @cosmo0920
  + [enable_ilm](#enable_ilm)
  + [ilm_policy_id](#ilm_policy_id)
  + [ilm_policy](#ilm_policy)
+ * [Configuration - Elasticsearch Input](#configuration---elasticsearch-input)
  * [Troubleshooting](#troubleshooting)
  + [Cannot send events to elasticsearch](#cannot-send-events-to-elasticsearch)
  + [Cannot see detailed failure log](#cannot-see-detailed-failure-log)
@@ -794,11 +795,11 @@ http_backend typhoeus
  ```
 
  ### compression_level
- You can add gzip compression of output data. In this case `default_compression`, `best_compression` or `best speed` option should be chosen.
+ You can add gzip compression of output data. In this case `default_compression`, `best_compression` or `best speed` option should be chosen.
  By default there is no compression, default value for this option is `no_compression`
  ```
  compression_level best_compression
- ```
+ ```
 
  ### prefer_oj_serializer
 
@@ -1158,6 +1159,10 @@ Default value is `{}`.
 
  **NOTE:** This parameter requests to install elasticsearch-xpack gem.
 
+ ## Configuration - Elasticsearch Input
+
+ See [Elasticsearch Input plugin document](README.ElasticsearchInput.md)
+
  ## Troubleshooting
 
  ### Cannot send events to Elasticsearch
fluent-plugin-elasticsearch.gemspec CHANGED
@@ -3,9 +3,9 @@ $:.push File.expand_path('../lib', __FILE__)
 
  Gem::Specification.new do |s|
  s.name = 'fluent-plugin-elasticsearch'
- s.version = '3.6.1'
- s.authors = ['diogo', 'pitr']
- s.email = ['pitr.vern@gmail.com', 'me@diogoterror.com']
+ s.version = '3.7.0'
+ s.authors = ['diogo', 'pitr', 'Hiroshi Hatake']
+ s.email = ['pitr.vern@gmail.com', 'me@diogoterror.com', 'cosmo0920.wp@gmail.com']
  s.description = %q{Elasticsearch output plugin for Fluent event collector}
  s.summary = s.description
  s.homepage = 'https://github.com/uken/fluent-plugin-elasticsearch'
lib/fluent/plugin/in_elasticsearch.rb ADDED
@@ -0,0 +1,325 @@
+ require 'elasticsearch'
+
+ require 'fluent/log-ext'
+ require 'fluent/plugin/input'
+ require_relative 'elasticsearch_constants'
+
+ module Fluent::Plugin
+ class ElasticsearchInput < Input
+ class UnrecoverableRequestFailure < Fluent::UnrecoverableError; end
+
+ DEFAULT_RELOAD_AFTER = -1
+ DEFAULT_STORAGE_TYPE = 'local'
+ METADATA = "@metadata".freeze
+
+ helpers :timer, :thread
+
+ Fluent::Plugin.register_input('elasticsearch', self)
+
+ config_param :tag, :string
+ config_param :host, :string, :default => 'localhost'
+ config_param :port, :integer, :default => 9200
+ config_param :user, :string, :default => nil
+ config_param :password, :string, :default => nil, :secret => true
+ config_param :path, :string, :default => nil
+ config_param :scheme, :enum, :list => [:https, :http], :default => :http
+ config_param :hosts, :string, :default => nil
+ config_param :index_name, :string, :default => "fluentd"
+ config_param :parse_timestamp, :bool, :default => false
+ config_param :timestamp_key_format, :string, :default => nil
+ config_param :timestamp_parse_error_tag, :string, :default => 'elasticsearch_plugin.input.time.error'
+ config_param :query, :hash, :default => {"sort" => [ "_doc" ]}
+ config_param :scroll, :string, :default => "1m"
+ config_param :size, :integer, :default => 1000
+ config_param :num_slices, :integer, :default => 1
+ config_param :interval, :size, :default => 5
+ config_param :repeat, :bool, :default => true
+ config_param :http_backend, :enum, list: [:excon, :typhoeus], :default => :excon
+ config_param :request_timeout, :time, :default => 5
+ config_param :reload_connections, :bool, :default => true
+ config_param :reload_on_failure, :bool, :default => false
+ config_param :resurrect_after, :time, :default => 60
+ config_param :reload_after, :integer, :default => DEFAULT_RELOAD_AFTER
+ config_param :ssl_verify, :bool, :default => true
+ config_param :client_key, :string, :default => nil
+ config_param :client_cert, :string, :default => nil
+ config_param :client_key_pass, :string, :default => nil, :secret => true
+ config_param :ca_file, :string, :default => nil
+ config_param :ssl_version, :enum, list: [:SSLv23, :TLSv1, :TLSv1_1, :TLSv1_2], :default => :TLSv1_2
+ config_param :with_transporter_log, :bool, :default => false
+ config_param :sniffer_class_name, :string, :default => nil
+ config_param :custom_headers, :hash, :default => {}
+ config_param :docinfo_fields, :array, :default => ['_index', '_type', '_id']
+ config_param :docinfo_target, :string, :default => METADATA
+ config_param :docinfo, :bool, :default => false
+
+ include Fluent::Plugin::ElasticsearchConstants
+
+ def initialize
+ super
+ end
+
+ def configure(conf)
+ super
+
+ @timestamp_parser = create_time_parser
+ @backend_options = backend_options
+
+ raise Fluent::ConfigError, "`password` must be present if `user` is present" if @user && @password.nil?
+
+ if @user && m = @user.match(/%{(?<user>.*)}/)
+ @user = URI.encode_www_form_component(m["user"])
+ end
+ if @password && m = @password.match(/%{(?<password>.*)}/)
+ @password = URI.encode_www_form_component(m["password"])
+ end
+
+ @transport_logger = nil
+ if @with_transporter_log
+ @transport_logger = log
+ log_level = conf['@log_level'] || conf['log_level']
+ log.warn "Consider to specify log_level with @log_level." unless log_level
+ end
+ @current_config = nil
+ # Specify @sniffer_class before calling #client.
+ @sniffer_class = nil
+ begin
+ @sniffer_class = Object.const_get(@sniffer_class_name) if @sniffer_class_name
+ rescue Exception => ex
+ raise Fluent::ConfigError, "Could not load sniffer class #{@sniffer_class_name}: #{ex}"
+ end
+
+ @options = {
+ :index => @index_name,
+ :scroll => @scroll,
+ :size => @size
+ }
+ @base_query = @query
+ end
+
99
+
100
+ def backend_options
101
+ case @http_backend
102
+ when :excon
103
+ { client_key: @client_key, client_cert: @client_cert, client_key_pass: @client_key_pass }
104
+ when :typhoeus
105
+ require 'typhoeus'
106
+ { sslkey: @client_key, sslcert: @client_cert, keypasswd: @client_key_pass }
107
+ end
108
+ rescue LoadError => ex
109
+ log.error_backtrace(ex.backtrace)
110
+ raise Fluent::ConfigError, "You must install #{@http_backend} gem. Exception: #{ex}"
111
+ end
112
+
113
+ def get_escaped_userinfo(host_str)
114
+ if m = host_str.match(/(?<scheme>.*)%{(?<user>.*)}:%{(?<password>.*)}(?<path>@.*)/)
115
+ m["scheme"] +
116
+ URI.encode_www_form_component(m["user"]) +
117
+ ':' +
118
+ URI.encode_www_form_component(m["password"]) +
119
+ m["path"]
120
+ else
121
+ host_str
122
+ end
123
+ end
124
+
125
+ def get_connection_options(con_host=nil)
126
+
127
+ hosts = if con_host || @hosts
128
+ (con_host || @hosts).split(',').map do |host_str|
129
+ # Support legacy hosts format host:port,host:port,host:port...
130
+ if host_str.match(%r{^[^:]+(\:\d+)?$})
131
+ {
132
+ host: host_str.split(':')[0],
133
+ port: (host_str.split(':')[1] || @port).to_i,
134
+ scheme: @scheme.to_s
135
+ }
136
+ else
137
+ # New hosts format expects URLs such as http://logs.foo.com,https://john:pass@logs2.foo.com/elastic
138
+ uri = URI(get_escaped_userinfo(host_str))
139
+ %w(user password path).inject(host: uri.host, port: uri.port, scheme: uri.scheme) do |hash, key|
140
+ hash[key.to_sym] = uri.public_send(key) unless uri.public_send(key).nil? || uri.public_send(key) == ''
141
+ hash
142
+ end
143
+ end
144
+ end.compact
145
+ else
146
+ [{host: @host, port: @port, scheme: @scheme.to_s}]
147
+ end.each do |host|
148
+ host.merge!(user: @user, password: @password) if !host[:user] && @user
149
+ host.merge!(path: @path) if !host[:path] && @path
150
+ end
151
+
152
+ {
153
+ hosts: hosts
154
+ }
155
+ end
+ def start
+ super
+
+ timer_execute(:in_elasticsearch_timer, @interval, repeat: @repeat, &method(:run))
+ end
+
+ # once fluent v0.14 is released we might be able to use
+ # Fluent::Parser::TimeParser, but it doesn't quite do what we want - it gives
+ # [sec,nsec] whereas we want something we can call `strftime` on...
+ def create_time_parser
+ if @timestamp_key_format
+ begin
+ # Strptime doesn't support all formats, but for those it does it's
+ # blazingly fast.
+ strptime = Strptime.new(@timestamp_key_format)
+ Proc.new { |value|
+ value = convert_numeric_time_into_string(value, @timestamp_key_format) if value.is_a?(Numeric)
+ strptime.exec(value).to_time
+ }
+ rescue
+ # Can happen if Strptime doesn't recognize the format; or
+ # if strptime couldn't be required (because it's not installed -- it's
+ # ruby 2 only)
+ Proc.new { |value|
+ value = convert_numeric_time_into_string(value, @timestamp_key_format) if value.is_a?(Numeric)
+ DateTime.strptime(value, @timestamp_key_format).to_time
+ }
+ end
+ else
+ Proc.new { |value|
+ value = convert_numeric_time_into_string(value) if value.is_a?(Numeric)
+ DateTime.parse(value).to_time
+ }
+ end
+ end
+
+ def convert_numeric_time_into_string(numeric_time, timestamp_key_format = "%Y-%m-%dT%H:%M:%S.%N%z")
+ numeric_time_parser = Fluent::NumericTimeParser.new(:float)
+ Time.at(numeric_time_parser.parse(numeric_time).to_r).strftime(timestamp_key_format)
+ end
+
+ def parse_time(value, event_time, tag)
+ @timestamp_parser.call(value)
+ rescue => e
+ router.emit_error_event(@timestamp_parse_error_tag, Fluent::Engine.now, {'tag' => tag, 'time' => event_time, 'format' => @timestamp_key_format, 'value' => value}, e)
+ return Time.at(event_time).to_time
+ end
+
+ def client(host = nil)
+ # check here to see if we already have a client connection for the given host
+ connection_options = get_connection_options(host)
+
+ @_es = nil unless is_existing_connection(connection_options[:hosts])
+
+ @_es ||= begin
+ @current_config = connection_options[:hosts].clone
+ adapter_conf = lambda {|f| f.adapter @http_backend, @backend_options }
+ local_reload_connections = @reload_connections
+ if local_reload_connections && @reload_after > DEFAULT_RELOAD_AFTER
+ local_reload_connections = @reload_after
+ end
+
+ headers = { 'Content-Type' => "application/json" }.merge(@custom_headers)
+
+ transport = Elasticsearch::Transport::Transport::HTTP::Faraday.new(
+ connection_options.merge(
+ options: {
+ reload_connections: local_reload_connections,
+ reload_on_failure: @reload_on_failure,
+ resurrect_after: @resurrect_after,
+ logger: @transport_logger,
+ transport_options: {
+ headers: headers,
+ request: { timeout: @request_timeout },
+ ssl: { verify: @ssl_verify, ca_file: @ca_file, version: @ssl_version }
+ },
+ http: {
+ user: @user,
+ password: @password
+ },
+ sniffer_class: @sniffer_class,
+ }), &adapter_conf)
+ Elasticsearch::Client.new transport: transport
+ end
+ end
+
+ def is_existing_connection(host)
+ # check if the host provided matches the current connection
+ return false if @_es.nil?
+ return false if @current_config.nil?
+ return false if host.length != @current_config.length
+
+ for i in 0...host.length
+ if !host[i][:host].eql?(@current_config[i][:host]) || host[i][:port] != @current_config[i][:port]
+ return false
+ end
+ end
+
+ return true
+ end
+
+ def run
+ return run_slice if @num_slices <= 1
+
+ log.warn("Large slice number is specified:(#{@num_slices}). Consider reducing num_slices") if @num_slices > 8
+
+ @num_slices.times.map do |slice_id|
+ thread_create(:"in_elasticsearch_thread_#{slice_id}") do
+ run_slice(slice_id)
+ end
+ end
+ end
+
+ def run_slice(slice_id=nil)
+ slice_query = @base_query
+ slice_query = slice_query.merge('slice' => { 'id' => slice_id, 'max' => @num_slices}) unless slice_id.nil?
+ result = client.search(@options.merge(:body => Yajl.dump(slice_query)))
+ es = Fluent::MultiEventStream.new
+
+ result["hits"]["hits"].each {|hit| process_events(hit, es)}
+ has_hits = result['hits']['hits'].any?
+ scroll_id = result['_scroll_id']
+
+ while has_hits && scroll_id
+ result = process_next_scroll_request(es, scroll_id)
+ has_hits = result['has_hits']
+ scroll_id = result['_scroll_id']
+ end
+
+ router.emit_stream(@tag, es)
+ client.clear_scroll(scroll_id: scroll_id) if scroll_id
+ end
+
+ def process_scroll_request(scroll_id)
+ client.scroll(:body => { :scroll_id => scroll_id }, :scroll => @scroll)
+ end
+
+ def process_next_scroll_request(es, scroll_id)
+ result = process_scroll_request(scroll_id)
+ result['hits']['hits'].each { |hit| process_events(hit, es) }
+ {'has_hits' => result['hits']['hits'].any?, '_scroll_id' => result['_scroll_id']}
+ end
+
+ def process_events(hit, es)
+ event = hit["_source"]
+ time = Fluent::Engine.now
+ if @parse_timestamp
+ if event.has_key?(TIMESTAMP_FIELD)
+ rts = event[TIMESTAMP_FIELD]
+ time = parse_time(rts, time, @tag)
+ end
+ end
+ if @docinfo
+ docinfo_target = event[@docinfo_target] || {}
+
+ unless docinfo_target.is_a?(Hash)
+ raise UnrecoverableRequestFailure, "incompatible type for the docinfo_target=#{@docinfo_target} field in the `_source` document, expected a hash, got: #{docinfo_target.class} (event: #{event})"
+ end
+
+ @docinfo_fields.each do |field|
+ docinfo_target[field] = hit[field]
+ end
+
+ event[@docinfo_target] = docinfo_target
+ end
+ es.add(time, event)
+ end
+ end
+ end
@@ -330,7 +330,7 @@ EOC
  end
  end
 
- version_arr = Elasticsearch::Transport::VERSION.split('.')
+ version_arr = ::Elasticsearch::Transport::VERSION.split('.')
 
  if (version_arr[0].to_i < 7) || (version_arr[0].to_i == 7 && version_arr[1].to_i < 2)
  if compression
@@ -629,7 +629,6 @@ class TestElasticsearchErrorHandler < Test::Unit::TestCase
  next unless e.respond_to?(:retry_stream)
  e.retry_stream.each {|time, record| records << record}
  end
- puts records
  assert_equal 3, records.length
  assert_equal 2, records[0]['_id']
  # upsert is retried in case of conflict error.
test/plugin/test_in_elasticsearch.rb ADDED
@@ -0,0 +1,454 @@
1
+ require_relative '../helper'
2
+ require 'date'
3
+ require 'fluent/test/helpers'
4
+ require 'json'
5
+ require 'fluent/test/driver/input'
6
+ require 'flexmock/test_unit'
7
+
8
+ class ElasticsearchInputTest < Test::Unit::TestCase
9
+ include FlexMock::TestCase
10
+ include Fluent::Test::Helpers
11
+
12
+ CONFIG = %[
13
+ tag raw.elasticsearch
14
+ interval 2
15
+ ]
16
+
17
+ def setup
18
+ Fluent::Test.setup
19
+ require 'fluent/plugin/in_elasticsearch'
20
+ @driver = nil
21
+ log = Fluent::Engine.log
22
+ log.out.logs.slice!(0, log.out.logs.length)
23
+ end
24
+
25
+ def driver(conf='')
26
+ @driver ||= Fluent::Test::Driver::Input.new(Fluent::Plugin::ElasticsearchInput).configure(conf)
27
+ end
28
+
29
+ def sample_response(index_name="fluentd")
30
+ {
31
+ "took"=>4,
32
+ "timed_out"=>false,
33
+ "_shards"=>{
34
+ "total"=>2,
35
+ "successful"=>2,
36
+ "skipped"=>0,
37
+ "failed"=>0
38
+ },
39
+ "hits"=>{
40
+ "total"=>{
41
+ "value"=>1,
42
+ "relation"=>"eq"
43
+ },
44
+ "max_score"=>1,
45
+ "hits"=>[
46
+ {
47
+ "_index"=>"#{index_name}-2019.11.14",
48
+ "_type"=>"_doc",
49
+ "_id"=>"MJ_faG4B16RqUMOji_nH",
50
+ "_score"=>1,
51
+ "_source"=>{
52
+ "message"=>"Hi from Fluentd!",
53
+ "@timestamp"=>"2019-11-14T16:45:10.559841000+09:00"
54
+ }
55
+ }
56
+ ]
57
+ }
58
+ }.to_json
59
+ end
60
+
61
+ def sample_scroll_response
62
+ {
63
+ "_scroll_id"=>"WomkoUKG0QPB679Ulo6TqQgh3pIGRUmrl9qXXGK3EeiQh9rbYNasTkspZQcJ01uz",
64
+ "took"=>0,
65
+ "timed_out"=>false,
66
+ "_shards"=>{
67
+ "total"=>1,
68
+ "successful"=>1,
69
+ "skipped"=>0,
70
+ "failed"=>0
71
+ },
72
+ "hits"=>{
73
+ "total"=>{
74
+ "value"=>7,
75
+ "relation"=>"eq"
76
+ },
77
+ "max_score"=>nil,
78
+ "hits"=>[
79
+ {
80
+ "_index"=>"fluentd-2019.11.14",
81
+ "_type"=>"_doc",
82
+ "_id"=>"MJ_faG4B16RqUMOji_nH",
83
+ "_score"=>1,
84
+ "_source"=>{
85
+ "message"=>"Hi from Fluentd!",
86
+ "@timestamp"=>"2019-11-14T16:45:10.559841000+09:00"
87
+ },
88
+ "sort"=>[0]
89
+ }
90
+ ]
91
+ }
92
+ }.to_json
93
+ end
94
+
95
+ def sample_scroll_response_2
96
+ {
97
+ "_scroll_id"=>"WomkoUKG0QPB679Ulo6TqQgh3pIGRUmrl9qXXGK3EeiQh9rbYNasTkspZQcJ01uz",
98
+ "took"=>0,
99
+ "timed_out"=>false,
100
+ "_shards"=>{
101
+ "total"=>1,
102
+ "successful"=>1,
103
+ "skipped"=>0,
104
+ "failed"=>0
105
+ },
106
+ "hits"=>{
107
+ "total"=>{
108
+ "value"=>7,
109
+ "relation"=>"eq"
110
+ },
111
+ "max_score"=>nil,
112
+ "hits"=>[
113
+ {
114
+ "_index"=>"fluentd-2019.11.14",
115
+ "_type"=>"_doc",
116
+ "_id"=>"L5-saG4B16RqUMOjw_kb",
117
+ "_score"=>1,
118
+ "_source"=>{
119
+ "message"=>"Yaaaaaaay from Fluentd!",
120
+ "@timestamp"=>"2019-11-14T15:49:41.112023000+09:00"
121
+ },
122
+ "sort"=>[1]
123
+ }
124
+ ]
125
+ }
126
+ }.to_json
127
+ end
128
+
129
+ def sample_scroll_response_terminate
130
+ {
131
+ "_scroll_id"=>"WomkoUKG0QPB679Ulo6TqQgh3pIGRUmrl9qXXGK3EeiQh9rbYNasTkspZQcJ01uz",
132
+ "took"=>1,
133
+ "timed_out"=>false,
134
+ "terminated_early"=>true,
135
+ "_shards"=>{
136
+ "total"=>1,
137
+ "successful"=>1,
138
+ "skipped"=>0,
139
+ "failed"=>0
140
+ },
141
+ "hits"=>{
142
+ "total"=>{
143
+ "value"=>7,
144
+ "relation"=>"eq"
145
+ },
146
+ "max_score"=>nil,
147
+ "hits"=>[]
148
+ }
149
+ }.to_json
150
+ end
151
+
152
+ def test_configure
153
+ config = %{
154
+ host logs.google.com
155
+ port 777
156
+ scheme https
157
+ path /es/
158
+ user john
159
+ password doe
160
+ tag raw.elasticsearch
161
+ }
162
+ instance = driver(config).instance
163
+
164
+ expected_query = { "sort" => [ "_doc" ]}
165
+ assert_equal 'logs.google.com', instance.host
166
+ assert_equal 777, instance.port
167
+ assert_equal :https, instance.scheme
168
+ assert_equal '/es/', instance.path
169
+ assert_equal 'john', instance.user
170
+ assert_equal 'doe', instance.password
171
+ assert_equal 'raw.elasticsearch', instance.tag
172
+ assert_equal :TLSv1_2, instance.ssl_version
173
+ assert_equal 'fluentd', instance.index_name
174
+ assert_equal expected_query, instance.query
175
+ assert_equal '1m', instance.scroll
176
+ assert_equal 1000, instance.size
177
+ assert_equal 1, instance.num_slices
178
+ assert_equal 5, instance.interval
179
+ assert_true instance.repeat
180
+ assert_nil instance.client_key
181
+ assert_nil instance.client_cert
182
+ assert_nil instance.client_key_pass
183
+ assert_nil instance.ca_file
184
+ assert_false instance.with_transporter_log
185
+ assert_equal :excon, instance.http_backend
186
+ assert_nil instance.sniffer_class_name
187
+ assert_true instance.custom_headers.empty?
188
+ assert_equal ['_index', '_type', '_id'], instance.docinfo_fields
189
+ assert_equal '@metadata', instance.docinfo_target
190
+ assert_false instance.docinfo
191
+ end
192
+
193
+ def test_single_host_params_and_defaults
+ config = %{
+ host logs.google.com
+ user john
+ password doe
+ tag raw.elasticsearch
+ }
+ instance = driver(config).instance
+
+ assert_equal 1, instance.get_connection_options[:hosts].length
+ host1 = instance.get_connection_options[:hosts][0]
+
+ assert_equal 'logs.google.com', host1[:host]
+ assert_equal 9200, host1[:port]
+ assert_equal 'http', host1[:scheme]
+ assert_equal 'john', host1[:user]
+ assert_equal 'doe', host1[:password]
+ assert_equal nil, host1[:path]
+ assert_equal 'raw.elasticsearch', instance.tag
+ end
+
+ def test_single_host_params_and_defaults_with_escape_placeholders
+ config = %{
+ host logs.google.com
+ user %{j+hn}
+ password %{d@e}
+ tag raw.elasticsearch
+ }
+ instance = driver(config).instance
+
+ assert_equal 1, instance.get_connection_options[:hosts].length
+ host1 = instance.get_connection_options[:hosts][0]
+
+ assert_equal 'logs.google.com', host1[:host]
+ assert_equal 9200, host1[:port]
+ assert_equal 'http', host1[:scheme]
+ assert_equal 'j%2Bhn', host1[:user]
+ assert_equal 'd%40e', host1[:password]
+ assert_equal nil, host1[:path]
+ assert_equal 'raw.elasticsearch', instance.tag
+ end
+
+ def test_legacy_hosts_list
+ config = %{
+ hosts host1:50,host2:100,host3
+ scheme https
+ path /es/
+ port 123
+ tag raw.elasticsearch
+ }
+ instance = driver(config).instance
+
+ assert_equal 3, instance.get_connection_options[:hosts].length
+ host1, host2, host3 = instance.get_connection_options[:hosts]
+
+ assert_equal 'host1', host1[:host]
+ assert_equal 50, host1[:port]
+ assert_equal 'https', host1[:scheme]
+ assert_equal '/es/', host2[:path]
+ assert_equal 'host3', host3[:host]
+ assert_equal 123, host3[:port]
+ assert_equal 'https', host3[:scheme]
+ assert_equal '/es/', host3[:path]
+ assert_equal 'raw.elasticsearch', instance.tag
+ end
+
+ def test_hosts_list
+ config = %{
+ hosts https://john:password@host1:443/elastic/,http://host2
+ path /default_path
+ user default_user
+ password default_password
+ tag raw.elasticsearch
+ }
+ instance = driver(config).instance
+
+ assert_equal 2, instance.get_connection_options[:hosts].length
+ host1, host2 = instance.get_connection_options[:hosts]
+
+ assert_equal 'host1', host1[:host]
+ assert_equal 443, host1[:port]
+ assert_equal 'https', host1[:scheme]
+ assert_equal 'john', host1[:user]
+ assert_equal 'password', host1[:password]
+ assert_equal '/elastic/', host1[:path]
+
+ assert_equal 'host2', host2[:host]
+ assert_equal 'http', host2[:scheme]
+ assert_equal 'default_user', host2[:user]
+ assert_equal 'default_password', host2[:password]
+ assert_equal '/default_path', host2[:path]
+ assert_equal 'raw.elasticsearch', instance.tag
+ end
+
+ def test_hosts_list_with_escape_placeholders
+ config = %{
+ hosts https://%{j+hn}:%{passw@rd}@host1:443/elastic/,http://host2
+ path /default_path
+ user default_user
+ password default_password
+ tag raw.elasticsearch
+ }
+ instance = driver(config).instance
+
+ assert_equal 2, instance.get_connection_options[:hosts].length
+ host1, host2 = instance.get_connection_options[:hosts]
+
+ assert_equal 'host1', host1[:host]
+ assert_equal 443, host1[:port]
+ assert_equal 'https', host1[:scheme]
+ assert_equal 'j%2Bhn', host1[:user]
+ assert_equal 'passw%40rd', host1[:password]
+ assert_equal '/elastic/', host1[:path]
+
+ assert_equal 'host2', host2[:host]
+ assert_equal 'http', host2[:scheme]
+ assert_equal 'default_user', host2[:user]
+ assert_equal 'default_password', host2[:password]
+ assert_equal '/default_path', host2[:path]
+ assert_equal 'raw.elasticsearch', instance.tag
+ end
+
+ def test_emit
+ stub_request(:get, "http://localhost:9200/fluentd/_search?scroll=1m&size=1000").
+ with(body: "{\"sort\":[\"_doc\"]}").
+ to_return(status: 200, body: sample_response.to_s,
+ headers: {'Content-Type' => 'application/json'})
+
+ driver(CONFIG)
+ driver.run(expect_emits: 1, timeout: 10)
+ expected = {"message" => "Hi from Fluentd!",
+ "@timestamp" => "2019-11-14T16:45:10.559841000+09:00"}
+ event = driver.events.map {|e| e.last}.last
+ assert_equal expected, event
+ end
+
+ def test_emit_with_custom_index_name
+ index_name = "logstash"
+ stub_request(:get, "http://localhost:9200/#{index_name}/_search?scroll=1m&size=1000").
+ with(body: "{\"sort\":[\"_doc\"]}").
+ to_return(status: 200, body: sample_response(index_name).to_s,
+ headers: {'Content-Type' => 'application/json'})
+
+ driver(CONFIG + %[index_name #{index_name}])
+ driver.run(expect_emits: 1, timeout: 10)
+ expected = {"message" => "Hi from Fluentd!",
+ "@timestamp" => "2019-11-14T16:45:10.559841000+09:00"}
+ event = driver.events.map {|e| e.last}.last
+ assert_equal expected, event
+ end
+
+ def test_emit_with_parse_timestamp
+ index_name = "fluentd"
+ stub_request(:get, "http://localhost:9200/#{index_name}/_search?scroll=1m&size=1000").
+ with(body: "{\"sort\":[\"_doc\"]}").
+ to_return(status: 200, body: sample_response(index_name).to_s,
+ headers: {'Content-Type' => 'application/json'})
+
+ driver(CONFIG + %[parse_timestamp])
+ driver.run(expect_emits: 1, timeout: 10)
+ expected = {"message" => "Hi from Fluentd!",
+ "@timestamp" => "2019-11-14T16:45:10.559841000+09:00"}
+ event = driver.events.map {|e| e.last}.last
+ time = driver.events.map {|e| e[1]}.last
+ expected_time = event_time("2019-11-14T16:45:10.559841000+09:00")
+ assert_equal expected_time.to_time, time.to_time
+ assert_equal expected, event
+ end
+
+ def test_emit_with_parse_timestamp_and_timestamp_format
+ index_name = "fluentd"
+ stub_request(:get, "http://localhost:9200/#{index_name}/_search?scroll=1m&size=1000").
+ with(body: "{\"sort\":[\"_doc\"]}").
+ to_return(status: 200, body: sample_response(index_name).to_s,
+ headers: {'Content-Type' => 'application/json'})
+
+ driver(CONFIG + %[parse_timestamp true
+ timestamp_key_format %Y-%m-%dT%H:%M:%S.%N%z
+ ])
+ driver.run(expect_emits: 1, timeout: 10)
+ expected = {"message" => "Hi from Fluentd!",
+ "@timestamp" => "2019-11-14T16:45:10.559841000+09:00"}
+ event = driver.events.map {|e| e.last}.last
+ time = driver.events.map {|e| e[1]}.last
+ expected_time = event_time("2019-11-14T16:45:10.559841000+09:00")
+ assert_equal expected_time.to_time, time.to_time
+ assert_equal expected, event
+ end
+
+ def test_emit_with_docinfo
+ stub_request(:get, "http://localhost:9200/fluentd/_search?scroll=1m&size=1000").
+ with(body: "{\"sort\":[\"_doc\"]}").
+ to_return(status: 200, body: sample_response.to_s,
+ headers: {'Content-Type' => 'application/json'})
+
+ driver(CONFIG + %[docinfo true])
+ driver.run(expect_emits: 1, timeout: 10)
+ expected = {"message" => "Hi from Fluentd!",
+ "@timestamp" => "2019-11-14T16:45:10.559841000+09:00"}
+ expected.merge!({"@metadata"=>
+ {"_id"=>"MJ_faG4B16RqUMOji_nH",
+ "_index"=>"fluentd-2019.11.14",
+ "_type"=>"_doc"}
+ })
+ event = driver.events.map {|e| e.last}.last
+ assert_equal expected, event
+ end
+
+
401
+ def test_emit_with_slices
402
+ stub_request(:get, "http://localhost:9200/fluentd/_search?scroll=1m&size=1000").
403
+ with(body: "{\"sort\":[\"_doc\"],\"slice\":{\"id\":0,\"max\":2}}").
404
+ to_return(status: 200, body: sample_response.to_s,
405
+ headers: {'Content-Type' => 'application/json'})
406
+ stub_request(:get, "http://localhost:9200/fluentd/_search?scroll=1m&size=1000").
407
+ with(body: "{\"sort\":[\"_doc\"],\"slice\":{\"id\":1,\"max\":2}}").
408
+ to_return(status: 200, body: sample_response.to_s,
409
+ headers: {'Content-Type' => 'application/json'})
410
+
411
+ driver(CONFIG + %[num_slices 2])
412
+ driver.run(expect_emits: 1, timeout: 10)
413
+ expected = [
414
+ {"message"=>"Hi from Fluentd!", "@timestamp"=>"2019-11-14T16:45:10.559841000+09:00"},
415
+ {"message"=>"Hi from Fluentd!", "@timestamp"=>"2019-11-14T16:45:10.559841000+09:00"},
416
+ ]
417
+ events = driver.events.map {|e| e.last}
418
+ assert_equal expected, events
419
+ end
420
+
421
+ def test_emit_with_size
422
+ stub_request(:get, "http://localhost:9200/fluentd/_search?scroll=1m&size=1").
423
+ with(body: "{\"sort\":[\"_doc\"]}").
424
+ to_return(status: 200, body: sample_scroll_response.to_s,
425
+ headers: {'Content-Type' => 'application/json'})
426
+ connection = 0
427
+ scroll_request = stub_request(:get, "http://localhost:9200/_search/scroll?scroll=1m").
428
+ with(
429
+ body: "{\"scroll_id\":\"WomkoUKG0QPB679Ulo6TqQgh3pIGRUmrl9qXXGK3EeiQh9rbYNasTkspZQcJ01uz\"}") do
430
+ connection += 1
431
+ end
432
+ scroll_request.to_return(lambda do |req|
433
+ if connection <= 1
434
+ {status: 200, body: sample_scroll_response_2.to_s,
435
+ headers: {'Content-Type' => 'application/json'}}
436
+ else
437
+ {status: 200, body: sample_scroll_response_terminate.to_s,
438
+ headers: {'Content-Type' => 'application/json'}}
439
+ end
440
+ end)
441
+ stub_request(:delete, "http://localhost:9200/_search/scroll/WomkoUKG0QPB679Ulo6TqQgh3pIGRUmrl9qXXGK3EeiQh9rbYNasTkspZQcJ01uz").
442
+ to_return(status: 200, body: "", headers: {})
443
+
444
+ driver(CONFIG + %[size 1])
445
+ driver.run(expect_emits: 1, timeout: 10)
446
+ expected = [
447
+ {"message"=>"Hi from Fluentd!", "@timestamp"=>"2019-11-14T16:45:10.559841000+09:00"},
448
+ {"message"=>"Yaaaaaaay from Fluentd!", "@timestamp"=>"2019-11-14T15:49:41.112023000+09:00"}
449
+ ]
450
+ events = driver.events.map{|e| e.last}
451
+ assert_equal expected, events
452
+ end
453
+
454
+ end
metadata CHANGED
@@ -1,15 +1,16 @@
  --- !ruby/object:Gem::Specification
  name: fluent-plugin-elasticsearch
  version: !ruby/object:Gem::Version
- version: 3.6.1
+ version: 3.7.0
  platform: ruby
  authors:
  - diogo
  - pitr
+ - Hiroshi Hatake
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2019-11-11 00:00:00.000000000 Z
+ date: 2019-11-18 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: fluentd
@@ -127,6 +128,7 @@ description: Elasticsearch output plugin for Fluent event collector
  email:
  - pitr.vern@gmail.com
  - me@diogoterror.com
+ - cosmo0920.wp@gmail.com
  executables: []
  extensions: []
  extra_rdoc_files: []
@@ -141,6 +143,7 @@ files:
  - ISSUE_TEMPLATE.md
  - LICENSE.txt
  - PULL_REQUEST_TEMPLATE.md
+ - README.ElasticsearchInput.md
  - README.md
  - Rakefile
  - appveyor.yml
@@ -155,6 +158,7 @@ files:
  - lib/fluent/plugin/elasticsearch_index_template.rb
  - lib/fluent/plugin/elasticsearch_simple_sniffer.rb
  - lib/fluent/plugin/filter_elasticsearch_genid.rb
+ - lib/fluent/plugin/in_elasticsearch.rb
  - lib/fluent/plugin/oj_serializer.rb
  - lib/fluent/plugin/out_elasticsearch.rb
  - lib/fluent/plugin/out_elasticsearch_dynamic.rb
@@ -163,6 +167,7 @@ files:
  - test/plugin/test_elasticsearch_error_handler.rb
  - test/plugin/test_elasticsearch_index_lifecycle_management.rb
  - test/plugin/test_filter_elasticsearch_genid.rb
+ - test/plugin/test_in_elasticsearch.rb
  - test/plugin/test_out_elasticsearch.rb
  - test/plugin/test_out_elasticsearch_dynamic.rb
  - test/plugin/test_template.json
@@ -197,6 +202,7 @@ test_files:
  - test/plugin/test_elasticsearch_error_handler.rb
  - test/plugin/test_elasticsearch_index_lifecycle_management.rb
  - test/plugin/test_filter_elasticsearch_genid.rb
+ - test/plugin/test_in_elasticsearch.rb
  - test/plugin/test_out_elasticsearch.rb
  - test/plugin/test_out_elasticsearch_dynamic.rb
  - test/plugin/test_template.json