fluent-plugin-elasticsearch 1.18.2 → 2.0.0.rc.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
- SHA256:
-   metadata.gz: ef82d990d532395ac70cfaca633811ecfec6e9277f02544a2fc73402aea79bd9
-   data.tar.gz: 30beccac0549c496631ba4943c2aae0161db08bb55966f38c1554b8c493beb12
+ SHA1:
+   metadata.gz: 4c847d28fe74087b06966351eb6fd5f9e88695e2
+   data.tar.gz: 989fcbfff8006eb3275522d2ed7c7e2cd92ef5e2
  SHA512:
-   metadata.gz: a7aaed653bb448e943265718258e9f92e2cb17fb778684483e58bac51c7a4175821aec96da2b6956ccf0f1b2e5c81775672f0c9ad34120e86b6e1e5f7b71aff2
-   data.tar.gz: 37a07ba5b227ce441396984abbedbafd6005d709df2b1489b39fcd5cdbb4e4aee820104b0cf99c944990133a227daa2ccde2b0a0d5f27a07929aa112e980d069
+   metadata.gz: 0a2e7f76258f14ca000658b453958ef118f33456fd3bff1fd15391a4710052e48672f8fdb67c271d28156aebd2e93b01f0388466c569d224f5e8f65d8e4c98e8
+   data.tar.gz: fe7283edc63facf368b6344f4686ce3d0e8957297d09b0d6c22cc701156586ac3254dfed45dbea5007dd0a6792f0b84059e0a7e37705ff948fe171b838bb3365
data/.gitignore CHANGED
@@ -16,4 +16,3 @@ test/tmp
  test/version_tmp
  tmp
  .DS_Store
- vendor/
data/.travis.yml CHANGED
@@ -7,7 +7,6 @@ rvm:
 
  gemfile:
  - Gemfile
- - Gemfile.v0.12
 
  script: bundle exec rake test
  sudo: false
data/History.md CHANGED
@@ -1,99 +1,11 @@
  ## Changelog [[tags]](https://github.com/uken/fluent-plugin-elasticsearch/tags)
 
  ### [Unreleased]
+ - Log ES response errors (#230)
+ - Use latest elasticsearch-ruby (#240)
 
- ### 1.18.2
- - Retry upsert on recoverable error. (porting to v0.12) #667 (#684)
-
- ### 1.18.1
- - add new option to suppress doc wrapping (porting to v0.12) (#558)
-
- ### 1.18.0
- - Avoid NoMethodError on unknown Elasticsearch error responses (#487)
-
- ### 1.17.2
- - add simple sniffer for simple proxy/lb cases (#459)
-
- ### 1.17.1
- - backport strictness-scheme (#447)
-
- ### 1.17.0
- - Fix #434 bulk count (#437)
-
- ### 1.16.2
- - add trace logging to send_bulk (#435)
-
- ### 1.16.1
- - allow configure of retry_tag so messages can be routed through a different pipeline (#419)
- - fix #417. emit_error_event using an exception (#418)
-
- ### 1.16.0
- - evaluate bulk request failures and reroute failed messages (#405)
-
- ### 1.15.2
- - handle case where stats not processed in order; add testing (#410)
-
- ### 1.15.1
- - successful operation if all duplicates (#406)
-
- ### 1.15.0
- - revert dlq to use router.emit_error_event instead (#402)
- - Don't log full response on error (#399)
-
- ### 1.14.0
- - introduce dead letter queue to handle issues unpacking file buffer chunks (#398)
-
- ### 1.13.4
- - backport auth: Fix missing auth tokens after reloading connections (#397)
-
- ### 1.13.3
- - backport removing outdated generating hash id support module (#374)
-
- ### 1.13.2
- - backport preventing error when using template in elasticsearch_dynamic for elementally use case (#364)
-
- ### 1.13.1
- - backport adding config parameter to enable elasticsearch-ruby's transporter logging (#343)
-
- ### 1.13.0
- - Backport allowing to overwrite existing index template (#336)
-
- ### 1.12.0
- - GA release 1.12.0.
-
- ### 1.12.0.rc.1
- - Backport separating generate hash id module and bundled new plugin for generating unique hash id (#331)
-
- ### 1.11.1
- - Raise ConfigError when specifying different @hash_config.hash_id_key and id_key configration (#326)
- - backport small typo fix in README.md (#328)
-
- ### 1.11.0
- - backport adding bulk errors handling (#324)
-
- ### 1.10.3
- - releasing generating hash id mechanism to avoid records duplication backport (#323)
-
- ### 1.10.3.rc.1
- - backport Add generating hash id mechanism to avoid records duplication (#323)
-
- ### 1.10.2
- - backport adding `include_timestamp` option (#311)
-
- ### 1.10.1
- - backport escaping basic authentication user information placeholders (#309)
- - backport handling dynamic config misconfiguration (#308)
-
- ### 1.10.0
- - backport adding `logstash_prefix_separator` parameter fix
- - backport making configuraable SSL/TLS version (#300)
- - bump up minimum required Fluentd version to v0.12.10 due to use enum parameter type
-
- ### 1.9.7
- - fix license identifier in gemspec (#295)
-
- ### 1.9.6
- - add pipeline parameter (#266)
+ ### 2.0.0.rc.1
+ - Use v0.14 API to support nanosecond precision (#223)
 
  ### 1.9.5
  - sub-second time precision [(#249)](https://github.com/uken/fluent-plugin-elasticsearch/pull/249)
@@ -101,10 +13,6 @@
  ### 1.9.4
  - Include 'Content-Type' header in `transport_options`
 
- ### 1.9.3
- - Use latest elasticsearch-ruby (#240)
- - Log ES response errors (#230)
-
  ### 1.9.2
  - Fix elasticsearch_dynamic for v0.14 (#224)
 
data/README.md CHANGED
@@ -7,33 +7,30 @@
  [![Issue Stats](http://issuestats.com/github/uken/fluent-plugin-elasticsearch/badge/pr)](http://issuestats.com/github/uken/fluent-plugin-elasticsearch)
  [![Issue Stats](http://issuestats.com/github/uken/fluent-plugin-elasticsearch/badge/issue)](http://issuestats.com/github/uken/fluent-plugin-elasticsearch)
 
- Send your logs to Elasticsearch (and search them with Kibana maybe?)
+ Send your logs to ElasticSearch (and search them with Kibana maybe?)
 
  Note: For Amazon Elasticsearch Service please consider using [fluent-plugin-aws-elasticsearch-service](https://github.com/atomita/fluent-plugin-aws-elasticsearch-service)
 
+ Current maintainers: @cosmo0920
+
  * [Installation](#installation)
  * [Usage](#usage)
  + [Index templates](#index-templates)
  * [Configuration](#configuration)
- + [emit_error_for_missing_id](#emit_error_for_missing_id)
  + [hosts](#hosts)
  + [user, password, path, scheme, ssl_verify](#user-password-path-scheme-ssl_verify)
  + [logstash_format](#logstash_format)
  + [logstash_prefix](#logstash_prefix)
- + [logstash_prefix_separator](#logstash_prefix_separator)
  + [logstash_dateformat](#logstash_dateformat)
- + [pipeline](#pipeline)
  + [time_key_format](#time_key_format)
  + [time_precision](#time_precision)
  + [time_key](#time_key)
  + [time_key_exclude_timestamp](#time_key_exclude_timestamp)
- + [include_timestamp](#time_key_exclude_timestamp)
  + [utc_index](#utc_index)
  + [target_index_key](#target_index_key)
  + [target_type_key](#target_type_key)
  + [template_name](#template_name)
  + [template_file](#template_file)
- + [template_overwrite](#template_overwrite)
  + [templates](#templates)
  + [request_timeout](#request_timeout)
  + [reload_connections](#reload_connections)
@@ -46,21 +43,15 @@ Note: For Amazon Elasticsearch Service please consider using [fluent-plugin-aws-
  + [remove_keys](#remove_keys)
  + [remove_keys_on_update](#remove_keys_on_update)
  + [remove_keys_on_update_key](#remove_keys_on_update_key)
- + [retry_tag](#retry_tag)
  + [write_operation](#write_operation)
  + [time_parse_error_tag](#time_parse_error_tag)
  + [reconnect_on_error](#reconnect_on_error)
- + [with_transporter_log](#with_transporter_log)
  + [Client/host certificate options](#clienthost-certificate-options)
  + [Proxy Support](#proxy-support)
  + [Buffered output options](#buffered-output-options)
  + [Hash flattening](#hash-flattening)
- + [Generate Hash ID](#generate-hash-id)
- + [sniffer_class_name](#sniffer_class_name)
- + [reload_after](#reload_after)
  + [Not seeing a config you need?](#not-seeing-a-config-you-need)
  + [Dynamic configuration](#dynamic-configuration)
- + [suppress_doc_wrap](#suppress_doc_wrap)
  * [Contact](#contact)
  * [Contributing](#contributing)
  * [Running tests](#running-tests)
@@ -87,30 +78,21 @@ In your Fluentd configuration, use `@type elasticsearch`. Additional configurati
 
  ### Index templates
 
- This plugin creates Elasticsearch indices by merely writing to them. Consider using [Index Templates](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html) to gain control of what get indexed and how. See [this example](https://github.com/uken/fluent-plugin-elasticsearch/issues/33#issuecomment-38693282) for a good starting point.
+ This plugin creates ElasticSearch indices by merely writing to them. Consider using [Index Templates](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html) to gain control of what get indexed and how. See [this example](https://github.com/uken/fluent-plugin-elasticsearch/issues/33#issuecomment-38693282) for a good starting point.
 
  ## Configuration
 
- ### emit_error_for_missing_id
-
- ```
- emit_error_for_missing_id true
- ```
- When `write_operation` is configured to anything other then `index`, setting this value to `true` will
- cause the plugin to `emit_error_event` of any records which do not include an `_id` field. The default (`false`)
- behavior is to silently drop the records.
-
  ### hosts
 
  ```
  hosts host1:port1,host2:port2,host3:port3
+ # or
+ hosts https://customhost.com:443/path,https://username:password@host-failover.com:443
  ```
 
- You can specify multiple Elasticsearch hosts with separator ",".
-
- If you specify multiple hosts, this plugin will load balance updates to Elasticsearch. This is an [elasticsearch-ruby](https://github.com/elasticsearch/elasticsearch-ruby) feature, the default strategy is round-robin.
+ You can specify multiple ElasticSearch hosts with separator ",".
 
- **Note:** Up until v1.13.3, it was allowed to embed the username/password in the URL. However, this syntax is deprecated as of v1.13.4 because it was found to cause serious connection problems (See #394). Please migrate your settings to use the `user` and `password` field (described below) instead.
+ If you specify multiple hosts, this plugin will load balance updates to ElasticSearch. This is an [elasticsearch-ruby](https://github.com/elasticsearch/elasticsearch-ruby) feature, the default strategy is round-robin.
 
  ### user, password, path, scheme, ssl_verify
 
@@ -123,14 +105,7 @@ path /elastic_search/
  scheme https
  ```
 
- You can specify user and password for HTTP Basic authentication.
-
- And this plugin will escape required URL encoded characters within `%{}` placeholders.
-
- ```
- user %{demo+}
- password %{@secret}
- ```
+ You can specify user and password for HTTP basic auth. If used in conjunction with a hosts list, then these options will be used by default i.e. if you do not provide any of these options within the hosts listed.
 
  Specify `ssl_verify false` to skip ssl verification (defaults to true)
 
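For illustration, a minimal sketch of the fallback behavior the new text describes: credentials embedded in a host URL take precedence, and `user`/`password` cover any host that omits them. All hostnames and credentials here are placeholders:

```
<match my.logs>
  @type elasticsearch
  # es-writer embeds its own credentials; es-standby falls back to user/password below
  hosts https://writer:secret@es-writer.example.com:9200,https://es-standby.example.com:9200
  user fallback_user
  password fallback_pass
</match>
```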
@@ -140,15 +115,7 @@ Specify `ssl_verify false` to skip ssl verification (defaults to true)
  logstash_format true # defaults to false
  ```
 
- This is meant to make writing data into Elasticsearch indices compatible to what [Logstash](https://www.elastic.co/products/logstash) calls them. By doing this, one could take advantage of [Kibana](https://www.elastic.co/products/kibana). See logstash_prefix and logstash_dateformat to customize this index name pattern. The index name will be `#{logstash_prefix}-#{formated_date}`
-
- ### include_timestamp
-
- ```
- include_timestamp true # defaults to false
- ```
-
- Adds a `@timestamp` field to the log, following all settings `logstash_format` does, except without the restrictions on `index_name`. This allows one to log to an alias in Elasticsearch and utilize the rollover API.
+ This is meant to make writing data into ElasticSearch indices compatible to what [Logstash](https://www.elastic.co/products/logstash) calls them. By doing this, one could take advantage of [Kibana](https://www.elastic.co/products/kibana). See logstash_prefix and logstash_dateformat to customize this index name pattern. The index name will be `#{logstash_prefix}-#{formated_date}`
 
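To make the resulting index name concrete, a sketch assuming the default `logstash_dateformat` of `%Y.%m.%d` (the prefix and date are illustrative):

```
logstash_format true
logstash_prefix mylogs
# a record written on 2017-05-20 lands in index "mylogs-2017.05.20"
```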
  ### logstash_prefix
 
@@ -156,12 +123,6 @@ Adds a `@timestamp` field to the log, following all settings `logstash_format` d
  logstash_prefix mylogs # defaults to "logstash"
  ```
 
- ### logstash_prefix_separator
-
- ```
- logstash_prefix_separator _ # defaults to "-"
- ```
-
  ### logstash_dateformat
 
  The strftime format to generate index target index name when `logstash_format` is set to true. By default, the records are inserted into index `logstash-YYYY.MM.DD`. This option, alongwith `logstash_prefix` lets us insert into specified index like `mylogs-YYYYMM` for a monthly index.
@@ -170,16 +131,6 @@ The strftime format to generate index target index name when `logstash_format` i
  logstash_dateformat %Y.%m. # defaults to "%Y.%m.%d"
  ```
 
- ### pipeline
-
- Only in ES >= 5.x is available to use this parameter.
- This param is to set a pipeline id of your elasticsearch to be added into the request, you can configure ingest node.
- For more information: [![Ingest node](https://www.elastic.co/guide/en/elasticsearch/reference/master/ingest.html)]
-
- ```
- pipeline pipeline_id
- ```
-
  ### time_key_format
 
  The format of the time stamp field (`@timestamp` or what you specify with [time_key](#time_key)). This parameter only has an effect when [logstash_format](#logstash_format) is true as it only affects the name of the index we write to. Please see [Time#strftime](http://ruby-doc.org/core-1.9.3/Time.html#method-i-strftime) for information about the value of this format.
@@ -289,7 +240,7 @@ Similar to `target_index_key` config, find the type name to write to in the reco
 
  ### template_name
 
- The name of the template to define. If a template by the name given is already present, it will be left unchanged, unless [template_overwrite](#template_overwrite) is set, in which case the template will be updated.
+ The name of the template to define. If a template by the name given is already present, it will be left unchanged.
 
  This parameter along with template_file allow the plugin to behave similarly to Logstash (it installs a template at creation time) so that raw records are available. See [https://github.com/uken/fluent-plugin-elasticsearch/issues/33](https://github.com/uken/fluent-plugin-elasticsearch/issues/33).
 
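A sketch of how `template_name` and `template_file` pair up in practice — the name and path are placeholders:

```
<match my.logs>
  @type elasticsearch
  template_name my_template        # name under which the template is registered
  template_file /path/to/my_template.json
</match>
```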
@@ -311,21 +262,11 @@ templates { "templane_name_1": "path_to_template_1_file", "templane_name_2": "pa
 
  If `template_file` and `template_name` are set, then this parameter will be ignored.
 
- ### template_overwrite
-
- Always update the template, even if it already exists.
-
- ```
- template_overwrite true # defaults to false
- ```
-
- One of [template_file](#template_file) or [templates](#templates) must also be specified if this is set.
-
  ### request_timeout
 
  You can specify HTTP request timeout.
 
- This is useful when Elasticsearch cannot return response for bulk request within the default of 5 seconds.
+ This is useful when ElasticSearch cannot return response for bulk request within the default of 5 seconds.
 
  ```
  request_timeout 15s # defaults to 5s
@@ -333,7 +274,7 @@ request_timeout 15s # defaults to 5s
 
  ### reload_connections
 
- You can tune how the elasticsearch-transport host reloading feature works. By default it will reload the host list from the server every 10,000th request to spread the load. This can be an issue if your Elasticsearch cluster is behind a Reverse Proxy, as Fluentd process may not have direct network access to the Elasticsearch nodes.
+ You can tune how the elasticsearch-transport host reloading feature works. By default it will reload the host list from the server every 10,000th request to spread the load. This can be an issue if your ElasticSearch cluster is behind a Reverse Proxy, as Fluentd process may not have direct network access to the ElasticSearch nodes.
 
  ```
  reload_connections false # defaults to true
@@ -353,7 +294,7 @@ reload_on_failure true # defaults to false
  You can set in the elasticsearch-transport how often dead connections from the elasticsearch-transport's pool will be resurrected.
 
  ```
- resurrect_after 5s # defaults to 60s
+ resurrect_after 5 # defaults to 60s
  ```
 
  ### include_tag_key, tag_key
@@ -373,7 +314,7 @@ This will add the Fluentd tag in the JSON record. For instance, if you have a co
  </match>
  ```
 
- The record inserted into Elasticsearch would be
+ The record inserted into ElasticSearch would be
 
  ```
  {"_key":"my.logs", "name":"Johnny Doeie"}
@@ -385,9 +326,9 @@ The record inserted into Elasticsearch would be
  id_key request_id # use "request_id" field as a record id in ES
  ```
 
- By default, all records inserted into Elasticsearch get a random _id. This option allows to use a field in the record as an identifier.
+ By default, all records inserted into ElasticSearch get a random _id. This option allows to use a field in the record as an identifier.
 
- This following record `{"name":"Johnny","request_id":"87d89af7daffad6"}` will trigger the following Elasticsearch command
+ This following record `{"name":"Johnny","request_id":"87d89af7daffad6"}` will trigger the following ElasticSearch command
 
  ```
  { "index" : { "_index" : "logstash-2013.01.01, "_type" : "fluentd", "_id" : "87d89af7daffad6" } }
@@ -405,7 +346,7 @@ If your input is
  { "name": "Johnny", "a_parent": "my_parent" }
  ```
 
- Elasticsearch command would be
+ ElasticSearch command would be
 
  ```
  { "index" : { "_index" : "****", "_type" : "****", "_id" : "****", "_parent" : "my_parent" } }
@@ -449,21 +390,6 @@ present in the record then the keys in record are used, if the `remove_keys_on_u
  remove_keys_on_update_key keys_to_skip
  ```
 
- ### retry_tag
-
- This setting allows custom routing of messages in response to bulk request failures. The default behavior is to emit
- failed records using the same tag that was provided. When set to a value other then `nil`, failed messages are emitted
- with the specified tag:
-
- ```
- retry_tag 'retry_es'
- ```
- **NOTE:** `retry_tag` is optional. If you would rather use labels to reroute retries, add a label (e.g '@label @SOMELABEL') to your fluent
- elasticsearch plugin configuration. Retry records are, by default, submitted for retry to the ROOT label, which means
- records will flow through your fluentd pipeline from the beginning. This may nor may not be a problem if the pipeline
- is idempotent - that is - you can process a record again with no changes. Use tagging or labeling to ensure your retry
- records are not processed again by your fluentd processing pipeline.
-
  ### write_operation
 
  The write_operation can be any of:
@@ -492,36 +418,20 @@ We recommended to set this true in the presence of elasticsearch shield.
  reconnect_on_error true # defaults to false
  ```
 
- ### with_transporter_log
-
- This is debugging purpose option to enable to obtain transporter layer log.
- Default value is `false` for backward compatibility.
-
- We recommend to set this true if you start to debug this plugin.
-
- ```
- with_transporter_log true
- ```
-
  ### Client/host certificate options
 
- Need to verify Elasticsearch's certificate? You can use the following parameter to specify a CA instead of using an environment variable.
+ Need to verify ElasticSearch's certificate? You can use the following parameter to specify a CA instead of using an environment variable.
  ```
  ca_file /path/to/your/ca/cert
  ```
 
- Does your Elasticsearch cluster want to verify client connections? You can specify the following parameters to use your client certificate, key, and key password for your connection.
+ Does your ElasticSearch cluster want to verify client connections? You can specify the following parameters to use your client certificate, key, and key password for your connection.
  ```
  client_cert /path/to/your/client/cert
  client_key /path/to/your/private/key
  client_key_pass password
  ```
 
- If you want to configure SSL/TLS version, you can specify ssl_version parameter.
- ```
- ssl_version TLSv1_2 # or [SSLv23, TLSv1, TLSv1_1]
- ```
-
  ### Proxy Support
 
  Starting with version 0.8.0, this gem uses excon, which supports proxy with environment variables - https://github.com/excon/excon#proxy-support
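Since excon reads the standard proxy environment variables, routing the plugin's traffic through a proxy can be as simple as the following sketch (the proxy URL is a placeholder):

```
HTTPS_PROXY=https://proxy.example.com:3128 fluentd -c fluent.conf
```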
@@ -532,7 +442,7 @@ Starting with version 0.8.0, this gem uses excon, which supports proxy with envi
 
  ```
  buffer_type memory
- flush_interval 60s
+ flush_interval 60
  retry_limit 17
  retry_wait 1.0
  num_threads 1
@@ -559,54 +469,11 @@ This will produce elasticsearch output that looks like this:
 
  Note that the flattener does not deal with arrays at this time.
 
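To illustrate the flattening, a sketch assuming the separator is set to `_` (the record is invented):

```
flatten_hashes true
flatten_hashes_separator _
```

With these settings, a nested record such as `{"event": {"user": {"name": "Johnny"}}}` would be indexed flat, as `{"event_user_name": "Johnny"}`.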
- ### Generate Hash ID
-
- By default, the fluentd elasticsearch plugin does not emit records with a _id field, leaving it to Elasticsearch to generate a unique _id as the record is indexed. When an Elasticsearch cluster is congested and begins to take longer to respond than the configured request_timeout, the fluentd elasticsearch plugin will re-send the same bulk request. Since Elasticsearch can't tell its actually the same request, all documents in the request are indexed again resulting in duplicate data. In certain scenarios, this can result in essentially and infinite loop generating multiple copies of the same data.
-
- The bundled elasticsearch_genid filter can generate a unique _hash key for each record, this key may be passed to the id_key parameter in the elasticsearch plugin to communicate to Elasticsearch the uniqueness of the requests so that duplicates will be rejected or simply replace the existing records.
- Here is a sample config:
-
- ```
- <filter **>
- @type elasticsearch_genid
- hash_id_key _hash # storing generated hash id key (default is _hash)
- </filter>
- <match **>
- @type elasticsearch
- id_key _hash # specify same key name which is specified in hash_id_key
- remove_keys _hash # Elasticsearch doesn't like keys that start with _
- # other settings are ommitted.
- </match>
- ```
-
- ### Sniffer Class Name
-
- The default Sniffer used by the `Elasticsearch::Transport` class works well when Fluentd has a direct connection
- to all of the Elasticsearch servers and can make effective use of the `_nodes` API. This doesn't work well
- when Fluentd must connect through a load balancer or proxy. The parameter `sniffer_class_name` gives you the
- ability to provide your own Sniffer class to implement whatever connection reload logic you require. In addition,
- there is a new `Fluent::ElasticsearchSimpleSniffer` class which reuses the hosts given in the configuration, which
- is typically the hostname of the load balancer or proxy. For example, a configuration like this would cause
- connections to `logging-es` to reload every 100 operations:
-
- ```
- host logging-es
- port 9200
- reload_connections true
- sniffer_class_name Fluent::ElasticsearchSimpleSniffer
- reload_after 100
- ```
-
- ### Reload After
-
- When `reload_connections true`, this is the integer number of operations after which the plugin will
- reload the connections. The default value is 10000.
-
  ### Not seeing a config you need?
 
  We try to keep the scope of this plugin small and not add too many configuration options. If you think an option would be useful to others, feel free to open an issue or contribute a Pull Request.
 
- Alternatively, consider using [fluent-plugin-forest](https://github.com/tagomoris/fluent-plugin-forest). For example, to configure multiple tags to be sent to different Elasticsearch indices:
+ Alternatively, consider using [fluent-plugin-forest](https://github.com/tagomoris/fluent-plugin-forest). For example, to configure multiple tags to be sent to different ElasticSearch indices:
 
  ```
  <match my.logs.*>
@@ -624,7 +491,7 @@ And yet another option is described in Dynamic Configuration section.
 
  ### Dynamic configuration
 
- If you want configurations to depend on information in messages, you can use `elasticsearch_dynamic`. This is an experimental variation of the Elasticsearch plugin allows configuration values to be specified in ways such as the below:
+ If you want configurations to depend on information in messages, you can use `elasticsearch_dynamic`. This is an experimental variation of the ElasticSearch plugin allows configuration values to be specified in ways such as the below:
 
  ```
  <match my.logs.*>
@@ -639,12 +506,6 @@ If you want configurations to depend on information in messages, you can use `el
 
  **Please note, this uses Ruby's `eval` for every message, so there are performance and security implications.**
 
- ### suppress_doc_wrap
-
- By default, record body is wrapped by 'doc'. This behavior can not handle update script requests. You can set this to suppress doc wrapping and allow record body to be untouched.
-
- Default value is `false`.
-
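Per the removed text above, on the 1.x line this option was a simple boolean; a sketch of how it would have been enabled:

```
suppress_doc_wrap true
```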
  ## Contact
 
  If you have a question, [open an Issue](https://github.com/uken/fluent-plugin-elasticsearch/issues).