fluent-plugin-elasticsearch 4.1.1 → 5.4.3

Files changed (45)
  1. checksums.yaml +4 -4
  2. data/.github/ISSUE_TEMPLATE/bug_report.md +37 -0
  3. data/.github/ISSUE_TEMPLATE/feature_request.md +24 -0
  4. data/.github/dependabot.yml +6 -0
  5. data/.github/workflows/issue-auto-closer.yml +2 -2
  6. data/.github/workflows/linux.yml +5 -2
  7. data/.github/workflows/macos.yml +5 -2
  8. data/.github/workflows/windows.yml +5 -2
  9. data/Gemfile +1 -2
  10. data/History.md +146 -0
  11. data/README.ElasticsearchGenID.md +4 -4
  12. data/README.ElasticsearchInput.md +1 -1
  13. data/README.Troubleshooting.md +692 -0
  14. data/README.md +260 -550
  15. data/fluent-plugin-elasticsearch.gemspec +4 -1
  16. data/lib/fluent/plugin/elasticsearch_compat.rb +31 -0
  17. data/lib/fluent/plugin/elasticsearch_error_handler.rb +19 -4
  18. data/lib/fluent/plugin/elasticsearch_fallback_selector.rb +2 -2
  19. data/lib/fluent/plugin/elasticsearch_index_lifecycle_management.rb +18 -4
  20. data/lib/fluent/plugin/elasticsearch_index_template.rb +65 -21
  21. data/lib/fluent/plugin/elasticsearch_simple_sniffer.rb +2 -1
  22. data/lib/fluent/plugin/filter_elasticsearch_genid.rb +1 -1
  23. data/lib/fluent/plugin/in_elasticsearch.rb +8 -2
  24. data/lib/fluent/plugin/oj_serializer.rb +2 -1
  25. data/lib/fluent/plugin/out_elasticsearch.rb +192 -36
  26. data/lib/fluent/plugin/out_elasticsearch_data_stream.rb +298 -0
  27. data/lib/fluent/plugin/out_elasticsearch_dynamic.rb +3 -1
  28. data/test/helper.rb +0 -4
  29. data/test/plugin/mock_chunk.dat +0 -0
  30. data/test/plugin/test_elasticsearch_error_handler.rb +130 -23
  31. data/test/plugin/test_elasticsearch_fallback_selector.rb +17 -8
  32. data/test/plugin/test_elasticsearch_index_lifecycle_management.rb +57 -18
  33. data/test/plugin/test_elasticsearch_tls.rb +8 -2
  34. data/test/plugin/test_filter_elasticsearch_genid.rb +16 -16
  35. data/test/plugin/test_in_elasticsearch.rb +51 -21
  36. data/test/plugin/test_index_alias_template.json +11 -0
  37. data/test/plugin/test_index_template.json +25 -0
  38. data/test/plugin/test_out_elasticsearch.rb +2118 -704
  39. data/test/plugin/test_out_elasticsearch_data_stream.rb +1199 -0
  40. data/test/plugin/test_out_elasticsearch_dynamic.rb +170 -31
  41. metadata +62 -10
  42. data/.coveralls.yml +0 -2
  43. data/.travis.yml +0 -44
  44. data/appveyor.yml +0 -20
  45. data/gemfiles/Gemfile.without.ilm +0 -10
data/README.md CHANGED
@@ -4,14 +4,13 @@
  ![Testing on Windows](https://github.com/uken/fluent-plugin-elasticsearch/workflows/Testing%20on%20Windows/badge.svg?branch=master)
  ![Testing on macOS](https://github.com/uken/fluent-plugin-elasticsearch/workflows/Testing%20on%20macOS/badge.svg?branch=master)
  ![Testing on Ubuntu](https://github.com/uken/fluent-plugin-elasticsearch/workflows/Testing%20on%20Ubuntu/badge.svg?branch=master)
- [![Coverage Status](https://coveralls.io/repos/uken/fluent-plugin-elasticsearch/badge.png)](https://coveralls.io/r/uken/fluent-plugin-elasticsearch)
  [![Code Climate](https://codeclimate.com/github/uken/fluent-plugin-elasticsearch.png)](https://codeclimate.com/github/uken/fluent-plugin-elasticsearch)

  Send your logs to Elasticsearch (and search them with Kibana maybe?)

  Note: For Amazon Elasticsearch Service please consider using [fluent-plugin-aws-elasticsearch-service](https://github.com/atomita/fluent-plugin-aws-elasticsearch-service)

- Current maintainers: @cosmo0920
+ Current maintainers: [Hiroshi Hatake | @cosmo0920](https://github.com/cosmo0920), [Kentaro Hayashi | @kenhys](https://github.com/kenhys)

  * [Installation](#installation)
  * [Usage](#usage)
@@ -19,6 +18,8 @@ Current maintainers: @cosmo0920
  * [Configuration](#configuration)
  + [host](#host)
  + [port](#port)
+ + [cloud_id](#cloud_id)
+ + [cloud_auth](#cloud_auth)
  + [emit_error_for_missing_id](#emit_error_for_missing_id)
  + [hosts](#hosts)
  + [user, password, path, scheme, ssl_verify](#user-password-path-scheme-ssl_verify)
@@ -36,6 +37,7 @@ Current maintainers: @cosmo0920
  + [suppress_type_name](#suppress_type_name)
  + [target_index_key](#target_index_key)
  + [target_type_key](#target_type_key)
+ + [target_index_affinity](#target_index_affinity)
  + [template_name](#template_name)
  + [template_file](#template_file)
  + [template_overwrite](#template_overwrite)
@@ -85,6 +87,7 @@ Current maintainers: @cosmo0920
  + [verify_es version at startup](#verify_es_version_at_startup)
  + [default_elasticsearch_version](#default_elasticsearch_version)
  + [custom_headers](#custom_headers)
+ + [api_key](#api_key)
  + [Not seeing a config you need?](#not-seeing-a-config-you-need)
  + [Dynamic configuration](#dynamic-configuration)
  + [Placeholders](#placeholders)
@@ -100,31 +103,29 @@ Current maintainers: @cosmo0920
  + [ilm_policies](#ilm_policies)
  + [ilm_policy_overwrite](#ilm_policy_overwrite)
  + [truncate_caches_interval](#truncate_caches_interval)
+ + [use_legacy_template](#use_legacy_template)
+ + [metadata section](#metadata-section)
+ + [include_chunk_id](#include_chunk_id)
+ + [chunk_id_key](#chunk_id_key)
  * [Configuration - Elasticsearch Input](#configuration---elasticsearch-input)
  * [Configuration - Elasticsearch Filter GenID](#configuration---elasticsearch-filter-genid)
+ * [Configuration - Elasticsearch Output Data Stream](#configuration---elasticsearch-output-data-stream)
  * [Elasticsearch permissions](#elasticsearch-permissions)
  * [Troubleshooting](#troubleshooting)
- + [Cannot send events to elasticsearch](#cannot-send-events-to-elasticsearch)
- + [Cannot see detailed failure log](#cannot-see-detailed-failure-log)
- + [Cannot connect TLS enabled reverse Proxy](#cannot-connect-tls-enabled-reverse-proxy)
- + [Declined logs are resubmitted forever, why?](#declined-logs-are-resubmitted-forever-why)
- + [Suggested to install typhoeus gem, why?](#suggested-to-install-typhoeus-gem-why)
- + [Stopped to send events on k8s, why?](#stopped-to-send-events-on-k8s-why)
- + [Random 400 - Rejected by Elasticsearch is occured, why?](#random-400---rejected-by-elasticsearch-is-occured-why)
- + [Fluentd seems to hang if it unable to connect Elasticsearch, why?](#fluentd-seems-to-hang-if-it-unable-to-connect-elasticsearch-why)
- + [Enable Index Lifecycle Management](#enable-index-lifecycle-management)
- + [How to specify index codec](#how-to-specify-index-codec)
- + [Cannot push logs to Elasticsearch with connect_write timeout reached, why?](#cannot-push-logs-to-elasticsearch-with-connect_write-timeout-reached-why)
  * [Contact](#contact)
  * [Contributing](#contributing)
  * [Running tests](#running-tests)

  ## Requirements

- | fluent-plugin-elasticsearch | fluentd | ruby |
- |-------------------|---------|------|
- | >= 2.0.0 | >= v0.14.20 | >= 2.1 |
- | < 2.0.0 | >= v0.12.0 | >= 1.9 |
+ | fluent-plugin-elasticsearch | fluentd | ruby |
+ |:----------------------------:|:-----------:|:------:|
+ | >= 4.0.1 | >= v0.14.22 | >= 2.3 |
+ | >= 3.2.4 && < 4.0.1 | >= v0.14.22 | >= 2.1 |
+ | >= 2.0.0 && < 3.2.3 | >= v0.14.20 | >= 2.1 |
+ | < 2.0.0 | >= v0.12.0 | >= 1.9 |
+
+ NOTE: Since fluent-plugin-elasticsearch 5.3.0, it requires faraday 2.0 or later.

  NOTE: For v0.12 version, you should use 1.x.y version. Please send patch into v0.12 branch if you encountered 1.x version's bug.
 
@@ -172,6 +173,24 @@ You can specify Elasticsearch host by this parameter.

  **Note:** Since v3.3.2, `host` parameter supports builtin placeholders. If you want to send events dynamically into different hosts at runtime with `elasticsearch_dynamic` output plugin, please consider to switch to use plain `elasticsearch` output plugin. In more detail for builtin placeholders, please refer to [Placeholders](#placeholders) section.

+ To use an IPv6 address for the `host` parameter, you can use the following styles:
+
+ #### string style
+
+ To use the string style, you must quote the IPv6 address to prevent it from being interpreted as JSON:
+
+ ```
+ host "[2404:7a80:d440:3000:192a:a292:bd7f:ca10]"
+ ```
+
+ #### raw style
+
+ You can also specify a raw IPv6 address. It will be handled as `[specified IPv6 address]`:
+
+ ```
+ host 2404:7a80:d440:3000:192a:a292:bd7f:ca10
+ ```
+
  ### port

  ```
@@ -180,6 +199,26 @@ port 9201 # defaults to 9200

  You can specify Elasticsearch port by this parameter.

+ ### cloud_id
+
+ ```
+ cloud_id test-dep:ZXVyb3BlLXdlc3QxLmdjcC5jbG91ZC5lcy5pbyRiYZTA1Ng==
+ ```
+
+ You can specify Elasticsearch cloud_id by this parameter.
+
+ If you specify `cloud_id` option then `cloud_auth` option is required.
+ If you specify `cloud_id` option, `host`, `port`, `user` and `password` options are ignored.
+
+ ### cloud_auth
+
+ ```
+ cloud_auth 'elastic:slkjdaooewkd87iqQ2O8EQYV'
+ ```
+
+ You can specify Elasticsearch cloud_auth by this parameter.
+
+
  ### emit_error_for_missing_id

  ```
@@ -218,6 +257,16 @@ hosts host1:port1,host2:port2,host3 # port3 is 9200

  **Note:** Up until v2.8.5, it was allowed to embed the username/password in the URL. However, this syntax is deprecated as of v2.8.6 because it was found to cause serious connection problems (See #394). Please migrate your settings to use the `user` and `password` field (described below) instead.

+ #### IPv6 addresses
+
+ When you want to specify IPv6 addresses, you must also specify the scheme:
+
+ ```
+ hosts http://[2404:7a80:d440:3000:de:7311:6329:2e6c]:port1,http://[2404:7a80:d440:3000:de:7311:6329:1e6c]:port2,http://[2404:7a80:d440:3000:de:6311:6329:2e6c]:port3
+ ```
+
+ If you don't specify the hosts together with their scheme, the Elasticsearch plugin complains about an Invalid URI for them.
+
  ### user, password, path, scheme, ssl_verify

  ```
@@ -407,6 +456,75 @@ and this record will be written to the specified index (`logstash-2014.12.19`) r

  Similar to `target_index_key` config, find the type name to write to in the record under this key (or nested record). If key not found in record - fallback to `type_name` (default "fluentd").

+ ### target_index_affinity
+
+ Enable the plugin to dynamically select a logstash time-based target index in update/upsert operations, based on already indexed data rather than the current time of indexing.
+
+ ```
+ target_index_affinity true # defaults to false
+ ```
+
+ By default the plugin writes data to a logstash format index based on the current time. For example, with a daily index, data arriving after midnight is written to a newly created index. This is normally fine when data comes from a single source and is not updated after indexing.
+
+ But consider a use case where data is also updated after indexing, `id_key` is used to identify the document uniquely for updating, and the logstash format is wanted for easy data management and retention. Updates are done right after indexing to complete the data (not all data is available from a single source) and no further updates happen later. In this case a problem occurs at index rotation time, when writes to two indexes with the same `id_key` value may happen.
+
+ This setting will search existing data by using Elasticsearch's [id query](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-ids-query.html) with the `id_key` value (using the `logstash_prefix` and `logstash_prefix_separator` index pattern, e.g. `logstash-*`). The index of the found data is used for the update/upsert. When no data is found, data is written to the current logstash index as usual.
+
+ This setting requires the following other settings:
+ ```
+ logstash_format true
+ id_key myId # Some field on your data to identify the data uniquely
+ write_operation upsert # upsert or update
+ ```
+
+ Suppose you have the following situation where you have two different match sections consuming data from two different Kafka topics independently but close in time with each other (order not known).
+
+ ```
+ <match data1>
+ @type elasticsearch
+ ...
+ id_key myId
+ write_operation upsert
+ logstash_format true
+ logstash_dateformat %Y.%m.%d
+ logstash_prefix myindexprefix
+ target_index_affinity true
+ ...
+
+ <match data2>
+ @type elasticsearch
+ ...
+ id_key myId
+ write_operation upsert
+ logstash_format true
+ logstash_dateformat %Y.%m.%d
+ logstash_prefix myindexprefix
+ target_index_affinity true
+ ...
+ ```
+
+ If your first (data1) input is:
+ ```
+ {
+ "myId": "myuniqueId1",
+ "datafield1": "some value",
+ }
+ ```
+
+ and your second (data2) input is:
+ ```
+ {
+ "myId": "myuniqueId1",
+ "datafield99": "some important data from other source tightly related to id myuniqueId1 and wanted to be in same document.",
+ }
+ ```
+
+ Today's date is 10.05.2021, so data is written to index `myindexprefix-2021.05.10` when both data1 and data2 are consumed during today.
+ But when we are close to index rotation and data1 is consumed and indexed at `2021-05-10T23:59:55.59707672Z` and data2
+ is consumed a bit later at `2021-05-11T00:00:58.222079Z`, i.e. the logstash index has been rotated, normally data2 would have been written
+ to index `myindexprefix-2021.05.11`. But with the `target_index_affinity` setting set to true, data2 is now written to index `myindexprefix-2021.05.10`,
+ into the same document as data1 as wanted, and a duplicated document is avoided.
+
  ### template_name

  The name of the template to define. If a template by the name given is already present, it will be left unchanged, unless [template_overwrite](#template_overwrite) is set, in which case the template will be updated.
@@ -429,7 +547,7 @@ Specify index templates in form of hash. Can contain multiple templates.
  templates { "template_name_1": "path_to_template_1_file", "template_name_2": "path_to_template_2_file"}
  ```

- If `template_file` and `template_name` are set, then this parameter will be ignored.
+ **Note:** Before ES plugin v4.1.2, if `template_file` and `template_name` are set, then this parameter will be ignored. In 4.1.3 or later, `template_file` and `template_name` can work with `templates`.

  ### customize_template
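The note above states that, from 4.1.3 on, `templates` can be combined with `template_file`/`template_name`. A minimal sketch of such a combined configuration is shown below; the template names and file paths are hypothetical placeholders, not values taken from this repository.

```aconf
<match your.awesome.routing.tag>
  @type elasticsearch
  # One standalone template plus a hash of additional templates (4.1.3 or later).
  template_name fluentd-main                       # hypothetical name
  template_file /path/to/fluentd-main.json         # hypothetical path
  templates {"fluentd-extra-1":"/path/to/extra-1.json","fluentd-extra-2":"/path/to/extra-2.json"}
</match>
```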
 
@@ -494,7 +612,7 @@ Specify the application name for the rollover index to be created.
  application_name default # defaults to "default"
  ```

- If [enable_ilm](#enable_ilm is set, then this parameter will be in effect otherwise ignored.
+ If [enable_ilm](#enable_ilm) is set, then this parameter will be in effect otherwise ignored.

  ### template_overwrite
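Since `application_name` only takes effect when `enable_ilm` is set, a minimal hedged sketch of the two parameters used together follows; the index, application, and policy names are placeholders.

```aconf
<match your.awesome.routing.tag>
  @type elasticsearch
  index_name fluentd
  enable_ilm true            # without this, application_name below is ignored
  application_name myapp     # hypothetical application name
  ilm_policy_id fluentd-policy
</match>
```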
 
@@ -917,7 +1035,7 @@ Starting with version 0.8.0, this gem uses excon, which supports proxy with envi

  ### Buffer options

- `fluentd-plugin-elasticsearch` extends [Fluentd's builtin Output plugin](https://docs.fluentd.org/v0.14/articles/output-plugin-overview) and use `compat_parameters` plugin helper. It adds the following options:
+ `fluentd-plugin-elasticsearch` extends [Fluentd's builtin Output plugin](https://docs.fluentd.org/output#overview) and use `compat_parameters` plugin helper. It adds the following options:

  ```
  buffer_type memory
@@ -1062,11 +1180,11 @@ Advanced users can increase its capacity, but normal users should follow default

  If you want to increase it and forcibly retrying bulk request, please consider to change `unrecoverable_error_types` parameter from default value.

- Change default value of `thread_pool.bulk.queue_size` in elasticsearch.yml:
+ Change default value of `thread_pool.write.queue_size` in elasticsearch.yml:
  e.g.)

  ```yaml
- thread_pool.bulk.queue_size: 1000
+ thread_pool.write.queue_size: 1000
  ```

  Then, remove `es_rejected_execution_exception` from `unrecoverable_error_types` parameter:
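As a hedged sketch, removing `es_rejected_execution_exception` from `unrecoverable_error_types` (mentioned above) might look like the following; the remaining entry is an assumption about the default list.

```aconf
<match your.awesome.routing.tag>
  @type elasticsearch
  # Keep retrying bulk requests rejected by a full write queue instead of
  # treating them as unrecoverable.
  unrecoverable_error_types ["out_of_memory_error"]
</match>
```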
@@ -1104,6 +1222,14 @@ This parameter adds additional headers to request. The default value is `{}`.
  custom_headers {"token":"secret"}
  ```

+ ### api_key
+
+ This parameter adds an authentication header. The default value is `nil`.
+
+ ```
+ api_key "ElasticsearchAPIKEY"
+ ```
+
  ### Not seeing a config you need?

  We try to keep the scope of this plugin small and not add too many configuration options. If you think an option would be useful to others, feel free to open an issue or contribute a Pull Request.
@@ -1128,6 +1254,8 @@ And yet another option is described in Dynamic Configuration section.

  ### Dynamic configuration

+ **NOTE**: *`out_elasticsearch_dynamic` is planned to be marked as deprecated.* Please don't use it in new Fluentd configurations. This plugin is maintained for backward compatibility.
+
  If you want configurations to depend on information in messages, you can use `elasticsearch_dynamic`. This is an experimental variation of the Elasticsearch plugin allows configuration values to be specified in ways such as the below:

  ```
@@ -1240,7 +1368,7 @@ Default value is `true`.

  Configure `bulk_message` request splitting threshold size.

- Default value is `20MB`. (20 * 1024 * 1024)
+ Default value is `-1`(unlimited).

  If you specify this size as negative number, `bulk_message` request splitting feature will be disabled.
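Because the default shown above is now `-1` (splitting disabled), a hedged sketch of re-enabling request splitting follows, assuming the parameter is named `bulk_message_request_threshold`; the threshold value is arbitrary.

```aconf
<match your.awesome.routing.tag>
  @type elasticsearch
  # Split oversized bulk requests at roughly 20 MB; a negative value such as
  # the new default -1 disables splitting entirely.
  bulk_message_request_threshold 20MB
</match>
```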
 
@@ -1292,615 +1420,194 @@ If it is set, timer for clearing `alias_indexes` and `template_names` caches wil

  Default value is `nil`.

- ## Configuration - Elasticsearch Input
-
- See [Elasticsearch Input plugin document](README.ElasticsearchInput.md)
-
- ## Configuration - Elasticsearch Filter GenID
-
- See [Elasticsearch Filter GenID document](README.ElasticsearchGenID.md)
-
- ## Elasticsearch permissions
-
- If the target Elasticsearch requires authentication, a user holding the necessary permissions needs to be provided.
-
- The set of required permissions are the following:
-
- ```json
- "cluster": ["manage_index_templates", "monitor", "manage_ilm"],
- "indices": [
- {
- "names": [ "*" ],
- "privileges": ["write","create","delete","create_index","manage","manage_ilm"]
- }
- ]
- ```
+ ## use_legacy_template

- These permissions can be narrowed down by:
+ Use legacy template or not.

- - Setting a more specific pattern for indices under the `names` field
- - Removing the `manage_index_templates` cluster permission when not using the feature within your plugin configuration
- - Removing the `manage_ilm` cluster permission and the `manage` and `manage_ilm` indices privileges when not using ilm
- features in the plugin configuration
-
- The list of privileges along with their description can be found in
- [security privileges](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-privileges.html).
+ For Elasticsearch 7.8 or later, users can specify this parameter as `false` if their [template_file](#template_file) contains a composable index template.

- ## Troubleshooting
+ For Elasticsearch 7.7 or older, users should specify this parameter as `true`.

- ### Cannot send events to Elasticsearch
+ Composable template documentation is [Put Index Template API | Elasticsearch Reference](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-templates.html) and legacy template documentation is [Index Templates | Elasticsearch Reference](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates-v1.html).

- A common cause of failure is that you are trying to connect to an Elasticsearch instance with an incompatible version.
+ Please confirm whether the Elasticsearch cluster(s) in use support the composable template feature before turning on this brand new feature with this parameter.
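A hedged sketch of opting into composable templates on Elasticsearch 7.8 or later; the template name and file path are placeholders, and the referenced file must contain a composable index template as described above.

```aconf
<match your.awesome.routing.tag>
  @type elasticsearch
  # The file below must use the composable (not legacy) template format.
  use_legacy_template false
  template_name fluentd-composable          # hypothetical name
  template_file /path/to/composable.json    # hypothetical path
</match>
```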
 
- For example, td-agent currently bundles the 6.x series of the [elasticsearch-ruby](https://github.com/elastic/elasticsearch-ruby) library. This means that your Elasticsearch server also needs to be 6.x. You can check the actual version of the client library installed on your system by executing the following command.
+ ## <metadata\> section

- ```
- # For td-agent users
- $ /usr/sbin/td-agent-gem list elasticsearch
- # For standalone Fluentd users
- $ fluent-gem list elasticsearch
- ```
- Or, fluent-plugin-elasticsearch v2.11.7 or later, users can inspect version incompatibility with the `validate_client_version` option:
-
- ```
- validate_client_version true
- ```
-
- If you get the following error message, please consider to install compatible elasticsearch client gems:
-
- ```
- Detected ES 5 but you use ES client 6.1.0.
- Please consider to use 5.x series ES client.
- ```
-
- For further details of the version compatibility issue, please read [the official manual](https://github.com/elastic/elasticsearch-ruby#compatibility).
-
- ### Cannot see detailed failure log
-
- A common cause of failure is that you are trying to connect to an Elasticsearch instance with an incompatible ssl protocol version.
-
- For example, `out_elasticsearch` set up ssl_version to TLSv1 due to historical reason.
- Modern Elasticsearch ecosystem requests to communicate with TLS v1.2 or later.
- But, in this case, `out_elasticsearch` conceals transporter part failure log by default.
- If you want to acquire transporter log, please consider to set the following configuration:
-
- ```
- with_transporter_log true
- @log_level debug
- ```
-
- Then, the following log is shown in Fluentd log:
-
- ```
- 2018-10-24 10:00:00 +0900 [error]: #0 [Faraday::ConnectionFailed] SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol (OpenSSL::SSL::SSLError) {:host=>"elasticsearch-host", :port=>80, :scheme=>"https", :user=>"elastic", :password=>"changeme", :protocol=>"https"}
- ```
-
- This indicates that inappropriate TLS protocol version is used.
- If you want to use TLS v1.2, please use `ssl_version` parameter like as:
-
- ```
- ssl_version TLSv1_2
- ```
-
- or, in v4.0.2 or later with Ruby 2.5 or later combination, the following congiuration is also valid:
-
- ```
- ssl_max_version TLSv1_2
- ssl_min_version TLSv1_2
- ```
-
- ### Cannot connect TLS enabled reverse Proxy
-
- A common cause of failure is that you are trying to connect to an Elasticsearch instance behind nginx reverse proxy which uses an incompatible ssl protocol version.
-
- For example, `out_elasticsearch` set up ssl_version to TLSv1 due to historical reason.
- Nowadays, nginx reverse proxy uses TLS v1.2 or later for security reason.
- But, in this case, `out_elasticsearch` conceals transporter part failure log by default.
-
- If you set up nginx reverse proxy with TLS v1.2:
-
- ```
- server {
- listen <your IP address>:9400;
- server_name <ES-Host>;
- ssl on;
- ssl_certificate /etc/ssl/certs/server-bundle.pem;
- ssl_certificate_key /etc/ssl/private/server-key.pem;
- ssl_client_certificate /etc/ssl/certs/ca.pem;
- ssl_verify_client on;
- ssl_verify_depth 2;
-
- # Reference : https://cipherli.st/
- ssl_protocols TLSv1.2;
- ssl_prefer_server_ciphers on;
- ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
- ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
- ssl_session_cache shared:SSL:10m;
- ssl_session_tickets off; # Requires nginx >= 1.5.9
- ssl_stapling on; # Requires nginx >= 1.3.7
- ssl_stapling_verify on; # Requires nginx => 1.3.7
- resolver 127.0.0.1 valid=300s;
- resolver_timeout 5s;
- add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
- add_header X-Frame-Options DENY;
- add_header X-Content-Type-Options nosniff;
-
- client_max_body_size 64M;
- keepalive_timeout 5;
-
- location / {
- proxy_set_header Host $host;
- proxy_set_header X-Real-IP $remote_addr;
- proxy_pass http://localhost:9200;
- }
- }
- ```
-
- Then, nginx reverse proxy starts with TLSv1.2.
-
- Fluentd suddenly dies with the following log:
- ```
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: log writing failed. execution expired
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/ssl_socket.rb:10:in `initialize': stack level too deep (SystemStackError)
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/connection.rb:429:in `new'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/connection.rb:429:in `socket'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/connection.rb:111:in `request_call'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/middlewares/mock.rb:48:in `request_call'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/middlewares/instrumentor.rb:26:in `request_call'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/middlewares/base.rb:16:in `request_call'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/middlewares/base.rb:16:in `request_call'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/middlewares/base.rb:16:in `request_call'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: ... 9266 levels...
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.2.5/bin/fluentd:8:in `<top (required)>'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/bin/fluentd:22:in `load'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/bin/fluentd:22:in `<main>'
- Oct 31 9:44:45 <ES-Host> systemd[1]: fluentd.service: Control process exited, code=exited status=1
- ```
-
- If you want to acquire transporter log, please consider to set the following configuration:
-
- ```
- with_transporter_log true
- @log_level debug
- ```
-
- Then, the following log is shown in Fluentd log:
-
- ```
- 2018-10-31 10:00:57 +0900 [warn]: #7 [Faraday::ConnectionFailed] Attempt 2 connecting to {:host=>"<ES-Host>", :port=>9400, :scheme=>"https", :protocol=>"https"}
- 2018-10-31 10:00:57 +0900 [error]: #7 [Faraday::ConnectionFailed] Connection reset by peer - SSL_connect (Errno::ECONNRESET) {:host=>"<ES-Host>", :port=>9400, :scheme=>"https", :protocol=>"https"}
- ```
-
- The above logs indicates that using incompatible SSL/TLS version between fluent-plugin-elasticsearch and nginx, which is reverse proxy, is root cause of this issue.
-
- If you want to use TLS v1.2, please use `ssl_version` parameter like as:
-
- ```
- ssl_version TLSv1_2
- ```
-
- or, in v4.0.2 or later with Ruby 2.5 or later combination, the following congiuration is also valid:
-
- ```
- ssl_max_version TLSv1_2
- ssl_min_version TLSv1_2
- ```
-
- ### Declined logs are resubmitted forever, why?
-
- Sometimes users write Fluentd configuration like this:
+ Users can specify whether to include `chunk_id` information in records or not:

  ```aconf
- <match **>
+ <match your.awesome.routing.tag>
  @type elasticsearch
- host localhost
- port 9200
- type_name fluentd
- logstash_format true
- time_key @timestamp
- include_timestamp true
- reconnect_on_error true
- reload_on_failure true
- reload_connections false
- request_timeout 120s
+ # Other configurations.
+ <metadata>
+ include_chunk_id true
+ # chunk_id_key chunk_id # Default value is "chunk_id".
+ </metadata>
  </match>
  ```

- The above configuration does not use [`@label` feature](https://docs.fluentd.org/v1.0/articles/config-file#(5)-group-filter-and-output:-the-%E2%80%9Clabel%E2%80%9D-directive) and use glob(**) pattern.
- It is usually problematic configuration.
-
- In error scenario, error events will be emitted with `@ERROR` label, and `fluent.*` tag.
- The black hole glob pattern resubmits a problematic event into pushing Elasticsearch pipeline.
-
- This situation causes flood of declined log:
-
- ```log
- 2018-11-13 11:16:27 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch" location=nil tag="app.fluentcat" time=2018-11-13 11:16:17.492985640 +0000 record={"message"=>"\xFF\xAD"}
- 2018-11-13 11:16:38 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch" location=nil tag="fluent.warn" time=2018-11-13 11:16:27.978851140 +0000 record={"error"=>"#<Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError: 400 - Rejected by Elasticsearch>", "location"=>nil, "tag"=>"app.fluentcat", "time"=>2018-11-13 11:16:17.492985640 +0000, "record"=>{"message"=>"\xFF\xAD"}, "message"=>"dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error=\"400 - Rejected by Elasticsearch\" location=nil tag=\"app.fluentcat\" time=2018-11-13 11:16:17.492985640 +0000 record={\"message\"=>\"\\xFF\\xAD\"}"}
- ```
+ ### include_chunk_id

- Then, user should use more concrete tag route or use `@label`.
- The following sections show two examples how to solve flood of declined log.
- One is using concrete tag routing, the other is using label routing.
-
- #### Using concrete tag routing
-
- The following configuration uses concrete tag route:
+ Whether to include `chunk_id` or not. Default value is `false`.

  ```aconf
- <match out.elasticsearch.**>
+ <match your.awesome.routing.tag>
  @type elasticsearch
- host localhost
- port 9200
- type_name fluentd
- logstash_format true
- time_key @timestamp
- include_timestamp true
- reconnect_on_error true
- reload_on_failure true
- reload_connections false
- request_timeout 120s
+ # Other configurations.
+ <metadata>
+ include_chunk_id true
+ </metadata>
  </match>
  ```

- #### Using label feature

- The following configuration uses label:
+ ### chunk_id_key

- ```aconf
- <source>
- @type forward
- @label @ES
- </source>
- <label @ES>
- <match out.elasticsearch.**>
- @type elasticsearch
- host localhost
- port 9200
- type_name fluentd
- logstash_format true
- time_key @timestamp
- include_timestamp true
- reconnect_on_error true
- reload_on_failure true
- reload_connections false
- request_timeout 120s
- </match>
- </label>
- <label @ERROR>
- <match **>
- @type stdout
- </match>
- </label>
- ```
-
- ### Suggested to install typhoeus gem, why?
-
- fluent-plugin-elasticsearch doesn't depend on typhoeus gem by default.
- If you want to use typhoeus backend, you must install typhoeus gem by your own.
-
- If you use vanilla Fluentd, you can install it by:
-
- ```
- gem install typhoeus
- ```
-
- But, you use td-agent instead of vanilla Fluentd, you have to use `td-agent-gem`:
-
- ```
- td-agent-gem install typhoeus
- ```
-
- In more detail, please refer to [the official plugin management document](https://docs.fluentd.org/v1.0/articles/plugin-management).
-
- ### Stopped to send events on k8s, why?
-
- fluent-plugin-elasticsearch reloads connection after 10000 requests. (Not correspond to events counts because ES plugin uses bulk API.)
-
- This functionality which is originated from elasticsearch-ruby gem is enabled by default.
-
- Sometimes this reloading functionality bothers users to send events with ES plugin.
-
- On k8s platform, users sometimes shall specify the following settings:
+ Specify `chunk_id_key` to store `chunk_id` information into records. Default value is `chunk_id`.

  ```aconf
- reload_connections false
- reconnect_on_error true
- reload_on_failure true
+ <match your.awesome.routing.tag>
+ @type elasticsearch
+ # Other configurations.
+ <metadata>
+ include_chunk_id
+ chunk_id_key chunk_hex
+ </metadata>
+ </match>
  ```

- If you use [fluentd-kubernetes-daemonset](https://github.com/fluent/fluentd-kubernetes-daemonset), you can specify them with environment variables:
+ ## Configuration - Elasticsearch Input

- * `FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS` as `false`
- * `FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR` as `true`
- * `FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE` as `true`
+ See [Elasticsearch Input plugin document](README.ElasticsearchInput.md)

- This issue had been reported at [#525](https://github.com/uken/fluent-plugin-elasticsearch/issues/525).
+ ## Configuration - Elasticsearch Filter GenID

- ### Random 400 - Rejected by Elasticsearch is occured, why?
+ See [Elasticsearch Filter GenID document](README.ElasticsearchGenID.md)

- Index templates installed Elasticsearch sometimes generates 400 - Rejected by Elasticsearch errors.
- For example, kubernetes audit log has structure:
+ ## Elasticsearch permissions

- ```json
- "responseObject":{
- "kind":"SubjectAccessReview",
- "apiVersion":"authorization.k8s.io/v1beta1",
- "metadata":{
- "creationTimestamp":null
- },
- "spec":{
- "nonResourceAttributes":{
- "path":"/",
- "verb":"get"
- },
- "user":"system:anonymous",
- "group":[
- "system:unauthenticated"
- ]
- },
- "status":{
- "allowed":true,
- "reason":"RBAC: allowed by ClusterRoleBinding \"cluster-system-anonymous\" of ClusterRole \"cluster-admin\" to User \"system:anonymous\""
- }
- },
- ```
-
- The last element `status` sometimes becomes `"status":"Success"`.
- This element type glich causes status 400 error.
-
- There are some solutions for fixing this:
-
- #### Solution 1
-
- For a key which causes element type glich case.
-
- Using dymanic mapping with the following template:
+ If the target Elasticsearch requires authentication, a user holding the necessary permissions needs to be provided.
+
+ The set of required permissions are the following:

  ```json
- {
- "template": "YOURINDEXNAME-*",
- "mappings": {
- "fluentd": {
- "dynamic_templates": [
- {
- "default_no_index": {
- "path_match": "^.*$",
- "path_unmatch": "^(@timestamp|auditID|level|stage|requestURI|sourceIPs|metadata|objectRef|user|verb)(\\..+)?$",
- "match_pattern": "regex",
- "mapping": {
- "index": false,
- "enabled": false
- }
- }
- }
- ]
+ "cluster": ["manage_index_templates", "monitor", "manage_ilm"],
+ "indices": [
+ {
+ "names": [ "*" ],
+ "privileges": ["write","create","delete","create_index","manage","manage_ilm"]
  }
- }
- }
- ```
-
- Note that `YOURINDEXNAME` should be replaced with your using index prefix.
-
- #### Solution 2
-
- For unstable `responseObject` and `requestObject` key existence case.
-
- ```aconf
- <filter YOURROUTETAG>
- @id kube_api_audit_normalize
- @type record_transformer
- auto_typecast false
- enable_ruby true
- <record>
- host "#{ENV['K8S_NODE_NAME']}"
- responseObject ${record["responseObject"].nil? ? "none": record["responseObject"].to_json}
- requestObject ${record["requestObject"].nil? ? "none": record["requestObject"].to_json}
- origin kubernetes-api-audit
- </record>
- </filter>
+ ]
  ```

- Normalize `responseObject` and `requestObject` key with record_transformer and other similiar plugins is needed.
+ These permissions can be narrowed down by:

- ### Fluentd seems to hang if it unable to connect Elasticsearch, why?
+ - Setting a more specific pattern for indices under the `names` field
+ - Removing the `manage_index_templates` cluster permission when not using the feature within your plugin configuration
+ - Removing the `manage_ilm` cluster permission and the `manage` and `manage_ilm` indices privileges when not using ilm
+ features in the plugin configuration

- On `#configure` phase, ES plugin should wait until ES instance communication is succeeded.
- And ES plugin blocks to launch Fluentd by default.
- Because Fluentd requests to set up configuration correctly on `#configure` phase.
+ The list of privileges along with their description can be found in
+ [security privileges](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-privileges.html).

- After `#configure` phase, it runs very fast and send events heavily in some heavily using case.
+ ## Configuration - Elasticsearch Output Data Stream

- In this scenario, we need to set up configuration correctly until `#configure` phase.
- So, we provide default parameter is too conservative to use advanced users.
+ Data Streams were introduced in Elasticsearch 7.9.

- To remove too pessimistic behavior, you can use the following configuration:
+ You can enable this feature by specifying `@type elasticsearch_data_stream`.

- ```aconf
- <match **>
- @type elasticsearch
- # Some advanced users know their using ES version.
- # We can disable startup ES version checking.
- verify_es_version_at_startup false
- # If you know that your using ES major version is 7, you can set as 7 here.
- default_elasticsearch_version 7
- # If using very stable ES cluster, you can reduce retry operation counts. (minmum is 1)
- max_retry_get_es_version 1
- # If using very stable ES cluster, you can reduce retry operation counts. (minmum is 1)
- max_retry_putting_template 1
- # ... and some ES plugin configuration
- </match>
+ ```
+ @type elasticsearch_data_stream
+ data_stream_name test
  ```

- ### Enable Index Lifecycle Management
-
- Index lifecycle management is template based index management feature.
-
- Main ILM feature parameters are:
-
- * `index_name` (when logstash_format as false)
- * `logstash_prefix` (when logstash_format as true)
- * `enable_ilm`
- * `ilm_policy_id`
- * `ilm_policy`
-
- * Advanced usage parameters
- * `application_name`
- * `index_separator`
-
- They are not all mandatory parameters but they are used for ILM feature in effect.
-
- ILM target index alias is created with `index_name` or an index which is calculated from `logstash_prefix`.
-
- From Elasticsearch plugin v4.0.0, ILM target index will be calculated from `index_name` (normal mode) or `logstash_prefix` (using with `logstash_format`as true).
+ When `@type elasticsearch_data_stream` is used, the ILM default policy is applied to the specified data stream unless `data_stream_ilm_name` and `data_stream_template_name`, or `data_stream_ilm_policy`, are specified.
+ Then, the matching index template is also created automatically.

- **NOTE:** Before Elasticsearch plugin v4.1.0, using `deflector_alias` parameter when ILM is enabled is permitted and handled, but, in the later releases such that 4.1.1 or later, it cannot use with when ILM is enabled.
+ ### data_stream_name

- And also, ILM feature users should specify their Elasticsearch template for ILM enabled indices.
- Because ILM settings are injected into their Elasticsearch templates.
+ You can specify Elasticsearch data stream name by this parameter.
+ This parameter is mandatory for `elasticsearch_data_stream`.

- `application_name` and `index_separator` also affect alias index names.
+ ### data_stream_template_name

- But this parameter is prepared for advanced usage.
+ You can specify an existing matching index template for the data stream. If not present, it creates a new matching index template.

- It usually should be used with default value which is `default`.
+ Default value is `data_stream_name`.

- Then, ILM parameters are used in alias index like as:
+ ### data_stream_ilm_name

- ##### Simple `index_name` case:
+ You can specify the name of an existing ILM policy, which will be applied to the data stream. If not present, it creates a new ILM default policy (unless `data_stream_template_name` is defined, in that case the ILM will be set to the one specified in the matching index template).

- `<index_name><index_separator><application_name>-000001`.
+ Default value is `data_stream_name`.

- ##### `logstash_format` as `true` case:
+ There are some limitations on the naming rule.

- `<logstash_prefix><logstash_prefix_separator><application_name><logstash_prefix_separator><logstash_dateformat>-000001`.
+ For more details, please refer to the [Path parameters](https://www.elastic.co/guide/en/elasticsearch/reference/master/indices-create-data-stream.html#indices-create-data-stream-api-path-params).

- #### Example ILM settings

- ```aconf
- index_name fluentd-${tag}
- application_name ${tag}
- index_date_pattern "now/d"
- enable_ilm true
- # Policy configurations
- ilm_policy_id fluentd-policy
- # ilm_policy {} # Use default policy
- template_name your-fluentd-template
- template_file /path/to/fluentd-template.json
- # customize_template {"<<index_prefix>>": "fluentd"}
- ```
+ ### data_stream_ilm_policy

- Note: This plugin only creates rollover-enabled indices, which are aliases pointing to them and index templates, and creates an ILM policy if enabled.
+ You can specify the ILM policy contents as hash. If not present, it will apply the ILM default policy.

- #### Create ILM indices in each day
-
- If you want to create new index in each day, you should use `logstash_format` style configuration:
-
- ```aconf
- logstash_prefix fluentd
- application_name default
- index_date_pattern "now/d"
- enable_ilm true
- # Policy configurations
- ilm_policy_id fluentd-policy
- # ilm_policy {} # Use default policy
- template_name your-fluentd-template
- template_file /path/to/fluentd-template.json
- ```
+ **NOTE:** This parameter requires the elasticsearch-xpack gem to be installed.

- #### Fixed ILM indices
+ ### data_stream_ilm_policy_overwrite

- Also, users can use fixed ILM indices configuration.
- If `index_date_pattern` is set as `""`(empty string), Elasticsearch plugin won't attach date pattern in ILM indices:
+ Specify whether the data stream ILM policy should be overwritten.

- ```aconf
- index_name fluentd
- application_name default
- index_date_pattern ""
- enable_ilm true
- # Policy configurations
- ilm_policy_id fluentd-policy
- # ilm_policy {} # Use default policy
- template_name your-fluentd-template
- template_file /path/to/fluentd-template.json
- ```
+ Default value is `false`.

- ### How to specify index codec
+ **NOTE:** This parameter requires the elasticsearch-xpack gem to be installed.

- Elasticsearch can handle compression methods for stored data such as LZ4 and best_compression.
- fluent-plugin-elasticsearch doesn't provide API which specifies compression method.
+ ### data_stream_template_use_index_patterns_wildcard

- Users can specify stored data compression method with template:
+ Specify whether index patterns should include a wildcard (*) when creating an index template. This is particularly useful to prevent errors in scenarios where index templates are generated automatically, and multiple services with distinct suffixes are in use.

- Create `compression.json` as follows:
+ Default value is `true`.

+ Consider the following JSON error response when index patterns clash due to wildcard usage:
  ```json
  {
- "order": 100,
- "index_patterns": [
- "YOUR-INDEX-PATTERN"
- ],
- "settings": {
- "index": {
- "codec": "best_compression"
- }
- }
+ "error": {
+ "root_cause": [
+ {
+ "type": "illegal_argument_exception",
+ "reason": "index template [eks-kube-apiserver] has index patterns [eks-kube-apiserver*] matching patterns from existing templates [eks-kube-apiserver-audit] with patterns (eks-kube-apiserver-audit => [eks-kube-apiserver-audit*]) that have the same priority [0], multiple index templates may not match during index creation, please use a different priority"
+ }
+ ],
+ "type": "illegal_argument_exception",
+ "reason": "index template [eks-kube-apiserver] has index patterns [eks-kube-apiserver*] matching patterns from existing templates [eks-kube-apiserver-audit] with patterns (eks-kube-apiserver-audit => [eks-kube-apiserver-audit*]) that have the same priority [0], multiple index templates may not match during index creation, please use a different priority"
+ },
+ "status": 400
  }
  ```

- Then, specify the above template in your configuration:
+ #### Usage Examples

- ```aconf
- template_name best_compression_tmpl
- template_file compression.json
- ```
+ When `data_stream_template_use_index_patterns_wildcard` is set to `true` (default):

- Elasticsearch will store data with `best_compression`:
-
- ```
- % curl -XGET 'http://localhost:9200/logstash-2019.12.06/_settings?pretty'
  ```
-
- ```json
- {
- "logstash-2019.12.06" : {
- "settings" : {
- "index" : {
- "codec" : "best_compression",
- "number_of_shards" : "1",
- "provided_name" : "logstash-2019.12.06",
- "creation_date" : "1575622843800",
- "number_of_replicas" : "1",
- "uuid" : "THE_AWESOMEUUID",
- "version" : {
- "created" : "7040100"
- }
- }
- }
- }
- }
+ data_stream_name: foo
+ data_stream_template_use_index_patterns_wildcard: true
  ```

- ### Cannot push logs to Elasticsearch with connect_write timeout reached, why?
+ In this case, the resulting index patterns will be: `["foo*"]`

- It seems that Elasticsearch cluster is exhausted.
+ When `data_stream_template_use_index_patterns_wildcard` is set to `false`:

- Usually, Fluentd complains like the following log:
-
- ```log
- 2019-12-29 00:23:33 +0000 [warn]: buffer flush took longer time than slow_flush_log_threshold: elapsed_time=27.283766102716327 slow_flush_log_threshold=15.0 plugin_id="object:aaaffaaaaaff"
- 2019-12-29 00:23:33 +0000 [warn]: buffer flush took longer time than slow_flush_log_threshold: elapsed_time=26.161768959928304 slow_flush_log_threshold=15.0 plugin_id="object:aaaffaaaaaff"
- 2019-12-29 00:23:33 +0000 [warn]: buffer flush took longer time than slow_flush_log_threshold: elapsed_time=28.713624476008117 slow_flush_log_threshold=15.0 plugin_id="object:aaaffaaaaaff"
- 2019-12-29 01:39:18 +0000 [warn]: Could not push logs to Elasticsearch, resetting connection and trying again. connect_write timeout reached
- 2019-12-29 01:39:18 +0000 [warn]: Could not push logs to Elasticsearch, resetting connection and trying again. connect_write timeout reached
  ```
+ data_stream_name: foo
+ data_stream_template_use_index_patterns_wildcard: false
+ ```
+
+ The resulting index patterns will be: `["foo"]`

- This warnings is usually caused by exhaused Elasticsearch cluster due to resource shortage.

- If CPU usage is spiked and Elasticsearch cluster is eating up CPU resource, this issue is caused by CPU resource shortage.
+ ## Troubleshooting

- Check your Elasticsearch cluster health status and resource usage.
+ See [Troubleshooting document](README.Troubleshooting.md)

  ## Contact
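Pulling the data-stream parameters introduced above together, a minimal hedged sketch of an `elasticsearch_data_stream` match block follows; the host and names are placeholders, and `data_stream_template_name` and `data_stream_ilm_name` fall back to `data_stream_name` when omitted.

```aconf
<match your.awesome.routing.tag>
  @type elasticsearch_data_stream
  host localhost
  port 9200
  data_stream_name logs-fluentd-default
  # Optional: reuse an existing index template / ILM policy instead of the
  # defaults the plugin would create under the data_stream_name.
  # data_stream_template_name logs-fluentd-template
  # data_stream_ilm_name logs-fluentd-policy
</match>
```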
 
@@ -1924,4 +1631,7 @@ Install dev dependencies:
  $ gem install bundler
  $ bundle install
  $ bundle exec rake test
+ # To just run the test you are working on:
+ $ bundle exec rake test TEST=test/plugin/test_out_elasticsearch.rb TESTOPTS='--verbose --name=test_custom_template_with_rollover_index_create_and_custom_ilm'
+
  ```