fluent-plugin-elasticsearch 4.3.3 → 5.0.4

data/README.md CHANGED
@@ -38,6 +38,7 @@ Current maintainers: @cosmo0920
  + [suppress_type_name](#suppress_type_name)
  + [target_index_key](#target_index_key)
  + [target_type_key](#target_type_key)
+ + [target_index_affinity](#target_index_affinity)
  + [template_name](#template_name)
  + [template_file](#template_file)
  + [template_overwrite](#template_overwrite)
@@ -109,19 +110,9 @@ Current maintainers: @cosmo0920
  + [chunk_id_key](#chunk_id_key)
  * [Configuration - Elasticsearch Input](#configuration---elasticsearch-input)
  * [Configuration - Elasticsearch Filter GenID](#configuration---elasticsearch-filter-genid)
+ * [Configuration - Elasticsearch Output Data Stream](#configuration---elasticsearch-output-data-stream)
  * [Elasticsearch permissions](#elasticsearch-permissions)
  * [Troubleshooting](#troubleshooting)
- + [Cannot send events to elasticsearch](#cannot-send-events-to-elasticsearch)
- + [Cannot see detailed failure log](#cannot-see-detailed-failure-log)
- + [Cannot connect TLS enabled reverse Proxy](#cannot-connect-tls-enabled-reverse-proxy)
- + [Declined logs are resubmitted forever, why?](#declined-logs-are-resubmitted-forever-why)
- + [Suggested to install typhoeus gem, why?](#suggested-to-install-typhoeus-gem-why)
- + [Stopped to send events on k8s, why?](#stopped-to-send-events-on-k8s-why)
- + [Random 400 - Rejected by Elasticsearch is occured, why?](#random-400---rejected-by-elasticsearch-is-occured-why)
- + [Fluentd seems to hang if it unable to connect Elasticsearch, why?](#fluentd-seems-to-hang-if-it-unable-to-connect-elasticsearch-why)
- + [Enable Index Lifecycle Management](#enable-index-lifecycle-management)
- + [How to specify index codec](#how-to-specify-index-codec)
- + [Cannot push logs to Elasticsearch with connect_write timeout reached, why?](#cannot-push-logs-to-elasticsearch-with-connect_write-timeout-reached-why)
  * [Contact](#contact)
  * [Contributing](#contributing)
  * [Running tests](#running-tests)
@@ -181,6 +172,24 @@ You can specify Elasticsearch host by this parameter.
 
  **Note:** Since v3.3.2, the `host` parameter supports builtin placeholders. If you want to send events dynamically into different hosts at runtime with the `elasticsearch_dynamic` output plugin, please consider switching to the plain `elasticsearch` output plugin. For more detail on builtin placeholders, please refer to the [Placeholders](#placeholders) section.
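+
+ A hedged sketch of a placeholder-based `host` (the host name is hypothetical, and chunking the buffer by tag is assumed so that `${tag}` can be resolved):
+
+ ```
+ host elasticsearch-${tag}.example.com
+ <buffer tag>
+   # the buffer must be keyed by tag for the ${tag} placeholder to resolve
+ </buffer>
+ ```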
 
+ To use an IPv6 address for the `host` parameter, you can use one of the following styles:
+
+ #### string style
+
+ To use the string style, you must quote the IPv6 address to prevent it from being interpreted as JSON:
+
+ ```
+ host "[2404:7a80:d440:3000:192a:a292:bd7f:ca10]"
+ ```
+
+ #### raw style
+
+ You can also specify a raw IPv6 address. It will be handled as `[specified IPv6 address]`:
+
+ ```
+ host 2404:7a80:d440:3000:192a:a292:bd7f:ca10
+ ```
+
  ### port
 
  ```
@@ -247,6 +256,16 @@ hosts host1:port1,host2:port2,host3 # port3 is 9200
 
  **Note:** Up until v2.8.5, it was allowed to embed the username/password in the URL. However, this syntax is deprecated as of v2.8.6 because it was found to cause serious connection problems (See #394). Please migrate your settings to use the `user` and `password` field (described below) instead.
 
+ #### IPv6 addresses
+
+ When you want to specify IPv6 addresses, you must also specify the scheme:
+
+ ```
+ hosts http://[2404:7a80:d440:3000:de:7311:6329:2e6c]:port1,http://[2404:7a80:d440:3000:de:7311:6329:1e6c]:port2,http://[2404:7a80:d440:3000:de:6311:6329:2e6c]:port3
+ ```
+
+ If you don't specify the scheme along with the hosts, the Elasticsearch plugin will complain about an Invalid URI for them.
+
  ### user, password, path, scheme, ssl_verify
 
  ```
@@ -436,6 +455,75 @@ and this record will be written to the specified index (`logstash-2014.12.19`) r
 
  Similar to the `target_index_key` config, find the type name to write to in the record under this key (or nested record). If the key is not found in the record, it falls back to `type_name` (default "fluentd").
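 
  For example, a hedged sketch (the `@target_type` key and the record are illustrative, not defaults):
 
  ```
  target_type_key @target_type
  ```
 
  With this setting, a record like `{"@target_type": "access_log", "message": "hi"}` is written with type `access_log`.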
 
+ ### target_index_affinity
+
+ Enable the plugin to dynamically select a logstash time-based target index in update/upsert operations, based on already-indexed data rather than the current time of indexing.
+
+ ```
+ target_index_affinity true # defaults to false
+ ```
+
+ By default, the plugin writes data to a logstash-format index based on the current time. For example, with a daily index, data arriving after midnight is written to the newly created index. This is normally fine when data comes from a single source and is not updated after indexing.
+
+ But consider a use case where data is also updated after indexing, `id_key` is used to identify the document uniquely for updating, and the logstash format is still wanted for easy data management and retention. Updates are done right after indexing to complete the data (not all data is available from a single source) and no further updates happen later on. In this case, a problem happens at index rotation time, when writes to two indexes with the same `id_key` value may occur.
+
+ This setting makes the plugin search for existing data using Elasticsearch's [ids query](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-ids-query.html) with the `id_key` value (over the `logstash_prefix` and `logstash_prefix_separator` index pattern, e.g. `logstash-*`). The index of the found data is used for the update/upsert. When no data is found, data is written to the current logstash index as usual.
+
+ This setting requires the following other settings:
+ ```
+ logstash_format true
+ id_key myId # Some field on your data to identify the data uniquely
+ write_operation upsert # upsert or update
+ ```
+
+ Suppose you have the following situation, where two different match sections consume data from two different Kafka topics independently but close in time with each other (order not known):
+
+ ```
+ <match data1>
+   @type elasticsearch
+   ...
+   id_key myId
+   write_operation upsert
+   logstash_format true
+   logstash_dateformat %Y.%m.%d
+   logstash_prefix myindexprefix
+   target_index_affinity true
+   ...
+ </match>
+
+ <match data2>
+   @type elasticsearch
+   ...
+   id_key myId
+   write_operation upsert
+   logstash_format true
+   logstash_dateformat %Y.%m.%d
+   logstash_prefix myindexprefix
+   target_index_affinity true
+   ...
+ </match>
+ ```
+
+ If your first (data1) input is:
+ ```
+ {
+   "myId": "myuniqueId1",
+   "datafield1": "some value"
+ }
+ ```
+
+ and your second (data2) input is:
+ ```
+ {
+   "myId": "myuniqueId1",
+   "datafield99": "some important data from other source tightly related to id myuniqueId1 and wanted to be in same document."
+ }
+ ```
+
+ Today's date is 10.05.2021, so data is written to index `myindexprefix-2021.05.10` when both data1 and data2 are consumed during the day.
+ But when we are close to index rotation and data1 is consumed and indexed at `2021-05-10T23:59:55.59707672Z` while data2
+ is consumed a bit later at `2021-05-11T00:00:58.222079Z`, the logstash index has been rotated and data2 would normally have been written
+ to index `myindexprefix-2021.05.11`. But with `target_index_affinity` set to true, data2 is now written to index `myindexprefix-2021.05.10`,
+ into the same document as data1, as wanted, and a duplicated document is avoided.
+
  ### template_name
 
  The name of the template to define. If a template by the name given is already present, it will be left unchanged, unless [template_overwrite](#template_overwrite) is set, in which case the template will be updated.
@@ -1335,9 +1423,9 @@ Default value is `nil`.
 
  Use legacy template or not.
 
- Elasticsearch 7.8 or later supports the brand new composable templates.
+ For Elasticsearch 7.8 or later, users can specify this parameter as `false` if their [template_file](#template_file) contains a composable index template.
 
- For Elasticsearch 7.7 or older, users should specify this parameter as `false`.
+ For Elasticsearch 7.7 or older, users should specify this parameter as `true`.
 
  Composable template documentation is [Put Index Template API | Elasticsearch Reference](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-templates.html) and legacy template documentation is [Index Templates | Elasticsearch Reference](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates-v1.html).
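 
  For example, a minimal sketch when loading a composable index template on Elasticsearch 7.8 or later (the template name and file path are illustrative):
 
  ```
  template_name my-composable-template
  template_file /path/to/composable-template.json
  use_legacy_template false
  ```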
 
@@ -1422,591 +1510,32 @@ features in the plugin configuration
  The list of privileges along with their description can be found in
  [security privileges](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-privileges.html).
 
- ## Troubleshooting
-
- ### Cannot send events to Elasticsearch
-
- A common cause of failure is that you are trying to connect to an Elasticsearch instance with an incompatible version.
-
- For example, td-agent currently bundles the 6.x series of the [elasticsearch-ruby](https://github.com/elastic/elasticsearch-ruby) library. This means that your Elasticsearch server also needs to be 6.x. You can check the actual version of the client library installed on your system by executing the following command.
-
- ```
- # For td-agent users
- $ /usr/sbin/td-agent-gem list elasticsearch
- # For standalone Fluentd users
- $ fluent-gem list elasticsearch
- ```
- Or, with fluent-plugin-elasticsearch v2.11.7 or later, users can inspect version incompatibility with the `validate_client_version` option:
-
- ```
- validate_client_version true
- ```
-
- If you get the following error message, please consider installing compatible Elasticsearch client gems:
-
- ```
- Detected ES 5 but you use ES client 6.1.0.
- Please consider to use 5.x series ES client.
- ```
-
- For further details of the version compatibility issue, please read [the official manual](https://github.com/elastic/elasticsearch-ruby#compatibility).
-
- ### Cannot see detailed failure log
-
- A common cause of failure is that you are trying to connect to an Elasticsearch instance with an incompatible SSL protocol version.
-
- For example, `out_elasticsearch` sets up ssl_version as TLSv1 for historical reasons.
- The modern Elasticsearch ecosystem requires communication with TLS v1.2 or later.
- But, in this case, `out_elasticsearch` conceals the transporter-level failure log by default.
- If you want to acquire the transporter log, please consider setting the following configuration:
-
- ```
- with_transporter_log true
- @log_level debug
- ```
-
- Then, the following log is shown in the Fluentd log:
-
- ```
- 2018-10-24 10:00:00 +0900 [error]: #0 [Faraday::ConnectionFailed] SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol (OpenSSL::SSL::SSLError) {:host=>"elasticsearch-host", :port=>80, :scheme=>"https", :user=>"elastic", :password=>"changeme", :protocol=>"https"}
- ```
-
- This indicates that an inappropriate TLS protocol version is used.
- If you want to use TLS v1.2, please use the `ssl_version` parameter like this:
-
- ```
- ssl_version TLSv1_2
- ```
-
- or, in v4.0.2 or later combined with Ruby 2.5 or later, the following configuration is also valid:
-
- ```
- ssl_max_version TLSv1_2
- ssl_min_version TLSv1_2
- ```
-
- ### Cannot connect TLS enabled reverse Proxy
-
- A common cause of failure is that you are trying to connect to an Elasticsearch instance behind an nginx reverse proxy which uses an incompatible SSL protocol version.
-
- For example, `out_elasticsearch` sets up ssl_version as TLSv1 for historical reasons.
- Nowadays, nginx reverse proxies use TLS v1.2 or later for security reasons.
- But, in this case, `out_elasticsearch` conceals the transporter-level failure log by default.
-
- If you set up an nginx reverse proxy with TLS v1.2:
-
- ```
- server {
-     listen <your IP address>:9400;
-     server_name <ES-Host>;
-     ssl on;
-     ssl_certificate /etc/ssl/certs/server-bundle.pem;
-     ssl_certificate_key /etc/ssl/private/server-key.pem;
-     ssl_client_certificate /etc/ssl/certs/ca.pem;
-     ssl_verify_client on;
-     ssl_verify_depth 2;
-
-     # Reference : https://cipherli.st/
-     ssl_protocols TLSv1.2;
-     ssl_prefer_server_ciphers on;
-     ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
-     ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
-     ssl_session_cache shared:SSL:10m;
-     ssl_session_tickets off; # Requires nginx >= 1.5.9
-     ssl_stapling on; # Requires nginx >= 1.3.7
-     ssl_stapling_verify on; # Requires nginx >= 1.3.7
-     resolver 127.0.0.1 valid=300s;
-     resolver_timeout 5s;
-     add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
-     add_header X-Frame-Options DENY;
-     add_header X-Content-Type-Options nosniff;
-
-     client_max_body_size 64M;
-     keepalive_timeout 5;
-
-     location / {
-         proxy_set_header Host $host;
-         proxy_set_header X-Real-IP $remote_addr;
-         proxy_pass http://localhost:9200;
-     }
- }
- ```
-
- Then, the nginx reverse proxy starts with TLSv1.2.
-
- Fluentd suddenly dies with the following log:
- ```
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: log writing failed. execution expired
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/ssl_socket.rb:10:in `initialize': stack level too deep (SystemStackError)
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/connection.rb:429:in `new'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/connection.rb:429:in `socket'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/connection.rb:111:in `request_call'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/middlewares/mock.rb:48:in `request_call'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/middlewares/instrumentor.rb:26:in `request_call'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/middlewares/base.rb:16:in `request_call'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/middlewares/base.rb:16:in `request_call'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/excon-0.62.0/lib/excon/middlewares/base.rb:16:in `request_call'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: ... 9266 levels...
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.2.5/bin/fluentd:8:in `<top (required)>'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/bin/fluentd:22:in `load'
- Oct 31 9:44:45 <ES-Host> fluentd[6442]: from /opt/fluentd/embedded/bin/fluentd:22:in `<main>'
- Oct 31 9:44:45 <ES-Host> systemd[1]: fluentd.service: Control process exited, code=exited status=1
- ```
-
- If you want to acquire the transporter log, please consider setting the following configuration:
-
- ```
- with_transporter_log true
- @log_level debug
- ```
-
- Then, the following log is shown in the Fluentd log:
-
- ```
- 2018-10-31 10:00:57 +0900 [warn]: #7 [Faraday::ConnectionFailed] Attempt 2 connecting to {:host=>"<ES-Host>", :port=>9400, :scheme=>"https", :protocol=>"https"}
- 2018-10-31 10:00:57 +0900 [error]: #7 [Faraday::ConnectionFailed] Connection reset by peer - SSL_connect (Errno::ECONNRESET) {:host=>"<ES-Host>", :port=>9400, :scheme=>"https", :protocol=>"https"}
- ```
-
- The above logs indicate that an incompatible SSL/TLS version between fluent-plugin-elasticsearch and nginx, the reverse proxy, is the root cause of this issue.
-
- If you want to use TLS v1.2, please use the `ssl_version` parameter like this:
-
- ```
- ssl_version TLSv1_2
- ```
-
- or, in v4.0.2 or later combined with Ruby 2.5 or later, the following configuration is also valid:
-
- ```
- ssl_max_version TLSv1_2
- ssl_min_version TLSv1_2
- ```
-
- ### Declined logs are resubmitted forever, why?
-
- Sometimes users write Fluentd configuration like this:
-
- ```aconf
- <match **>
-   @type elasticsearch
-   host localhost
-   port 9200
-   type_name fluentd
-   logstash_format true
-   time_key @timestamp
-   include_timestamp true
-   reconnect_on_error true
-   reload_on_failure true
-   reload_connections false
-   request_timeout 120s
- </match>
- ```
-
- The above configuration does not use the [`@label` feature](https://docs.fluentd.org/v1.0/articles/config-file#(5)-group-filter-and-output:-the-%E2%80%9Clabel%E2%80%9D-directive) and uses the glob (`**`) pattern.
- This is usually a problematic configuration.
-
- In an error scenario, error events will be emitted with the `@ERROR` label and a `fluent.*` tag.
- The black-hole glob pattern resubmits a problematic event into the Elasticsearch-pushing pipeline.
-
- This situation causes a flood of declined logs:
-
- ```log
- 2018-11-13 11:16:27 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch" location=nil tag="app.fluentcat" time=2018-11-13 11:16:17.492985640 +0000 record={"message"=>"\xFF\xAD"}
- 2018-11-13 11:16:38 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch" location=nil tag="fluent.warn" time=2018-11-13 11:16:27.978851140 +0000 record={"error"=>"#<Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError: 400 - Rejected by Elasticsearch>", "location"=>nil, "tag"=>"app.fluentcat", "time"=>2018-11-13 11:16:17.492985640 +0000, "record"=>{"message"=>"\xFF\xAD"}, "message"=>"dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error=\"400 - Rejected by Elasticsearch\" location=nil tag=\"app.fluentcat\" time=2018-11-13 11:16:17.492985640 +0000 record={\"message\"=>\"\\xFF\\xAD\"}"}
- ```
-
- In this case, users should use a more concrete tag route or use `@label`.
- The following sections show two examples of how to solve the flood of declined logs.
- One uses concrete tag routing, the other uses label routing.
-
- #### Using concrete tag routing
-
- The following configuration uses a concrete tag route:
-
- ```aconf
- <match out.elasticsearch.**>
-   @type elasticsearch
-   host localhost
-   port 9200
-   type_name fluentd
-   logstash_format true
-   time_key @timestamp
-   include_timestamp true
-   reconnect_on_error true
-   reload_on_failure true
-   reload_connections false
-   request_timeout 120s
- </match>
- ```
-
- #### Using label feature
-
- The following configuration uses a label:
-
- ```aconf
- <source>
-   @type forward
-   @label @ES
- </source>
- <label @ES>
-   <match out.elasticsearch.**>
-     @type elasticsearch
-     host localhost
-     port 9200
-     type_name fluentd
-     logstash_format true
-     time_key @timestamp
-     include_timestamp true
-     reconnect_on_error true
-     reload_on_failure true
-     reload_connections false
-     request_timeout 120s
-   </match>
- </label>
- <label @ERROR>
-   <match **>
-     @type stdout
-   </match>
- </label>
- ```
-
- ### Suggested to install typhoeus gem, why?
-
- fluent-plugin-elasticsearch doesn't depend on the typhoeus gem by default.
- If you want to use the typhoeus backend, you must install the typhoeus gem on your own.
-
- If you use vanilla Fluentd, you can install it by:
-
- ```
- gem install typhoeus
- ```
-
- But if you use td-agent instead of vanilla Fluentd, you have to use `td-agent-gem`:
-
- ```
- td-agent-gem install typhoeus
- ```
-
- For more detail, please refer to [the official plugin management document](https://docs.fluentd.org/v1.0/articles/plugin-management).
-
- ### Stopped to send events on k8s, why?
-
- fluent-plugin-elasticsearch reloads its connection after 10000 requests. (This does not correspond to event counts, because the ES plugin uses the bulk API.)
-
- This functionality, which originates from the elasticsearch-ruby gem, is enabled by default.
-
- Sometimes this reloading functionality prevents users from sending events with the ES plugin.
-
- On the k8s platform, users sometimes need to specify the following settings:
-
- ```aconf
- reload_connections false
- reconnect_on_error true
- reload_on_failure true
- ```
-
- If you use [fluentd-kubernetes-daemonset](https://github.com/fluent/fluentd-kubernetes-daemonset), you can specify them with environment variables:
-
- * `FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS` as `false`
- * `FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR` as `true`
- * `FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE` as `true`
-
- This issue was reported at [#525](https://github.com/uken/fluent-plugin-elasticsearch/issues/525).
-
- ### Random 400 - Rejected by Elasticsearch is occured, why?
-
- Index templates installed in Elasticsearch sometimes generate 400 - Rejected by Elasticsearch errors.
- For example, the kubernetes audit log has this structure:
-
- ```json
- "responseObject":{
-   "kind":"SubjectAccessReview",
-   "apiVersion":"authorization.k8s.io/v1beta1",
-   "metadata":{
-     "creationTimestamp":null
-   },
-   "spec":{
-     "nonResourceAttributes":{
-       "path":"/",
-       "verb":"get"
-     },
-     "user":"system:anonymous",
-     "group":[
-       "system:unauthenticated"
-     ]
-   },
-   "status":{
-     "allowed":true,
-     "reason":"RBAC: allowed by ClusterRoleBinding \"cluster-system-anonymous\" of ClusterRole \"cluster-admin\" to User \"system:anonymous\""
-   }
- },
- ```
-
- The last element `status` sometimes becomes `"status":"Success"`.
- This element type glitch causes a status 400 error.
-
- There are some solutions for fixing this:
-
- #### Solution 1
-
- For a key which causes the element type glitch case.
-
- Use dynamic mapping with the following template:
-
- ```json
- {
-   "template": "YOURINDEXNAME-*",
-   "mappings": {
-     "fluentd": {
-       "dynamic_templates": [
-         {
-           "default_no_index": {
-             "path_match": "^.*$",
-             "path_unmatch": "^(@timestamp|auditID|level|stage|requestURI|sourceIPs|metadata|objectRef|user|verb)(\\..+)?$",
-             "match_pattern": "regex",
-             "mapping": {
-               "index": false,
-               "enabled": false
-             }
-           }
-         }
-       ]
-     }
-   }
- }
- ```
-
- Note that `YOURINDEXNAME` should be replaced with the index prefix you are using.
-
- #### Solution 2
-
- For the unstable `responseObject` and `requestObject` key existence case.
-
- ```aconf
- <filter YOURROUTETAG>
-   @id kube_api_audit_normalize
-   @type record_transformer
-   auto_typecast false
-   enable_ruby true
-   <record>
-     host "#{ENV['K8S_NODE_NAME']}"
-     responseObject ${record["responseObject"].nil? ? "none": record["responseObject"].to_json}
-     requestObject ${record["requestObject"].nil? ? "none": record["requestObject"].to_json}
-     origin kubernetes-api-audit
-   </record>
- </filter>
- ```
-
- Normalizing the `responseObject` and `requestObject` keys with record_transformer or other similar plugins is needed.
-
- ### Fluentd seems to hang if it unable to connect Elasticsearch, why?
-
- In the `#configure` phase, the ES plugin waits until communication with the ES instance succeeds.
- And the ES plugin blocks the launch of Fluentd by default.
- This is because Fluentd requires the configuration to be set up correctly in the `#configure` phase.
-
- After the `#configure` phase, it runs very fast and sends events heavily in some heavy-usage cases.
-
- In this scenario, we need to have the configuration set up correctly by the `#configure` phase.
- So, the default parameters we provide are too conservative for advanced users.
-
- To remove this overly pessimistic behavior, you can use the following configuration:
+ ## Configuration - Elasticsearch Output Data Stream
 
- ```aconf
- <match **>
-   @type elasticsearch
-   # Some advanced users know the ES version they are using.
-   # We can disable startup ES version checking.
-   verify_es_version_at_startup false
-   # If you know that the ES major version you are using is 7, you can set it as 7 here.
-   default_elasticsearch_version 7
-   # If using a very stable ES cluster, you can reduce retry operation counts. (minimum is 1)
-   max_retry_get_es_version 1
-   # If using a very stable ES cluster, you can reduce retry operation counts. (minimum is 1)
-   max_retry_putting_template 1
-   # ... and some ES plugin configuration
- </match>
- ```
-
- ### Enable Index Lifecycle Management
-
- Index lifecycle management is a template-based index management feature.
-
- The main ILM feature parameters are:
-
- * `index_name` (when logstash_format as false)
- * `logstash_prefix` (when logstash_format as true)
- * `enable_ilm`
- * `ilm_policy_id`
- * `ilm_policy`
-
- * Advanced usage parameters
-   * `application_name`
-   * `index_separator`
-
- Not all of them are mandatory parameters, but they are used for the ILM feature in effect.
-
- The ILM target index alias is created with `index_name` or an index which is calculated from `logstash_prefix`.
-
- From Elasticsearch plugin v4.0.0, the ILM target index will be calculated from `index_name` (normal mode) or `logstash_prefix` (when used with `logstash_format` as true).
-
- **NOTE:** Before Elasticsearch plugin v4.1.0, using the `deflector_alias` parameter when ILM is enabled was permitted and handled, but in later releases, such as 4.1.1 or later, it cannot be used when ILM is enabled.
-
- Also, ILM feature users should specify their Elasticsearch template for ILM-enabled indices,
- because ILM settings are injected into their Elasticsearch templates.
-
- `application_name` and `index_separator` also affect alias index names.
-
- But these parameters are prepared for advanced usage.
-
- They usually should be used with their default values (`application_name` defaults to `default`).
-
- Then, ILM parameters are used in the alias index like so:
-
- ##### Simple `index_name` case:
-
- `<index_name><index_separator><application_name>-000001`.
-
- ##### `logstash_format` as `true` case:
-
- `<logstash_prefix><logstash_prefix_separator><application_name><logstash_prefix_separator><logstash_dateformat>-000001`.
-
- #### Example ILM settings
-
- ```aconf
- index_name fluentd-${tag}
- application_name ${tag}
- index_date_pattern "now/d"
- enable_ilm true
- # Policy configurations
- ilm_policy_id fluentd-policy
- # ilm_policy {} # Use default policy
- template_name your-fluentd-template
- template_file /path/to/fluentd-template.json
- # customize_template {"<<index_prefix>>": "fluentd"}
- ```
-
- Note: This plugin only creates rollover-enabled indices, the aliases pointing to them, and index templates, and creates an ILM policy if enabled.
-
- #### Create ILM indices in each day
-
- If you want to create a new index each day, you should use a `logstash_format` style configuration:
-
- ```aconf
- logstash_prefix fluentd
- application_name default
- index_date_pattern "now/d"
- enable_ilm true
- # Policy configurations
- ilm_policy_id fluentd-policy
- # ilm_policy {} # Use default policy
- template_name your-fluentd-template
- template_file /path/to/fluentd-template.json
- ```
-
- Note that if you create a new set of indexes every day, the Elasticsearch ILM policy system will treat each day separately and will always
- maintain a separate active write index for each day.
-
- If you have a rollover based on max_age, it will continue to roll the indexes for prior dates even if no new documents are indexed. If you want
- to delete indexes after a period of time, the ILM policy will never delete the current write index regardless of its age, so you would need a separate
- system, such as curator, to actually delete the old indexes.
-
- For this reason, if you put the date into the index names with ILM, you should only rollover based on size or number of documents and may need to use
- curator to actually delete old indexes.
-
- #### Fixed ILM indices
-
- Also, users can use a fixed ILM indices configuration.
- If `index_date_pattern` is set to `""` (empty string), the Elasticsearch plugin won't attach a date pattern to the ILM indices:
-
- ```aconf
- index_name fluentd
- application_name default
- index_date_pattern ""
- enable_ilm true
- # Policy configurations
- ilm_policy_id fluentd-policy
- # ilm_policy {} # Use default policy
- template_name your-fluentd-template
- template_file /path/to/fluentd-template.json
- ```
-
- ### How to specify index codec
-
- Elasticsearch can handle compression methods for stored data such as LZ4 and best_compression.
- fluent-plugin-elasticsearch doesn't provide an API to specify the compression method.
-
- Users can specify the stored data compression method with a template.
-
- Create `compression.json` as follows:
-
- ```json
- {
-   "order": 100,
-   "index_patterns": [
-     "YOUR-INDEX-PATTERN"
-   ],
-   "settings": {
-     "index": {
-       "codec": "best_compression"
-     }
-   }
- }
- ```
-
- Then, specify the above template in your configuration:
+ Data Streams were introduced in Elasticsearch 7.9.
 
- ```aconf
- template_name best_compression_tmpl
- template_file compression.json
- ```
+ You can enable this feature by specifying `@type elasticsearch_data_stream`.
 
- Elasticsearch will store data with `best_compression`:
-
- ```
- % curl -XGET 'http://localhost:9200/logstash-2019.12.06/_settings?pretty'
  ```
-
- ```json
- {
-   "logstash-2019.12.06" : {
-     "settings" : {
-       "index" : {
-         "codec" : "best_compression",
-         "number_of_shards" : "1",
-         "provided_name" : "logstash-2019.12.06",
-         "creation_date" : "1575622843800",
-         "number_of_replicas" : "1",
-         "uuid" : "THE_AWESOMEUUID",
-         "version" : {
-           "created" : "7040100"
-         }
-       }
-     }
-   }
- }
+ @type elasticsearch_data_stream
+ data_stream_name test
  ```
 
- ### Cannot push logs to Elasticsearch with connect_write timeout reached, why?
+ When `@type elasticsearch_data_stream` is used, the ILM default policy is applied to the specified data stream.
+ The matching index template is also created automatically.
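+
+ A minimal sketch of a complete match section (the tag and data stream name below are illustrative, not defaults):
+
+ ```
+ <match my.logs>
+   @type elasticsearch_data_stream
+   host localhost
+   port 9200
+   data_stream_name fluentd-logs
+ </match>
+ ```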
 
- It seems that the Elasticsearch cluster is exhausted.
+ ### data_stream_name
 
- Usually, Fluentd complains with logs like the following:
+ You can specify the Elasticsearch data stream name with this parameter.
+ This parameter is mandatory for `elasticsearch_data_stream`.
 
- ```log
- 2019-12-29 00:23:33 +0000 [warn]: buffer flush took longer time than slow_flush_log_threshold: elapsed_time=27.283766102716327 slow_flush_log_threshold=15.0 plugin_id="object:aaaffaaaaaff"
- 2019-12-29 00:23:33 +0000 [warn]: buffer flush took longer time than slow_flush_log_threshold: elapsed_time=26.161768959928304 slow_flush_log_threshold=15.0 plugin_id="object:aaaffaaaaaff"
- 2019-12-29 00:23:33 +0000 [warn]: buffer flush took longer time than slow_flush_log_threshold: elapsed_time=28.713624476008117 slow_flush_log_threshold=15.0 plugin_id="object:aaaffaaaaaff"
- 2019-12-29 01:39:18 +0000 [warn]: Could not push logs to Elasticsearch, resetting connection and trying again. connect_write timeout reached
- 2019-12-29 01:39:18 +0000 [warn]: Could not push logs to Elasticsearch, resetting connection and trying again. connect_write timeout reached
- ```
+ There are some limitations on the naming rule.
 
- These warnings are usually caused by an exhausted Elasticsearch cluster due to resource shortage.
+ For more detail, please refer to the [Path parameters](https://www.elastic.co/guide/en/elasticsearch/reference/master/indices-create-data-stream.html#indices-create-data-stream-api-path-params) documentation.
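+
+ As a hedged illustration (see the linked documentation for the authoritative rules), data stream names must be lowercase and must not contain characters such as `\`, `/`, `*`, `?`, `"`, `<`, `>`, `|`, `,`, `#`, `:`, or spaces:
+
+ ```
+ # OK (illustrative): lowercase with hyphens
+ data_stream_name fluentd-app-logs
+ # NG: uppercase letters are not allowed
+ # data_stream_name Fluentd-App-Logs
+ ```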
 
- If CPU usage spikes and the Elasticsearch cluster is eating up CPU resources, this issue is caused by a CPU resource shortage.
+ ## Troubleshooting
 
- Check your Elasticsearch cluster health status and resource usage.
+ See the [Troubleshooting document](README.Troubleshooting.md).
 
  ## Contact