fluent-plugin-elasticsearch 3.7.1 → 3.8.0
- checksums.yaml +4 -4
- data/History.md +5 -0
- data/README.md +78 -0
- data/fluent-plugin-elasticsearch.gemspec +1 -1
- data/lib/fluent/plugin/out_elasticsearch.rb +13 -1
- data/test/plugin/test_out_elasticsearch.rb +76 -0
- metadata +2 -2
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 28cfb80e32dd25c9d67c582d4d39672515bbc9e0acb6a7b940ee1e36edd35de1
+  data.tar.gz: 6c2440335c4a1ebc03939dee6204cbd9fc6265797c9658d728df70fc63e9ba9d
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 7f67a2f892890ff80e34f6cd8aa14c5a61fc04095512f095d05554772eb384ddf54ec222006cf2b021ed08355d0af8575f0020671cd74de422c8b4c1e14989a0
+  data.tar.gz: 0d1919231ba8c8a88391a54d1853d992ba531f39c33826ff12ffd7258f9b3087ae5e18e1497edb0b6dac40998641a9ed79e28449cb90e2c9339a7a4e34406e4b
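The digests above cover the two members of the gem archive, not the `.gem` file itself. A `.gem` file is a plain tar archive containing `metadata.gz` and `data.tar.gz`, so the SHA256 values can be reproduced with standard tools (a sketch, not part of the diff; `sha256sum` assumes GNU coreutils):

```
% gem fetch fluent-plugin-elasticsearch --version 3.8.0
% tar -xf fluent-plugin-elasticsearch-3.8.0.gem metadata.gz data.tar.gz
% sha256sum metadata.gz data.tar.gz
```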
data/History.md
CHANGED
@@ -1,6 +1,11 @@
 ## Changelog [[tags]](https://github.com/uken/fluent-plugin-elasticsearch/tags)
 
 ### [Unreleased]
+### 3.8.0
+- Add FAQ for specifying index.codec (#679)
+- Add FAQ for connect_write timeout reached error (#687)
+- Unblocking buffer overflow with block action (#688)
+
 ### 3.7.1
 - Make compatible with Fluentd v1.8 (#677)
 - Handle flatten_hashes in elasticsearch_dynamic (#675)
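The notable entry here is #688: when records rejected by Elasticsearch are re-emitted for retry but the output's buffer is already full, the plugin now fails fast instead of wedging the buffer (see the out_elasticsearch.rb diff below). A hypothetical configuration this change affects — the tag, host, and size values are illustrative, while `retry_tag` and the `<buffer>` section are real plugin/Fluentd parameters:

```aconf
<match app.**>
  @type elasticsearch
  host localhost
  port 9200
  retry_tag es_retry        # rejected records are re-emitted under this tag
  <buffer>
    @type memory
    total_limit_size 512MB  # once staged + queued data hits this limit,
                            # re-emitting the retry stream raises an error
  </buffer>
</match>
```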
data/README.md
CHANGED
@@ -103,6 +103,8 @@ Current maintainers: @cosmo0920
 + [Random 400 - Rejected by Elasticsearch is occured, why?](#random-400---rejected-by-elasticsearch-is-occured-why)
 + [Fluentd seems to hang if it unable to connect Elasticsearch, why?](#fluentd-seems-to-hang-if-it-unable-to-connect-elasticsearch-why)
 + [Enable Index Lifecycle Management](#enable-index-lifecycle-management)
++ [How to specify index codec](#how-to-specify-index-codec)
++ [Cannot push logs to Elasticsearch with connect_write timeout reached, why?](#cannot-push-logs-to-elasticsearch-with-connect_write-timeout-reached-why)
 * [Contact](#contact)
 * [Contributing](#contributing)
 * [Running tests](#running-tests)
@@ -1621,6 +1623,82 @@ customize_template {"<<index_prefix>>": "fluentd"}
 
 Note: This plugin only creates rollover-enabled indices, which are aliases pointing to them and index templates, and creates an ILM policy if enabled.
 
+### How to specify index codec
+
+Elasticsearch can compress stored data with codecs such as LZ4 (the default) and best_compression.
+fluent-plugin-elasticsearch doesn't provide a dedicated parameter for choosing the codec.
+
+Instead, specify the codec through an index template.
+
+Create `compression.json` as follows:
+
+```json
+{
+  "order": 100,
+  "index_patterns": [
+    "YOUR-INDEX-PATTERN"
+  ],
+  "settings": {
+    "index": {
+      "codec": "best_compression"
+    }
+  }
+}
+```
+
+Then, specify the above template in your configuration:
+
+```aconf
+template_name best_compression_tmpl
+template_file compression.json
+```
+
+Elasticsearch will store data with `best_compression`:
+
+```
+% curl -XGET 'http://localhost:9200/logstash-2019.12.06/_settings?pretty'
+```
+
+```json
+{
+  "logstash-2019.12.06" : {
+    "settings" : {
+      "index" : {
+        "codec" : "best_compression",
+        "number_of_shards" : "1",
+        "provided_name" : "logstash-2019.12.06",
+        "creation_date" : "1575622843800",
+        "number_of_replicas" : "1",
+        "uuid" : "THE_AWESOMEUUID",
+        "version" : {
+          "created" : "7040100"
+        }
+      }
+    }
+  }
+}
+```
+
+### Cannot push logs to Elasticsearch with connect_write timeout reached, why?
+
+This usually means that the Elasticsearch cluster is exhausted.
+
+In this case, Fluentd emits warnings like the following:
+
+```log
+2019-12-29 00:23:33 +0000 [warn]: buffer flush took longer time than slow_flush_log_threshold: elapsed_time=27.283766102716327 slow_flush_log_threshold=15.0 plugin_id="object:aaaffaaaaaff"
+2019-12-29 00:23:33 +0000 [warn]: buffer flush took longer time than slow_flush_log_threshold: elapsed_time=26.161768959928304 slow_flush_log_threshold=15.0 plugin_id="object:aaaffaaaaaff"
+2019-12-29 00:23:33 +0000 [warn]: buffer flush took longer time than slow_flush_log_threshold: elapsed_time=28.713624476008117 slow_flush_log_threshold=15.0 plugin_id="object:aaaffaaaaaff"
+2019-12-29 01:39:18 +0000 [warn]: Could not push logs to Elasticsearch, resetting connection and trying again. connect_write timeout reached
+2019-12-29 01:39:18 +0000 [warn]: Could not push logs to Elasticsearch, resetting connection and trying again. connect_write timeout reached
+```
+
+These warnings are usually caused by an Elasticsearch cluster that has run short of resources.
+
+If CPU usage spikes and the Elasticsearch cluster saturates its CPUs, the cause is a CPU shortage.
+
+Check your Elasticsearch cluster health status and resource usage (see the health-check example below).
+
 ## Contact
 
 If you have a question, [open an Issue](https://github.com/uken/fluent-plugin-elasticsearch/issues).
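For the connect_write FAQ above: cluster health can be checked with Elasticsearch's standard `_cluster/health` API, in the same style as the README's other curl examples:

```
% curl -XGET 'http://localhost:9200/_cluster/health?pretty'
```

A `status` of `yellow` or `red` in the response points at the cluster-side trouble described above.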
data/fluent-plugin-elasticsearch.gemspec
CHANGED
@@ -3,7 +3,7 @@ $:.push File.expand_path('../lib', __FILE__)
 
 Gem::Specification.new do |s|
   s.name = 'fluent-plugin-elasticsearch'
-  s.version = '3.7.1'
+  s.version = '3.8.0'
   s.authors = ['diogo', 'pitr', 'Hiroshi Hatake']
   s.email = ['pitr.vern@gmail.com', 'me@diogoterror.com', 'cosmo0920.wp@gmail.com']
   s.description = %q{Elasticsearch output plugin for Fluent event collector}
data/lib/fluent/plugin/out_elasticsearch.rb
CHANGED
@@ -33,6 +33,7 @@ module Fluent::Plugin
   class ElasticsearchOutput < Output
     class RecoverableRequestFailure < StandardError; end
     class UnrecoverableRequestFailure < Fluent::UnrecoverableError; end
+    class RetryStreamEmitFailure < StandardError; end
 
     # MissingIdFieldError is raised for records that do not
     # include the field for the unique record identifier
@@ -862,8 +863,15 @@ EOC
           error.handle_error(response, tag, chunk, bulk_message_count, extracted_values)
         end
       rescue RetryStreamError => e
+        log.trace "router.emit_stream for retry stream doing..."
         emit_tag = @retry_tag ? @retry_tag : tag
-        router.emit_stream(emit_tag, e.retry_stream)
+        # check capacity of buffer space
+        if retry_stream_retryable?
+          router.emit_stream(emit_tag, e.retry_stream)
+        else
+          raise RetryStreamEmitFailure, "buffer is full."
+        end
+        log.trace "router.emit_stream for retry stream done."
       rescue => e
         ignore = @ignore_exception_classes.any? { |clazz| e.class <= clazz }
 
@@ -879,6 +887,10 @@ EOC
       end
     end
 
+    def retry_stream_retryable?
+      @buffer.storable?
+    end
+
     def is_existing_connection(host)
       # check if the host provided match the current connection
       return false if @_es.nil?
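The new guard relies on Fluentd's `Fluent::Plugin::Buffer#storable?`, which reports whether staged plus queued data still fits under the configured `total_limit_size`. A minimal, self-contained Ruby sketch of the retry path under that assumption — `FakeBuffer` and `emit_retry_stream` are illustrative stand-ins, not plugin code:

```ruby
class RetryStreamEmitFailure < StandardError; end

# Stand-in for Fluentd's buffer: storable? is true while staged plus
# queued bytes remain below the configured total limit.
FakeBuffer = Struct.new(:total_limit_size, :stage_size, :queue_size) do
  def storable?
    total_limit_size > stage_size + queue_size
  end
end

# Mirrors the patched rescue clause: re-emit the retry stream only if
# the buffer can still accept data; otherwise fail fast.
def emit_retry_stream(buffer, retry_stream)
  raise RetryStreamEmitFailure, "buffer is full." unless buffer.storable?
  retry_stream # router.emit_stream(emit_tag, retry_stream) would run here
end

full_buffer = FakeBuffer.new(1024, 900, 200) # staged + queued exceeds the limit
begin
  emit_retry_stream(full_buffer, [])
rescue RetryStreamEmitFailure => e
  puts "retry stream rejected: #{e.message}"
end
```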
data/test/plugin/test_out_elasticsearch.rb
CHANGED
@@ -3175,6 +3175,82 @@ class ElasticsearchOutput < Test::Unit::TestCase
     assert_equal [['retry', 1, sample_record]], driver.events
   end
 
+  class FulfilledBufferRetryStreamTest < self
+    def test_bulk_error_retags_with_error_when_configured_and_fullfilled_buffer
+      def create_driver(conf='', es_version=5, client_version="\"5.0\"")
+        @client_version ||= client_version
+        Fluent::Plugin::ElasticsearchOutput.module_eval(<<-CODE)
+          def retry_stream_retryable?
+            false
+          end
+        CODE
+        # For request stub to detect compatibility.
+        @es_version ||= es_version
+        @client_version ||= client_version
+        if @es_version
+          Fluent::Plugin::ElasticsearchOutput.module_eval(<<-CODE)
+            def detect_es_major_version
+              #{@es_version}
+            end
+          CODE
+        end
+        Fluent::Plugin::ElasticsearchOutput.module_eval(<<-CODE)
+          def client_library_version
+            #{@client_version}
+          end
+        CODE
+        Fluent::Test::Driver::Output.new(Fluent::Plugin::ElasticsearchOutput).configure(conf)
+      end
+      driver = create_driver("retry_tag retry\n")
+      stub_request(:post, 'http://localhost:9200/_bulk')
+        .to_return(lambda do |req|
+        { :status => 200,
+          :headers => { 'Content-Type' => 'json' },
+          :body => %({
+            "took" : 1,
+            "errors" : true,
+            "items" : [
+              {
+                "create" : {
+                  "_index" : "foo",
+                  "_type" : "bar",
+                  "_id" : "abc1",
+                  "status" : 403,
+                  "error" : {
+                    "type" : "cluster_block_exception",
+                    "reason":"index [foo] blocked by: [FORBIDDEN/8/index write (api)]"
+                  }
+                }
+              },
+              {
+                "create" : {
+                  "_index" : "foo",
+                  "_type" : "bar",
+                  "_id" : "abc2",
+                  "status" : 403,
+                  "error" : {
+                    "type" : "cluster_block_exception",
+                    "reason":"index [foo] blocked by: [FORBIDDEN/8/index write (api)]"
+                  }
+                }
+              }
+            ]
+          })
+        }
+      end)
+
+      # Check buffer fulfillment condition
+      assert_raise(Fluent::Plugin::ElasticsearchOutput::RetryStreamEmitFailure) do
+        driver.run(default_tag: 'test') do
+          driver.feed(1, sample_record)
+          driver.feed(1, sample_record)
+        end
+      end
+
+      assert_equal [], driver.events
+    end
+  end
+
   def test_create_should_write_records_with_ids_and_skip_those_without
     driver.configure("write_operation create\nid_key my_id\n@log_level debug")
     stub_request(:post, 'http://localhost:9200/_bulk')
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: fluent-plugin-elasticsearch
 version: !ruby/object:Gem::Version
-  version: 3.7.1
+  version: 3.8.0
 platform: ruby
 authors:
 - diogo
@@ -10,7 +10,7 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2019-12-
+date: 2019-12-19 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: fluentd