fluent-plugin-kinesis 1.1.3 → 1.2.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz: 8d2e990f8886423c95645780d0e44dc8fa9db2d6
-  data.tar.gz: f37b1fc727a468940b901698364812f55eeacbb3
+  metadata.gz: d8b59ae71478266b7d619c76b71811310f2cf90a
+  data.tar.gz: 4e3049d7c6ec83dc6da5858fee3ddfa8b3e76660
 SHA512:
-  metadata.gz: ef98238e2b6b7a580f9ac67e439efe758bf40d71b23e9efa2d63165c10a3a6015f9ec20cb07294b33396aa393872616a896fc82d6bf4d54d9541ca1c58646506
-  data.tar.gz: 7c833e6fe327b1c320769ce00f3cd30c58adaa1bf7a7ca463f220e02840d06bb0d0a00ffe7b8bcf7d8a2beae26acaa0cad973b6fde21e2d13f6b0845ae446398
+  metadata.gz: 1bf9e15306dbf5f88f5dcdd731d06395821c7a29f1f0d0597075db19fc638f6b2439003ef08326bac4bc9f1bbd0a124f5ced2fef5b83b0c52b81e4f6fdcbb35a
+  data.tar.gz: 1ac91c53effd50cd9ebb947f7be53ec8162f4d736e836c6076149ed4da80ed39ada244ece88fd23923e9daedb1026e35d74f50e564ced387b338d7b0de8223ed
data/CHANGELOG.md CHANGED
@@ -1,5 +1,9 @@
 # CHANGELOG
 
+## 1.2.0
+- Feature - Add reduce_max_size_error_message configuration [#127](https://github.com/awslabs/aws-fluent-plugin-kinesis/pull/127)
+- Bug fix - Fix empty batch error [#125](https://github.com/awslabs/aws-fluent-plugin-kinesis/pull/125)
+
 ## 1.1.3
 
 - Bug fix - Fix issues with fluentd 0.14.12 [#99](https://github.com/awslabs/aws-fluent-plugin-kinesis/issues/99)
data/README.md CHANGED
@@ -122,7 +122,7 @@ The credential provider will be choosen by the steps below:
 
 - Use [**shared_credentials**](#shared_credentials) section if you set it
 - Use [**assume_role_credentials**](#assume_role_credentials) section if you set it
-- Otherwise, default provicder chain:
+- Otherwise, default provider chain:
   - [**aws_key_id**](#aws_key_id) and [**aws_sec_key**](#aws_sec_key)
   - Environment variables (ex. `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, etc.)
   - Default shared credentials (`default` in `~/.aws/credentials`)
@@ -135,7 +135,11 @@ AWS access key id.
 AWS secret access key.
 
 ### shared_credentials
-Use this config section to specify shared credential file path and profile name. If you want to use default profile (`default` in `~/.aws/credentials`), you don't have to specify here.
+Use this config section to specify shared credential file path and profile name. If you want to use default profile (`default` in `~/.aws/credentials`), you don't have to specify here. For example, you can specify the config like below:
+
+    <shared_credentials>
+      profile_name "your_profile_name"
+    </shared_credentials>
 
 #### profile_name
 Profile name of the credential file.
@@ -144,7 +148,11 @@ Profile name of the credential file.
 Path for the credential file.
 
 ### assume_role_credentials
-Use this config section for cross account access.
+Use this config section for cross account access. For example, you can specify the config like below:
+
+    <assume_role_credentials>
+      role_arn "your_role_arn_in_cross_account_to_assume"
+    </assume_role_credentials>
 
 #### role_arn
 IAM Role to be assumed with [AssumeRole][assume_role].
@@ -170,6 +178,9 @@ If your record contains a field whose string should be sent to Amazon Kinesis di
 ### log_truncate_max_size
 Integer, default 0. When emitting the log entry, the message will be truncated by this size to avoid infinite loop when the log is also sent to Kinesis. The value 0 (default) means no truncation.
 
+### reduce_max_size_error_message
+Boolean, default false. When each record exceeds the maximum size, the original record is put on its error message. If this parameter is turned on, the error message will contain only summarized data to prevent large traffic generated by the error message.
+
 ## Configuration: kinesis_streams
 Here are `kinesis_streams` specific configurations.
 
@@ -400,7 +411,7 @@ Minimum: 1
 Maximum (inclusive): 9223372036854775807
 
 #### record_max_buffered_time
-Maximum amount of itme (milliseconds) a record may spend being buffered before it gets sent. Records may be sent sooner than this depending on the other buffering limits.
+Maximum amount of time (milliseconds) a record may spend being buffered before it gets sent. Records may be sent sooner than this depending on the other buffering limits.
 
 This setting provides coarse ordering among records - any two records will be reordered by no more than twice this amount (assuming no failures and retries and equal network latency).
 
@@ -92,6 +92,7 @@ module Fluent
   config_param :formatter, :string, default: 'json'
   config_param :data_key, :string, default: nil
   config_param :log_truncate_max_size, :integer, default: 0
+  config_param :reduce_max_size_error_message, :bool, default: false
 end
 
 def config_param_for_batch_request
@@ -24,8 +24,13 @@ module Fluent
   end
 
   class ExceedMaxRecordSizeError < SkipRecordError
-    def initialize(record)
-      super "Record size limit exceeded for #{record}"
+    def initialize(record, reduce_message)
+      if reduce_message == true
+        sampled_record = "#{record.slice(0,1024)}...#{record.slice(-1024,1024)}"
+        super "Record size limit exceeded for #{record.length}-bytes record: #{sampled_record}"
+      else
+        super "Record size limit exceeded for #{record}"
+      end
     end
   end
 
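The sampling in the hunk above keeps only the first and last 1024 characters of an oversized record. A standalone sketch of that same idea (the method name is illustrative, and like the plugin's slice calls it assumes the record is longer than the two samples combined):

```ruby
# Keep only the head and tail of an oversized record, mirroring the
# sampling done in ExceedMaxRecordSizeError above.
def sample_record(record, sample_size = 1024)
  "#{record.slice(0, sample_size)}...#{record.slice(-sample_size, sample_size)}"
end

big = ("a" * 2000) + ("b" * 2000)
sampled = sample_record(big)
puts sampled.length  # => 2051 (1024 + "..." + 1024)
```

This bounds the error-message size at roughly 2 KB regardless of how large the rejected record was, which is the traffic reduction the new option is for.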
@@ -64,7 +64,7 @@ module Fluent
   converted = convert_format(tag, time, record)
   converted[:data] += "\n" if @append_new_line
   if converted[:data].size > MaxRecordSize
-    raise ExceedMaxRecordSizeError, converted[:data]
+    raise ExceedMaxRecordSizeError.new(converted[:data], @reduce_max_size_error_message)
   else
     converted
   end
@@ -23,6 +23,7 @@ module Fluent
 def write(chunk)
   records = convert_to_records(chunk)
   split_to_batches(records).each do |batch|
+    next unless batch.size > 0
     batch_request_with_retry(batch)
   end
   log.debug("Written #{records.size} records")
@@ -13,5 +13,5 @@
 # permissions and limitations under the License.
 
 module FluentPluginKinesis
-  VERSION = '1.1.3'
+  VERSION = '1.2.0'
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: fluent-plugin-kinesis
 version: !ruby/object:Gem::Version
-  version: 1.1.3
+  version: 1.2.0
 platform: ruby
 authors:
 - Amazon Web Services
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2017-02-02 00:00:00.000000000 Z
+date: 2017-09-21 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: fluentd
@@ -325,7 +325,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
     version: '0'
 requirements: []
 rubyforge_project:
-rubygems_version: 2.5.2
+rubygems_version: 2.6.13
 signing_key:
 specification_version: 4
 summary: Fluentd output plugin that sends events to Amazon Kinesis.