fluent-plugin-kinesis-aggregation 0.2.1 → 0.3.2

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
- SHA1:
-   metadata.gz: 2ca1b05d0268c5f19045de4e90129f6018cbeca5
-   data.tar.gz: 434e9f311ebcb99684d1c11620f6ec0e5dce9a55
+ SHA256:
+   metadata.gz: eca9c6a9bf3957d2782027f32b7121cb11f777434fa87229dfb0e0aa21f611e4
+   data.tar.gz: 6c482a9463582a0397006a05e4db554b7934f86f2450222847724129c7883432
  SHA512:
-   metadata.gz: 03035ab01ba22533165a108983b940ba58607b874b9e446ecfd012fe70e6128ab651f371d7d6f0fdd6b60befbe360d727a0c9e121976201bfd5adda619ff5228
-   data.tar.gz: 05f031ff1367b86dfce49f178784a02f8f89230a666fc9060752373a0864383dae6ce39db27e0c58e1c47f335890e3c143bde5259e5f0b7d0233c0a9884a3ab6
+   metadata.gz: 612a0af58db3f07e799d925f29148e0816e60c1eb9cd07b0c6a8a8c786506754ac4c737190c4a2fd00e4deb9a574bd79f0b25edba4583970b3b412da4fefb568
+   data.tar.gz: '04698ac254d9930dd2b05c9bb96c3253d5a33d7f65602e30abcae8fab0c58e69d68557954691fd48c35d647812628a5f4bd06edae80ea15d982734aa5c17c433'
.travis.yml CHANGED
@@ -1,8 +1,8 @@
  language: ruby
 
  rvm:
- - 2.2
- - 2.1
+ - 2.4
+ - 2.3
 
  os:
  - linux
CHANGELOG.md CHANGED
@@ -1,9 +1,30 @@
  # CHANGELOG
 
- ## Next
+ ## 0.3.2
+
+ - Modify aws-sdk usage to require just the API/SDK resources for Kinesis
+ - Drop support and testing for deprecated Ruby versions (<2.3)
+
+ ## 0.3.1
+
+ - Change aws-sdk usage to work with both v2 and v3
+   (in particular, makes it possible to use latest td-agent which includes the s3 plugin
+   and pulls in aws-sdk v3)
+
+ ## 0.3.0
+
+ - Update to use fluentd 0.14 API (stick to 0.2.3 if you need support for earlier versions of fluentd)
+   Much thanks to cosmo0920 for doing this.
+
+ ## 0.2.3
+
+ - emit stream name in error
+
+ ## 0.2.1 - 0.2.2
 
  - update documentation to refer to published gem
  - turn on testing for Ruby 2.1
+ - allow running on Ruby 2.1
 
  ## 0.2.0
 
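The 0.3.0 entry above implies that users still on fluentd 0.10/0.12 should stay on the 0.2.x line. A Gemfile pin for that case might look like the following (the version constraint comes from the CHANGELOG note; everything else is illustrative):

```ruby
# Gemfile: stay on the 0.2.x line if you still run fluentd 0.10/0.12,
# per the CHANGELOG note for 0.3.0.
source 'https://rubygems.org'

gem 'fluent-plugin-kinesis-aggregation', '~> 0.2.3'
```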
data/README.md CHANGED
@@ -8,14 +8,26 @@ This is a rewrite of [aws-fluent-plugin-kinesis](https://github.com/awslabs/aws-
  a different shipment method using the
  [KPL aggregation format](https://github.com/awslabs/amazon-kinesis-producer/blob/master/aggregation-format.md).
 
+ *Since this plugin was forked, aws-fluent-plugin-kinesis has undergone considerable development (and improvement).
+ Most notably, the upcoming 2.0 release supports KPL aggregated records using google-protobuf without
+ the overhead of using the KPL:
+ https://github.com/awslabs/aws-fluent-plugin-kinesis/issues/107*
+
+ *However, it still uses msgpack for internal buffering and only uses protobuf when it ships the records,
+ whereas this plugin processes each record as it comes in and ships the result by simple concatenation
+ of the encoded records. This may not be faster, of course - could depend on the overhead of calling
+ the protobuf methods - but most probably is. The discussion below is also still mostly valid,
+ in that the awslabs plugin does not have PutRecord == chunk equivalency, but instead has its
+ own internal retry method.*
+
  The basic idea is to have one PutRecord === one chunk. This has a number of advantages:
 
  - much less complexity in plugin (less CPU/memory)
  - by aggregating, we increase the throughput and decrease the cost
  - since a single chunk either succeeds or fails,
    we get to use fluentd's more complex/complete retry mechanism
-   (which is also exposed in the monitor). The existing retry mechanism
-   has [unfortunate issues under heavy load](https://github.com/awslabs/aws-fluent-plugin-kinesis/issues/42)
+   (which is also exposed by the monitor plugin; we view this in datadog). The existing retry mechanism
+   had [unfortunate issues under heavy load](https://github.com/awslabs/aws-fluent-plugin-kinesis/issues/42)
  - we get ordering within a chunk without having to rely on sequence
    numbers (though not overall ordering)
 
@@ -23,7 +35,7 @@ However, there are drawbacks:
 
  - if you're using this as an aggregator, you will need to tune the
    buffer size on your sources fairly low such that it is less
-   that the low buffer_chunk_limit on the aggregator
+   than the low buffer_chunk_limit on the aggregator
  - you have to use a KCL library to ingest
  - you can't use a calculated partition key (based on the record);
    essentially, you need to use a random partition key
@@ -63,7 +75,7 @@ specify the library path via RUBYLIB:
  ## Dependencies
 
  * Ruby 2.1+
- * Fluentd 0.10.43+
+ * Fluentd 0.14.15+ (if you need 0.10 or 0.12 support, use the fluentd-v0.12 branch or version 0.2.x on rubygems)
 
  ## Basic Usage
 
@@ -145,24 +157,28 @@ forces all writes to a specific shard, and if you're using
  a single thread/process will probably keep event ordering
  (not recommended - watch out for hot shards!).
 
- ### detach_process
-
- Integer. Optional. This defines the number of parallel processes to start.
- This can be used to increase throughput by allowing multiple processes to
- execute the plugin at once. Setting this option to > 0 will cause the plugin
- to run in a separate process. The default is 0.
-
  ### num_threads
 
  Integer. The number of threads to flush the buffer. This plugin is based on
- Fluentd::BufferedOutput, so we buffer incoming records before emitting them to
+ Fluentd::Plugin::Output, so we buffer incoming records before emitting them to
  Amazon Kinesis. You can find the detail about buffering mechanism [here](http://docs.fluentd.org/articles/buffer-plugin-overview).
  Emitting records to Amazon Kinesis via network causes I/O Wait, so parallelizing
  emitting with threads will improve throughput.
 
  This option can be used to parallelize writes into the output(s)
  designated by the output plugin. The default is 1.
- Also you can use this option with *detach_process*.
+ You can also use this option with *multi workers*.
+
+ ### multi workers
+
+ This feature was introduced in Fluentd v0.14.
+ Instead of using *detach_process*, you can use the following system directive
+ (the *detach_process* parameter was removed with the v0.14 Output Plugin API).
+ The default is 1.
+
+     <system>
+       workers 5
+     </system>
 
  ### debug
 
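Putting the README's options together: a minimal fluentd configuration for this output might look like the following sketch. The parameter names and values (`stream_name`, `fixed_partition_key`, `buffer_chunk_limit 100k`) are taken from this diff's own tests; the tag pattern and worker count are illustrative only:

```
<system>
  workers 2
</system>

<match kinesis.**>
  @type kinesis-aggregation
  stream_name test_stream
  fixed_partition_key test_partition_key
  buffer_chunk_limit 100k
</match>
```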
fluent-plugin-kinesis-aggregation.gemspec CHANGED
@@ -17,24 +17,25 @@ $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
 
  Gem::Specification.new do |spec|
    spec.name = "fluent-plugin-kinesis-aggregation"
-   spec.version = '0.2.1'
+   spec.version = '0.3.2'
    spec.author = 'Atlassian'
+   spec.email = 'jhaggerty@atlassian.com'
    spec.summary = %q{Fluentd output plugin that sends KPL style aggregated events to Amazon Kinesis.}
    spec.homepage = "https://github.com/atlassian/fluent-plugin-kinesis-aggregation"
-   spec.license = "Apache License, Version 2.0"
+   spec.license = "Apache-2.0"
 
    spec.files = `git ls-files`.split($/)
    spec.executables = spec.files.grep(%r{^bin/}) { |f| File.basename(f) }
    spec.test_files = spec.files.grep(%r{^(test|spec|features)/})
    spec.require_paths = ["lib"]
-   spec.required_ruby_version = '>= 2.2'
+   spec.required_ruby_version = '>= 2.3'
 
-   spec.add_development_dependency "bundler", "~> 1.3"
-   spec.add_development_dependency "rake", "~> 10.0"
-   spec.add_development_dependency "test-unit-rr", "~> 1.0"
+   spec.add_development_dependency "bundler", ">= 1.10"
+   spec.add_development_dependency "rake", ">= 10.0"
+   spec.add_development_dependency "test-unit", ">= 3.0.8"
+   spec.add_development_dependency "test-unit-rr", ">= 1.0.3"
 
-   spec.add_dependency "fluentd", ">= 0.10.53", "< 0.13"
-   spec.add_dependency "aws-sdk-core", ">= 2.0.12", "< 3.0"
-   spec.add_dependency "msgpack", ">= 0.5.8"
-   spec.add_dependency "google-protobuf", ">= 3.0.0.alpha.4.0"
+   spec.add_dependency "fluentd", [">= 0.14.22", "< 2"]
+   spec.add_dependency "aws-sdk-kinesis", "~> 1", "!= 1.4", "!= 1.5", "!= 1.14"
+   spec.add_dependency "google-protobuf", "~> 3"
  end
lib/fluent/plugin/out_kinesis-aggregation.rb CHANGED
@@ -11,13 +11,14 @@
  # ANY KIND, either express or implied. See the License for the specific
  # language governing permissions and limitations under the License.
 
- require 'aws-sdk-core'
+ require 'aws-sdk-kinesis'
  require 'yajl'
  require 'logger'
  require 'securerandom'
  require 'digest'
 
  require 'google/protobuf'
+ require 'fluent/plugin/output'
 
  Google::Protobuf::DescriptorPool.generated_pool.build do
    add_message "AggregatedRecord" do
@@ -43,12 +44,11 @@ Record = Google::Protobuf::DescriptorPool.generated_pool.lookup("Record").msgcla
 
 
  module FluentPluginKinesisAggregation
-   class OutputFilter < Fluent::BufferedOutput
+   class OutputFilter < Fluent::Plugin::Output
 
-     include Fluent::DetachMultiProcessMixin
-     include Fluent::SetTimeKeyMixin
-     include Fluent::SetTagKeyMixin
+     helpers :compat_parameters, :inject
 
+     DEFAULT_BUFFER_TYPE = "memory"
      NAME = 'kinesis-aggregation'
      PUT_RECORD_MAX_DATA_SIZE = 1024 * 1024
      # 200 is an arbitrary number more than the envelope overhead
@@ -81,26 +81,31 @@ module FluentPluginKinesisAggregation
 
      config_param :http_proxy, :string, default: nil
 
+     config_section :buffer do
+       config_set_default :@type, DEFAULT_BUFFER_TYPE
+     end
+
      def configure(conf)
+       compat_parameters_convert(conf, :buffer, :inject)
        super
 
-       if @buffer.chunk_limit > FLUENTD_MAX_BUFFER_SIZE
+       if @buffer.chunk_limit_size > FLUENTD_MAX_BUFFER_SIZE
          raise Fluent::ConfigError, "Kinesis buffer_chunk_limit is set to more than the 1mb shard limit (i.e. you won't be able to write your chunks!"
        end
 
-       if @buffer.chunk_limit > FLUENTD_MAX_BUFFER_SIZE / 3
+       if @buffer.chunk_limit_size > FLUENTD_MAX_BUFFER_SIZE / 3
          log.warn 'Kinesis buffer_chunk_limit is set at more than 1/3 of the per second shard limit (1mb). This is not good if you have many producers.'
        end
      end
 
      def start
-       detach_multi_process do
-         super
-         load_client
-       end
+       super
+       load_client
      end
 
      def format(tag, time, record)
+       record = inject_values_to_record(tag, time, record)
+
        return AggregatedRecord.encode(AggregatedRecord.new(
          records: [Record.new(
            partition_key_index: 1,
@@ -112,7 +117,7 @@ module FluentPluginKinesisAggregation
      def write(chunk)
        records = chunk.read
        if records.length > FLUENTD_MAX_BUFFER_SIZE
-         log.error "Can't emit aggregated record of length #{records.length} (more than #{FLUENTD_MAX_BUFFER_SIZE})"
+         log.error "Can't emit aggregated #{@stream_name} stream record of length #{records.length} (more than #{FLUENTD_MAX_BUFFER_SIZE})"
          return # do not throw, since we can't retry
        end
 
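For context on what `format`/`write` produce: per the KPL aggregation format linked in the README, each emitted blob is a 4-byte magic number, the protobuf-encoded AggregatedRecord, and a trailing MD5 digest of the protobuf bytes (visible in the test vectors below as the leading `\xF3\x89\x9A\xC2` and the final 16 bytes). A stdlib-only sketch of that framing, using a stand-in payload instead of real protobuf output:

```ruby
require 'digest'

# KPL aggregated-record framing: magic | protobuf payload | MD5(payload).
KPL_MAGIC = "\xF3\x89\x9A\xC2".b

def kpl_frame(payload)
  KPL_MAGIC + payload + Digest::MD5.digest(payload)
end

# Stand-in for the protobuf-encoded AggregatedRecord bytes.
payload = "stand-in-protobuf-bytes".b
framed = kpl_frame(payload)

# A consumer strips the magic, verifies the digest, then decodes the protobuf.
body, digest = framed[4...-16], framed[-16..]
raise 'checksum mismatch' unless Digest::MD5.digest(body) == digest
```

Because one chunk becomes exactly one such frame, the 1MB `PUT_RECORD_MAX_DATA_SIZE` check above applies to the whole concatenation.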
test/helper.rb CHANGED
@@ -28,4 +28,7 @@ require 'test/unit/rr'
  $LOAD_PATH.unshift(File.join(File.dirname(__FILE__), '..', 'lib'))
  $LOAD_PATH.unshift(File.dirname(__FILE__))
  require 'fluent/test'
+ require 'fluent/test/helpers'
+ require 'fluent/test/driver/output'
+ require 'fluent/process'
  require 'fluent/plugin/out_kinesis-aggregation'
test/plugin/test_out_kinesis-aggregation.rb CHANGED
@@ -14,6 +14,8 @@
  require 'helper'
 
  class KinesisOutputTest < Test::Unit::TestCase
+   include Fluent::Test::Helpers
+
    def setup
      Fluent::Test.setup
    end
@@ -27,14 +29,14 @@ class KinesisOutputTest < Test::Unit::TestCase
      buffer_chunk_limit 100k
    ]
 
-   def create_driver(conf = CONFIG, tag='test')
-     Fluent::Test::BufferedOutputTestDriver
-       .new(FluentPluginKinesisAggregation::OutputFilter, tag).configure(conf)
+   def create_driver(conf = CONFIG)
+     Fluent::Test::Driver::Output
+       .new(FluentPluginKinesisAggregation::OutputFilter).configure(conf)
    end
 
    def create_mock_client
      client = mock(Object.new)
-     mock(Aws::Kinesis::Client).new({}) { client }
+     stub(Aws::Kinesis::Client).new(anything) { client }
      return client
    end
 
@@ -104,7 +106,7 @@ class KinesisOutputTest < Test::Unit::TestCase
      end
 
      d = create_driver
-     d.run
+     d.run(default_tag: "test")
    end
 
    def test_load_client_with_credentials
@@ -132,7 +134,7 @@ class KinesisOutputTest < Test::Unit::TestCase
        buffer_chunk_limit 100k
      EOS
 
-     d.run
+     d.run(default_tag: "test")
    end
 
    def test_load_client_with_role_arn
@@ -160,7 +162,7 @@ class KinesisOutputTest < Test::Unit::TestCase
        fixed_partition_key test_partition_key
        buffer_chunk_limit 100k
      EOS
-     d.run
+     d.run(default_tag: "test")
    end
 
    def test_emitting
@@ -169,18 +171,19 @@ class KinesisOutputTest < Test::Unit::TestCase
      data1 = {"a"=>1,"time"=>"2011-01-02T13:14:15Z","tag"=>"test"}
      data2 = {"a"=>2,"time"=>"2011-01-02T13:14:15Z","tag"=>"test"}
 
-     time = Time.parse("2011-01-02 13:14:15 UTC").to_i
-     d.emit(data1, time)
-     d.emit(data2, time)
+     time = event_time("2011-01-02 13:14:15 UTC")
 
-     client = create_mock_client
-     client.put_record(
-       stream_name: 'test_stream',
-       data: "\xF3\x89\x9A\xC2\n\x01a\n\x12test_partition_key\x1A6\b\x01\x1A2{\"a\":1,\"time\":\"2011-01-02T13:14:15Z\",\"tag\":\"test\"}\x1A6\b\x01\x1A2{\"a\":2,\"time\":\"2011-01-02T13:14:15Z\",\"tag\":\"test\"}\xA2\x0E y\x8B\x02\xDF\xAE\xAB\x93\x1C;\xCB\xAD\x1Fx".b,
-       partition_key: 'test_partition_key'
-     ) { {} }
+     d.run(default_tag: "test") do
+       client = create_mock_client
+       stub.instance_of(Aws::Kinesis::Client).put_record(
+         stream_name: 'test_stream',
+         data: "\xF3\x89\x9A\xC2\n\x01a\n\x12test_partition_key\x1A6\b\x01\x1A2{\"a\":1,\"time\":\"2011-01-02T13:14:15Z\",\"tag\":\"test\"}\x1A6\b\x01\x1A2{\"a\":2,\"time\":\"2011-01-02T13:14:15Z\",\"tag\":\"test\"}\xA2\x0E y\x8B\x02\xDF\xAE\xAB\x93\x1C;\xCB\xAD\x1Fx".b,
+         partition_key: 'test_partition_key'
+       ) { {} }
 
-     d.run
+       d.feed(time, data1)
+       d.feed(time, data2)
+     end
    end
 
    def test_multibyte
@@ -188,29 +191,32 @@ class KinesisOutputTest < Test::Unit::TestCase
 
      data1 = {"a"=>"\xE3\x82\xA4\xE3\x83\xB3\xE3\x82\xB9\xE3\x83\x88\xE3\x83\xBC\xE3\x83\xAB","time"=>"2011-01-02T13:14:15Z".b,"tag"=>"test"}
 
-     time = Time.parse("2011-01-02 13:14:15 UTC").to_i
-     d.emit(data1, time)
 
-     client = create_mock_client
-     client.put_record(
-       stream_name: 'test_stream',
-       data: "\xF3\x89\x9A\xC2\n\x01a\n\x12test_partition_key\x1AI\b\x01\x1AE{\"a\":\"\xE3\x82\xA4\xE3\x83\xB3\xE3\x82\xB9\xE3\x83\x88\xE3\x83\xBC\xE3\x83\xAB\",\"time\":\"2011-01-02T13:14:15Z\",\"tag\":\"test\"}_$\x9C\xF9v+pV:g7c\xE3\xF2$\xBA".b,
-       partition_key: 'test_partition_key'
-     ) { {} }
+     time = event_time("2011-01-02 13:14:15 UTC")
+     d.run(default_tag: "test") do
+       client = create_mock_client
+       stub.instance_of(Aws::Kinesis::Client).put_record(
+         stream_name: 'test_stream',
+         data: "\xF3\x89\x9A\xC2\n\x01a\n\x12test_partition_key\x1AI\b\x01\x1AE{\"a\":\"\xE3\x82\xA4\xE3\x83\xB3\xE3\x82\xB9\xE3\x83\x88\xE3\x83\xBC\xE3\x83\xAB\",\"time\":\"2011-01-02T13:14:15Z\",\"tag\":\"test\"}_$\x9C\xF9v+pV:g7c\xE3\xF2$\xBA".b,
+         partition_key: 'test_partition_key'
+       ) { {} }
 
-     d.run
+       d.feed(time, data1)
+     end
    end
 
    def test_fail_on_bigchunk
      d = create_driver
 
-     d.emit(
-       {"msg" => "z" * 1024 * 1024},
-       Time.parse("2011-01-02 13:14:15 UTC").to_i)
-     client = dont_allow(Object.new)
-     client.put_record
-     mock(Aws::Kinesis::Client).new({}) { client }
-
-     d.run
+     assert_raise(Fluent::Plugin::Buffer::BufferChunkOverflowError) do
+       d.run(default_tag: "test") do
+         d.feed(
+           event_time("2011-01-02 13:14:15 UTC"),
+           {"msg" => "z" * 1024 * 1024})
+         client = dont_allow(Object.new)
+         client.put_record
+         mock(Aws::Kinesis::Client).new(anything) { client }
+       end
+     end
    end
  end
metadata CHANGED
@@ -1,127 +1,139 @@
  --- !ruby/object:Gem::Specification
  name: fluent-plugin-kinesis-aggregation
  version: !ruby/object:Gem::Version
-   version: 0.2.1
+   version: 0.3.2
  platform: ruby
  authors:
  - Atlassian
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2016-02-09 00:00:00.000000000 Z
+ date: 2020-03-17 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: bundler
    requirement: !ruby/object:Gem::Requirement
      requirements:
-     - - "~>"
+     - - ">="
        - !ruby/object:Gem::Version
-         version: '1.3'
+         version: '1.10'
    type: :development
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
-     - - "~>"
+     - - ">="
        - !ruby/object:Gem::Version
-         version: '1.3'
+         version: '1.10'
  - !ruby/object:Gem::Dependency
    name: rake
    requirement: !ruby/object:Gem::Requirement
      requirements:
-     - - "~>"
+     - - ">="
        - !ruby/object:Gem::Version
          version: '10.0'
    type: :development
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
-     - - "~>"
+     - - ">="
        - !ruby/object:Gem::Version
          version: '10.0'
  - !ruby/object:Gem::Dependency
-   name: test-unit-rr
+   name: test-unit
    requirement: !ruby/object:Gem::Requirement
      requirements:
-     - - "~>"
+     - - ">="
        - !ruby/object:Gem::Version
-         version: '1.0'
+         version: 3.0.8
    type: :development
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
-     - - "~>"
+     - - ">="
        - !ruby/object:Gem::Version
-         version: '1.0'
+         version: 3.0.8
  - !ruby/object:Gem::Dependency
-   name: fluentd
+   name: test-unit-rr
    requirement: !ruby/object:Gem::Requirement
      requirements:
      - - ">="
        - !ruby/object:Gem::Version
-         version: 0.10.53
-     - - "<"
-       - !ruby/object:Gem::Version
-         version: '0.13'
-   type: :runtime
+         version: 1.0.3
+   type: :development
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - ">="
        - !ruby/object:Gem::Version
-         version: 0.10.53
-     - - "<"
-       - !ruby/object:Gem::Version
-         version: '0.13'
+         version: 1.0.3
  - !ruby/object:Gem::Dependency
-   name: aws-sdk-core
+   name: fluentd
    requirement: !ruby/object:Gem::Requirement
      requirements:
      - - ">="
        - !ruby/object:Gem::Version
-         version: 2.0.12
+         version: 0.14.22
      - - "<"
        - !ruby/object:Gem::Version
-         version: '3.0'
+         version: '2'
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - ">="
        - !ruby/object:Gem::Version
-         version: 2.0.12
+         version: 0.14.22
      - - "<"
        - !ruby/object:Gem::Version
-         version: '3.0'
+         version: '2'
  - !ruby/object:Gem::Dependency
-   name: msgpack
+   name: aws-sdk-kinesis
    requirement: !ruby/object:Gem::Requirement
      requirements:
-     - - ">="
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1'
+     - - "!="
+       - !ruby/object:Gem::Version
+         version: '1.4'
+     - - "!="
+       - !ruby/object:Gem::Version
+         version: '1.5'
+     - - "!="
        - !ruby/object:Gem::Version
-         version: 0.5.8
+         version: '1.14'
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
-     - - ">="
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1'
+     - - "!="
+       - !ruby/object:Gem::Version
+         version: '1.4'
+     - - "!="
+       - !ruby/object:Gem::Version
+         version: '1.5'
+     - - "!="
        - !ruby/object:Gem::Version
-         version: 0.5.8
+         version: '1.14'
  - !ruby/object:Gem::Dependency
    name: google-protobuf
    requirement: !ruby/object:Gem::Requirement
      requirements:
-     - - ">="
+     - - "~>"
        - !ruby/object:Gem::Version
-         version: 3.0.0.alpha.4.0
+         version: '3'
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
-     - - ">="
+     - - "~>"
        - !ruby/object:Gem::Version
-         version: 3.0.0.alpha.4.0
+         version: '3'
  description:
- email:
+ email: jhaggerty@atlassian.com
  executables: []
  extensions: []
  extra_rdoc_files: []
@@ -142,7 +154,7 @@ files:
  - test/plugin/test_out_kinesis-aggregation.rb
  homepage: https://github.com/atlassian/fluent-plugin-kinesis-aggregation
  licenses:
- - Apache License, Version 2.0
+ - Apache-2.0
  metadata: {}
  post_install_message:
  rdoc_options: []
@@ -152,15 +164,14 @@ required_ruby_version: !ruby/object:Gem::Requirement
    requirements:
    - - ">="
      - !ruby/object:Gem::Version
-       version: '2.2'
+       version: '2.3'
  required_rubygems_version: !ruby/object:Gem::Requirement
    requirements:
    - - ">="
      - !ruby/object:Gem::Version
        version: '0'
  requirements: []
- rubyforge_project:
- rubygems_version: 2.2.2
+ rubygems_version: 3.0.3
  signing_key:
  specification_version: 4
  summary: Fluentd output plugin that sends KPL style aggregated events to Amazon Kinesis.