fluent-plugin-kinesis-aggregation 0.2.3 → 0.3.4
- checksums.yaml +5 -5
- data/.travis.yml +2 -2
- data/CHANGELOG.md +31 -1
- data/README.md +28 -12
- data/fluent-plugin-kinesis-aggregation.gemspec +11 -10
- data/lib/fluent/plugin/out_kinesis-aggregation.rb +16 -11
- data/test/helper.rb +2 -0
- data/test/plugin/test_out_kinesis-aggregation.rb +40 -34
- metadata +65 -42
checksums.yaml CHANGED

@@ -1,7 +1,7 @@
 ---
-…
-  metadata.gz: …
-  data.tar.gz: …
+SHA256:
+  metadata.gz: c2a6185b52c9f7ddf3fc0b22b3c67cb37f7aa5d1051acd5ac20abe2e22833a22
+  data.tar.gz: eab8e7770c9fa88f60a9c37d2197785ac131f65c479eb2967063eb406a38d9d9
 SHA512:
-  metadata.gz: …
-  data.tar.gz: …
+  metadata.gz: 75b40853a06feeeea4c52ee4707152310901e1bc6884beebb6bbb0aa717125bbf8ca2a405589d157790681ddd19b168c3c466896b1ee24a1bfc40b8564218d73
+  data.tar.gz: 85face7f0056933478ef4de000545ac9330b5bd38caa09d6356911012f1dd17960f3d513a4484039eaed2db23ad65c098a9915dd5ba231d81d8f790f4507cc7e
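For reference, each `checksums.yaml` entry is simply the hex digest of the corresponding archive inside the `.gem` file. A minimal sketch (the string below stands in for the bytes of `metadata.gz` or `data.tar.gz`; it is a placeholder, not the real archive):

```ruby
require 'digest'

# Stand-in for the bytes of an archive inside the .gem package.
archive_bytes = "example archive contents"

# The values recorded under SHA256: and SHA512: in checksums.yaml.
sha256 = Digest::SHA256.hexdigest(archive_bytes)  # 64 hex characters
sha512 = Digest::SHA512.hexdigest(archive_bytes)  # 128 hex characters

puts sha256
puts sha512
```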
data/.travis.yml CHANGED
data/CHANGELOG.md CHANGED

@@ -1,9 +1,39 @@
 # CHANGELOG
 
-## …
+## 0.3.4
+
+- aws-sdk-kinesis 1.24 is missing a dependency from a newer version of the aws-sdk-core gem; 1.24 has been yanked and 1.24.1 has been released with the fix, but just in case 1.24 has already been installed/cached anywhere, add it to the list of excluded versions.
+- Previously, we pinned google-protobuf to 3.11.x because 3.12 required Ruby >= 2.5 (and td-agent ships with Ruby 2.4 embedded). google-protobuf 3.12.1 restores support for Ruby 2.3 and 2.4, so we can relax our pinning for this dependency a bit.
+
+## 0.3.3
+
+- Dependency google-protobuf 3.12.0 dropped support for Ruby <2.5; td-agent3 bundles Ruby 2.4, so google-protobuf is now pinned to 3.11.x.
+
+## 0.3.2
+
+- Modify aws-sdk usage to require just the API/SDK resources for Kinesis
+- Drop support and testing for deprecated Ruby versions (<2.3)
+
+## 0.3.1
+
+- Change aws-sdk usage to work with both v2 and v3
+  (in particular, makes it possible to use latest td-agent which includes the s3 plugin
+  and pulls in aws-sdk v3)
+
+## 0.3.0
+
+- Update to use fluentd 0.14 API (stick to 0.2.3 if you need support for earlier versions of fluentd).
+  Much thanks to cosmo0920 for doing this.
+
+## 0.2.3
+
+- emit stream name in error
+
+## 0.2.1 - 0.2.2
 
 - update documentation to refer to published gem
 - turn on testing for Ruby 2.1
+- allow running on Ruby 2.1
 
 ## 0.2.0
 
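The 0.3.4 exclusion list can be sanity-checked with RubyGems' own requirement API. A small sketch using the aws-sdk-kinesis constraint from the gemspec in this diff:

```ruby
require 'rubygems'

# The aws-sdk-kinesis constraint shipped in 0.3.4: "~> 1" plus an
# exclusion list for known-bad releases, including the yanked 1.24.
req = Gem::Requirement.new("~> 1", "!= 1.4", "!= 1.5", "!= 1.14", "!= 1.24")

# The yanked 1.24 is rejected; the fixed 1.24.1 still satisfies the range.
raise if req.satisfied_by?(Gem::Version.new("1.24"))
raise unless req.satisfied_by?(Gem::Version.new("1.24.1"))
puts "1.24 excluded, 1.24.1 allowed"
```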
data/README.md CHANGED

@@ -8,14 +8,26 @@ This is a rewrite of [aws-fluent-plugin-kinesis](https://github.com/awslabs/aws-
 a different shipment method using the
 [KPL aggregation format](https://github.com/awslabs/amazon-kinesis-producer/blob/master/aggregation-format.md).
 
+*Since this plugin was forked, aws-fluent-plugin-kinesis has undergone considerable development (and improvement).
+Most notably, the upcoming 2.0 release supports KPL aggregated records using google-protobuf without
+the overhead of using the KPL:
+https://github.com/awslabs/aws-fluent-plugin-kinesis/issues/107*
+
+*However, it still uses msgpack for internal buffering and only uses protobuf when it ships the records,
+whereas this plugin processes each record as it comes in and ships the result by simple concatenation
+of the encoded records. This may not be faster, of course - could depend on the overhead of calling
+the protobuf methods - but most probably is. The discussion below is also still mostly valid,
+in that the awslabs plugin does not have PutRecord == chunk equivalency, but instead has its
+own internal retry method.*
+
 The basic idea is to have one PutRecord === one chunk. This has a number of advantages:
 
 - much less complexity in plugin (less CPU/memory)
 - by aggregating, we increase the throughput and decrease the cost
 - since a single chunk either succeeds or fails,
   we get to use fluentd's more complex/complete retry mechanism
-  (which is also exposed
-  …
+  (which is also exposed by the monitor plugin; we view this in datadog). The existing retry mechanism
+  had [unfortunate issues under heavy load](https://github.com/awslabs/aws-fluent-plugin-kinesis/issues/42)
 - we get ordering within a chunk without having to rely on sequence
   numbers (though not overall ordering)

@@ -63,7 +75,7 @@ specify the library path via RUBYLIB:
 ## Dependencies
 
 * Ruby 2.1+
-* Fluentd 0.10.…
+* Fluentd 0.14.15+ (if you need 0.10 or 0.12 support, use the fluentd-v0.12 branch or version 0.2.x on rubygems)
 
 ## Basic Usage

@@ -145,24 +157,28 @@ forces all writes to a specific shard, and if you're using
 a single thread/process will probably keep event ordering
 (not recommended - watch out for hot shards!).
 
-### detach_process
-
-Integer. Optional. This defines the number of parallel processes to start.
-This can be used to increase throughput by allowing multiple processes to
-execute the plugin at once. Setting this option to > 0 will cause the plugin
-to run in a separate process. The default is 0.
-
 ### num_threads
 
 Integer. The number of threads to flush the buffer. This plugin is based on
-Fluentd::…
+Fluentd::Plugin::Output, so we buffer incoming records before emitting them to
 Amazon Kinesis. You can find the detail about buffering mechanism [here](http://docs.fluentd.org/articles/buffer-plugin-overview).
 Emitting records to Amazon Kinesis via network causes I/O Wait, so parallelizing
 emitting with threads will improve throughput.
 
 This option can be used to parallelize writes into the output(s)
 designated by the output plugin. The default is 1.
-Also you can use this option with *…
+Also you can use this option with *multi workers*.
+
+### multi workers
+
+This feature was introduced in Fluentd v0.14.
+Instead of *detach_process*, use the following system directive
+(the *detach_process* parameter was removed in the move to the v0.14 Output Plugin API).
+The default is 1.
+
+    <system>
+      workers 5
+    </system>
 
 ### debug
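Pulling the configuration notes above together, a minimal setup might look like the following sketch. The parameter values are placeholders taken from the test fixtures in this diff (`test_stream`, `test_partition_key`, `buffer_chunk_limit 100k`); `stream_name` is inferred from the PutRecord call in the tests, and the match pattern is hypothetical. `buffer_chunk_limit` is kept well under the 1 MB PutRecord cap the plugin enforces:

```
<system>
  workers 5
</system>

<match your.tag.**>
  @type kinesis-aggregation
  stream_name test_stream
  fixed_partition_key test_partition_key
  buffer_chunk_limit 100k
</match>
```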
data/fluent-plugin-kinesis-aggregation.gemspec CHANGED

@@ -17,24 +17,25 @@ $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
 
 Gem::Specification.new do |spec|
   spec.name = "fluent-plugin-kinesis-aggregation"
-  spec.version = '0.…
+  spec.version = '0.3.4'
   spec.author = 'Atlassian'
+  spec.email = 'lgoolsbee@atlassian.com'
   spec.summary = %q{Fluentd output plugin that sends KPL style aggregated events to Amazon Kinesis.}
   spec.homepage = "https://github.com/atlassian/fluent-plugin-kinesis-aggregation"
-  spec.license = "Apache…
+  spec.license = "Apache-2.0"
 
   spec.files = `git ls-files`.split($/)
   spec.executables = spec.files.grep(%r{^bin/}) { |f| File.basename(f) }
   spec.test_files = spec.files.grep(%r{^(test|spec|features)/})
   spec.require_paths = ["lib"]
-  spec.required_ruby_version = '>= 2.…
+  spec.required_ruby_version = '>= 2.3'
 
-  spec.add_development_dependency "bundler", "…
-  spec.add_development_dependency "rake", "…
-  spec.add_development_dependency "test-unit…
+  spec.add_development_dependency "bundler", ">= 1.10"
+  spec.add_development_dependency "rake", ">= 10.0"
+  spec.add_development_dependency "test-unit", ">= 3.0.8"
+  spec.add_development_dependency "test-unit-rr", ">= 1.0.3"
 
-  spec.add_dependency "fluentd", ">= 0.…
-  spec.add_dependency "aws-sdk-…
-  spec.add_dependency "…
-  spec.add_dependency "google-protobuf", ">= 3.0.0.alpha.4.0"
+  spec.add_dependency "fluentd", [">= 0.14.22", "< 2"]
+  spec.add_dependency "aws-sdk-kinesis", "~> 1", "!= 1.4", "!= 1.5", "!= 1.14", "!= 1.24"
+  spec.add_dependency "google-protobuf", "~> 3", "< 3.12"
 end
data/lib/fluent/plugin/out_kinesis-aggregation.rb CHANGED

@@ -11,13 +11,14 @@
 # ANY KIND, either express or implied. See the License for the specific
 # language governing permissions and limitations under the License.
 
-require 'aws-sdk-…
+require 'aws-sdk-kinesis'
 require 'yajl'
 require 'logger'
 require 'securerandom'
 require 'digest'
 
 require 'google/protobuf'
+require 'fluent/plugin/output'
 
 Google::Protobuf::DescriptorPool.generated_pool.build do
   add_message "AggregatedRecord" do

@@ -43,12 +44,11 @@ Record = Google::Protobuf::DescriptorPool.generated_pool.lookup("Record").msgcla
 
 
 module FluentPluginKinesisAggregation
-  class OutputFilter < Fluent::…
+  class OutputFilter < Fluent::Plugin::Output
 
-
-    include Fluent::SetTimeKeyMixin
-    include Fluent::SetTagKeyMixin
+    helpers :compat_parameters, :inject
 
+    DEFAULT_BUFFER_TYPE = "memory"
     NAME = 'kinesis-aggregation'
     PUT_RECORD_MAX_DATA_SIZE = 1024 * 1024
     # 200 is an arbitrary number more than the envelope overhead

@@ -81,26 +81,31 @@ module FluentPluginKinesisAggregation
 
     config_param :http_proxy, :string, default: nil
 
+    config_section :buffer do
+      config_set_default :@type, DEFAULT_BUFFER_TYPE
+    end
+
     def configure(conf)
+      compat_parameters_convert(conf, :buffer, :inject)
       super
 
-      if @buffer.…
+      if @buffer.chunk_limit_size > FLUENTD_MAX_BUFFER_SIZE
         raise Fluent::ConfigError, "Kinesis buffer_chunk_limit is set to more than the 1mb shard limit (i.e. you won't be able to write your chunks!"
       end
 
-      if @buffer.…
+      if @buffer.chunk_limit_size > FLUENTD_MAX_BUFFER_SIZE / 3
         log.warn 'Kinesis buffer_chunk_limit is set at more than 1/3 of the per second shard limit (1mb). This is not good if you have many producers.'
       end
     end
 
     def start
-      …
-      …
-      load_client
-    end
+      super
+      load_client
     end
 
     def format(tag, time, record)
+      record = inject_values_to_record(tag, time, record)
+
       return AggregatedRecord.encode(AggregatedRecord.new(
         records: [Record.new(
           partition_key_index: 1,
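For orientation: the chunks this plugin ships are wrapped in the KPL aggregation envelope, i.e. four magic bytes, the protobuf-encoded AggregatedRecord, and an MD5 digest of that payload (the `\xF3\x89\x9A\xC2` magic and the 16 trailing digest bytes are visible in the expected PutRecord data in this diff's test fixtures). A minimal sketch of the framing, with a placeholder string standing in for the real `AggregatedRecord.encode` output:

```ruby
require 'digest'

# KPL aggregated-record envelope: magic number, protobuf payload,
# then an MD5 digest of the payload.
KPL_MAGIC = "\xF3\x89\x9A\xC2".b

def frame_kpl(protobuf_payload)
  KPL_MAGIC + protobuf_payload + Digest::MD5.digest(protobuf_payload)
end

# Placeholder payload; in the plugin this comes from AggregatedRecord.encode.
payload = "stand-in for AggregatedRecord.encode output".b
framed = frame_kpl(payload)

raise unless framed.start_with?(KPL_MAGIC)
raise unless framed.bytesize == 4 + payload.bytesize + 16
puts "framed #{framed.bytesize} bytes"
```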
data/test/helper.rb CHANGED

@@ -28,5 +28,7 @@ require 'test/unit/rr'
 $LOAD_PATH.unshift(File.join(File.dirname(__FILE__), '..', 'lib'))
 $LOAD_PATH.unshift(File.dirname(__FILE__))
 require 'fluent/test'
+require 'fluent/test/helpers'
+require 'fluent/test/driver/output'
 require 'fluent/process'
 require 'fluent/plugin/out_kinesis-aggregation'
|
|
14
14
|
require 'helper'
|
15
15
|
|
16
16
|
class KinesisOutputTest < Test::Unit::TestCase
|
17
|
+
include Fluent::Test::Helpers
|
18
|
+
|
17
19
|
def setup
|
18
20
|
Fluent::Test.setup
|
19
21
|
end
|
@@ -27,14 +29,14 @@ class KinesisOutputTest < Test::Unit::TestCase
|
|
27
29
|
buffer_chunk_limit 100k
|
28
30
|
]
|
29
31
|
|
30
|
-
def create_driver(conf = CONFIG
|
31
|
-
Fluent::Test::
|
32
|
-
.new(FluentPluginKinesisAggregation::OutputFilter
|
32
|
+
def create_driver(conf = CONFIG)
|
33
|
+
Fluent::Test::Driver::Output
|
34
|
+
.new(FluentPluginKinesisAggregation::OutputFilter).configure(conf)
|
33
35
|
end
|
34
36
|
|
35
37
|
def create_mock_client
|
36
38
|
client = mock(Object.new)
|
37
|
-
|
39
|
+
stub(Aws::Kinesis::Client).new(anything) { client }
|
38
40
|
return client
|
39
41
|
end
|
40
42
|
|
@@ -104,7 +106,7 @@ class KinesisOutputTest < Test::Unit::TestCase
|
|
104
106
|
end
|
105
107
|
|
106
108
|
d = create_driver
|
107
|
-
d.run
|
109
|
+
d.run(default_tag: "test")
|
108
110
|
end
|
109
111
|
|
110
112
|
def test_load_client_with_credentials
|
@@ -132,7 +134,7 @@ class KinesisOutputTest < Test::Unit::TestCase
|
|
132
134
|
buffer_chunk_limit 100k
|
133
135
|
EOS
|
134
136
|
|
135
|
-
d.run
|
137
|
+
d.run(default_tag: "test")
|
136
138
|
end
|
137
139
|
|
138
140
|
def test_load_client_with_role_arn
|
@@ -160,7 +162,7 @@ class KinesisOutputTest < Test::Unit::TestCase
|
|
160
162
|
fixed_partition_key test_partition_key
|
161
163
|
buffer_chunk_limit 100k
|
162
164
|
EOS
|
163
|
-
d.run
|
165
|
+
d.run(default_tag: "test")
|
164
166
|
end
|
165
167
|
|
166
168
|
def test_emitting
|
@@ -169,18 +171,19 @@ class KinesisOutputTest < Test::Unit::TestCase
|
|
169
171
|
data1 = {"a"=>1,"time"=>"2011-01-02T13:14:15Z","tag"=>"test"}
|
170
172
|
data2 = {"a"=>2,"time"=>"2011-01-02T13:14:15Z","tag"=>"test"}
|
171
173
|
|
172
|
-
time =
|
173
|
-
d.emit(data1, time)
|
174
|
-
d.emit(data2, time)
|
174
|
+
time = event_time("2011-01-02 13:14:15 UTC")
|
175
175
|
|
176
|
-
|
177
|
-
|
178
|
-
|
179
|
-
|
180
|
-
|
181
|
-
|
176
|
+
d.run(default_tag: "test") do
|
177
|
+
client = create_mock_client
|
178
|
+
stub.instance_of(Aws::Kinesis::Client).put_record(
|
179
|
+
stream_name: 'test_stream',
|
180
|
+
data: "\xF3\x89\x9A\xC2\n\x01a\n\x12test_partition_key\x1A6\b\x01\x1A2{\"a\":1,\"time\":\"2011-01-02T13:14:15Z\",\"tag\":\"test\"}\x1A6\b\x01\x1A2{\"a\":2,\"time\":\"2011-01-02T13:14:15Z\",\"tag\":\"test\"}\xA2\x0E y\x8B\x02\xDF\xAE\xAB\x93\x1C;\xCB\xAD\x1Fx".b,
|
181
|
+
partition_key: 'test_partition_key'
|
182
|
+
) { {} }
|
182
183
|
|
183
|
-
|
184
|
+
d.feed(time, data1)
|
185
|
+
d.feed(time, data2)
|
186
|
+
end
|
184
187
|
end
|
185
188
|
|
186
189
|
def test_multibyte
|
@@ -188,29 +191,32 @@ class KinesisOutputTest < Test::Unit::TestCase
|
|
188
191
|
|
189
192
|
data1 = {"a"=>"\xE3\x82\xA4\xE3\x83\xB3\xE3\x82\xB9\xE3\x83\x88\xE3\x83\xBC\xE3\x83\xAB","time"=>"2011-01-02T13:14:15Z".b,"tag"=>"test"}
|
190
193
|
|
191
|
-
time = Time.parse("2011-01-02 13:14:15 UTC").to_i
|
192
|
-
d.emit(data1, time)
|
193
194
|
|
194
|
-
|
195
|
-
|
196
|
-
|
197
|
-
|
198
|
-
|
199
|
-
|
195
|
+
time = event_time("2011-01-02 13:14:15 UTC")
|
196
|
+
d.run(default_tag: "test") do
|
197
|
+
client = create_mock_client
|
198
|
+
stub.instance_of(Aws::Kinesis::Client).put_record(
|
199
|
+
stream_name: 'test_stream',
|
200
|
+
data: "\xF3\x89\x9A\xC2\n\x01a\n\x12test_partition_key\x1AI\b\x01\x1AE{\"a\":\"\xE3\x82\xA4\xE3\x83\xB3\xE3\x82\xB9\xE3\x83\x88\xE3\x83\xBC\xE3\x83\xAB\",\"time\":\"2011-01-02T13:14:15Z\",\"tag\":\"test\"}_$\x9C\xF9v+pV:g7c\xE3\xF2$\xBA".b,
|
201
|
+
partition_key: 'test_partition_key'
|
202
|
+
) { {} }
|
200
203
|
|
201
|
-
|
204
|
+
d.feed(time, data1)
|
205
|
+
end
|
202
206
|
end
|
203
207
|
|
204
208
|
def test_fail_on_bigchunk
|
205
209
|
d = create_driver
|
206
210
|
|
207
|
-
|
208
|
-
|
209
|
-
|
210
|
-
|
211
|
-
|
212
|
-
|
213
|
-
|
214
|
-
|
211
|
+
assert_raise(Fluent::Plugin::Buffer::BufferChunkOverflowError) do
|
212
|
+
d.run(default_tag: "test") do
|
213
|
+
d.feed(
|
214
|
+
event_time("2011-01-02 13:14:15 UTC"),
|
215
|
+
{"msg" => "z" * 1024 * 1024})
|
216
|
+
client = dont_allow(Object.new)
|
217
|
+
client.put_record
|
218
|
+
mock(Aws::Kinesis::Client).new(anything) { client }
|
219
|
+
end
|
220
|
+
end
|
215
221
|
end
|
216
222
|
end
|
metadata CHANGED

@@ -1,127 +1,151 @@
 --- !ruby/object:Gem::Specification
 name: fluent-plugin-kinesis-aggregation
 version: !ruby/object:Gem::Version
-  version: 0.…
+  version: 0.3.4
 platform: ruby
 authors:
 - Atlassian
 autorequire:
 bindir: bin
 cert_chain: []
-date: …
+date: 2020-06-12 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: bundler
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - "…
+    - - ">="
       - !ruby/object:Gem::Version
-        version: '1.…
+        version: '1.10'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - "…
+    - - ">="
      - !ruby/object:Gem::Version
-        version: '1.…
+        version: '1.10'
 - !ruby/object:Gem::Dependency
   name: rake
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - "…
+    - - ">="
      - !ruby/object:Gem::Version
        version: '10.0'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - "…
+    - - ">="
      - !ruby/object:Gem::Version
        version: '10.0'
 - !ruby/object:Gem::Dependency
   name: test-unit
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - "…
+    - - ">="
      - !ruby/object:Gem::Version
-        version: …
+        version: 3.0.8
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - "…
+    - - ">="
      - !ruby/object:Gem::Version
-        version: …
+        version: 3.0.8
 - !ruby/object:Gem::Dependency
-  name: …
+  name: test-unit-rr
   requirement: !ruby/object:Gem::Requirement
     requirements:
    - - ">="
      - !ruby/object:Gem::Version
-        version: 0.…
-    - - "<"
-      - !ruby/object:Gem::Version
-        version: '0.13'
-  type: :runtime
+        version: 1.0.3
+  type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
    - - ">="
      - !ruby/object:Gem::Version
-        version: 0.…
-    - - "<"
-      - !ruby/object:Gem::Version
-        version: '0.13'
+        version: 1.0.3
 - !ruby/object:Gem::Dependency
-  name: …
+  name: fluentd
   requirement: !ruby/object:Gem::Requirement
     requirements:
    - - ">="
      - !ruby/object:Gem::Version
-        version: …
+        version: 0.14.22
    - - "<"
      - !ruby/object:Gem::Version
-        version: '…
+        version: '2'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
    - - ">="
      - !ruby/object:Gem::Version
-        version: …
+        version: 0.14.22
    - - "<"
      - !ruby/object:Gem::Version
-        version: '…
+        version: '2'
 - !ruby/object:Gem::Dependency
-  name: …
+  name: aws-sdk-kinesis
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - "…
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '1'
+    - - "!="
+      - !ruby/object:Gem::Version
+        version: '1.4'
+    - - "!="
+      - !ruby/object:Gem::Version
+        version: '1.5'
+    - - "!="
      - !ruby/object:Gem::Version
-        version: …
+        version: '1.14'
+    - - "!="
+      - !ruby/object:Gem::Version
+        version: '1.24'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - "…
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '1'
+    - - "!="
+      - !ruby/object:Gem::Version
+        version: '1.4'
+    - - "!="
      - !ruby/object:Gem::Version
-        version: …
+        version: '1.5'
+    - - "!="
+      - !ruby/object:Gem::Version
+        version: '1.14'
+    - - "!="
+      - !ruby/object:Gem::Version
+        version: '1.24'
 - !ruby/object:Gem::Dependency
   name: google-protobuf
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - "…
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '3'
+    - - "<"
      - !ruby/object:Gem::Version
-        version: 3.…
+        version: '3.12'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - "…
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '3'
+    - - "<"
      - !ruby/object:Gem::Version
-        version: 3.…
+        version: '3.12'
 description:
-email:
+email: lgoolsbee@atlassian.com
 executables: []
 extensions: []
 extra_rdoc_files: []

@@ -142,7 +166,7 @@ files:
 - test/plugin/test_out_kinesis-aggregation.rb
 homepage: https://github.com/atlassian/fluent-plugin-kinesis-aggregation
 licenses:
-- Apache…
+- Apache-2.0
 metadata: {}
 post_install_message:
 rdoc_options: []

@@ -152,15 +176,14 @@ required_ruby_version: !ruby/object:Gem::Requirement
   requirements:
   - - ">="
     - !ruby/object:Gem::Version
-      version: '2.…
+      version: '2.3'
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
   - - ">="
     - !ruby/object:Gem::Version
       version: '0'
 requirements: []
-…
-rubygems_version: 2.5.2
+rubygems_version: 3.0.3
 signing_key:
 specification_version: 4
 summary: Fluentd output plugin that sends KPL style aggregated events to Amazon Kinesis.