fluent-plugin-kinesis-aggregation 0.1.0
- checksums.yaml +7 -0
- data/.gitignore +18 -0
- data/.travis.yml +14 -0
- data/CHANGELOG.md +47 -0
- data/CONTRIBUTORS.txt +6 -0
- data/Gemfile +15 -0
- data/LICENSE.txt +202 -0
- data/NOTICE.txt +2 -0
- data/README.md +251 -0
- data/Rakefile +23 -0
- data/fluent-plugin-kinesis-aggregation.gemspec +42 -0
- data/lib/fluent/plugin/out_kinesis-aggregation.rb +197 -0
- data/lib/fluent/plugin/version.rb +16 -0
- data/test/helper.rb +31 -0
- data/test/plugin/test_out_kinesis-aggregation.rb +223 -0
- metadata +169 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
---
SHA1:
  metadata.gz: 5548ac33732637507260fd93ab07af9ffefd9838
  data.tar.gz: 4775107634ec7271130fe2b3338f29659626ec1d
SHA512:
  metadata.gz: 607e54b746d6b9dd127ece2f481998b93c30140296c261753c108e3ee6e30d4fbbcb4ffd23322b4d5cef45c6f51f14805d1c65faba0f1d325a405e691a3c3dc9
  data.tar.gz: 3ce44451659fb57bffc3bf4cf91b6b15b0d31422627b033ced0c32ee51bdfec7581ff8b41864df955dbddda928c8235a65e0a47513000539db884a9b11f935aa
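These digests can be checked locally once the gem artifacts are downloaded. A minimal sketch (the file path and helper name are illustrative, not part of the gem):

```ruby
require 'digest'

# Compare a downloaded artifact (e.g. data.tar.gz) against a
# checksums.yaml entry like the SHA512 values listed above.
def verify_sha512(path, expected_hex)
  Digest::SHA512.file(path).hexdigest == expected_hex
end
```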
data/.gitignore
ADDED
data/.travis.yml
ADDED
data/CHANGELOG.md
ADDED
@@ -0,0 +1,47 @@
# CHANGELOG

## 0.4.0

- Feature - Add option to ensure Kinesis Stream connection. [#35](https://github.com/awslabs/aws-fluent-plugin-kinesis/pull/35)
- Feature - Add option to support zlib compression. [#39](https://github.com/awslabs/aws-fluent-plugin-kinesis/pull/39)

Note: We introduced [Semantic Versioning](http://semver.org/) here.

## 0.3.6

- **Cross account access support**: Added support for cross-account access to an Amazon Kinesis stream. With this update, you can put records to streams that are owned by another AWS account. This feature is achieved with [AssumeRole](http://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html).

## 0.3.5

- **1MB record size limit**: Increased record size limit from 50KB to 1MB due to an [Amazon Kinesis improvement.](http://aws.amazon.com/jp/about-aws/whats-new/2015/06/amazon-kinesis-announces-put-pricing-change-1mb-record-support-and-the-kinesis-producer-library/)
- **Switching IAM user support**: Added support for the [shared credential file](http://docs.aws.amazon.com/ja_jp/AWSSdkDocsRuby/latest/DeveloperGuide/prog-basics-creds.html#creds-specify-provider).

## 0.3.4

- **Multi-byte UTF-8 support**: We now support multi-byte UTF-8 via the *use_yajl* option.

## 0.3.3

- **Security improvements**: Disabled logging of `aws_key_id` and `aws_sec_key` to the log file.

## 0.3.2

- **http_proxy support**: Added HTTP proxy support.

## 0.3.1

- **Fluentd v0.12 support**: We now support Fluentd v0.12.

## 0.3.0

- **Throughput improvement**: Added support for the [PutRecords API](http://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecords.html) by default.
- **Bug fix**: Removed redundant Base64 encoding of data for each Kinesis record emitted. Applications consuming these records will need to be updated accordingly.

## 0.2.0

- **Partition key randomization**: Added support for partition key randomization.
- **Throughput improvements**: Added support for spawning additional processes and threads to increase throughput to Amazon Kinesis.
- **AWS SDK for Ruby V2**: Added support for the [AWS SDK for Ruby V2](https://github.com/aws/aws-sdk-core-ruby).

## 0.1.0

- Release on Github
data/CONTRIBUTORS.txt
ADDED
data/Gemfile
ADDED
@@ -0,0 +1,15 @@
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.

source 'https://rubygems.org'
gemspec
data/LICENSE.txt
ADDED
@@ -0,0 +1,202 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
data/NOTICE.txt
ADDED
data/README.md
ADDED
@@ -0,0 +1,251 @@
# Fluent Plugin for Amazon Kinesis producing KPL records

[![Build Status](https://travis-ci.org/atlassian/fluent-plugin-kinesis-aggregation.svg?branch=master)](https://travis-ci.org/atlassian/fluent-plugin-kinesis-aggregation)

## Before you start...

This is a rewrite of [aws-fluent-plugin-kinesis](https://github.com/awslabs/aws-fluent-plugin-kinesis) to implement
a different shipment method using the
[KPL aggregation format](https://github.com/awslabs/amazon-kinesis-producer/blob/master/aggregation-format.md).

The basic idea is to have one PutRecord === one chunk. This has a number of advantages:

- much less complexity in the plugin (less CPU/memory)
- by aggregating, we increase throughput and decrease cost
- since a single chunk either succeeds or fails,
  we get to use fluentd's more complex/complete retry mechanism
  (which is also exposed in the monitor). The existing retry mechanism
  has [unfortunate issues under heavy load](https://github.com/awslabs/aws-fluent-plugin-kinesis/issues/42)
- we get ordering within a chunk without having to rely on sequence
  numbers (though not overall ordering)

However, there are drawbacks:

- you have to use a KCL library to ingest
- you can't use a calculated partition key (based on the record);
  essentially, you need to use a random partition key

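The KPL aggregation format linked above frames each aggregated record with a magic number and an MD5 checksum around a protobuf payload. A minimal sketch of that framing (illustrative only, not the plugin's actual code; the string below stands in for a serialized AggregatedRecord message):

```ruby
require 'digest'

# Magic bytes that open every KPL aggregated record,
# per the aggregation-format documentation.
KPL_MAGIC = [0xF3, 0x89, 0x9A, 0xC2].pack('C*')

# Frame an already-serialized AggregatedRecord protobuf payload:
# magic number, then payload, then the MD5 digest of the payload.
def frame_aggregated_record(payload)
  KPL_MAGIC + payload + Digest::MD5.digest(payload)
end

blob = frame_aggregated_record('serialized-protobuf-bytes')
```

A KCL-based consumer reverses this: check the 4-byte magic prefix, verify the trailing 16-byte MD5, then parse the protobuf in between.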
## Overview

[Fluentd](http://fluentd.org/) output plugin
that sends events to [Amazon Kinesis](https://aws.amazon.com/kinesis/).

## Installation

To use this plugin with Fluentd
(Fluentd will also be installed via the process below):

    git clone https://github.com/atlassian/fluent-plugin-kinesis-aggregation.git
    cd fluent-plugin-kinesis-aggregation
    bundle install
    rake build
    rake install

You can also use this plugin with td-agent
(you have to install td-agent before installing this plugin):

    git clone https://github.com/atlassian/fluent-plugin-kinesis-aggregation.git
    cd fluent-plugin-kinesis-aggregation
    bundle install
    rake build
    fluent-gem install pkg/fluent-plugin-kinesis-aggregation

Or just clone the repository and specify your Ruby library path.
Below is a sample for specifying your library path via RUBYLIB:

    git clone https://github.com/atlassian/fluent-plugin-kinesis-aggregation.git
    cd fluent-plugin-kinesis-aggregation
    bundle install
    export RUBYLIB=$RUBYLIB:/path/to/fluent-plugin-kinesis-aggregation/lib

## Dependencies

* Ruby 2.2+
* Fluentd 0.10.43+

## Basic Usage

Here are the general steps for using this plugin:

1. Install.
1. Edit the configuration.
1. Run Fluentd or td-agent.

You can run this plugin with Fluentd as follows:

1. Install.
1. Edit the configuration file and save it as 'fluentd.conf'.
1. Then, run `fluentd -c /path/to/fluentd.conf`.

To run with td-agent:

1. Install.
1. Edit the configuration file provided by td-agent.
1. Then, run or restart td-agent.

## Configuration

Here are the items for the Fluentd configuration file.

To put records into Amazon Kinesis,
you need to provide AWS security credentials.
If you provide aws_key_id and aws_sec_key in the configuration file as below,
they are used. You can also provide credentials via the environment variables
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. IAM Role authentication is also
supported. Please see the [AWS SDK for Ruby Developer Guide](http://docs.aws.amazon.com/AWSSdkDocsRuby/latest/DeveloperGuide/ruby-dg-setup.html)
for more information about authentication.
We support all options which the AWS SDK for Ruby supports.

### type

Use the word 'kinesis-aggregation'.

### stream_name

Name of the stream to put data into.

### aws_key_id

AWS access key id.

### aws_sec_key

AWS secret key.

### role_arn

IAM Role to be assumed with [AssumeRole](http://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html).
Use this option for cross-account access.

### external_id

A unique identifier that is used by third parties when
[assuming roles](http://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) in their customers' accounts.
Use this option with `role_arn` for third-party cross-account access.
For details, please see [How to Use an External ID When Granting Access to Your AWS Resources to a Third Party](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html).

### region

AWS region of your stream.
It should be in a form like "us-east-1" or "us-west-2".
Refer to [Regions and Endpoints in AWS General Reference](http://docs.aws.amazon.com/general/latest/gr/rande.html#ak_region)
for supported regions.

### http_proxy

Proxy server, if any.
It should be in a form like "http://squid:3128/".

### fixed_partition_key

Instead of using a random partition key, use a fixed one. This
forces all writes to a specific shard, and if you're using
a single thread/process will probably keep event ordering
(not recommended - watch out for hot shards!).

### detach_process

Integer. Optional. This defines the number of parallel processes to start.
This can be used to increase throughput by allowing multiple processes to
execute the plugin at once. Setting this option to > 0 will cause the plugin
to run in a separate process. The default is 0.

### num_threads

Integer. The number of threads to flush the buffer. This plugin is based on
Fluentd::BufferedOutput, so we buffer incoming records before emitting them to
Amazon Kinesis. You can find details about the buffering mechanism [here](http://docs.fluentd.org/articles/buffer-plugin-overview).
Emitting records to Amazon Kinesis over the network causes I/O wait, so
parallelizing emission with threads will improve throughput.

This option can be used to parallelize writes into the output(s)
designated by the output plugin. The default is 1.
You can also use this option together with *detach_process*.

### debug

Boolean. Enable if you need to debug Amazon Kinesis API calls. Default is false.

## Configuration examples

Here are some configuration examples.
Assume that the JSON object below is coming in with tag 'your_tag':

    {
      "name":"foo",
      "action":"bar"
    }

### Simply putting events to Amazon Kinesis with a partition key

In this example, the value 'foo' will be used as the partition key,
and events will be sent to the stream specified in 'stream_name'.

    <match your_tag>
      type kinesis-aggregation

      stream_name YOUR_STREAM_NAME

      aws_key_id YOUR_AWS_ACCESS_KEY
      aws_sec_key YOUR_SECRET_KEY

      region us-east-1

      fixed_partition_key foo

      # You should set the buffer_chunk_limit to substantially less
      # than the kinesis 1mb record limit, since we ship a chunk at once.
      buffer_chunk_limit 300k
    </match>
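As a cross-check on the 300k figure, the plugin source (further down in this diff) derives its maximum buffer size from the Kinesis 1 MB record limit minus a small envelope allowance. The arithmetic is easy to sketch (constants mirror the plugin source; `chunk_limit` is the example value above):

```ruby
# A flushed chunk becomes a single PutRecord call, so it must fit the
# Kinesis 1 MB record limit with room for the aggregation envelope.
PUT_RECORD_MAX_DATA_SIZE = 1024 * 1024                 # Kinesis per-record cap
FLUENTD_MAX_BUFFER_SIZE  = PUT_RECORD_MAX_DATA_SIZE - 200

chunk_limit = 300 * 1024  # the buffer_chunk_limit from the example above

# Within the hard limit, and under the 1/3 threshold the plugin
# warns about when multiple producers share a shard.
raise 'chunk too big' unless chunk_limit <= FLUENTD_MAX_BUFFER_SIZE
raise 'risky size'    unless chunk_limit <= FLUENTD_MAX_BUFFER_SIZE / 3
```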

### Improving throughput to Amazon Kinesis

The achievable throughput to Amazon Kinesis is limited by single-threaded
PutRecord calls, which should be at most around 300kb each.
The plugin can also be configured to execute in parallel.
The **detach_process** and **num_threads** configuration settings control
parallelism.

With the configuration below, you will spawn 2 processes:

    <match your_tag>
      type kinesis-aggregation

      stream_name YOUR_STREAM_NAME
      region us-east-1

      detach_process 2
      buffer_chunk_limit 300k
    </match>

You can also specify a number of threads to put with.
The number of threads is bound to each individual process,
so in this case you will spawn 1 process with 50 threads:

    <match your_tag>
      type kinesis-aggregation

      stream_name YOUR_STREAM_NAME
      region us-east-1

      num_threads 50
      buffer_chunk_limit 300k
    </match>

Both options can be used together: with the configuration below,
you will spawn 2 processes, each with 50 threads:

    <match your_tag>
      type kinesis-aggregation

      stream_name YOUR_STREAM_NAME
      region us-east-1

      detach_process 2
      num_threads 50
      buffer_chunk_limit 300k
    </match>

## Related Resources

* [Amazon Kinesis Developer Guide](http://docs.aws.amazon.com/kinesis/latest/dev/introduction.html)
data/Rakefile
ADDED
@@ -0,0 +1,23 @@
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.

require "bundler/gem_tasks"

require 'rake/testtask'
Rake::TestTask.new(:test) do |test|
  test.libs << 'lib' << 'test'
  test.pattern = 'test/**/test_*.rb'
  test.verbose = true
end

task :default => :test

data/fluent-plugin-kinesis-aggregation.gemspec
ADDED
@@ -0,0 +1,42 @@
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.

# coding: utf-8
lib = File.expand_path('../lib', __FILE__)
$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)

require "fluent/plugin/version"

Gem::Specification.new do |spec|
  spec.name          = "fluent-plugin-kinesis-aggregation"
  spec.version       = FluentPluginKinesisAggregation::VERSION
  spec.author        = 'Someone'
  spec.summary       = %q{Fluentd output plugin that sends KPL style aggregated events to Amazon Kinesis.}
  spec.homepage      = "https://github.com/wryun/fluent-plugin-kinesis-aggregation"
  spec.license       = "Apache License, Version 2.0"

  spec.files         = `git ls-files`.split($/)
  spec.executables   = spec.files.grep(%r{^bin/}) { |f| File.basename(f) }
  spec.test_files    = spec.files.grep(%r{^(test|spec|features)/})
  spec.require_paths = ["lib"]
  spec.required_ruby_version = '>= 2.2'

  spec.add_development_dependency "bundler", "~> 1.3"
  spec.add_development_dependency "rake", "~> 10.0"
  spec.add_development_dependency "test-unit-rr", "~> 1.0"

  spec.add_dependency "fluentd", ">= 0.10.53", "< 0.13"
  spec.add_dependency "aws-sdk-core", ">= 2.0.12", "< 3.0"
  spec.add_dependency "msgpack", ">= 0.5.8"
  spec.add_dependency "protobuf", ">= 3.5.5"
end

data/lib/fluent/plugin/out_kinesis-aggregation.rb
ADDED
@@ -0,0 +1,197 @@
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.

require 'aws-sdk-core'
require 'yajl'
require 'logger'
require 'securerandom'
require 'fluent/plugin/version'
require 'digest'

require 'protobuf'
require 'protobuf/message'


# https://github.com/awslabs/amazon-kinesis-producer/blob/master/aggregation-format.md
class AggregatedRecord < ::Protobuf::Message; end
class Tag < ::Protobuf::Message; end
class Record < ::Protobuf::Message; end

class AggregatedRecord
  repeated :string, :partition_key_table, 1
  repeated :string, :explicit_hash_key_table, 2
  repeated ::Record, :records, 3
end

class Tag
  required :string, :key, 1
  optional :string, :value, 2
end

class Record
  required :uint64, :partition_key_index, 1
  optional :uint64, :explicit_hash_key_index, 2
  required :bytes, :data, 3
  repeated ::Tag, :tags, 4
end


module FluentPluginKinesisAggregation
  class OutputFilter < Fluent::BufferedOutput

    include Fluent::DetachMultiProcessMixin
    include Fluent::SetTimeKeyMixin
    include Fluent::SetTagKeyMixin

    USER_AGENT_NAME = 'fluent-plugin-kinesis-aggregation-output-filter'
    PUT_RECORD_MAX_DATA_SIZE = 1024 * 1024
    # 200 is an arbitrary number more than the envelope overhead
    # and big enough to store the partition/hash key table in
    # AggregatedRecords. Note that you shouldn't really ever have
    # the buffer this high, since you're likely to fail the write
    # if anyone else is writing to the shard at the time.
    FLUENTD_MAX_BUFFER_SIZE = PUT_RECORD_MAX_DATA_SIZE - 200

    Fluent::Plugin.register_output('kinesis-aggregation', self)

    config_set_default :include_time_key, true
    config_set_default :include_tag_key, true

    config_param :aws_key_id, :string, default: nil, :secret => true
    config_param :aws_sec_key, :string, default: nil, :secret => true
    # The 'region' parameter is optional because
    # it may be set as an environment variable.
    config_param :region, :string, default: nil

    config_param :profile, :string, :default => nil
    config_param :credentials_path, :string, :default => nil
    config_param :role_arn, :string, :default => nil
    config_param :external_id, :string, :default => nil

    config_param :stream_name, :string
    config_param :fixed_partition_key, :string, default: nil

    config_param :debug, :bool, default: false

    config_param :http_proxy, :string, default: nil

    def configure(conf)
      super

      if @buffer.chunk_limit > FLUENTD_MAX_BUFFER_SIZE
        raise Fluent::ConfigError, "Kinesis buffer_chunk_limit is set to more than the 1mb shard limit (i.e. you won't be able to write your chunks!)"
      end

      if @buffer.chunk_limit > FLUENTD_MAX_BUFFER_SIZE / 3
        log.warn 'Kinesis buffer_chunk_limit is set at more than 1/3 of the per second shard limit (1mb). This is not good if you have many producers.'
|
97
|
+
end
|
98
|
+
end
|
99
|
+
|
100
|
+
def start
|
101
|
+
detach_multi_process do
|
102
|
+
super
|
103
|
+
load_client
|
104
|
+
end
|
105
|
+
end
|
106
|
+
|
107
|
+
def format(tag, time, record)
|
108
|
+
return AggregatedRecord.new(
|
109
|
+
records: [Record.new(
|
110
|
+
partition_key_index: 0,
|
111
|
+
data: Yajl.dump(record)
|
112
|
+
)]
|
113
|
+
).encode
|
114
|
+
end
|
115
|
+
|
116
|
+
def write(chunk)
|
117
|
+
records = chunk.read
|
118
|
+
if records.length > FLUENTD_MAX_BUFFER_SIZE
|
119
|
+
log.error "Can't emit aggregated record of length #{records.length} (more than #{FLUENTD_MAX_BUFFER_SIZE})"
|
120
|
+
return # do not throw, since we can't retry
|
121
|
+
end
|
122
|
+
|
123
|
+
partition_key = @fixed_partition_key || SecureRandom.uuid
|
124
|
+
|
125
|
+
# confusing magic. Because of the format of protobuf records,
|
126
|
+
# it's valid (in this case) to concatenate the AggregatedRecords
|
127
|
+
# to form one AggregatedRecord, since we only have a repeated field
|
128
|
+
# in records.
|
129
|
+
message = AggregatedRecord.new(
|
130
|
+
partition_key_table: [partition_key]
|
131
|
+
).encode + records
|
132
|
+
|
133
|
+
@client.put_record(
|
134
|
+
stream_name: @stream_name,
|
135
|
+
data: kpl_aggregation_pack(message),
|
136
|
+
partition_key: partition_key
|
137
|
+
)
|
138
|
+
end
|
139
|
+
|
140
|
+
private
|
141
|
+
|
142
|
+
# https://github.com/awslabs/amazon-kinesis-producer/blob/master/aggregation-format.md
|
143
|
+
KPL_MAGIC_NUMBER = "\xF3\x89\x9A\xC2"
|
144
|
+
def kpl_aggregation_pack(message)
|
145
|
+
[
|
146
|
+
KPL_MAGIC_NUMBER, message, Digest::MD5.digest(message)
|
147
|
+
].pack("A4A*A16")
|
148
|
+
end
|
149
|
+
|
150
|
+
# This code is unchanged from https://github.com/awslabs/aws-fluent-plugin-kinesis
|
151
|
+
def load_client
|
152
|
+
user_agent_suffix = "#{USER_AGENT_NAME}/#{FluentPluginKinesisAggregation::VERSION}"
|
153
|
+
|
154
|
+
options = {
|
155
|
+
user_agent_suffix: user_agent_suffix
|
156
|
+
}
|
157
|
+
|
158
|
+
if @region
|
159
|
+
options[:region] = @region
|
160
|
+
end
|
161
|
+
|
162
|
+
if @aws_key_id && @aws_sec_key
|
163
|
+
options.update(
|
164
|
+
access_key_id: @aws_key_id,
|
165
|
+
secret_access_key: @aws_sec_key,
|
166
|
+
)
|
167
|
+
elsif @profile
|
168
|
+
credentials_opts = {:profile_name => @profile}
|
169
|
+
credentials_opts[:path] = @credentials_path if @credentials_path
|
170
|
+
credentials = Aws::SharedCredentials.new(credentials_opts)
|
171
|
+
options[:credentials] = credentials
|
172
|
+
elsif @role_arn
|
173
|
+
credentials = Aws::AssumeRoleCredentials.new(
|
174
|
+
client: Aws::STS::Client.new(options),
|
175
|
+
role_arn: @role_arn,
|
176
|
+
role_session_name: "fluent-plugin-kinesis-aggregation",
|
177
|
+
external_id: @external_id,
|
178
|
+
duration_seconds: 60 * 60
|
179
|
+
)
|
180
|
+
options[:credentials] = credentials
|
181
|
+
end
|
182
|
+
|
183
|
+
if @debug
|
184
|
+
options.update(
|
185
|
+
logger: Logger.new(log.out),
|
186
|
+
log_level: :debug
|
187
|
+
)
|
188
|
+
end
|
189
|
+
|
190
|
+
if @http_proxy
|
191
|
+
options[:http_proxy] = @http_proxy
|
192
|
+
end
|
193
|
+
|
194
|
+
@client = Aws::Kinesis::Client.new(options)
|
195
|
+
end
|
196
|
+
end
|
197
|
+
end
|
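For context (not part of the diff), a minimal Fluentd match block exercising the parameters defined above; the tag pattern, key id/secret, stream name, and partition key are placeholders, and `buffer_chunk_limit` mirrors the 100k value used in the gem's tests (it must stay under the ~1MB `FLUENTD_MAX_BUFFER_SIZE` check in `configure`):

```
<match your.tag.**>
  type kinesis-aggregation
  aws_key_id YOUR_AWS_KEY_ID
  aws_sec_key YOUR_AWS_SECRET_KEY
  stream_name your_stream
  region us-east-1
  fixed_partition_key your_partition_key
  buffer_chunk_limit 100k
</match>
```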
data/lib/fluent/plugin/version.rb
ADDED
@@ -0,0 +1,16 @@
+# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"). You
+# may not use this file except in compliance with the License. A copy of
+# the License is located at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# or in the "license" file accompanying this file. This file is
+# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
+# ANY KIND, either express or implied. See the License for the specific
+# language governing permissions and limitations under the License.
+
+module FluentPluginKinesisAggregation
+  VERSION = '0.1.0'
+end
data/test/helper.rb
ADDED
@@ -0,0 +1,31 @@
+# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"). You
+# may not use this file except in compliance with the License. A copy of
+# the License is located at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# or in the "license" file accompanying this file. This file is
+# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
+# ANY KIND, either express or implied. See the License for the specific
+# language governing permissions and limitations under the License.
+
+require 'rubygems'
+require 'bundler'
+require 'stringio'
+begin
+  Bundler.setup(:default, :development)
+rescue Bundler::BundlerError => e
+  $stderr.puts e.message
+  $stderr.puts "Run `bundle install` to install missing gems"
+  exit e.status_code
+end
+
+require 'test/unit'
+require 'test/unit/rr'
+
+$LOAD_PATH.unshift(File.join(File.dirname(__FILE__), '..', 'lib'))
+$LOAD_PATH.unshift(File.dirname(__FILE__))
+require 'fluent/test'
+require 'fluent/plugin/out_kinesis-aggregation'
data/test/plugin/test_out_kinesis-aggregation.rb
ADDED
@@ -0,0 +1,223 @@
+# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"). You
+# may not use this file except in compliance with the License. A copy of
+# the License is located at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# or in the "license" file accompanying this file. This file is
+# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
+# ANY KIND, either express or implied. See the License for the specific
+# language governing permissions and limitations under the License.
+
+require 'helper'
+
+class KinesisOutputTest < Test::Unit::TestCase
+  def setup
+    Fluent::Test.setup
+  end
+
+  CONFIG = %[
+    aws_key_id test_key_id
+    aws_sec_key test_sec_key
+    stream_name test_stream
+    region us-east-1
+    fixed_partition_key test_partition_key
+    buffer_chunk_limit 100k
+  ]
+
+  def create_driver(conf = CONFIG, tag='test')
+    Fluent::Test::BufferedOutputTestDriver
+      .new(FluentPluginKinesisAggregation::OutputFilter, tag).configure(conf)
+  end
+
+  def create_mock_client
+    client = mock(Object.new)
+    mock(Aws::Kinesis::Client).new({}) { client }
+    return client
+  end
+
+  def test_configure
+    d = create_driver
+    assert_equal 'test_key_id', d.instance.aws_key_id
+    assert_equal 'test_sec_key', d.instance.aws_sec_key
+    assert_equal 'test_stream', d.instance.stream_name
+    assert_equal 'us-east-1', d.instance.region
+    assert_equal 'test_partition_key', d.instance.fixed_partition_key
+  end
+
+  def test_configure_with_credentials
+    d = create_driver(<<-EOS)
+      profile default
+      credentials_path /home/scott/.aws/credentials
+      stream_name test_stream
+      region us-east-1
+      fixed_partition_key test_partition_key
+      buffer_chunk_limit 100k
+    EOS
+
+    assert_equal 'default', d.instance.profile
+    assert_equal '/home/scott/.aws/credentials', d.instance.credentials_path
+    assert_equal 'test_stream', d.instance.stream_name
+    assert_equal 'us-east-1', d.instance.region
+    assert_equal 'test_partition_key', d.instance.fixed_partition_key
+  end
+
+  def test_configure_with_more_options
+    conf = %[
+      stream_name test_stream
+      region us-east-1
+      http_proxy http://proxy:3333/
+      fixed_partition_key test_partition_key
+      buffer_chunk_limit 100k
+    ]
+    d = create_driver(conf)
+    assert_equal 'test_stream', d.instance.stream_name
+    assert_equal 'us-east-1', d.instance.region
+    assert_equal 'http://proxy:3333/', d.instance.http_proxy
+    assert_equal 'test_partition_key', d.instance.fixed_partition_key
+  end
+
+  def test_configure_fails_on_big_chunk_limit
+    conf = %[
+      stream_name test_stream
+      region us-east-1
+      http_proxy http://proxy:3333/
+      fixed_partition_key test_partition_key
+      buffer_chunk_limit 1m
+    ]
+    assert_raise Fluent::ConfigError do
+      create_driver(conf)
+    end
+  end
+
+  def test_load_client
+    client = stub(Object.new)
+    client.put_record { {} }
+
+    stub(Aws::Kinesis::Client).new do |options|
+      assert_equal("test_key_id", options[:access_key_id])
+      assert_equal("test_sec_key", options[:secret_access_key])
+      assert_equal("us-east-1", options[:region])
+      client
+    end
+
+    d = create_driver
+    d.run
+  end
+
+  def test_load_client_with_credentials
+    client = stub(Object.new)
+    client.put_record { {} }
+
+    stub(Aws::Kinesis::Client).new do |options|
+      assert_equal(nil, options[:access_key_id])
+      assert_equal(nil, options[:secret_access_key])
+      assert_equal("us-east-1", options[:region])
+
+      credentials = options[:credentials]
+      assert_equal("default", credentials.profile_name)
+      assert_equal("/home/scott/.aws/credentials", credentials.path)
+
+      client
+    end
+
+    d = create_driver(<<-EOS)
+      profile default
+      credentials_path /home/scott/.aws/credentials
+      stream_name test_stream
+      region us-east-1
+      fixed_partition_key test_partition_key
+      buffer_chunk_limit 100k
+    EOS
+
+    d.run
+  end
+
+  def test_load_client_with_role_arn
+    client = stub(Object.new)
+    client.put_record { {} }
+
+    stub(Aws::AssumeRoleCredentials).new do |options|
+      assert_equal("arn:aws:iam::001234567890:role/my-role", options[:role_arn])
+      assert_equal("fluent-plugin-kinesis-aggregation", options[:role_session_name])
+      assert_equal("my_external_id", options[:external_id])
+      assert_equal(3600, options[:duration_seconds])
+      "sts_credentials"
+    end
+
+    stub(Aws::Kinesis::Client).new do |options|
+      assert_equal("sts_credentials", options[:credentials])
+      client
+    end
+
+    d = create_driver(<<-EOS)
+      role_arn arn:aws:iam::001234567890:role/my-role
+      external_id my_external_id
+      stream_name test_stream
+      region us-east-1
+      fixed_partition_key test_partition_key
+      buffer_chunk_limit 100k
+    EOS
+    d.run
+  end
+
+  def test_format
+    d = create_driver
+
+    data1 = {"test_partition_key"=>"key1","a"=>1,"time"=>"2011-01-02T13:14:15Z","tag"=>"test"}
+    data2 = {"test_partition_key"=>"key2","a"=>2,"time"=>"2011-01-02T13:14:15Z","tag"=>"test"}
+
+    time = Time.parse("2011-01-02 13:14:15 UTC").to_i
+    d.emit(data1, time)
+    d.emit(data2, time)
+    d.expect_format("\u001AR\b\u0000\u001AN{\"test_partition_key\":\"key1\",\"a\":1,\"time\":\"2011-01-02T13:14:15Z\",\"tag\":\"test\"}")
+    d.expect_format("\u001AR\b\u0000\u001AN{\"test_partition_key\":\"key2\",\"a\":2,\"time\":\"2011-01-02T13:14:15Z\",\"tag\":\"test\"}")
+
+    client = create_mock_client
+    client.put_record(
+      stream_name: 'test_stream',
+      data: "\xF3\x89\x9A\xC2\n\x12test_partition_key\x1AR\b\x00\x1AN{\"test_partition_key\":\"key1\",\"a\":1,\"time\":\"2011-01-02T13:14:15Z\",\"tag\":\"test\"}\x1AR\b\x00\x1AN{\"test_partition_key\":\"key2\",\"a\":2,\"time\":\"2011-01-02T13:14:15Z\",\"tag\":\"test\"}\xB6j\x1E\xF7q\xC9}v\vU\xAD\xA3@<\x82\xA9".force_encoding("ASCII-8BIT"),
+      partition_key: 'test_partition_key'
+    ) { {} }
+
+    d.run
+  end
+
+  def test_multibyte
+    d = create_driver
+
+    data1 = {"test_partition_key"=>"key1","a"=>"\xE3\x82\xA4\xE3\x83\xB3\xE3\x82\xB9\xE3\x83\x88\xE3\x83\xBC\xE3\x83\xAB","time"=>"2011-01-02T13:14:15Z","tag"=>"test"}
+    data1["a"].force_encoding("ASCII-8BIT")
+
+    time = Time.parse("2011-01-02 13:14:15 UTC").to_i
+    d.emit(data1, time)
+
+    d.expect_format(
+      "\x1Ae\b\x00\x1Aa{\"test_partition_key\":\"key1\",\"a\":\"\xE3\x82\xA4\xE3\x83\xB3\xE3\x82\xB9\xE3\x83\x88\xE3\x83\xBC\xE3\x83\xAB\",\"time\":\"2011-01-02T13:14:15Z\",\"tag\":\"test\"}".force_encoding("ASCII-8BIT")
+    )
+
+    client = create_mock_client
+    client.put_record(
+      stream_name: 'test_stream',
+      data: "\xF3\x89\x9A\xC2\n\x12test_partition_key\x1Ae\b\x00\x1Aa{\"test_partition_key\":\"key1\",\"a\":\"\xE3\x82\xA4\xE3\x83\xB3\xE3\x82\xB9\xE3\x83\x88\xE3\x83\xBC\xE3\x83\xAB\",\"time\":\"2011-01-02T13:14:15Z\",\"tag\":\"test\"}\xC8\x13{\xFBL_\x8FE\x02\xEEC\xC9_~\xEF(".force_encoding("ASCII-8BIT"),
+      partition_key: 'test_partition_key'
+    ) { {} }
+
+    d.run
+  end
+
+  def test_fail_on_bigchunk
+    d = create_driver
+
+    d.emit(
+      {"msg": "z" * 1024 * 1024},
+      Time.parse("2011-01-02 13:14:15 UTC").to_i)
+    client = dont_allow(Object.new)
+    client.put_record
+    mock(Aws::Kinesis::Client).new({}) { client }
+
+    d.run
+  end
+end
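The `data` blobs asserted in `test_format` and `test_multibyte` above all begin with the 4-byte KPL magic number and end with a 16-byte MD5 checksum, exactly as produced by `kpl_aggregation_pack`. A minimal stdlib-only sketch of that framing, using a placeholder string in place of a real encoded AggregatedRecord:

```ruby
require 'digest'

# Same framing as kpl_aggregation_pack: a 4-byte KPL magic number,
# the protobuf payload, then an MD5 digest of the payload.
KPL_MAGIC_NUMBER = "\xF3\x89\x9A\xC2".b

def kpl_frame(message)
  [KPL_MAGIC_NUMBER, message, Digest::MD5.digest(message)].pack("A4A*A16")
end

# Placeholder payload, NOT a real encoded AggregatedRecord.
payload = "placeholder-payload"
frame = kpl_frame(payload)

# A deaggregating consumer checks the magic number, then verifies the
# trailing MD5 digest against the bytes in between.
magic  = frame.byteslice(0, 4)
body   = frame.byteslice(4, frame.bytesize - 20)
digest = frame.byteslice(-16, 16)

puts magic == KPL_MAGIC_NUMBER                 # true
puts digest == Digest::MD5.digest(body)        # true
puts frame.bytesize == payload.bytesize + 20   # 4 + payload + 16 bytes
```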
metadata
ADDED
@@ -0,0 +1,169 @@
+--- !ruby/object:Gem::Specification
+name: fluent-plugin-kinesis-aggregation
+version: !ruby/object:Gem::Version
+  version: 0.1.0
+platform: ruby
+authors:
+- Someone
+autorequire:
+bindir: bin
+cert_chain: []
+date: 2015-11-12 00:00:00.000000000 Z
+dependencies:
+- !ruby/object:Gem::Dependency
+  name: bundler
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '1.3'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '1.3'
+- !ruby/object:Gem::Dependency
+  name: rake
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '10.0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '10.0'
+- !ruby/object:Gem::Dependency
+  name: test-unit-rr
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '1.0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '1.0'
+- !ruby/object:Gem::Dependency
+  name: fluentd
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: 0.10.53
+    - - "<"
+      - !ruby/object:Gem::Version
+        version: '0.13'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: 0.10.53
+    - - "<"
+      - !ruby/object:Gem::Version
+        version: '0.13'
+- !ruby/object:Gem::Dependency
+  name: aws-sdk-core
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: 2.0.12
+    - - "<"
+      - !ruby/object:Gem::Version
+        version: '3.0'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: 2.0.12
+    - - "<"
+      - !ruby/object:Gem::Version
+        version: '3.0'
+- !ruby/object:Gem::Dependency
+  name: msgpack
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: 0.5.8
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: 0.5.8
+- !ruby/object:Gem::Dependency
+  name: protobuf
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: 3.5.5
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: 3.5.5
+description:
+email:
+executables: []
+extensions: []
+extra_rdoc_files: []
+files:
+- ".gitignore"
+- ".travis.yml"
+- CHANGELOG.md
+- CONTRIBUTORS.txt
+- Gemfile
+- LICENSE.txt
+- NOTICE.txt
+- README.md
+- Rakefile
+- fluent-plugin-kinesis-aggregation.gemspec
+- lib/fluent/plugin/out_kinesis-aggregation.rb
+- lib/fluent/plugin/version.rb
+- test/helper.rb
+- test/plugin/test_out_kinesis-aggregation.rb
+homepage: https://github.com/wryun/fluent-plugin-kinesis-aggregation
+licenses:
+- Apache License, Version 2.0
+metadata: {}
+post_install_message:
+rdoc_options: []
+require_paths:
+- lib
+required_ruby_version: !ruby/object:Gem::Requirement
+  requirements:
+  - - ">="
+    - !ruby/object:Gem::Version
+      version: '2.2'
+required_rubygems_version: !ruby/object:Gem::Requirement
+  requirements:
+  - - ">="
+    - !ruby/object:Gem::Version
+      version: '0'
+requirements: []
+rubyforge_project:
+rubygems_version: 2.4.5.1
+signing_key:
+specification_version: 4
+summary: Fluentd output plugin that sends KPL style aggregated events to Amazon Kinesis.
+test_files:
+- test/helper.rb
+- test/plugin/test_out_kinesis-aggregation.rb