logstash-output-kinesis 0.0.7-java → 0.0.8-java
This diff shows the content changes between publicly released versions of the package, as they appear in their public registry. It is provided for informational purposes only.
- checksums.yaml +4 -4
- data/README.md +66 -10
- data/build.gradle +1 -0
- data/lib/logstash-output-kinesis/version.rb +1 -1
- data/lib/logstash/outputs/kinesis.rb +63 -0
- data/vendor/jar-dependencies/runtime-jars/aws-java-sdk-sts-1.9.37.jar +0 -0
- metadata +3 -2
checksums.yaml CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: ba5057a7a75e92ec491b6129eafc0a09742174d4
+  data.tar.gz: 9d779195310054cc17f7eeb4d492a4e5d25ee748
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: e8ca60399e520e3cd298ae1a4c40b4ee3267ed4f62498f8e8650900ef87d187001cf9fd1825bfe584de63c1c984d162725d791592986778e11f6e754dead60ab
+  data.tar.gz: eb83a4d5e8a9f77a929ac3c77a5921bb9932d8e8215dcca7c62b71973253fe16394fcbc1c485c8f142f437d6a519c6b44ee0535ab94c2795b000a020658f942b
data/README.md CHANGED

@@ -5,14 +5,14 @@
 
 This is a plugin for [Logstash](https://github.com/elasticsearch/logstash).
 
-It will send log records to a [Kinesis stream](https://aws.amazon.com/kinesis/), using the [KPL](https://docs.aws.amazon.com/kinesis/latest/dev/developing-producers-with-kpl.html) library.
+It will send log records to a [Kinesis stream](https://aws.amazon.com/kinesis/), using the [Kinesis Producer Library (KPL)](https://docs.aws.amazon.com/kinesis/latest/dev/developing-producers-with-kpl.html).
 
 
 ## Configuration
 
 Minimum required configuration to get this plugin chugging along:
 
-```
+```nginx
 output {
   kinesis {
     stream_name => "logs-stream"
@@ -23,23 +23,79 @@ output {
 
 This plugin accepts a wide range of configuration options, most of which come from the underlying KPL library itself. [View the full list of KPL configuration options here.][kpldoc]
 
-Please note that configuration options are snake_cased instead of camelCased. So, where [KinesisProducerConfiguration][kpldoc] offers a `setMetricsLevel` option,this plugin accepts a `metrics_level` option.
+Please note that configuration options are snake_cased instead of camelCased. So, where [KinesisProducerConfiguration][kpldoc] offers a `setMetricsLevel` option, this plugin accepts a `metrics_level` option.
+
+### Metrics
+
+The underlying KPL library defaults to sending CloudWatch metrics to give insight into what it's actually doing at runtime. It's highly recommended you ensure these metrics are flowing through, and use them to monitor the health of your log shipping.
 
-
+If for some reason you want to switch them off, you can easily do so:
+
+```nginx
+output {
+  kinesis {
+    # ...
 
-
+    metrics_level => "none"
+  }
+}
+```
 
-
+If you choose to keep metrics enabled, ensure the AWS credentials you provide to this plugin are able to write to Kinesis *and* write to CloudWatch.
 
-
+### Authentication
+
+By default, this plugin will use the AWS SDK [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html) to obtain credentials for communication with the Kinesis stream (and CloudWatch, if metrics are enabled). The following places will be checked for credentials:
+
+* `AWS_ACCESS_KEY_ID` / `AWS_SECRET_KEY` environment variables available to the Logstash process
 * `~/.aws/credentials` credentials file
-* Instance profile
+* Instance profile (if Logstash is running in an EC2 instance)
+
+If you want to provide credentials directly in the config file, you can do so:
+
+```nginx
+output {
+  kinesis {
+    # ...
+
+    access_key => "AKIAIDFAKECREDENTIAL"
+    secret_key => "KX0ofakeLcredentialsGrightJherepOlolPkQk"
+
+    # You can provide specific credentials for CloudWatch metrics:
+    metrics_access_key => "AKIAIDFAKECREDENTIAL"
+    metrics_secret_key => "KX0ofakeLcredentialsGrightJherepOlolPkQk"
+  }
+}
+```
+
+If `access_key` and `secret_key` are provided, they will be used for communicating with Kinesis *and* CloudWatch. If `metrics_access_key` and `metrics_secret_key` are provided, they will be used for communication with CloudWatch. If only the metrics credentials were provided, Kinesis would use the default credentials provider (explained above) and CloudWatch would use the specific credentials. Confused? Good!
+
+#### Using STS
+
+You can also configure this plugin to use [AWS STS](https://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html) to "assume" a role that has access to Kinesis and CloudWatch. If you use this in combination with EC2 instance profiles (which the default credentials provider explained above uses), you can configure Logstash to write to Kinesis and CloudWatch without any hardcoded credentials.
+
+```nginx
+output {
+  kinesis {
+    # ...
+
+    role_arn => "arn:aws:iam::123456789:role/my-kinesis-producer-role"
+
+    # You can also provide a specific role to assume for CloudWatch metrics:
+    metrics_role_arn => "arn:aws:iam::123456789:role/my-metrics-role"
+  }
+}
+```
+
+You can combine `role_arn` / `metrics_role_arn` with the explicit AWS credentials config explained earlier, too.
+
+All this stuff can be mixed, too - if you want to use hardcoded credentials for Kinesis but assume a role via STS for accessing CloudWatch, you can do that. Vice versa works as well - assume a role for accessing Kinesis and provide hardcoded credentials for CloudWatch. Make things as arbitrarily complicated for yourself as you like ;)
 
 ### Building a partition key
 
 Kinesis demands a [partition key](https://docs.aws.amazon.com/kinesis/latest/dev/key-concepts.html#partition-key) be provided for each record. By default, this plugin will provide a very boring partition key of `-`. However, you can configure it to compute a partition key from fields in your log events.
 
-```
+```nginx
 output {
   kinesis {
     # ...
@@ -56,7 +112,7 @@ If you are using an older version of the Amazon KCL library to consume your reco
 
 If you wish to simply disable record aggregation, that's easy:
 
-```
+```nginx
 output {
   kinesis {
     aggregation_enabled => false
data/lib/logstash/outputs/kinesis.rb CHANGED
@@ -18,6 +18,20 @@ class LogStash::Outputs::Kinesis < LogStash::Outputs::Base
   # A list of event data keys to use when constructing a partition key
   config :event_partition_keys, :validate => :array, :default => []
 
+  # An AWS access key to use for authentication to Kinesis and CloudWatch
+  config :access_key, :validate => :string
+  # An AWS secret key to use for authentication to Kinesis and CloudWatch
+  config :secret_key, :validate => :string
+  # If provided, STS will be used to assume this role and use it to authenticate to Kinesis and CloudWatch
+  config :role_arn, :validate => :string
+
+  # If provided, use this AWS access key for authentication to CloudWatch
+  config :metrics_access_key, :validate => :string
+  # If provided, use this AWS secret key for authentication to CloudWatch
+  config :metrics_secret_key, :validate => :string
+  # If provided, STS will be used to assume this role and use it to authenticate to CloudWatch
+  config :metrics_role_arn, :validate => :string
+
   config :aggregation_enabled, :validate => :boolean, :default => true
   config :aggregation_max_count, :validate => :number, :default => 4294967295
   config :aggregation_max_size, :validate => :number, :default => 51200
@@ -45,10 +59,14 @@ class LogStash::Outputs::Kinesis < LogStash::Outputs::Base
   config :verify_certificate, :validate => :boolean, :default => true
 
   KPL = com.amazonaws.services.kinesis.producer
+  AWSAuth = com.amazonaws.auth
   ByteBuffer = java.nio.ByteBuffer
 
   public
   def register
+    @metrics_access_key ||= @access_key
+    @metrics_secret_key ||= @secret_key
+
     @producer = KPL.KinesisProducer::new(create_kpl_config)
     @codec.on_event(&method(:send_record))
   end
@@ -87,17 +105,22 @@ class LogStash::Outputs::Kinesis < LogStash::Outputs::Base
   def create_kpl_config
     config = KPL.KinesisProducerConfiguration::new()
 
+    credentials_provider = create_credentials_provider
+    metrics_credentials_provider = create_metrics_credentials_provider
+
     config.setAggregationEnabled(@aggregation_enabled)
     config.setAggregationMaxCount(@aggregation_max_count)
     config.setAggregationMaxSize(@aggregation_max_size)
     config.setCollectionMaxCount(@collection_max_count)
     config.setCollectionMaxSize(@collection_max_size)
     config.setConnectTimeout(@connect_timeout)
+    config.setCredentialsProvider(credentials_provider)
     config.setCredentialsRefreshDelay(@credentials_refresh_delay)
     config.setCustomEndpoint(@custom_endpoint) if !@custom_endpoint.nil?
     config.setFailIfThrottled(@fail_if_throttled)
     config.setLogLevel(@log_level)
     config.setMaxConnections(@max_connections)
+    config.setMetricsCredentialsProvider(metrics_credentials_provider)
     config.setMetricsGranularity(@metrics_granularity)
     config.setMetricsLevel(@metrics_level)
     config.setMetricsNamespace(@metrics_namespace)
@@ -116,6 +139,28 @@ class LogStash::Outputs::Kinesis < LogStash::Outputs::Base
     config
   end
 
+  def create_credentials_provider
+    provider = AWSAuth.DefaultAWSCredentialsProviderChain.new()
+    if @access_key and @secret_key
+      provider = BasicCredentialsProvider.new(AWSAuth.BasicAWSCredentials.new(@access_key, @secret_key))
+    end
+    if @role_arn
+      provider = AWSAuth.STSAssumeRoleSessionCredentialsProvider.new(provider, @role_arn, "logstash-output-kinesis")
+    end
+    provider
+  end
+
+  def create_metrics_credentials_provider
+    provider = AWSAuth.DefaultAWSCredentialsProviderChain.new()
+    if @metrics_access_key and @metrics_secret_key
+      provider = BasicCredentialsProvider.new(AWSAuth.BasicAWSCredentials.new(@metrics_access_key, @metrics_secret_key))
+    end
+    if @metrics_role_arn
+      provider = AWSAuth.STSAssumeRoleSessionCredentialsProvider.new(provider, @metrics_role_arn, "logstash-output-kinesis")
+    end
+    provider
+  end
+
   def send_record(event, payload)
     begin
       event_blob = ByteBuffer::wrap(payload.to_java_bytes)
@@ -125,3 +170,21 @@ class LogStash::Outputs::Kinesis < LogStash::Outputs::Base
     end
   end
 end
+
+class BasicCredentialsProvider
+  java_implements 'com.amazonaws.auth.AWSCredentialsProvider'
+
+  def initialize(credentials)
+    @credentials = credentials
+  end
+
+  java_signature 'com.amazonaws.auth.AWSCredentials getCredentials()'
+  def getCredentials
+    @credentials
+  end
+
+  java_signature 'void refresh()'
+  def refresh
+    # Noop.
+  end
+end
data/vendor/jar-dependencies/runtime-jars/aws-java-sdk-sts-1.9.37.jar ADDED

Binary file
metadata CHANGED

@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: logstash-output-kinesis
 version: !ruby/object:Gem::Version
-  version: 0.0.7
+  version: 0.0.8
 platform: java
 authors:
 - Sam Day
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2015-
+date: 2015-09-05 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement
@@ -111,6 +111,7 @@ files:
 - spec/spec_helper.rb
 - vendor/jar-dependencies/runtime-jars/amazon-kinesis-producer-0.10.1.jar
 - vendor/jar-dependencies/runtime-jars/aws-java-sdk-core-1.9.37.jar
+- vendor/jar-dependencies/runtime-jars/aws-java-sdk-sts-1.9.37.jar
 - vendor/jar-dependencies/runtime-jars/commons-codec-1.6.jar
 - vendor/jar-dependencies/runtime-jars/commons-io-2.4.jar
 - vendor/jar-dependencies/runtime-jars/commons-lang-2.6.jar