logstash-output-s3 4.1.8 → 4.1.9
- checksums.yaml +4 -4
- data/CHANGELOG.md +4 -0
- data/docs/index.asciidoc +32 -29
- data/logstash-output-s3.gemspec +1 -1
- metadata +2 -2
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 7653884bdf554ea5da088fff8ccedf6e4371d55bab768276f887043fc1951d61
+  data.tar.gz: b972c14f15fef7b7b088db17d521c1255832280b005e35060948ee27c98b0b2e
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: de5e04e1f51a90c62b14992a4aba4fc2f05ba7be30ea697bbd21747a376d13a5dd9a4e6dc773cc727f773f16541ba35a58fb3ab1147e37e0e757cdb8487eef13
+  data.tar.gz: 77e29b0995184da283de1ea27122ab6ac343e22a337a35c617cbd1005657c44ef00616a7687672f543e799f89911ecf03a651d8184d5988ea7fa2d1680686b2f
data/CHANGELOG.md
CHANGED
@@ -1,3 +1,7 @@
+## 4.1.9
+- Added configuration information for multiple s3 outputs to documentation [#196](https://github.com/logstash-plugins/logstash-output-s3/pull/196)
+- Fixed formatting problems and typographical errors [#194](https://github.com/logstash-plugins/logstash-output-s3/pull/194), [#201](https://github.com/logstash-plugins/logstash-output-s3/pull/201), and [#204](https://github.com/logstash-plugins/logstash-output-s3/pull/204)
+
 ## 4.1.8
 - Add support for setting mutipart upload threshold [#202](https://github.com/logstash-plugins/logstash-output-s3/pull/202)
 
data/docs/index.asciidoc
CHANGED
@@ -21,39 +21,41 @@ include::{include_path}/plugin_header.asciidoc[]
 
 ==== Description
 
-INFORMATION:
-
 This plugin batches and uploads logstash events into Amazon Simple Storage Service (Amazon S3).
 
-
-
-* S3 PutObject permission
+S3 outputs create temporary files into the OS' temporary directory.
+You can specify where to save them using the `temporary_directory` option.
 
-
+IMPORTANT: For configurations containing multiple s3 outputs with the restore
+option enabled, each output should define its own 'temporary_directory'.
 
-
+===== Requirements
 
-
+* Amazon S3 Bucket and S3 Access Permissions (Typically access_key_id and secret_access_key)
+* S3 PutObject permission
 
+===== S3 output file
+
+[source,txt]
+-----
+`ls.s3.312bc026-2f5d-49bc-ae9f-5940cf4ad9a6.2013-04-18T10.00.tag_hello.part0.txt`
+-----
 
 |=======
-| ls.s3 |
+| ls.s3 | indicates logstash plugin s3 |
 | 312bc026-2f5d-49bc-ae9f-5940cf4ad9a6 | a new, random uuid per file. |
 | 2013-04-18T10.00 | represents the time whenever you specify time_file. |
-| tag_hello |
-| part0 |
+| tag_hello | indicates the event's tag. |
+| part0 | If you indicate size_file, it will generate more parts if your file.size > size_file.
+When a file is full, it gets pushed to the bucket and then deleted from the temporary directory.
+If a file is empty, it is simply deleted. Empty files will not be pushed. |
 |=======
 
-Crash Recovery
-* This plugin will recover and upload temporary log files after crash/abnormal termination when using `restore` set to true
-
-
+===== Crash Recovery
 
+This plugin will recover and upload temporary log files after crash/abnormal termination when using `restore` set to true
 
-
-
-
-#### Usage:
+===== Usage
 This is an example of logstash config:
 [source,ruby]
 output {
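The "S3 output file" section added in this hunk documents the temporary filename layout. A minimal Ruby sketch of that layout (a hypothetical helper written for illustration, not code from the plugin):

```ruby
require 'securerandom'

# Hypothetical helper mirroring the documented temporary-file name:
#   ls.s3.<uuid>.<time>.tag_<tag>.part<N>.txt
# <uuid> is a new, random uuid per file; <time> follows the
# 2013-04-18T10.00 style shown in the docs.
def s3_temp_filename(tag, part, time = Time.now)
  uuid  = SecureRandom.uuid
  stamp = time.strftime('%Y-%m-%dT%H.%M')
  "ls.s3.#{uuid}.#{stamp}.tag_#{tag}.part#{part}.txt"
end

puts s3_temp_filename('hello', 0)
# e.g. ls.s3.312bc026-2f5d-49bc-ae9f-5940cf4ad9a6.2013-04-18T10.00.tag_hello.part0.txt
```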
@@ -120,11 +122,11 @@ output plugins.
 
 This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order:
 
-
-
-
-
-
+. Static configuration, using `access_key_id` and `secret_access_key` params in logstash plugin config
+. External credentials file specified by `aws_credentials_file`
+. Environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
+. Environment variables `AMAZON_ACCESS_KEY_ID` and `AMAZON_SECRET_ACCESS_KEY`
+. IAM Instance Profile (available when running inside EC2)
 
 [id="plugins-{type}s-{plugin}-additional_settings"]
 ===== `additional_settings`
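The five-step credential lookup restored in this hunk can be sketched in plain Ruby. This is a simplified illustration of the documented order only; in the real plugin the resolution is delegated to the AWS SDK's provider chain:

```ruby
# Simplified illustration of the documented credential lookup order;
# the actual plugin hands this off to the AWS SDK.
def credential_source(config, env = ENV)
  return :static if config['access_key_id'] && config['secret_access_key']
  return :credentials_file if config['aws_credentials_file']
  return :aws_env if env['AWS_ACCESS_KEY_ID'] && env['AWS_SECRET_ACCESS_KEY']
  return :amazon_env if env['AMAZON_ACCESS_KEY_ID'] && env['AMAZON_SECRET_ACCESS_KEY']
  :iam_instance_profile # last resort, available when running inside EC2
end
```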
@@ -211,7 +213,7 @@ guaranteed to work correctly with the AWS SDK.
 Specify a prefix to the uploaded filename, this can simulate directories on S3. Prefix does not require leading slash.
 This option supports logstash interpolation: https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html#sprintf;
 for example, files can be prefixed with the event date using `prefix = "%{+YYYY}/%{+MM}/%{+dd}"`.
-Be warned this can
+Be warned this can create a lot of temporary local files.
 
 [id="plugins-{type}s-{plugin}-proxy_uri"]
 ===== `proxy_uri`
@@ -235,6 +237,9 @@ The AWS Region
 * Value type is <<boolean,boolean>>
 * Default value is `true`
 
+Used to enable recovery after crash/abnormal termination.
+Temporary log files will be recovered and uploaded.
+
 [id="plugins-{type}s-{plugin}-role_arn"]
 ===== `role_arn`
 
@@ -350,7 +355,7 @@ default to the current OS temporary directory in linux /tmp/logstash
 
 Set the time, in MINUTES, to close the current sub_time_section of bucket.
 If you define file_size you have a number of files in consideration of the section and the current tag.
-0 stay all time on
+0 stay all time on listener, beware if you specific 0 and size_file 0, because you will not put the file on bucket,
 for now the only thing this plugin can do is to put the file when logstash restart.
 
 [id="plugins-{type}s-{plugin}-upload_multipart_threshold"]
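The time_file/size_file interaction described in this hunk (with both set to 0, the file only reaches the bucket when Logstash restarts) can be sketched as a predicate. This is a hypothetical condensation of the documented behavior, not plugin code:

```ruby
# Hypothetical sketch of the documented rotation rule: rotate when the
# temp file exceeds size_file bytes or time_file minutes; with both at 0
# the file is only pushed when Logstash restarts.
def rotate?(bytes_written, age_minutes, size_file, time_file)
  return false if size_file.zero? && time_file.zero?
  (size_file > 0 && bytes_written >= size_file) ||
    (time_file > 0 && age_minutes >= time_file)
end
```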
@@ -384,9 +389,7 @@ Specify how many workers to use to upload the files to S3
 * Default value is `true`
 
 The common use case is to define permission on the root bucket and give Logstash full access to write its logs.
-In some
-
-
+In some circumstances you need finer grained permission on subfolder, this allow you to disable the check at startup.
 
 [id="plugins-{type}s-{plugin}-common-options"]
 include::{include_path}/{type}.asciidoc[]
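The IMPORTANT note this release adds to the docs (PR #196) concerns configurations with several s3 outputs and `restore => true`. A hedged config sketch of what it recommends, with placeholder bucket names and directories:

```ruby
output {
  s3 {
    bucket => "primary-logs"                          # placeholder bucket name
    restore => true
    temporary_directory => "/tmp/logstash-s3-primary" # distinct per output
  }
  s3 {
    bucket => "audit-logs"                            # placeholder bucket name
    restore => true
    temporary_directory => "/tmp/logstash-s3-audit"   # distinct per output
  }
}
```

Giving each output its own `temporary_directory` keeps one output's restore pass from picking up another output's in-progress temporary files.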
data/logstash-output-s3.gemspec
CHANGED
@@ -1,6 +1,6 @@
 Gem::Specification.new do |s|
   s.name = 'logstash-output-s3'
-  s.version = '4.1.
+  s.version = '4.1.9'
   s.licenses = ['Apache-2.0']
   s.summary = "Sends Logstash events to the Amazon Simple Storage Service"
   s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program"
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: logstash-output-s3
 version: !ruby/object:Gem::Version
-  version: 4.1.
+  version: 4.1.9
 platform: ruby
 authors:
 - Elastic
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2019-
+date: 2019-04-15 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement