logstash-output-s3 4.1.8 → 4.1.9

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
- metadata.gz: a4cf264de35b436280daac5ccab48529f9f3e8037a37c40f634e479c49dd0b3e
- data.tar.gz: 6c5fe67b82ea4dc36a471ad06199fb241a171bb4142ef754da2436211d99d4b6
+ metadata.gz: 7653884bdf554ea5da088fff8ccedf6e4371d55bab768276f887043fc1951d61
+ data.tar.gz: b972c14f15fef7b7b088db17d521c1255832280b005e35060948ee27c98b0b2e
 SHA512:
- metadata.gz: f1afe8b78490ae972d08216adf887447ada0fef5adb3c67badb8243b84b907d35da6bebc0dbd354dae10f3adb0f1840a996192eac43482725a58eb70fa55eb0e
- data.tar.gz: 611ef2103780b7ef705dace7e1bd95e5c931c4f4276cf3db22c648acf555667f5eb88c43856014d6e7a093f9ea22c50731955ade23a779d85a0d9088085e9e33
+ metadata.gz: de5e04e1f51a90c62b14992a4aba4fc2f05ba7be30ea697bbd21747a376d13a5dd9a4e6dc773cc727f773f16541ba35a58fb3ab1147e37e0e757cdb8487eef13
+ data.tar.gz: 77e29b0995184da283de1ea27122ab6ac343e22a337a35c617cbd1005657c44ef00616a7687672f543e799f89911ecf03a651d8184d5988ea7fa2d1680686b2f
CHANGELOG.md CHANGED
@@ -1,3 +1,7 @@
+ ## 4.1.9
+ - Added configuration information for multiple s3 outputs to documentation [#196](https://github.com/logstash-plugins/logstash-output-s3/pull/196)
+ - Fixed formatting problems and typographical errors [#194](https://github.com/logstash-plugins/logstash-output-s3/pull/194), [#201](https://github.com/logstash-plugins/logstash-output-s3/pull/201), and [#204](https://github.com/logstash-plugins/logstash-output-s3/pull/204)
+
 ## 4.1.8
 - Add support for setting multipart upload threshold [#202](https://github.com/logstash-plugins/logstash-output-s3/pull/202)
 
docs/index.asciidoc CHANGED
@@ -21,39 +21,41 @@ include::{include_path}/plugin_header.asciidoc[]
 
 ==== Description
 
- INFORMATION:
-
 This plugin batches and uploads logstash events into Amazon Simple Storage Service (Amazon S3).
 
- Requirements:
- * Amazon S3 Bucket and S3 Access Permissions (Typically access_key_id and secret_access_key)
- * S3 PutObject permission
+ S3 outputs create temporary files into the OS' temporary directory.
+ You can specify where to save them using the `temporary_directory` option.
 
- S3 outputs create temporary files into the OS' temporary directory, you can specify where to save them using the `temporary_directory` option.
+ IMPORTANT: For configurations containing multiple s3 outputs with the restore
+ option enabled, each output should define its own 'temporary_directory'.
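As a sketch of the note above (bucket names and directory paths are hypothetical placeholders, not from this release), two s3 outputs with `restore` enabled would each get a distinct `temporary_directory`:

```ruby
# Hypothetical two-output configuration; bucket names and paths are examples only.
output {
  s3 {
    bucket => "logs-app-a"                        # placeholder bucket
    restore => true
    temporary_directory => "/tmp/logstash/app_a"  # distinct per output
  }
  s3 {
    bucket => "logs-app-b"                        # placeholder bucket
    restore => true
    temporary_directory => "/tmp/logstash/app_b"  # distinct per output
  }
}
```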
 
- S3 output files have the following format
+ ===== Requirements
 
- ls.s3.312bc026-2f5d-49bc-ae9f-5940cf4ad9a6.2013-04-18T10.00.tag_hello.part0.txt
+ * Amazon S3 Bucket and S3 Access Permissions (Typically access_key_id and secret_access_key)
+ * S3 PutObject permission
 
+ ===== S3 output file
+
+ [source,txt]
+ -----
+ `ls.s3.312bc026-2f5d-49bc-ae9f-5940cf4ad9a6.2013-04-18T10.00.tag_hello.part0.txt`
+ -----
 
 |=======
- | ls.s3 | indicate logstash plugin s3 |
+ | ls.s3 | indicates logstash plugin s3 |
 | 312bc026-2f5d-49bc-ae9f-5940cf4ad9a6 | a new, random uuid per file. |
 | 2013-04-18T10.00 | represents the time whenever you specify time_file. |
- | tag_hello | this indicates the event's tag. |
- | part0 | this means if you indicate size_file then it will generate more parts if your file.size > size_file. When a file is full it will be pushed to the bucket and then deleted from the temporary directory. If a file is empty, it is simply deleted. Empty files will not be pushed |
+ | tag_hello | indicates the event's tag. |
+ | part0 | If you indicate size_file, it will generate more parts if your file.size > size_file.
+ When a file is full, it gets pushed to the bucket and then deleted from the temporary directory.
+ If a file is empty, it is simply deleted. Empty files will not be pushed. |
 |=======
 
- Crash Recovery:
- * This plugin will recover and upload temporary log files after crash/abnormal termination when using `restore` set to true
-
-
+ ===== Crash Recovery
 
+ This plugin will recover and upload temporary log files after crash/abnormal termination when using `restore` set to true
 
-
-
-
- #### Usage:
+ ===== Usage
 This is an example of logstash config:
 [source,ruby]
 output {
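The example config is cut off here by the diff context window. A minimal sketch of a complete s3 output, using only options documented on this page (all values are placeholders), might look like:

```ruby
# Hypothetical logstash pipeline config; keys, region, and bucket are placeholders.
output {
  s3 {
    access_key_id => "my_access_key"      # or rely on another credential source
    secret_access_key => "my_secret_key"
    region => "eu-west-1"
    bucket => "my_bucket"
    size_file => 2048                     # bytes; rotates to a new part when exceeded
    time_file => 5                        # minutes; uploads on this interval
    codec => "plain"
  }
}
```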
@@ -120,11 +122,11 @@ output plugins.
 
 This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order:
 
- 1. Static configuration, using `access_key_id` and `secret_access_key` params in logstash plugin config
- 2. External credentials file specified by `aws_credentials_file`
- 3. Environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
- 4. Environment variables `AMAZON_ACCESS_KEY_ID` and `AMAZON_SECRET_ACCESS_KEY`
- 5. IAM Instance Profile (available when running inside EC2)
+ . Static configuration, using `access_key_id` and `secret_access_key` params in logstash plugin config
+ . External credentials file specified by `aws_credentials_file`
+ . Environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
+ . Environment variables `AMAZON_ACCESS_KEY_ID` and `AMAZON_SECRET_ACCESS_KEY`
+ . IAM Instance Profile (available when running inside EC2)
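For instance, the third option in the precedence list can be exercised by exporting the SDK's standard environment variables before starting Logstash (the values below are AWS's published example placeholders, not real credentials):

```shell
# Placeholder credentials; the AWS SDK reads these variables when no static
# config or credentials file is supplied.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
```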
 
 [id="plugins-{type}s-{plugin}-additional_settings"]
 ===== `additional_settings`
@@ -211,7 +213,7 @@ guaranteed to work correctly with the AWS SDK.
 Specify a prefix to the uploaded filename, this can simulate directories on S3. Prefix does not require leading slash.
 This option supports logstash interpolation: https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html#sprintf;
 for example, files can be prefixed with the event date using `prefix = "%{+YYYY}/%{+MM}/%{+dd}"`.
- Be warned this can created a lot of temporary local files.
+ Be warned this can create a lot of temporary local files.
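A sketch of that date-based prefix inside an s3 output (the bucket name is a placeholder):

```ruby
# Hypothetical output using the interpolated prefix from the docs above;
# events from 2019-04-15 would be uploaded under my_bucket/2019/04/15/.
output {
  s3 {
    bucket => "my_bucket"
    prefix => "%{+YYYY}/%{+MM}/%{+dd}"
  }
}
```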
 
 [id="plugins-{type}s-{plugin}-proxy_uri"]
 ===== `proxy_uri`
@@ -235,6 +237,9 @@ The AWS Region
 * Value type is <<boolean,boolean>>
 * Default value is `true`
 
+ Used to enable recovery after crash/abnormal termination.
+ Temporary log files will be recovered and uploaded.
+
 [id="plugins-{type}s-{plugin}-role_arn"]
 ===== `role_arn`
 
@@ -350,7 +355,7 @@ default to the current OS temporary directory in linux /tmp/logstash
 
 Set the time, in MINUTES, to close the current sub_time_section of bucket.
 If you define file_size you have a number of files in consideration of the section and the current tag.
- 0 stay all time on listerner, beware if you specific 0 and size_file 0, because you will not put the file on bucket,
+ 0 stay all time on listener, beware if you specific 0 and size_file 0, because you will not put the file on bucket,
 for now the only thing this plugin can do is to put the file when logstash restart.
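To avoid the `time_file` 0 / `size_file` 0 pitfall warned about above, a rotation setup might set both options (the values are illustrative, not recommendations from this release):

```ruby
# Hypothetical rotation settings: whichever limit is reached first
# triggers an upload of the current part to the bucket.
output {
  s3 {
    bucket => "my_bucket"
    size_file => 2048   # rotate when the temporary file exceeds this many bytes
    time_file => 5      # rotate every 5 minutes regardless of size
  }
}
```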
 
 [id="plugins-{type}s-{plugin}-upload_multipart_threshold"]
@@ -384,9 +389,7 @@ Specify how many workers to use to upload the files to S3
 * Default value is `true`
 
 The common use case is to define permission on the root bucket and give Logstash full access to write its logs.
- In some circonstances you need finer grained permission on subfolder, this allow you to disable the check at startup.
-
-
+ In some circumstances you need finer grained permission on subfolder, this allow you to disable the check at startup.
 
 [id="plugins-{type}s-{plugin}-common-options"]
 include::{include_path}/{type}.asciidoc[]
logstash-output-s3.gemspec CHANGED
@@ -1,6 +1,6 @@
 Gem::Specification.new do |s|
 s.name = 'logstash-output-s3'
- s.version = '4.1.8'
+ s.version = '4.1.9'
 s.licenses = ['Apache-2.0']
 s.summary = "Sends Logstash events to the Amazon Simple Storage Service"
 s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program"
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: logstash-output-s3
 version: !ruby/object:Gem::Version
- version: 4.1.8
+ version: 4.1.9
 platform: ruby
 authors:
 - Elastic
 autorequire:
 bindir: bin
 cert_chain: []
- date: 2019-03-18 00:00:00.000000000 Z
+ date: 2019-04-15 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
 requirement: !ruby/object:Gem::Requirement