logstash-output-s3-leprechaun-fork 1.0.2

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: 2fe984e47077fbe44633828196b4d0ad33d8831d
+   data.tar.gz: aa83386a2e0c7c48477baa61683a1f75ce9d9dab
+ SHA512:
+   metadata.gz: 5be919e5e7649465ac89638512187afe0dbddb56bcc735b34e7fdff2b6b3413e006fc1489822b51ca0aa53dd79c89820f7ff2194bf81706ea372f88fa72460f7
+   data.tar.gz: f88eb9e9bf9ebc9eb0504d339f5b6c8f2def129208803687766dbc587639d890009388e33a08c7cd029169a2bc99f9c10dd9065519edc4c9e7febe907c268a8b
data/CHANGELOG.md ADDED
@@ -0,0 +1,3 @@
+ # 1.0.1
+ - Fix a synchronization issue when doing file rotation and checking the size of the current file
+ - Fix an issue with synchronization when shutting down the plugin and closing the current temp file
data/CONTRIBUTORS ADDED
@@ -0,0 +1,17 @@
+ The following is a list of people who have contributed ideas, code, bug
+ reports, or in general have helped logstash along its way.
+
+ Contributors:
+ * Colin Surprenant (colinsurprenant)
+ * Helmut Duregger (hduregger)
+ * Jordan Sissel (jordansissel)
+ * Kurt Hurtado (kurtado)
+ * Mattia Peterle (MattiaBeast)
+ * Nick Ethier (nickethier)
+ * Pier-Hugues Pellerin (ph)
+ * Richard Pijnenburg (electrical)
+
+ Note: If you've sent us patches, bug reports, or otherwise contributed to
+ Logstash, and you aren't on the list above and want to be, please let us know
+ and we'll make sure you're here. Contributions from folks like you are what make
+ open source awesome.
data/DEVELOPER.md ADDED
@@ -0,0 +1,15 @@
+ [Missing the other part of the readme]
+
+ ## Running the tests
+
+ ```
+ bundle install
+ bundle exec rspec
+ ```
+
+ If you want to run the integration tests against a real bucket, you need to pass
+ your AWS credentials to the test runner or declare them in your environment.
+
+ ```
+ AWS_REGION=us-east-1 AWS_ACCESS_KEY_ID=123 AWS_SECRET_ACCESS_KEY=secret AWS_LOGSTASH_TEST_BUCKET=mytest bundle exec rspec spec/integration/s3_spec.rb --tag integration
+ ```
data/Gemfile ADDED
@@ -0,0 +1,2 @@
+ source 'https://rubygems.org'
+ gemspec
data/LICENSE ADDED
@@ -0,0 +1,13 @@
+ Copyright (c) 2012-2015 Elasticsearch <http://www.elastic.co>
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
data/NOTICE.TXT ADDED
@@ -0,0 +1,5 @@
+ Elasticsearch
+ Copyright 2012-2015 Elasticsearch
+
+ This product includes software developed by The Apache Software
+ Foundation (http://www.apache.org/).
data/README.md ADDED
@@ -0,0 +1,86 @@
+ # Logstash Plugin
+
+ This is a plugin for [Logstash](https://github.com/elastic/logstash).
+
+ It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.
+
+ ## Documentation
+
+ Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation, so any comments in the source code will be first converted into asciidoc and then into html. All plugin documentation is placed under one [central location](http://www.elastic.co/guide/en/logstash/current/).
+
+ - For formatting code or config examples, you can use the asciidoc `[source,ruby]` directive
+ - For more asciidoc formatting tips, see the excellent reference at https://github.com/elastic/docs#asciidoc-guide
+
+ ## Need Help?
+
+ Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.
+
+ ## Developing
+
+ ### 1. Plugin Development and Testing
+
+ #### Code
+ - To get started, you'll need JRuby with the Bundler gem installed.
+
+ - Create a new plugin or clone an existing one from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example).
+
+ - Install dependencies
+ ```sh
+ bundle install
+ ```
+
+ #### Test
+
+ - Update your dependencies
+
+ ```sh
+ bundle install
+ ```
+
+ - Run tests
+
+ ```sh
+ bundle exec rspec
+ ```
+
+ ### 2. Running your unpublished Plugin in Logstash
+
+ #### 2.1 Run in a local Logstash clone
+
+ - Edit the Logstash `Gemfile` and add the local plugin path, for example:
+ ```ruby
+ gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome"
+ ```
+ - Install the plugin
+ ```sh
+ bin/plugin install --no-verify
+ ```
+ - Run Logstash with your plugin
+ ```sh
+ bin/logstash -e 'filter {awesome {}}'
+ ```
+ At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.
+
+ #### 2.2 Run in an installed Logstash
+
+ You can use the same **2.1** method to run your plugin in an installed Logstash by editing its `Gemfile` and pointing the `:path` to your local plugin development directory, or you can build the gem and install it using:
+
+ - Build your plugin gem
+ ```sh
+ gem build logstash-filter-awesome.gemspec
+ ```
+ - Install the plugin from the Logstash home
+ ```sh
+ bin/plugin install /your/local/plugin/logstash-filter-awesome.gem
+ ```
+ - Start Logstash and proceed to test the plugin
+
+ ## Contributing
+
+ All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.
+
+ Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.
+
+ It is more important to the community that you are able to contribute.
+
+ For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file.
data/lib/logstash/outputs/s3.rb ADDED
@@ -0,0 +1,480 @@
+ # encoding: utf-8
+ require "logstash/outputs/base"
+ require "logstash/namespace"
+ require "logstash/plugin_mixins/aws_config"
+ require "stud/temporary"
+ require "socket" # for Socket.gethostname
+ require "thread"
+ require "tmpdir"
+ require "fileutils"
+
+
+ # INFORMATION:
+ #
+ # This plugin was created to store Logstash events in Amazon Simple Storage Service (Amazon S3).
+ # To use it you need AWS credentials and an S3 bucket, and the credentials must have
+ # permission to write files to that bucket.
+ #
+ # The S3 plugin allows you to do something complex, let's explain :)
+ #
+ # The S3 output creates temporary files in `temporary_directory` (by default the OS temporary
+ # directory, e.g. /tmp/logstash on Linux). These files have a special name, for example:
+ #
+ # ls.s3.ip-10-228-27-95.2013-04-18T10.00.tag_hello.part0.txt
+ #
+ # "ls.s3" : indicates the logstash S3 plugin
+ # "ip-10-228-27-95" : indicates the machine's IP, useful if several Logstash instances write to the same bucket.
+ # "2013-04-18T10.00" : represents the rotation time whenever you specify time_file.
+ # "tag_hello" : indicates the event's tag, so you can collect events with the same tag.
+ # "part0" : if you specify size_file, more parts are generated whenever file.size > size_file.
+ # When a file is full it is pushed to the bucket and then deleted from the temporary directory.
+ # If a file is empty it is not pushed, just deleted.
+ #
+ # This plugin has a mechanism to restore the previous temporary files if something crashes.
+ #
+ ##[Note] :
+ #
+ ## If you specify both size_file and time_file, a file is created for each tag (if specified); whenever
+ ## time_file elapses or a file's size exceeds size_file, the files are pushed to the S3 bucket and deleted from local disk.
+ ## If you specify time_file but not size_file, only one file is created for each tag (if specified);
+ ## whenever time_file elapses, the files are pushed to the S3 bucket and deleted from local disk.
+ #
+ ## If you specify size_file but not time_file, a file is created for each tag (if specified);
+ ## whenever a file's size exceeds size_file, it is pushed to the S3 bucket and deleted from local disk.
+ #
+ ## If you specify neither size_file nor time_file, you get a curious mode: only one file is created for each
+ ## tag (if specified), and the file rests in the temporary directory and is not pushed to the bucket until Logstash restarts.
+ #
+ #
+ # #### Usage:
+ # This is an example of logstash config:
+ # [source,ruby]
+ # output {
+ #    s3 {
+ #      access_key_id => "crazy_key"             (required)
+ #      secret_access_key => "monkey_access_key" (required)
+ #      endpoint_region => "eu-west-1"           (required)
+ #      bucket => "boss_please_open_your_bucket" (required)
+ #      size_file => 2048                        (optional)
+ #      time_file => 5                           (optional)
+ #      format => "plain"                        (optional)
+ #      canned_acl => "private"                  (optional. Options are "private", "public_read", "public_read_write", "authenticated_read". Defaults to "private" )
+ #    }
+ # }
+ #
+ class LogStash::Outputs::S3 < LogStash::Outputs::Base
66
+ include LogStash::PluginMixins::AwsConfig
67
+
68
+ TEMPFILE_EXTENSION = "txt"
69
+ S3_INVALID_CHARACTERS = /[\^`><]/
70
+
71
+ config_name "s3"
72
+ default :codec, 'line'
73
+
74
+ # S3 bucket
75
+ config :bucket, :validate => :string
76
+
77
+ # AWS endpoint_region
78
+ config :endpoint_region, :validate => ["us-east-1", "us-west-1", "us-west-2",
79
+ "eu-west-1", "ap-southeast-1", "ap-southeast-2",
80
+ "ap-northeast-1", "sa-east-1", "us-gov-west-1"], :deprecated => 'Deprecated, use region instead.'
81
+
82
+ # Set the size of file in bytes, this means that files on bucket when have dimension > file_size, they are stored in two or more file.
83
+ # If you have tags then it will generate a specific size file for every tags
84
+ ##NOTE: define size of file is the better thing, because generate a local temporary file on disk and then put it in bucket.
85
+ config :size_file, :validate => :number, :default => 0
86
+
87
+ # Set the time, in minutes, to close the current sub_time_section of bucket.
88
+ # If you define file_size you have a number of files in consideration of the section and the current tag.
89
+ # 0 stay all time on listerner, beware if you specific 0 and size_file 0, because you will not put the file on bucket,
90
+ # for now the only thing this plugin can do is to put the file when logstash restart.
91
+ config :time_file, :validate => :number, :default => 0
92
+
93
+ ## IMPORTANT: if you use multiple instance of s3, you should specify on one of them the "restore=> true" and on the others "restore => false".
94
+ ## This is hack for not destroy the new files after restoring the initial files.
95
+ ## If you do not specify "restore => true" when logstash crashes or is restarted, the files are not sent into the bucket,
96
+ ## for example if you have single Instance.
97
+ config :restore, :validate => :boolean, :default => false
98
+
99
+ # The S3 canned ACL to use when putting the file. Defaults to "private".
100
+ config :canned_acl, :validate => ["private", "public_read", "public_read_write", "authenticated_read"],
101
+ :default => "private"
102
+
103
+ # Set the directory where logstash will store the tmp files before sending it to S3
104
+ # default to the current OS temporary directory in linux /tmp/logstash
105
+ config :temporary_directory, :validate => :string, :default => File.join(Dir.tmpdir, "logstash")
106
+
107
+ # Specify a prefix to the uploaded filename, this can simulate directories on S3
108
+ config :prefix, :validate => :string, :default => ''
109
+
110
+ # Specify how many workers to use to upload the files to S3
111
+ config :upload_workers_count, :validate => :number, :default => 1
112
+
113
+ # Exposed attributes for testing purpose.
114
+ attr_accessor :tempfile
115
+ attr_reader :page_counter
116
+ attr_reader :s3
117
+
118
+ def aws_s3_config
119
+ @logger.info("Registering s3 output", :bucket => @bucket, :endpoint_region => @region)
120
+ @s3 = AWS::S3.new(aws_options_hash)
121
+ end
122
+
123
+ def aws_service_endpoint(region)
124
+ # Make the deprecated endpoint_region work
125
+ # TODO: (ph) Remove this after deprecation.
126
+
127
+ if @endpoint_region
128
+ region_to_use = @endpoint_region
129
+ else
130
+ region_to_use = @region
131
+ end
132
+
133
+ return {
134
+ :s3_endpoint => region_to_use == 'us-east-1' ? 's3.amazonaws.com' : "s3-#{region_to_use}.amazonaws.com"
135
+ }
136
+ end
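+
+   # For illustration: the method ignores its argument and reads @endpoint_region /
+   # @region instead. With @region = "eu-west-1" it returns
+   # { :s3_endpoint => "s3-eu-west-1.amazonaws.com" }; with @region = "us-east-1"
+   # it returns { :s3_endpoint => "s3.amazonaws.com" }.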
+
+   public
+   def write_on_bucket(file)
+     # find and use the bucket
+     bucket = @s3.buckets[@bucket]
+
+     remote_filename = file.gsub(@temporary_directory, "").sub!(/^\//, '')
+
+     split = remote_filename.split(".")
+     split.pop
+     split << ""
+
+     @logger.info("write_on_bucket: #{remote_filename}")
+
+     File.open(file, 'r') do |fileIO|
+       begin
+         # prepare to write the file
+         object = bucket.objects[remote_filename]
+         object.write(fileIO, :acl => @canned_acl)
+       rescue AWS::Errors::Base => error
+         @logger.error("S3: AWS error", :error => error)
+         raise LogStash::Error, "AWS Configuration Error, #{error}"
+       end
+     end
+
+     @logger.debug("S3: has written remote file in bucket with canned ACL", :remote_filename => remote_filename, :bucket => @bucket, :canned_acl => @canned_acl)
+   end
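+
+   # For illustration: with temporary_directory "/tmp/logstash", a local file
+   # "/tmp/logstash/my-prefix/events.part-0.ABCDEFGH.txt" is uploaded under the key
+   # "my-prefix/events.part-0.ABCDEFGH.txt" (the filename here is an assumed example).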
+
+   public
+   def register
+     require "aws-sdk"
+     # required if using ruby version < 2.0
+     # http://ruby.awsblog.com/post/Tx16QY1CI5GVBFT/Threading-with-the-AWS-SDK-for-Ruby
+     AWS.eager_autoload!(AWS::S3)
+
+     workers_not_supported
+
+     @s3 = aws_s3_config
+     @upload_queue = Queue.new
+     @file_rotation_lock = Mutex.new
+
+     if @prefix && @prefix =~ S3_INVALID_CHARACTERS
+       @logger.error("S3: prefix contains invalid characters", :prefix => @prefix, :contains => S3_INVALID_CHARACTERS)
+       raise LogStash::ConfigurationError, "S3: prefix contains invalid characters"
+     end
+
+     if !Dir.exist?(@temporary_directory)
+       FileUtils.mkdir_p(@temporary_directory)
+     end
+
+     @segregations = {}
+
+     test_s3_write
+
+     restore_from_crashes if @restore == true
+     register_segregation("test/test")
+     configure_periodic_rotation if time_file != 0
+     @segregations.delete("test/test")
+     # configure_upload_workers
+
+     @codec.on_event do |event, encoded_event|
+       handle_event(event, encoded_event)
+     end
+   end
+
+
+   # Use the same method that Amazon uses to check
+   # permissions on the user's bucket: create a small file
+   public
+   def test_s3_write
+     @logger.debug("S3: Creating a test file on S3")
+
+     test_filename = File.join(
+       @temporary_directory,
+       "logstash-programmatic-access-test-object-#{Time.now.to_i}"
+     )
+
+     File.open(test_filename, 'a') do |file|
+       file.write('test')
+     end
+
+     begin
+       write_on_bucket(test_filename)
+       delete_on_bucket(test_filename)
+     ensure
+       File.delete(test_filename)
+     end
+   end
+
+   public
+   def restore_from_crashes
+     @logger.debug("S3: is attempting to verify previous crashes...")
+
+     Dir[File.join(@temporary_directory, "*.#{TEMPFILE_EXTENSION}")].each do |file|
+       name_file = File.basename(file)
+       @logger.warn("S3: found a temporary file left over from a crashed upload process, uploading file to S3.", :filename => name_file)
+       move_file_to_bucket_async(file)
+     end
+   end
+
+   public
+   def periodic_interval
+     @time_file * 60
+   end
+
+   public
+   def get_temporary_filename(directory, file, page_counter = 0)
+     # Just to make sure we don't overwrite files from a 'concurrent' logstash instance;
+     # this includes a node that was replaced and gets its part number reset
+     rand_string = (0...8).map { (65 + rand(26)).chr }.join
+     return "#{@temporary_directory}/#{directory}/#{file}.part-#{page_counter}.#{rand_string}.#{TEMPFILE_EXTENSION}"
+   end
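+
+   # For illustration: get_temporary_filename("my-prefix", "events", 2) might return
+   # "/tmp/logstash/my-prefix/events.part-2.QJXZKWPB.txt"; the eight random uppercase
+   # letters guard against collisions between concurrent instances.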
+
+   public
+   def receive(event)
+     return unless output?(event)
+     @codec.encode(event)
+   end
+
+   public
+   def rotate_events_log?(segregation)
+     @file_rotation_lock.synchronize do
+       @segregations[segregation][:file].size > @size_file
+     end
+   end
+
+   public
+   def write_events_to_multiple_files?
+     @size_file > 0
+   end
+
+   public
+   def teardown
+     shutdown_upload_workers
+     @periodic_rotation_thread.stop! if @periodic_rotation_thread
+
+     @file_rotation_lock.synchronize do
+       @tempfile.close unless @tempfile.nil? || @tempfile.closed?
+     end
+     finished
+   end
+
+   private
+   def shutdown_upload_workers
+     @logger.debug("S3: Gracefully shutdown the upload workers")
+     @upload_queue << LogStash::ShutdownEvent
+   end
+
+   private
+   def extract_base(segregation)
+     dirs = segregation.split("/")
+     dir = dirs[0..-2]
+     file = dirs[-1]
+     return [dir.join("/"), file]
+   end
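+
+   # For illustration: extract_base("logs/2016/04/events") #=> ["logs/2016/04", "events"]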
+
+   private
+   def register_segregation(segregation)
+     # Register the segregation (file pointer, page counter and timestamp)
+     unless @segregations.key?(segregation)
+       @logger.info("register new segregation: #{segregation}")
+
+       directory, file = extract_base(segregation)
+       @logger.info(:directory => directory, :file => file)
+
+       begin
+         temp_dir = @temporary_directory + "/" + directory
+         FileUtils.mkdir_p(temp_dir)
+         @logger.info("created directory: #{directory}")
+
+         @segregations[segregation] = {
+           :start_time => Time.now.to_i,
+           :directory => directory,
+           :file_base => file,
+           :current_page => 0,
+           :file_pointers => {
+             0 => File.open(get_temporary_filename(directory, file, 0), 'a'),
+           }
+         }
+       rescue StandardError => e
+         @logger.info(e)
+         @logger.info("Failed to create temp directory")
+         raise e
+       end
+     end
+   end
+
+   def commit_locally(segregation, event, encoded_event)
+     seg = @segregations[segregation]
+     if seg[:file_pointers][seg[:current_page]].syswrite(encoded_event)
+       @logger.info("S3> commit_locally: write success (file: #{seg[:file_pointers][seg[:current_page]].path})")
+     else
+       # TODO: handle a failed write
+       @logger.info("S3> commit_locally: write fail (file: #{seg[:file_pointers][seg[:current_page]].path})")
+     end
+   end
+
+   # Only based on file_size; time is handled in another thread
+   def should_commit?(segregation)
+     seg = @segregations[segregation]
+     if @size_file > 0 && seg[:file_pointers][seg[:current_page]].size > @size_file
+       @logger.info("S3> should_commit: upload because of size")
+       return true
+     end
+
+     return false
+   end
+
+   def commit(segregation)
+     current_page = @segregations[segregation][:current_page]
+     time_start = @segregations[segregation][:start_time]
+
+     next_page(segregation)
+
+     Stud::Task.new do
+       LogStash::Util::set_thread_name("S3> thread: commit")
+       upload_and_delete(segregation, current_page, time_start)
+     end
+   end
+
+   def upload_and_delete(segregation, page_to_upload, time_start)
+     begin
+       @logger.info("in thread")
+       seg = @segregations[segregation]
+       bucket = @s3.buckets[@bucket]
+       key = seg[:file_pointers][page_to_upload].path.gsub(@temporary_directory, "").sub!(/^\//, '')
+       @logger.info("write_on_bucket: #{key}")
+       @file_rotation_lock.synchronize do
+         @logger.info("#{segregation} size is #{seg[:file_pointers][page_to_upload].size}")
+         if seg[:file_pointers][page_to_upload].size > 0
+           File.open(seg[:file_pointers][page_to_upload].path, 'r') do |fileIO|
+             begin
+               # prepare to write the file
+               object = bucket.objects[key]
+               object.write(fileIO, :acl => @canned_acl)
+               @logger.debug("S3: has written remote file in bucket with canned ACL", :remote_filename => key, :bucket => @bucket, :canned_acl => @canned_acl)
+             rescue AWS::Errors::Base => error
+               @logger.error("S3: AWS error", :error => error)
+               raise LogStash::Error, "AWS Configuration Error, #{error}"
+             end
+           end
+         else
+           @logger.info("don't upload: size <= 0")
+         end
+
+         FileUtils.rm(@segregations[segregation][:file_pointers][page_to_upload].path)
+         @segregations[segregation][:file_pointers].delete(page_to_upload)
+       end
+
+     rescue StandardError => e
+       @logger.info(e)
+       raise e
+     end
+   end
+
+   private
+   def handle_event(event, encoded_event)
+     segregation = event.sprintf(@prefix)
+
+     register_segregation(segregation)
+
+     commit_locally(segregation, event, encoded_event)
+
+     if should_commit?(segregation)
+       commit(segregation)
+     end
+   end
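+
+   # Hypothetical trace, assuming prefix => "%{type}/%{+YYYY-MM-dd}": an event with
+   # type "nginx" on 2016-04-28 maps to the segregation "nginx/2016-04-28", is
+   # appended to that segregation's current part file, and triggers an async upload
+   # of that part once it grows past size_file.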
+
+   private
+   def configure_periodic_rotation
+     @periodic_rotation_thread = Stud::Task.new do
+       begin
+         LogStash::Util::set_thread_name("S3> thread: periodic_uploader")
+         begin
+           Stud.interval(periodic_interval, :sleep_then_run => true) do
+             begin
+               @logger.info("running periodic uploader ... but may not see new segregations")
+               @segregations.each { |segregation, values|
+                 commit(segregation)
+               }
+             rescue StandardError => e
+               @logger.info(e)
+             end
+           end
+         rescue StandardError => e
+           @logger.info(e)
+         end
+       rescue StandardError => e
+         @logger.info(e)
+       end
+     end
+   end
+
+   private
+   def next_page(segregation)
+     seg = @segregations[segregation]
+     seg[:current_page] += 1
+     seg[:file_pointers][seg[:current_page]] = File.open(
+       get_temporary_filename(
+         seg[:directory],
+         seg[:file_base],
+         seg[:current_page]
+       ),
+       'a')
+   end
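+
+   # For illustration: after next_page, new events for the segregation are appended to
+   # the fresh part file while the finished part is uploaded and removed by the
+   # background commit task.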
+
+
+   private
+   def delete_on_bucket(filename)
+     bucket = @s3.buckets[@bucket]
+
+     remote_filename = "#{@prefix}#{File.basename(filename)}"
+
+     @logger.debug("S3: delete file from bucket", :remote_filename => remote_filename, :bucket => @bucket)
+
+     begin
+       # locate and delete the remote object
+       object = bucket.objects[remote_filename]
+       object.delete
+     rescue AWS::Errors::Base => e
+       @logger.error("S3: AWS error", :error => e)
+       raise LogStash::ConfigurationError, "AWS Configuration Error"
+     end
+   end
+ end
+
+ class Segment
+   def initialize(s3, segment, file_size, timeout)
+     @s3 = s3
+     @segment = segment
+     @file_size = file_size
+     @timeout = timeout
+   end
+
+   def enqueue(event)
+   end
+ end
data/logstash-output-s3.gemspec ADDED
@@ -0,0 +1,31 @@
+ Gem::Specification.new do |s|
+
+   s.name = 'logstash-output-s3-leprechaun-fork'
+   s.version = '1.0.2'
+   s.licenses = ['Apache License (2.0)']
+   s.summary = "This plugin stores Logstash events in Amazon Simple Storage Service (Amazon S3)."
+   s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/plugin install gemname. This gem is not a stand-alone program."
+   s.authors = ["Laurence MacGuire", "Elastic"]
+   s.email = 'leprechaun@gmail.com'
+   s.homepage = "https://www.github.com/leprechaun/logstash-output-s3"
+   s.require_paths = ["lib"]
+
+   # Files
+   s.files = Dir['lib/**/*','spec/**/*','vendor/**/*','*.gemspec','*.md','CONTRIBUTORS','Gemfile','LICENSE','NOTICE.TXT']
+
+   # Tests
+   s.test_files = s.files.grep(%r{^(test|spec|features)/})
+
+   # Special flag to let us know this is actually a logstash plugin
+   s.metadata = { "logstash_plugin" => "true", "logstash_group" => "output" }
+
+   # Gem dependencies
+   s.add_runtime_dependency "logstash-core", '>= 1.4.0', '< 2.0.0'
+   s.add_runtime_dependency 'logstash-mixin-aws'
+   s.add_runtime_dependency 'stud', '~> 0.0.18'
+   s.add_development_dependency 'logstash-devutils'
+   s.add_development_dependency 'logstash-input-generator'
+   s.add_development_dependency 'logstash-input-stdin'
+   s.add_development_dependency 'logstash-codec-line'
+ end
data/spec/integration/s3_spec.rb ADDED
@@ -0,0 +1,96 @@
+ require "logstash/devutils/rspec/spec_helper"
+ require "logstash/outputs/s3"
+ require 'socket'
+ require "aws-sdk"
+ require "fileutils"
+ require "stud/temporary"
+ require_relative "../supports/helpers"
+
+ describe LogStash::Outputs::S3, :integration => true, :s3 => true do
+   before do
+     Thread.abort_on_exception = true
+   end
+
+   let!(:minimal_settings) { { "access_key_id" => ENV['AWS_ACCESS_KEY_ID'],
+                               "secret_access_key" => ENV['AWS_SECRET_ACCESS_KEY'],
+                               "bucket" => ENV['AWS_LOGSTASH_TEST_BUCKET'],
+                               "region" => ENV["AWS_REGION"] || "us-east-1",
+                               "temporary_directory" => Stud::Temporary.pathname('temporary_directory') }}
+
+   let!(:s3_object) do
+     s3output = LogStash::Outputs::S3.new(minimal_settings)
+     s3output.register
+     s3output.s3
+   end
+
+   after(:all) do
+     delete_matching_keys_on_bucket('studtmp')
+     delete_matching_keys_on_bucket('my-prefix')
+   end
+
+   describe "#register" do
+     it "writes a file to the bucket to check permissions" do
+       s3 = LogStash::Outputs::S3.new(minimal_settings)
+       expect { s3.register }.not_to raise_error
+     end
+   end
+
+   describe "#write_on_bucket" do
+     after(:all) do
+       File.unlink(fake_data.path)
+     end
+
+     let!(:fake_data) { Stud::Temporary.file }
+
+     it "should prefix the file on the bucket if a prefix is specified" do
+       prefix = "my-prefix"
+
+       config = minimal_settings.merge({
+         "prefix" => prefix,
+       })
+
+       s3 = LogStash::Outputs::S3.new(config)
+       s3.register
+       s3.write_on_bucket(fake_data)
+
+       expect(key_exists_on_bucket?("#{prefix}#{File.basename(fake_data.path)}")).to eq(true)
+     end
+
+     it 'should use the same local filename if no prefix is specified' do
+       s3 = LogStash::Outputs::S3.new(minimal_settings)
+       s3.register
+       s3.write_on_bucket(fake_data)
+
+       expect(key_exists_on_bucket?(File.basename(fake_data.path))).to eq(true)
+     end
+   end
+
+   describe "#move_file_to_bucket" do
+     let!(:s3) { LogStash::Outputs::S3.new(minimal_settings) }
+
+     before do
+       s3.register
+     end
+
+     it "should upload the file if the size > 0" do
+       tmp = Stud::Temporary.file
+       allow(File).to receive(:zero?).and_return(false)
+       s3.move_file_to_bucket(tmp)
+       expect(key_exists_on_bucket?(File.basename(tmp.path))).to eq(true)
+     end
+   end
+
+   describe "#restore_from_crashes" do
+     it "reads the temp directory and uploads the matching file to S3" do
+       Stud::Temporary.pathname do |temp_path|
+         tempfile = File.open(File.join(temp_path, 'A'), 'w+') { |f| f.write('test') }
+
+         s3 = LogStash::Outputs::S3.new(minimal_settings.merge({ "temporary_directory" => temp_path }))
+         s3.restore_from_crashes
+
+         expect(File.exist?(tempfile.path)).to eq(false)
+         expect(key_exists_on_bucket?(File.basename(tempfile.path))).to eq(true)
+       end
+     end
+   end
+ end
data/spec/outputs/s3_spec.rb ADDED
@@ -0,0 +1,343 @@
+ # encoding: utf-8
+ require "logstash/devutils/rspec/spec_helper"
+ require "logstash/outputs/s3"
+ require "logstash/codecs/line"
+ require "logstash/pipeline"
+ require "aws-sdk"
+ require "fileutils"
+ require_relative "../supports/helpers"
+
+ describe LogStash::Outputs::S3 do
+   before do
+     # We stub all the calls to S3; for more information see:
+     # http://ruby.awsblog.com/post/Tx2SU6TYJWQQLC3/Stubbing-AWS-Responses
+     AWS.stub!
+     Thread.abort_on_exception = true
+   end
+
+   let(:minimal_settings) { { "access_key_id" => "1234",
+                              "secret_access_key" => "secret",
+                              "bucket" => "my-bucket" } }
+
+   describe "configuration" do
+     let!(:config) { { "endpoint_region" => "sa-east-1" } }
+
+     it "should support the deprecated endpoint_region as a configuration option" do
+       s3 = LogStash::Outputs::S3.new(config)
+       expect(s3.aws_options_hash[:s3_endpoint]).to eq("s3-sa-east-1.amazonaws.com")
+     end
+
+     it "should fall back to region if endpoint_region isn't defined" do
+       s3 = LogStash::Outputs::S3.new(config.merge({ "region" => 'sa-east-1' }))
+       expect(s3.aws_options_hash).to include(:s3_endpoint => "s3-sa-east-1.amazonaws.com")
+     end
+   end
+
+   describe "#register" do
+     it "should create the tmp directory if it doesn't exist" do
+       temporary_directory = Stud::Temporary.pathname("temporary_directory")
+
+       config = {
+         "access_key_id" => "1234",
+         "secret_access_key" => "secret",
+         "bucket" => "logstash",
+         "size_file" => 10,
+         "temporary_directory" => temporary_directory
+       }
+
+       s3 = LogStash::Outputs::S3.new(config)
+       allow(s3).to receive(:test_s3_write)
+       s3.register
+
+       expect(Dir.exist?(temporary_directory)).to eq(true)
+       s3.teardown
+       FileUtils.rm_r(temporary_directory)
+     end
+
+     it "should raise a ConfigurationError if the prefix contains one or more '\^`><' characters" do
+       config = {
+         "prefix" => "`no\><^"
+       }
+
+       s3 = LogStash::Outputs::S3.new(config)
+
+       expect {
+         s3.register
+       }.to raise_error(LogStash::ConfigurationError)
+     end
+   end
+
+   describe "#generate_temporary_filename" do
+     before do
+       allow(Socket).to receive(:gethostname) { "logstash.local" }
+       allow(Time).to receive(:now) { Time.new('2015-10-09-09:00') }
+     end
+
+     it "should add tags to the filename if present" do
+       config = minimal_settings.merge({ "tags" => ["elasticsearch", "logstash", "kibana"], "temporary_directory" => "/tmp/logstash"})
+       s3 = LogStash::Outputs::S3.new(config)
+       expect(s3.get_temporary_filename).to eq("ls.s3.logstash.local.2015-01-01T00.00.tag_elasticsearch.logstash.kibana.part0.txt")
+     end
+
+     it "should not add the tags to the filename" do
+       config = minimal_settings.merge({ "tags" => [], "temporary_directory" => "/tmp/logstash" })
+       s3 = LogStash::Outputs::S3.new(config)
+       expect(s3.get_temporary_filename(3)).to eq("ls.s3.logstash.local.2015-01-01T00.00.part3.txt")
+     end
+
+     it "normalizes the temp directory to include the trailing slash if missing" do
+       s3 = LogStash::Outputs::S3.new(minimal_settings.merge({ "temporary_directory" => "/tmp/logstash" }))
+       expect(s3.get_temporary_filename).to eq("ls.s3.logstash.local.2015-01-01T00.00.part0.txt")
+     end
+   end
+
+   describe "#write_on_bucket" do
+     let!(:fake_data) { Stud::Temporary.file }
+
+     let(:fake_bucket) do
+       s3 = double('S3Object')
+       allow(s3).to receive(:write)
+       s3
+     end
+
+     it "should prefix the file on the bucket if a prefix is specified" do
+       prefix = "my-prefix"
+
+       config = minimal_settings.merge({
+         "prefix" => prefix,
+         "bucket" => "my-bucket"
+       })
+
+       expect_any_instance_of(AWS::S3::ObjectCollection).to receive(:[]).with("#{prefix}#{File.basename(fake_data)}") { fake_bucket }
+
+       s3 = LogStash::Outputs::S3.new(config)
+       allow(s3).to receive(:test_s3_write)
+       s3.register
+       s3.write_on_bucket(fake_data)
+     end
+
+     it 'should use the same local filename if no prefix is specified' do
+       config = minimal_settings.merge({
+         "bucket" => "my-bucket"
+       })
+
+       expect_any_instance_of(AWS::S3::ObjectCollection).to receive(:[]).with(File.basename(fake_data)) { fake_bucket }
+
+       s3 = LogStash::Outputs::S3.new(minimal_settings)
+       allow(s3).to receive(:test_s3_write)
+       s3.register
+       s3.write_on_bucket(fake_data)
+     end
+   end
+
+   describe "#write_events_to_multiple_files?" do
+     it 'returns true if size_file != 0' do
+       s3 = LogStash::Outputs::S3.new(minimal_settings.merge({ "size_file" => 200 }))
+       expect(s3.write_events_to_multiple_files?).to eq(true)
+     end
+
+     it 'returns false if size_file is zero or not set' do
+       s3 = LogStash::Outputs::S3.new(minimal_settings)
+       expect(s3.write_events_to_multiple_files?).to eq(false)
+     end
+   end
+
+   describe "#write_to_tempfile" do
+     it "should append the event to a file" do
+       Stud::Temporary.file("logstash", "a+") do |tmp|
+         s3 = LogStash::Outputs::S3.new(minimal_settings)
+         allow(s3).to receive(:test_s3_write)
+         s3.register
+         s3.tempfile = tmp
+         s3.write_to_tempfile("test-write")
+         tmp.rewind
+         expect(tmp.read).to eq("test-write")
+       end
+     end
+   end
+
+   describe "#rotate_events_log" do
+
+     context "having a single worker" do
+       let(:s3) { LogStash::Outputs::S3.new(minimal_settings.merge({ "size_file" => 1024 })) }
+
+       before(:each) do
+         s3.register
+       end
+
+       it "returns true if the tempfile is over the file_size limit" do
+         Stud::Temporary.file do |tmp|
+           allow(tmp).to receive(:size) { 2024001 }
+
+           s3.tempfile = tmp
+           expect(s3.rotate_events_log?).to be(true)
+         end
+       end
+
+       it "returns false if the tempfile is under the file_size limit" do
+         Stud::Temporary.file do |tmp|
+           allow(tmp).to receive(:size) { 100 }
+
+           s3.tempfile = tmp
+           expect(s3.rotate_events_log?).to eq(false)
+         end
+       end
+     end
+
+     context "having periodic rotations" do
+       let(:s3) { LogStash::Outputs::S3.new(minimal_settings.merge({ "size_file" => 1024, "time_file" => 6e-10 })) }
+       let(:tmp) { Tempfile.new('s3_rotation_temp_file') }
+
+       before(:each) do
+         s3.tempfile = tmp
+         s3.register
+       end
+
+       after(:each) do
+         s3.teardown
+         tmp.close
+         tmp.unlink
+       end
+
+       it "raises no error when periodic rotation happens" do
+         1000.times do
+           expect { s3.rotate_events_log? }.not_to raise_error
+         end
+       end
+     end
+   end
+
+   describe "#move_file_to_bucket" do
+     subject { LogStash::Outputs::S3.new(minimal_settings) }
+
+     it "should always delete the source file" do
+       tmp = Stud::Temporary.file
+
+       allow(File).to receive(:zero?).and_return(true)
+       expect(File).to receive(:delete).with(tmp)
+
+       subject.move_file_to_bucket(tmp)
+     end
+
+     it 'should not upload the file if the size of the file is zero' do
+       temp_file = Stud::Temporary.file
+       allow(temp_file).to receive(:zero?).and_return(true)
+
+       expect(subject).not_to receive(:write_on_bucket)
+       subject.move_file_to_bucket(temp_file)
+     end
+
+     it "should upload the file if the size > 0" do
+       tmp = Stud::Temporary.file
+
+       allow(File).to receive(:zero?).and_return(false)
+       expect(subject).to receive(:write_on_bucket)
+
+       subject.move_file_to_bucket(tmp)
+     end
+   end
+
+   describe "#restore_from_crashes" do
+     it "reads the temp directory and uploads the matching file to S3" do
+       s3 = LogStash::Outputs::S3.new(minimal_settings.merge({ "temporary_directory" => "/tmp/logstash/" }))
+
+       expect(Dir).to receive(:[]).with("/tmp/logstash/*.txt").and_return(["/tmp/logstash/01.txt"])
+       expect(s3).to receive(:move_file_to_bucket_async).with("/tmp/logstash/01.txt")
+
+       s3.restore_from_crashes
+     end
+   end
+
+   describe "#receive" do
+     it "should send the event through the codecs" do
+       data = {"foo" => "bar", "baz" => {"bah" => ["a","b","c"]}, "@timestamp" => "2014-05-30T02:52:17.929Z"}
+       event = LogStash::Event.new(data)
+
+       expect_any_instance_of(LogStash::Codecs::Line).to receive(:encode).with(event)
+
+       s3 = LogStash::Outputs::S3.new(minimal_settings)
+       allow(s3).to receive(:test_s3_write)
+       s3.register
+
+       s3.receive(event)
+     end
+   end
+
+   describe "when rotating the temporary file" do
+     before { allow(File).to receive(:delete) }
+
+     it "doesn't skip events if using the size_file option" do
+       Stud::Temporary.directory do |temporary_directory|
+         size_file = rand(200..20000)
+         event_count = rand(300..15000)
+
+         config = %Q[
+         input {
+           generator {
+             count => #{event_count}
+           }
+         }
+         output {
+           s3 {
+             access_key_id => "1234"
+             secret_access_key => "secret"
+             size_file => #{size_file}
+             codec => line
+             temporary_directory => '#{temporary_directory}'
+             bucket => 'testing'
+           }
+         }
+         ]
+
+         pipeline = LogStash::Pipeline.new(config)
+
+         pipeline_thread = Thread.new { pipeline.run }
+         sleep 0.1 while !pipeline.ready?
+         pipeline_thread.join
+
+         events_written_count = events_in_files(Dir[File.join(temporary_directory, 'ls.*.txt')])
+         expect(events_written_count).to eq(event_count)
+       end
+     end
+
+     it "doesn't skip events if using the time_file option", :tag => :slow do
+       Stud::Temporary.directory do |temporary_directory|
+         time_file = rand(5..10)
+         number_of_rotation = rand(4..10)
+
+         config = {
+           "time_file" => time_file,
+           "codec" => "line",
+           "temporary_directory" => temporary_directory,
+           "bucket" => "testing"
+         }
+
+         s3 = LogStash::Outputs::S3.new(minimal_settings.merge(config))
+         # Make the test run in seconds instead of minutes
+         allow(s3).to receive(:periodic_interval).and_return(time_file)
+         s3.register
+
+         # Force a few file rotations
+         stop_time = Time.now + (number_of_rotation * time_file)
+         event_count = 0
+
+         event = LogStash::Event.new("message" => "Hello World")
+
+         until Time.now > stop_time do
+           s3.receive(event)
+           event_count += 1
+         end
+         s3.teardown
+
+         generated_files = Dir[File.join(temporary_directory, 'ls.*.txt')]
+
+         events_written_count = events_in_files(generated_files)
+
+         # Time skew can affect the number of rotations
+         expect(generated_files.count).to be_within(number_of_rotation).of(number_of_rotation + 1)
+         expect(events_written_count).to eq(event_count)
+       end
+     end
+   end
+ end
data/spec/supports/helpers.rb ADDED
@@ -0,0 +1,14 @@
+ def delete_matching_keys_on_bucket(prefix)
+   s3_object.buckets[minimal_settings["bucket"]].objects.with_prefix(prefix).each do |obj|
+     obj.delete
+   end
+ end
+
+ def key_exists_on_bucket?(key)
+   s3_object.buckets[minimal_settings["bucket"]].objects[key].exists?
+ end
+
+ def events_in_files(files)
+   files.collect { |file| File.foreach(file).count }.inject(&:+)
+ end
metadata ADDED
@@ -0,0 +1,168 @@
+ --- !ruby/object:Gem::Specification
+ name: logstash-output-s3-leprechaun-fork
+ version: !ruby/object:Gem::Version
+   version: 1.0.2
+ platform: ruby
+ authors:
+ - Laurence MacGuire
+ - Elastic
+ autorequire:
+ bindir: bin
+ cert_chain: []
+ date: 2016-04-28 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: logstash-core
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 1.4.0
+     - - "<"
+       - !ruby/object:Gem::Version
+         version: 2.0.0
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 1.4.0
+     - - "<"
+       - !ruby/object:Gem::Version
+         version: 2.0.0
+ - !ruby/object:Gem::Dependency
+   name: logstash-mixin-aws
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: stud
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: 0.0.18
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: 0.0.18
+ - !ruby/object:Gem::Dependency
+   name: logstash-devutils
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: logstash-input-generator
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: logstash-input-stdin
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: logstash-codec-line
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ description: This gem is a Logstash plugin required to be installed on top of the
+   Logstash core pipeline using $LS_HOME/bin/plugin install gemname. This gem is not
+   a stand-alone program.
+ email: leprechaun@gmail.com
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - CHANGELOG.md
+ - CONTRIBUTORS
+ - DEVELOPER.md
+ - Gemfile
+ - LICENSE
+ - NOTICE.TXT
+ - README.md
+ - lib/logstash/outputs/s3.rb
+ - logstash-output-s3.gemspec
+ - spec/integration/s3_spec.rb
+ - spec/outputs/s3_spec.rb
+ - spec/supports/helpers.rb
+ homepage: https://www.github.com/leprechaun/logstash-output-s3
+ licenses:
+ - Apache License (2.0)
+ metadata:
+   logstash_plugin: 'true'
+   logstash_group: output
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubyforge_project:
+ rubygems_version: 2.6.3
+ signing_key:
+ specification_version: 4
+ summary: This plugin stores Logstash events in Amazon Simple Storage Service (Amazon
+   S3)
+ test_files:
+ - spec/integration/s3_spec.rb
+ - spec/outputs/s3_spec.rb
+ - spec/supports/helpers.rb