logstash-input-s3cloudtrail 3.1.1

checksums.yaml.gz ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: 06d71b21006617b46bdf9f6dcb275fac0f1961ed
+   data.tar.gz: 993cf883a509aa4865d2c276fcd45b70e5320172
+ SHA512:
+   metadata.gz: 49a3ebc0e5b6fb5bb82140f095e293269e8e85fbe854a86857c715c53820ff819332fa891f2b278091de4002d125e2790ee674470ac1a28d836f4d246e31cac4
+   data.tar.gz: f1af2dd396c7dfa5ade74fa93cb54dc28dfd588ea29b74a23da0b9dd3913069c38e51ef40d2235f25efe8d6208d4e5118be4efe2f9dfb586885395400ed88730
data/CHANGELOG.md ADDED
@@ -0,0 +1,29 @@
+ ## 3.1.1
+ - Relax constraint on logstash-core-plugin-api to >= 1.60 <= 2.99
+
+ ## 3.1.0
+ - breaking,config: Remove deprecated config `credentials` and `region_endpoint`. Please use the AWS mixin.
+
+ ## 3.0.1
+ - Republish all the gems under jruby.
+
+ ## 3.0.0
+ - Update the plugin to version 2.0 of the plugin API; this change is required for Logstash 5.0 compatibility. See https://github.com/elastic/logstash/issues/5141
+
+ ## 2.0.6
+ - Depend on logstash-core-plugin-api instead of logstash-core, removing the need to mass update plugins on major releases of Logstash
+
+ ## 2.0.5
+ - New dependency requirements for logstash-core for the 5.0 release
+
+ ## 2.0.4
+ - Fix for "Error: No Such Key" problem when deleting
+
+ ## 2.0.3
+ - Do not raise an exception if the sincedb file is empty; instead return the current time #66
+
+ ## 2.0.0
+ - Plugins were updated to follow the new shutdown semantic, which mainly allows Logstash to instruct input plugins to terminate gracefully,
+   instead of using Thread.raise on the plugins' threads. Ref: https://github.com/elastic/logstash/pull/3895
+ - Dependency on logstash-core updated to 2.0
+
data/CONTRIBUTORS ADDED
@@ -0,0 +1,19 @@
+ The following is a list of people who have contributed ideas, code, bug
+ reports, or in general have helped logstash along its way.
+
+ Contributors:
+ * Aaron Mildenstein (untergeek)
+ * Adam Tucker (adamjt)
+ * John Pariseau (ururk)
+ * Jordan Sissel (jordansissel)
+ * Mathieu Guillaume (mguillaume)
+ * Pier-Hugues Pellerin (ph)
+ * Richard Pijnenburg (electrical)
+ * Suyog Rao (suyograo)
+ * Ted Timmons (tedder)
+ * Ryan O'Keeffe (danielredoak)
+
+ Note: If you've sent us patches, bug reports, or otherwise contributed to
+ Logstash, and you aren't on the list above and want to be, please let us know
+ and we'll make sure you're here. Contributions from folks like you are what make
+ open source awesome.
data/Gemfile ADDED
@@ -0,0 +1,4 @@
+ source 'https://rubygems.org'
+
+ # Specify your gem's dependencies in logstash-input-s3cloudtrail.gemspec
+ gemspec
data/LICENSE ADDED
@@ -0,0 +1,13 @@
+ Copyright (c) 2012-2016 Elasticsearch <http://www.elastic.co>
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
data/NOTICE.TXT ADDED
@@ -0,0 +1,5 @@
+ Elasticsearch
+ Copyright 2012-2015 Elasticsearch
+
+ This product includes software developed by The Apache Software
+ Foundation (http://www.apache.org/).
data/README.md ADDED
@@ -0,0 +1,117 @@
+ # Logstash Plugin
+
+ [![Travis Build Status](https://travis-ci.org/logstash-plugins/logstash-input-s3.svg)](https://travis-ci.org/logstash-plugins/logstash-input-s3)
+
+ This is a plugin for [Logstash](https://github.com/elastic/logstash).
+
+ It is fully free and fully open source. The license is Apache 2.0, meaning you are free to use it however you want.
+
+ ## Required S3 Permissions
+
+ This plugin reads from your S3 bucket and requires the following
+ permissions in the AWS IAM policy being used:
+
+ * `s3:ListBucket` to check if the S3 bucket exists and list objects in it.
+ * `s3:GetObject` to check object metadata and download objects from S3 buckets.
+
+ You might also need `s3:DeleteObject` when configuring the input to delete
+ objects on read, and the `s3:CreateBucket` permission to create a backup
+ bucket unless it already exists.
+ In addition, when `backup_to_bucket` is used, the `s3:PutObject` action is also required.
+
+ For buckets that have versioning enabled, you might need to add additional
+ permissions.
+
+ More information about S3 permissions can be found at
+ http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
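+
+ As a rough sketch, a pipeline that exercises all of the permissions above
+ (delete on read plus backup to another bucket) might look like the following;
+ the bucket names are placeholders, not part of this gem:
+
+ ```ruby
+ input {
+   s3cloudtrail {
+     bucket           => "my-cloudtrail-logs"    # needs s3:ListBucket and s3:GetObject
+     prefix           => "AWSLogs/"
+     delete           => true                    # needs s3:DeleteObject
+     backup_to_bucket => "my-cloudtrail-backup"  # needs s3:PutObject, and s3:CreateBucket if the bucket is missing
+   }
+ }
+ ```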
+
+ ## Documentation
+
+ Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation so any comments in the source code will be first converted into asciidoc and then into html. All plugin documentation is placed under one [central location](http://www.elastic.co/guide/en/logstash/current/).
+
+ - For formatting code or config examples, you can use the asciidoc `[source,ruby]` directive
+ - For more asciidoc formatting tips, see the excellent reference at https://github.com/elastic/docs#asciidoc-guide
+
+ ## Need Help?
+
+ Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.
+
+ ## Developing
+
+ ### 1. Plugin Development and Testing
+
+ #### Code
+ - To get started, you'll need JRuby with the Bundler gem installed.
+
+ - Create a new plugin or clone an existing one from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example).
+
+ - Install dependencies
+ ```sh
+ bundle install
+ ```
+
+ #### Test
+
+ - Update your dependencies
+
+ ```sh
+ bundle install
+ ```
+
+ - Run tests
+
+ ```sh
+ bundle exec rspec
+ ```
+
+ ### 2. Running your unpublished Plugin in Logstash
+
+ #### 2.1 Run in a local Logstash clone
+
+ - Edit Logstash `Gemfile` and add the local plugin path, for example:
+ ```ruby
+ gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome"
+ ```
+ - Install the plugin
+ ```sh
+ # Logstash 2.3 and higher
+ bin/logstash-plugin install --no-verify
+
+ # Prior to Logstash 2.3
+ bin/plugin install --no-verify
+ ```
+ - Run Logstash with your plugin
+ ```sh
+ bin/logstash -e 'filter {awesome {}}'
+ ```
+ At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.
+
+ #### 2.2 Run in an installed Logstash
+
+ You can use the same **2.1** method to run your plugin in an installed Logstash by editing its `Gemfile` and pointing the `:path` to your local plugin development directory, or you can build the gem and install it using:
+
+ - Build your plugin gem
+ ```sh
+ gem build logstash-filter-awesome.gemspec
+ ```
+ - Install the plugin from the Logstash home
+ ```sh
+ # Logstash 2.3 and higher
+ bin/logstash-plugin install /your/local/plugin/logstash-filter-awesome.gem
+
+ # Prior to Logstash 2.3
+ bin/plugin install /your/local/plugin/logstash-filter-awesome.gem
+ ```
+ - Start Logstash and proceed to test the plugin
+
+ ## Contributing
+
+ All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.
+
+ Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.
+
+ It is more important to the community that you are able to contribute.
+
+ For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file.
data/lib/logstash/inputs/s3cloudtrail.rb ADDED
@@ -0,0 +1,377 @@
+ # encoding: utf-8
+ require "logstash/inputs/base"
+ require "logstash/namespace"
+ require "logstash/plugin_mixins/aws_config"
+ require "time"
+ require "tmpdir"
+ require "zlib"
+ require "stud/interval"
+ require "stud/temporary"
+
+ # Stream events from files in an S3 bucket.
+ #
+ # Each line from each file generates an event.
+ # Files ending in `.gz` are handled as gzip'ed files.
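+ #
+ # As a minimal sketch, a pipeline configuration for this input might look
+ # like the following (the bucket name is a placeholder):
+ #
+ # [source,ruby]
+ #     input {
+ #       s3cloudtrail {
+ #         bucket => "my-cloudtrail-logs"
+ #       }
+ #     }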
+ class LogStash::Inputs::S3Cloudtrail < LogStash::Inputs::Base
+   include LogStash::PluginMixins::AwsConfig::V2
+
+   config_name "s3cloudtrail"
+
+   default :codec, "plain"
+
+   # The name of the S3 bucket.
+   config :bucket, :validate => :string, :required => true
+
+   # If specified, the prefix of filenames in the bucket must match (not a regexp)
+   config :prefix, :validate => :string, :default => nil
+
+   # Where to write the since database (keeps track of the date
+   # the last handled file was added to S3). The default will write
+   # sincedb files to some path matching "$HOME/.sincedb*".
+   # This should be a path with a filename, not just a directory.
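+   # The file stores a single timestamp: the last_modified time of the most
+   # recently processed object (see SinceDB::File at the bottom of this file).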
+   config :sincedb_path, :validate => :string, :default => nil
+
+   # Name of an S3 bucket to backup processed files to.
+   config :backup_to_bucket, :validate => :string, :default => nil
+
+   # Append a prefix to the key (the full path including the file name in S3) after processing.
+   # If backing up to another (or the same) bucket, this effectively lets you
+   # choose a new 'folder' to place the files in.
+   config :backup_add_prefix, :validate => :string, :default => nil
+
+   # Path of a local directory to backup processed files to.
+   config :backup_to_dir, :validate => :string, :default => nil
+
+   # Whether to delete processed files from the original bucket.
+   config :delete, :validate => :boolean, :default => false
+
+   # Interval, in seconds, to wait before checking the file list again after a run is finished.
+   config :interval, :validate => :number, :default => 60
+
+   # Ruby style regexp of keys to exclude from the bucket
+   config :exclude_pattern, :validate => :string, :default => nil
+
+   # Set the directory where Logstash will store the temporary files before processing them.
+   # Defaults to a "logstash" directory under the OS temporary directory (e.g. /tmp/logstash on Linux).
+   config :temporary_directory, :validate => :string, :default => File.join(Dir.tmpdir, "logstash")
+
+   public
+   def register
+     require "fileutils"
+     require "digest/md5"
+     require "aws-sdk-resources"
+
+     @logger.info("Registering s3cloudtrail input", :bucket => @bucket, :region => @region)
+
+     s3 = get_s3object
+
+     @s3bucket = s3.bucket(@bucket)
+
+     unless @backup_to_bucket.nil?
+       @backup_bucket = s3.bucket(@backup_to_bucket)
+       begin
+         s3.client.head_bucket({ :bucket => @backup_to_bucket })
+       rescue Aws::S3::Errors::NoSuchBucket
+         s3.create_bucket({ :bucket => @backup_to_bucket })
+       end
+     end
+
+     unless @backup_to_dir.nil?
+       Dir.mkdir(@backup_to_dir, 0700) unless File.exists?(@backup_to_dir)
+     end
+
+     FileUtils.mkdir_p(@temporary_directory) unless Dir.exist?(@temporary_directory)
+   end
+
+   public
+   def run(queue)
+     @current_thread = Thread.current
+     Stud.interval(@interval) do
+       process_files(queue)
+     end
+   end # def run
+
+   public
+   def list_new_files
+     objects = {}
+
+     @s3bucket.objects(:prefix => @prefix).each do |log|
+       @logger.debug("S3Cloudtrail input: Found key", :key => log.key)
+
+       unless ignore_filename?(log.key)
+         if sincedb.newer?(log.last_modified)
+           objects[log.key] = log.last_modified
+           @logger.debug("S3Cloudtrail input: Adding to objects[]", :key => log.key)
+           @logger.debug("objects[] length", :length => objects.length)
+         end
+       end
+     end
+     return objects.keys.sort { |a, b| objects[a] <=> objects[b] }
+   end # def list_new_files
+
+   public
+   def backup_to_bucket(object)
+     unless @backup_to_bucket.nil?
+       backup_key = "#{@backup_add_prefix}#{object.key}"
+       @backup_bucket.object(backup_key).copy_from(:copy_source => "#{object.bucket_name}/#{object.key}")
+       if @delete
+         object.delete()
+       end
+     end
+   end
+
+   public
+   def backup_to_dir(filename)
+     unless @backup_to_dir.nil?
+       FileUtils.cp(filename, @backup_to_dir)
+     end
+   end
+
+   public
+   def process_files(queue)
+     objects = list_new_files
+
+     objects.each do |key|
+       if stop?
+         break
+       else
+         @logger.debug("S3 input processing", :bucket => @bucket, :key => key)
+         process_log(queue, key)
+       end
+     end
+   end # def process_files
+
+   public
+   def stop
+     # @current_thread is initialized in the `#run` method. This variable is
+     # needed because `#stop` is called from another thread than `#run`,
+     # requiring us to call stop! with an explicit thread.
+     Stud.stop!(@current_thread)
+   end
+
+   private
+
+   # Read the content of the local file
+   #
+   # @param [Queue] Where to push the event
+   # @param [String] Which file to read from
+   # @return [Boolean] True if the file was completely read, false otherwise.
+   def process_local_log(queue, filename)
+     @logger.debug('Processing file', :filename => filename)
+     metadata = {}
+     # Currently codecs operate on bytes instead of streams,
+     # so all IO work (decompression, reading) needs to be done in the actual
+     # input and sent as bytes to the codecs.
+     read_file(filename) do |line|
+       if stop?
+         @logger.warn("Logstash S3Cloudtrail input: stopped reading in the middle of the file; it will be read again when Logstash is restarted")
+         return false
+       end
+
+       @codec.decode(line) do |event|
+         # We are making an assumption concerning the cloudfront
+         # log format: the user will use the plain or the line codec,
+         # and the message key will represent the actual line content.
+         # If the event is only metadata, the event will be dropped.
+         # This was the behavior of the pre 1.5 plugin.
+         #
+         # The line needs to go through the codecs to replace
+         # unknown bytes in the log stream before doing a regexp match, or
+         # you will get an `Error: invalid byte sequence in UTF-8`.
+         if event_is_metadata?(event)
+           @logger.debug('Event is metadata, updating the current cloudfront metadata', :event => event)
+           update_metadata(metadata, event)
+         else
+           decorate(event)
+
+           event.set("cloudfront_version", metadata[:cloudfront_version]) unless metadata[:cloudfront_version].nil?
+           event.set("cloudfront_fields", metadata[:cloudfront_fields]) unless metadata[:cloudfront_fields].nil?
+
+           queue << event
+         end
+       end
+     end
+
+     return true
+   end # def process_local_log
+
+   private
+   def event_is_metadata?(event)
+     return false if event.get("message").nil?
+     line = event.get("message")
+     version_metadata?(line) || fields_metadata?(line)
+   end
+
+   private
+   def version_metadata?(line)
+     line.start_with?('#Version: ')
+   end
+
+   private
+   def fields_metadata?(line)
+     line.start_with?('#Fields: ')
+   end
+
+   private
+   def update_metadata(metadata, event)
+     line = event.get('message').strip
+
+     if version_metadata?(line)
+       metadata[:cloudfront_version] = line.split(/#Version: (.+)/).last
+     end
+
+     if fields_metadata?(line)
+       metadata[:cloudfront_fields] = line.split(/#Fields: (.+)/).last
+     end
+   end
+
+   private
+   def read_file(filename, &block)
+     if gzip?(filename)
+       read_gzip_file(filename, block)
+     else
+       read_plain_file(filename, block)
+     end
+   end
+
+   def read_plain_file(filename, block)
+     File.open(filename, 'rb') do |file|
+       file.each(&block)
+     end
+   end
+
+   private
+   def read_gzip_file(filename, block)
+     Zlib::GzipReader.open(filename) do |decoder|
+       decoder.read.split("\n").each { |line| block.call(line) }
+     end
+   rescue Zlib::Error, Zlib::GzipFile::Error => e
+     @logger.error("Gzip codec: We cannot uncompress the gzip file", :filename => filename)
+     raise e
+   end
+
+   private
+   def gzip?(filename)
+     filename.end_with?('.gz')
+   end
+
+   private
+   def sincedb
+     @sincedb ||= if @sincedb_path.nil?
+                    @logger.info("Using default generated file for the sincedb", :filename => sincedb_file)
+                    SinceDB::File.new(sincedb_file)
+                  else
+                    @logger.info("Using the provided sincedb_path", :sincedb_path => @sincedb_path)
+                    SinceDB::File.new(@sincedb_path)
+                  end
+   end
+
+   private
+   def sincedb_file
+     File.join(ENV["HOME"], ".sincedb_" + Digest::MD5.hexdigest("#{@bucket}+#{@prefix}"))
+   end
+
+   private
+   def ignore_filename?(filename)
+     if @prefix == filename
+       return true
+     elsif filename.end_with?("/")
+       return true
+     elsif (@backup_add_prefix && @backup_to_bucket == @bucket && filename =~ /^#{backup_add_prefix}/)
+       return true
+     elsif @exclude_pattern.nil?
+       return false
+     elsif filename =~ Regexp.new(@exclude_pattern)
+       return true
+     else
+       return false
+     end
+   end
+
+   private
+   def process_log(queue, key)
+     object = @s3bucket.object(key)
+
+     filename = File.join(temporary_directory, File.basename(key))
+     if download_remote_file(object, filename)
+       if process_local_log(queue, filename)
+         lastmod = object.last_modified
+         backup_to_bucket(object)
+         backup_to_dir(filename)
+         delete_file_from_bucket(object)
+         FileUtils.remove_entry_secure(filename, true)
+         sincedb.write(lastmod)
+       end
+     else
+       FileUtils.remove_entry_secure(filename, true)
+     end
+   end
+
+   private
+   # Stream the remote file to the local disk
+   #
+   # @param [S3Object] Reference to the remote S3 object to download
+   # @param [String] The temporary filename to stream to.
+   # @return [Boolean] True if the file was completely downloaded
+   def download_remote_file(remote_object, local_filename)
+     completed = false
+     @logger.debug("S3Cloudtrail input: Download remote file", :remote_key => remote_object.key, :local_filename => local_filename)
+     File.open(local_filename, 'wb') do |s3file|
+       return completed if stop?
+       remote_object.get(:response_target => s3file)
+     end
+     completed = true
+
+     return completed
+   end
+
+   private
+   def delete_file_from_bucket(object)
+     if @delete and @backup_to_bucket.nil?
+       object.delete()
+     end
+   end
+
+   private
+   def get_s3object
+     Aws::S3::Resource.new(aws_options_hash)
+   end
+
+   private
+   module SinceDB
+     class File
+       def initialize(file)
+         @sincedb_path = file
+       end
+
+       def newer?(date)
+         date > read
+       end
+
+       def read
+         if ::File.exists?(@sincedb_path)
+           content = ::File.read(@sincedb_path).chomp.strip
+           # If the file was created but we didn't have the time to write to it
+           return content.empty? ? Time.new(0) : Time.parse(content)
+         else
+           return Time.new(0)
+         end
+       end
+
+       def write(since = nil)
+         since = Time.now() if since.nil?
+         ::File.open(@sincedb_path, 'w') { |file| file.write(since.to_s) }
+       end
+     end
+   end
+ end # class LogStash::Inputs::S3Cloudtrail
data/logstash-input-s3cloudtrail.gemspec ADDED
@@ -0,0 +1,33 @@
+ Gem::Specification.new do |s|
+
+   s.name = 'logstash-input-s3cloudtrail'
+   s.version = '3.1.1'
+   s.licenses = ['Apache License (2.0)']
+   s.summary = "Stream events from files in an S3 bucket."
+   s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program"
+   s.authors = ["Elastic"]
+   s.email = 'info@elastic.co'
+   s.homepage = "http://www.elastic.co/guide/en/logstash/current/index.html"
+   s.require_paths = ["lib"]
+
+   # Files
+   s.files = Dir['lib/**/*','spec/**/*','vendor/**/*','*.gemspec','*.md','CONTRIBUTORS','Gemfile','LICENSE','NOTICE.TXT']
+
+   # Tests
+   s.test_files = s.files.grep(%r{^(test|spec|features)/})
+
+   # Special flag to let us know this is actually a logstash plugin
+   s.metadata = { "logstash_plugin" => "true", "logstash_group" => "input" }
+
+   # Gem dependencies
+   s.add_runtime_dependency "logstash-core-plugin-api", ">= 1.60", "<= 2.99"
+   s.add_runtime_dependency 'logstash-mixin-aws'
+   s.add_runtime_dependency 'stud', '~> 0.0.18'
+   # s.add_runtime_dependency 'aws-sdk-resources', '>= 2.0.33'
+
+   s.add_development_dependency 'logstash-devutils'
+   s.add_development_dependency 'simplecov'
+   s.add_development_dependency 'coveralls'
+   s.add_development_dependency 'logstash-codec-json'
+ end
+
data/spec/fixtures/cloudfront.log ADDED
@@ -0,0 +1,4 @@
+ #Version: 1.0
+ #Fields: date time x-edge-location c-ip x-event sc-bytes x-cf-status x-cf-client-id cs-uri-stem cs-uri-query c-referrer x-page-url​ c-user-agent x-sname x-sname-query x-file-ext x-sid
+ 2010-03-12 23:51:20 SEA4 192.0.2.147 connect 2014 OK bfd8a98bee0840d9b871b7f6ade9908f rtmp://shqshne4jdp4b6.cloudfront.net/cfx/st​ key=value http://player.longtailvideo.com/player.swf http://www.longtailvideo.com/support/jw-player-setup-wizard?example=204 LNX%2010,0,32,18 - - - -
+ 2010-03-12 23:51:21 SEA4 192.0.2.222 play 3914 OK bfd8a98bee0840d9b871b7f6ade9908f rtmp://shqshne4jdp4b6.cloudfront.net/cfx/st​ key=value http://player.longtailvideo.com/player.swf http://www.longtailvideo.com/support/jw-player-setup-wizard?example=204 LNX%2010,0,32,18 myvideo p=2&q=4 flv 1
data/spec/fixtures/invalid_utf8.log ADDED
@@ -0,0 +1,2 @@
+ 2015-01-01T02:52:45.866722Z no "GET http://www.logstash.com:80/utfmadness/≈4od HTTP/1.1"
+
data/spec/fixtures/json.log ADDED
@@ -0,0 +1,2 @@
+ { "hello": "world" }
+ { "hello": "awesome world" }
data/spec/fixtures/uncompressed.log ADDED
@@ -0,0 +1,2 @@
+ 2010-03-12 23:51:20 SEA4 192.0.2.147 connect 2014 OK bfd8a98bee0840d9b871b7f6ade9908f rtmp://shqshne4jdp4b6.cloudfront.net/cfx/st​ key=value http://player.longtailvideo.com/player.swf http://www.longtailvideo.com/support/jw-player-setup-wizard?example=204 LNX%2010,0,32,18 - - - -
+ 2010-03-12 23:51:21 SEA4 192.0.2.222 play 3914 OK bfd8a98bee0840d9b871b7f6ade9908f rtmp://shqshne4jdp4b6.cloudfront.net/cfx/st​ key=value http://player.longtailvideo.com/player.swf http://www.longtailvideo.com/support/jw-player-setup-wizard?example=204 LNX%2010,0,32,18 myvideo p=2&q=4 flv 1
data/spec/inputs/s3_spec.rb ADDED
@@ -0,0 +1,288 @@
+ # encoding: utf-8
+ require "logstash/devutils/rspec/spec_helper"
+ require "logstash/inputs/s3cloudtrail"
+ require "logstash/errors"
+ require "aws-sdk-resources"
+ require_relative "../support/helpers"
+ require "stud/temporary"
+ require "aws-sdk"
+ require "fileutils"
+
+ describe LogStash::Inputs::S3Cloudtrail do
+   let(:temporary_directory) { Stud::Temporary.pathname }
+   let(:sincedb_path) { Stud::Temporary.pathname }
+   let(:day) { 3600 * 24 }
+   let(:creds) { Aws::Credentials.new('1234', 'secret') }
+   let(:config) {
+     {
+       "access_key_id" => "1234",
+       "secret_access_key" => "secret",
+       "bucket" => "logstash-test",
+       "temporary_directory" => temporary_directory,
+       "sincedb_path" => File.join(sincedb_path, ".sincedb")
+     }
+   }
+
+   before do
+     FileUtils.mkdir_p(sincedb_path)
+     Aws.config[:stub_responses] = true
+     Thread.abort_on_exception = true
+   end
+
+   context "when interrupting the plugin" do
+     let(:config) { super.merge({ "interval" => 5 }) }
+
+     before do
+       expect_any_instance_of(LogStash::Inputs::S3Cloudtrail).to receive(:list_new_files).and_return(TestInfiniteS3Object.new)
+     end
+
+     it_behaves_like "an interruptible input plugin"
+   end
+
+   describe "#register" do
+     subject { LogStash::Inputs::S3Cloudtrail.new(config) }
+
+     context "with temporary directory" do
+       let(:temporary_directory) { Stud::Temporary.pathname }
+
+       it "creates the directory when it doesn't exist" do
+         expect { subject.register }.to change { Dir.exist?(temporary_directory) }.from(false).to(true)
+       end
+     end
+   end
+
+   describe '#get_s3object' do
+     subject { LogStash::Inputs::S3Cloudtrail.new(settings) }
+
+     context 'with modern access key options' do
+       let(:settings) {
+         {
+           "access_key_id" => "1234",
+           "secret_access_key" => "secret",
+           "proxy_uri" => "http://example.com",
+           "bucket" => "logstash-test",
+         }
+       }
+
+       it 'should instantiate Aws::S3 resources with a proxy set' do
+         expect(Aws::S3::Resource).to receive(:new).with({
+           :credentials => kind_of(Aws::Credentials),
+           :http_proxy => 'http://example.com',
+           :region => subject.region
+         })
+
+         subject.send(:get_s3object)
+       end
+     end
+   end
+
+   describe "#list_new_files" do
+     before { allow_any_instance_of(Aws::S3::Bucket).to receive(:objects) { objects_list } }
+
+     let!(:present_object) { double(:key => 'this-should-be-present', :last_modified => Time.now) }
+     let(:objects_list) {
+       [
+         double(:key => 'exclude-this-file-1', :last_modified => Time.now - 2 * day),
+         double(:key => 'exclude/logstash', :last_modified => Time.now - 2 * day),
+         present_object
+       ]
+     }
+
+     it 'should allow the user to exclude files from the s3 bucket' do
+       plugin = LogStash::Inputs::S3Cloudtrail.new(config.merge({ "exclude_pattern" => "^exclude" }))
+       plugin.register
+       expect(plugin.list_new_files).to eq([present_object.key])
+     end
+
+     it 'should support not providing an exclude pattern' do
+       plugin = LogStash::Inputs::S3Cloudtrail.new(config)
+       plugin.register
+       expect(plugin.list_new_files).to eq(objects_list.map(&:key))
+     end
+
+     context "If the bucket is the same as the backup bucket" do
+       it 'should ignore files from the bucket if they match the backup prefix' do
+         objects_list = [
+           double(:key => 'mybackup-log-1', :last_modified => Time.now),
+           present_object
+         ]
+
+         allow_any_instance_of(Aws::S3::Bucket).to receive(:objects) { objects_list }
+
+         plugin = LogStash::Inputs::S3Cloudtrail.new(config.merge({ 'backup_add_prefix' => 'mybackup',
+                                                                    'backup_to_bucket' => config['bucket'] }))
+         plugin.register
+         expect(plugin.list_new_files).to eq([present_object.key])
+       end
+     end
+
+     it 'should ignore files older than the sincedb date' do
+       plugin = LogStash::Inputs::S3Cloudtrail.new(config.merge({ 'backup_add_prefix' => 'exclude-this-file' }))
+
+       expect_any_instance_of(LogStash::Inputs::S3Cloudtrail::SinceDB::File).to receive(:read).exactly(objects_list.size) { Time.now - day }
+       plugin.register
+
+       expect(plugin.list_new_files).to eq([present_object.key])
+     end
+
+     it 'should ignore a file whose key equals the prefix' do
+       prefix = 'mysource/'
+
+       objects_list = [
+         double(:key => prefix, :last_modified => Time.now),
+         present_object
+       ]
+
+       allow_any_instance_of(Aws::S3::Bucket).to receive(:objects).with(:prefix => prefix) { objects_list }
+
+       plugin = LogStash::Inputs::S3Cloudtrail.new(config.merge({ 'prefix' => prefix }))
+       plugin.register
+       expect(plugin.list_new_files).to eq([present_object.key])
+     end
+
+     it 'should return objects sorted by last_modified date, oldest first' do
+       objects = [
+         double(:key => 'YESTERDAY', :last_modified => Time.now - day),
+         double(:key => 'TODAY', :last_modified => Time.now),
+         double(:key => 'TWO_DAYS_AGO', :last_modified => Time.now - 2 * day)
+       ]
+
+       allow_any_instance_of(Aws::S3::Bucket).to receive(:objects) { objects }
+
+       plugin = LogStash::Inputs::S3Cloudtrail.new(config)
+       plugin.register
+       expect(plugin.list_new_files).to eq(['TWO_DAYS_AGO', 'YESTERDAY', 'TODAY'])
+     end
+
+     describe "when doing backup on the s3" do
+       it 'should copy to another s3 bucket when keeping the original file' do
+         plugin = LogStash::Inputs::S3Cloudtrail.new(config.merge({ "backup_to_bucket" => "mybackup" }))
+         plugin.register
+
+         s3object = Aws::S3::Object.new('mybucket', 'testkey')
+         expect_any_instance_of(Aws::S3::Object).to receive(:copy_from).with(:copy_source => "mybucket/testkey")
+         expect(s3object).to_not receive(:delete)
+
+         plugin.backup_to_bucket(s3object)
+       end
+
+       it 'should copy to another s3 bucket when deleting the original file' do
+         plugin = LogStash::Inputs::S3Cloudtrail.new(config.merge({ "backup_to_bucket" => "mybackup", "delete" => true }))
+         plugin.register
+
+         s3object = Aws::S3::Object.new('mybucket', 'testkey')
+         expect_any_instance_of(Aws::S3::Object).to receive(:copy_from).with(:copy_source => "mybucket/testkey")
+         expect(s3object).to receive(:delete)
+
+         plugin.backup_to_bucket(s3object)
+       end
+
+       it 'should add the specified prefix to the backup file' do
+         plugin = LogStash::Inputs::S3Cloudtrail.new(config.merge({ "backup_to_bucket" => "mybackup",
+                                                                    "backup_add_prefix" => 'backup-' }))
+         plugin.register
+
+         s3object = Aws::S3::Object.new('mybucket', 'testkey')
+         expect_any_instance_of(Aws::S3::Object).to receive(:copy_from).with(:copy_source => "mybucket/testkey")
+         expect(s3object).to_not receive(:delete)
+
+         plugin.backup_to_bucket(s3object)
+       end
+     end
+
+     it 'should support doing local backup of files' do
+       Stud::Temporary.directory do |backup_dir|
+         Stud::Temporary.file do |source_file|
+           backup_file = File.join(backup_dir.to_s, Pathname.new(source_file.path).basename.to_s)
+
+           plugin = LogStash::Inputs::S3Cloudtrail.new(config.merge({ "backup_to_dir" => backup_dir }))
+
+           plugin.backup_to_dir(source_file)
+
+           expect(File.exists?(backup_file)).to eq(true)
+         end
+       end
+     end
+   end
+
+   shared_examples "generated events" do
+     it 'should process events' do
+       events = fetch_events(config)
+       expect(events.size).to eq(2)
+     end
+
+     it "deletes the temporary file" do
+       events = fetch_events(config)
+       expect(Dir.glob(File.join(temporary_directory, "*")).size).to eq(0)
+     end
+   end
+
+   context 'when working with logs' do
+     let(:objects) { [log] }
+     let(:log) { double(:key => 'uncompressed.log', :last_modified => Time.now - 2 * day) }
+     let(:data) { File.read(log_file) }
+
+     before do
+       Aws.config[:s3] = {
+         stub_responses: {
+           get_object: { body: data }
+         }
+       }
+       allow_any_instance_of(Aws::S3::Bucket).to receive(:objects) { objects }
+       allow_any_instance_of(Aws::S3::Bucket).to receive(:object).with(log.key) { log }
+       expect(log).to receive(:get).with(instance_of(Hash)) do |arg|
+         File.open(arg[:response_target], 'wb') { |s3file| s3file.write(data) }
+       end
+     end
+
+     context "when the event doesn't have a `message` field" do
+       let(:log_file) { File.join(File.dirname(__FILE__), '..', 'fixtures', 'json.log') }
+       let(:config) {
+         {
+           "access_key_id" => "1234",
+           "secret_access_key" => "secret",
+           "bucket" => "logstash-test",
+           "codec" => "json",
+         }
+       }
+
+       include_examples "generated events"
+     end
+
+     context 'compressed' do
+       let(:log) { double(:key => 'log.gz', :last_modified => Time.now - 2 * day) }
+       let(:log_file) { File.join(File.dirname(__FILE__), '..', 'fixtures', 'compressed.log.gz') }
+
+       include_examples "generated events"
+     end
+
+     context 'plain text' do
+       let(:log_file) { File.join(File.dirname(__FILE__), '..', 'fixtures', 'uncompressed.log') }
+
+       include_examples "generated events"
+     end
+
+     context 'encoded' do
+       let(:log_file) { File.join(File.dirname(__FILE__), '..', 'fixtures', 'invalid_utf8.log') }
+
+       include_examples "generated events"
+     end
+
+     context 'cloudfront' do
+       let(:log_file) { File.join(File.dirname(__FILE__), '..', 'fixtures', 'cloudfront.log') }
+
+       it 'should extract metadata from the cloudfront log' do
+         events = fetch_events(config)
+
+         events.each do |event|
+           expect(event.get('cloudfront_fields')).to eq('date time x-edge-location c-ip x-event sc-bytes x-cf-status x-cf-client-id cs-uri-stem cs-uri-query c-referrer x-page-url​ c-user-agent x-sname x-sname-query x-file-ext x-sid')
+           expect(event.get('cloudfront_version')).to eq('1.0')
+         end
+       end
+
+       include_examples "generated events"
+     end
+   end
+ end
data/spec/inputs/sincedb_spec.rb ADDED
@@ -0,0 +1,17 @@
+ # encoding: utf-8
+ require "logstash/devutils/rspec/spec_helper"
+ require "logstash/inputs/s3cloudtrail"
+ require "stud/temporary"
+ require "fileutils"
+
+ describe LogStash::Inputs::S3Cloudtrail::SinceDB::File do
+   let(:file) { Stud::Temporary.file.path }
+   subject { LogStash::Inputs::S3Cloudtrail::SinceDB::File.new(file) }
+   before do
+     FileUtils.touch(file)
+   end
+
+   it "doesn't raise an exception if the file is empty" do
+     expect { subject.read }.not_to raise_error
+   end
+ end
data/spec/integration/s3_spec.rb ADDED
@@ -0,0 +1,61 @@
+ require "logstash/devutils/rspec/spec_helper"
+ require "logstash/inputs/s3cloudtrail"
+ require "aws-sdk"
+ require "fileutils"
+ require_relative "../support/helpers"
+
+ describe LogStash::Inputs::S3Cloudtrail, :integration => true, :s3 => true do
+   before do
+     Thread.abort_on_exception = true
+
+     upload_file('../fixtures/uncompressed.log', "#{prefix}uncompressed_1.log")
+     upload_file('../fixtures/compressed.log.gz', "#{prefix}compressed_1.log.gz")
+   end
+
+   after do
+     delete_remote_files(prefix)
+     FileUtils.rm_rf(temporary_directory)
+     delete_remote_files(backup_prefix)
+   end
+
+   let(:temporary_directory) { Stud::Temporary.directory }
+   let(:prefix) { 'logstash-s3-input-prefix/' }
+
+   let(:minimal_settings) { { "access_key_id" => ENV['AWS_ACCESS_KEY_ID'],
+                              "secret_access_key" => ENV['AWS_SECRET_ACCESS_KEY'],
+                              "bucket" => ENV['AWS_LOGSTASH_TEST_BUCKET'],
+                              "region" => ENV["AWS_REGION"] || "us-east-1",
+                              "prefix" => prefix,
+                              "temporary_directory" => temporary_directory } }
+   let(:backup_prefix) { "backup/" }
+
+   it "supports a prefix to scope the remote files" do
+     events = fetch_events(minimal_settings)
+     expect(events.size).to eq(4)
+   end
+
+   it "adds a prefix to the backed-up files" do
+     fetch_events(minimal_settings.merge({ "backup_to_bucket" => ENV["AWS_LOGSTASH_TEST_BUCKET"],
+                                           "backup_add_prefix" => backup_prefix }))
+     expect(list_remote_files(backup_prefix).size).to eq(2)
+   end
+
+   it "allows you to back up to a local directory" do
+     Stud::Temporary.directory do |backup_dir|
+       fetch_events(minimal_settings.merge({ "backup_to_dir" => backup_dir }))
+       expect(Dir.glob(File.join(backup_dir, "*")).size).to eq(2)
+     end
+   end
+
+   context "remote backup" do
+     it "backs up to another bucket" do
+       fetch_events(minimal_settings.merge({ "backup_to_bucket" => "logstash-s3-input-backup" }))
+       expect(list_remote_files("", "logstash-s3-input-backup").size).to eq(2)
+     end
+
+     after do
+       delete_bucket("logstash-s3-input-backup")
+     end
+   end
+ end
data/spec/support/helpers.rb ADDED
@@ -0,0 +1,45 @@
+ def fetch_events(settings)
+   queue = []
+   s3 = LogStash::Inputs::S3Cloudtrail.new(settings)
+   s3.register
+   s3.process_files(queue)
+   queue
+ end
+
+ def upload_file(local_file, remote_name)
+   bucket = s3object.bucket(ENV['AWS_LOGSTASH_TEST_BUCKET'])
+   file = File.expand_path(File.join(File.dirname(__FILE__), local_file))
+   bucket.object(remote_name).upload_file(file)
+ end
+
+ def delete_remote_files(prefix)
+   bucket = s3object.bucket(ENV['AWS_LOGSTASH_TEST_BUCKET'])
+   bucket.objects(:prefix => prefix).each { |object| object.delete }
+ end
+
+ def list_remote_files(prefix, target_bucket = ENV['AWS_LOGSTASH_TEST_BUCKET'])
+   bucket = s3object.bucket(target_bucket)
+   bucket.objects(:prefix => prefix).collect(&:key)
+ end
+
+ def delete_bucket(name)
+   s3object.bucket(name).objects.map(&:delete)
+   s3object.bucket(name).delete
+ end
+
+ def s3object
+   Aws::S3::Resource.new
+ end
+
+ # Simulates an endless S3 object listing so the interrupt behavior of the
+ # plugin can be exercised in the specs.
+ class TestInfiniteS3Object
+   def each
+     counter = 1
+
+     loop do
+       yield "awesome-#{counter}"
+       counter += 1
+     end
+   end
+ end
+
metadata ADDED
@@ -0,0 +1,175 @@
+ --- !ruby/object:Gem::Specification
+ name: logstash-input-s3cloudtrail
+ version: !ruby/object:Gem::Version
+   version: 3.1.1
+ platform: ruby
+ authors:
+ - Elastic
+ autorequire:
+ bindir: bin
+ cert_chain: []
+ date: 2017-01-03 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '1.60'
+     - - "<="
+       - !ruby/object:Gem::Version
+         version: '2.99'
+   name: logstash-core-plugin-api
+   prerelease: false
+   type: :runtime
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '1.60'
+     - - "<="
+       - !ruby/object:Gem::Version
+         version: '2.99'
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   name: logstash-mixin-aws
+   prerelease: false
+   type: :runtime
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: 0.0.18
+   name: stud
+   prerelease: false
+   type: :runtime
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: 0.0.18
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   name: logstash-devutils
+   prerelease: false
+   type: :development
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   name: simplecov
+   prerelease: false
+   type: :development
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   name: coveralls
+   prerelease: false
+   type: :development
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   name: logstash-codec-json
+   prerelease: false
+   type: :development
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ description: This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program
+ email: info@elastic.co
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - CHANGELOG.md
+ - CONTRIBUTORS
+ - Gemfile
+ - LICENSE
+ - NOTICE.TXT
+ - README.md
+ - lib/logstash/inputs/s3cloudtrail.rb
+ - logstash-input-s3cloudtrail.gemspec
+ - spec/fixtures/cloudfront.log
+ - spec/fixtures/compressed.log.gz
+ - spec/fixtures/invalid_utf8.log
+ - spec/fixtures/json.log
+ - spec/fixtures/uncompressed.log
+ - spec/inputs/s3_spec.rb
+ - spec/inputs/sincedb_spec.rb
+ - spec/integration/s3_spec.rb
+ - spec/support/helpers.rb
+ homepage: http://www.elastic.co/guide/en/logstash/current/index.html
+ licenses:
+ - Apache License (2.0)
+ metadata:
+   logstash_plugin: 'true'
+   logstash_group: input
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubyforge_project:
+ rubygems_version: 2.4.8
+ signing_key:
+ specification_version: 4
+ summary: Stream events from files in an S3 bucket.
+ test_files:
+ - spec/fixtures/cloudfront.log
+ - spec/fixtures/compressed.log.gz
+ - spec/fixtures/invalid_utf8.log
+ - spec/fixtures/json.log
+ - spec/fixtures/uncompressed.log
+ - spec/inputs/s3_spec.rb
+ - spec/inputs/sincedb_spec.rb
+ - spec/integration/s3_spec.rb
+ - spec/support/helpers.rb