logstash-output-s3 0.1.0

checksums.yaml ADDED
@@ -0,0 +1,15 @@
+ ---
+ !binary "U0hBMQ==":
+   metadata.gz: !binary |-
+     YzM5YTk3ZjY1OWU3Mjg2YTMwZTA1NTY1ZWIxMWYyODJkMTcxMWQ5YQ==
+   data.tar.gz: !binary |-
+     NWQxNDkyNWEzNGEyNjE4MDQwZmE0ZTk3NjdhYjI4ZWY5ZWVmZTM2OA==
+ SHA512:
+   metadata.gz: !binary |-
+     YmQ1MjhiYTdjNGExOTMzYjBkYjc2MGRhOGRiODY0YWY0YjBiN2RhMjk2OWI4
+     YjJhMjEwOTk2YzJkNWRhYTYwOWM1MDA0YWI2MjI3NjdjOGYxMjFkNTI1Yzdj
+     MzM3ZGMwNjlkZGY4MzZmMjVhMDE4ZWQxZGVjNDVkYjBlOThmZjQ=
+   data.tar.gz: !binary |-
+     N2M5YTNlYzEyZjQ4MDZjZjExZDg1YjEzZDQ1MzNmNTk1ZWI0NWJlZjJjZTA0
+     NTM5NmY3NzA1MjdlOTU5MDcwZTczZWI5ZDRiZTAxZTdhYzAzZjVlOTYxZDIy
+     ODVlMTAxNjcwYzZkNGRjN2NmZDRjYjY0NzA4ZjRiNDcxNTUyNWQ=
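
For context: each value in checksums.yaml is a Base64-encoded hex digest of the corresponding file inside the .gem archive. A minimal sketch (not part of the gem; assumes the archive has been unpacked so metadata.gz sits next to checksums.yaml) of how such an entry can be verified:

    require "digest"
    require "yaml"

    # checksums.yaml maps file names to hex digests; the !binary tags decode
    # to plain strings ("SHA1", "c39a97f6...") when the YAML is loaded.
    sums = YAML.load_file("checksums.yaml")
    expected = sums["SHA1"]["metadata.gz"]
    actual   = Digest::SHA1.file("metadata.gz").hexdigest
    puts(expected == actual ? "SHA1 OK" : "SHA1 mismatch")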
data/.gitignore ADDED
@@ -0,0 +1,4 @@
+ *.gem
+ Gemfile.lock
+ .bundle
+ vendor
data/Gemfile ADDED
@@ -0,0 +1,4 @@
+ source 'http://rubygems.org'
+ gem 'rake'
+ gem 'gem_publisher'
+ gem 'archive-tar-minitar'
data/LICENSE ADDED
@@ -0,0 +1,13 @@
+ Copyright (c) 2012-2014 Elasticsearch <http://www.elasticsearch.org>
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
data/Rakefile ADDED
@@ -0,0 +1,6 @@
+ @files=[]
+
+ task :default do
+   system("rake -T")
+ end
+
data/lib/logstash/outputs/s3.rb ADDED
@@ -0,0 +1,357 @@
+ # encoding: utf-8
+ require "logstash/outputs/base"
+ require "logstash/namespace"
+ require "socket" # for Socket.gethostname
+
+ # TODO integrate aws_config in the future
+ #require "logstash/plugin_mixins/aws_config"
+
+ # INFORMATION:
+
+ # This plugin stores Logstash events in Amazon Simple Storage Service (Amazon S3).
+ # To use it you need AWS credentials and an S3 bucket.
+ # Make sure you have permission to write files to the S3 bucket, and run Logstash with privileges sufficient to establish the connection.
+
+ # The S3 plugin lets you do something fairly complex, so let's explain :)
+
+ # The S3 output creates temporary files in "/opt/logstash/S3_temp/". If you want, you can change the path at the start of the register method.
+ # These files have a special name, for example:
+
+ # ls.s3.ip-10-228-27-95.2013-04-18T10.00.tag_hello.part0.txt
+
+ # ls.s3 : indicates the logstash s3 plugin.
+
+ # "ip-10-228-27-95" : the machine's hostname, useful if several Logstash instances write to the same bucket.
+ # "2013-04-18T10.00" : the timestamp of the interval you set with time_file.
+ # "tag_hello" : the event's tag, so you can group events with the same tag.
+ # "part0" : if you set size_file, more parts are generated whenever the file size exceeds size_file.
+ # When a file is full it is pushed to the bucket and deleted from the temporary directory.
+ # If a file is empty it is not pushed, only deleted.
+
+ # This plugin has a mechanism to restore the previous temporary files if something crashes.
+
+ ##[Note] :
+
+ ## If you specify both size_file and time_file, a file is created for each tag (if specified); whenever time_file elapses
+ ## or a file's size exceeds size_file, the files are pushed to the S3 bucket and deleted from local disk.
+
+ ## If you specify time_file but not size_file, only one file is created for each tag (if specified).
+ ## Whenever time_file elapses, the files are pushed to the S3 bucket and deleted from local disk.
+
+ ## If you specify size_file but not time_file, files are created for each tag (if specified);
+ ## whenever a file's size exceeds size_file, it is pushed to the S3 bucket and deleted from local disk.
+
+ ## If you specify neither size_file nor time_file, you get a curious mode: only one file is created for each tag (if specified).
+ ## The file then stays in the temporary directory and is not pushed to the bucket until Logstash restarts.
+
+ # INFORMATION ABOUT CLASS:
+
+ # I tried to comment the class as best I could.
+ # There is still much to improve; if you want some starting points, here is a list:
+
+ # TODO Integrate aws_config in the future
+ # TODO Find a way to push all remaining files when Logstash closes the session.
+ # TODO Interpolate @fields into the file path
+ # TODO Permanent connection or on demand? For now on demand, but it isn't a good implementation.
+ #      Use a loop or a thread to retry the connection before hitting a timeout and signalling an error.
+ # TODO If you have bug reports or helpful advice contact me, but remember that this code is as much yours as it is mine;
+ #      feel free to work on it if you want :)
+
+
+ # USAGE:
+
+ # This is an example of logstash config:
+
+ # output {
+ #    s3{
+ #      access_key_id => "crazy_key"             (required)
+ #      secret_access_key => "monkey_access_key" (required)
+ #      endpoint_region => "eu-west-1"           (required)
+ #      bucket => "boss_please_open_your_bucket" (required)
+ #      size_file => 2048                        (optional)
+ #      time_file => 5                           (optional)
+ #      format => "plain"                        (optional)
+ #      canned_acl => "private"                  (optional. Options are "private", "public_read", "public_read_write", "authenticated_read". Defaults to "private")
+ #    }
+ # }
+
+ # Let's analyze the options:
+
+ # access_key_id => "crazy_key"
+ # Amazon gives you this key when you sign up for the service. (not very open source of them, anyway)
+
+ # secret_access_key => "monkey_access_key"
+ # Amazon gives you the secret_access_key when you sign up for the service. (not very open source of them, anyway)
+
+ # endpoint_region => "eu-west-1"
+ # The AWS region that hosts the services you use.
+
+ # bucket => "boss_please_open_your_bucket"
+ # Make sure you know the bucket name and have permission to write to it.
+
+ # size_file => 2048
+ # The maximum size, in KB, a file may reach in the temporary directory before it is pushed to the bucket.
+ # Useful if you have a small server with little disk space and don't want to fill it up with unnecessary temporary log files.
+
+ # time_file => 5
+ # The time, in minutes, before the files are pushed to the bucket. Useful if you want to push files at a fixed interval.
+
+ # format => "plain"
+ # The format of the events you want to store in the files.
+
+ # canned_acl => "private"
+ # The S3 canned ACL to use when putting the file. Defaults to "private".
+
+ # LET'S ROCK AND ROLL ON THE CODE!
+
+ class LogStash::Outputs::S3 < LogStash::Outputs::Base
+   #TODO integrate aws_config in the future
+   # include LogStash::PluginMixins::AwsConfig
+
+   config_name "s3"
+   milestone 1
+
+   # Aws access_key.
+   config :access_key_id, :validate => :string
+
+   # Aws secret_access_key
+   config :secret_access_key, :validate => :string
+
+   # S3 bucket
+   config :bucket, :validate => :string
+
+   # Aws endpoint_region
+   config :endpoint_region, :validate => ["us-east-1", "us-west-1", "us-west-2",
+                                          "eu-west-1", "ap-southeast-1", "ap-southeast-2",
+                                          "ap-northeast-1", "sa-east-1", "us-gov-west-1"], :default => "us-east-1"
+
+   # Set the file size in KB: when a file would exceed size_file, it is split into two or more parts before upload.
+   # If you use tags, a separate file (with its own size counter) is generated for each tag.
+   ## NOTE: defining a file size is the better choice, since the plugin builds a local temporary file on disk and then puts it in the bucket.
+   config :size_file, :validate => :number, :default => 0
+
+   # Set the time, in minutes, after which the current time section of the bucket is closed.
+   # If you also define size_file, you get a number of files per time section and per tag.
+   # 0 means wait on the listener forever; beware that with time_file 0 and size_file 0 the file is never put in the bucket:
+   # in that case the only thing this plugin can do is push the file when Logstash restarts.
+   config :time_file, :validate => :number, :default => 0
+
+   # The event format you want to store in files. Defaults to plain text.
+   config :format, :validate => [ "json", "plain", "nil" ], :default => "plain"
+
+   ## IMPORTANT: if you run multiple instances of the s3 output, specify "restore => true" on one of them and "restore => false" on the others.
+   ## This is a hack to avoid destroying the new files after restoring the initial ones.
+   ## If you do not specify "restore => true", the files left behind when Logstash crashes or is restarted are not sent to the bucket,
+   ## for example if you have a single instance.
+   config :restore, :validate => :boolean, :default => false
+
+   # Aws canned ACL
+   config :canned_acl, :validate => ["private", "public_read", "public_read_write", "authenticated_read"],
+          :default => "private"
+
+   # Set up the AWS configuration and establish the connection.
+   def aws_s3_config
+     @endpoint_region = (@endpoint_region == 'us-east-1') ? 's3.amazonaws.com' : 's3-'+@endpoint_region+'.amazonaws.com'
+
+     @logger.info("Registering s3 output", :bucket => @bucket, :endpoint_region => @endpoint_region)
+
+     AWS.config(
+       :access_key_id => @access_key_id,
+       :secret_access_key => @secret_access_key,
+       :s3_endpoint => @endpoint_region
+     )
+     @s3 = AWS::S3.new
+   end
+
+   # Run the given block once per interval in a background thread (sleep/wake management).
+   def time_alert(interval)
+     Thread.new do
+       loop do
+         start_time = Time.now
+         yield
+         elapsed = Time.now - start_time
+         sleep([interval - elapsed, 0].max)
+       end
+     end
+   end
+
+   # Write a file to the bucket. Takes the file's data and its basename.
+   def write_on_bucket(file_data, file_basename)
+     # if we lost the connection to S3, reconnect (crude control flow).
+     if (@s3 == nil)
+       aws_s3_config
+     end
+
+     # find and use the bucket
+     bucket = @s3.buckets[@bucket]
+
+     @logger.debug "S3: ready to write "+file_basename+" in bucket "+@bucket+", Fire in the hole!"
+
+     # prepare to write the file
+     object = bucket.objects[file_basename]
+     object.write(:file => file_data, :acl => @canned_acl)
+
+     @logger.debug "S3: has written "+file_basename+" in bucket "+@bucket+" with canned ACL \""+@canned_acl+"\""
+   end
+
+   # Build a new base path used to name the file.
+   def getFinalPath
+     @pass_time = Time.now
+     return @temp_directory+"ls.s3."+Socket.gethostname+"."+(@pass_time).strftime("%Y-%m-%dT%H.%M")
+   end
+
+   # Restore the temporary files left by a previous Logstash crash, or prepare the current files to be sent to the bucket.
+   # Takes two parameters: flag indicates whether this is a restore; name is a glob for the files to upload.
+   def upFile(flag, name)
+     Dir[@temp_directory+name].each do |file|
+       name_file = File.basename(file)
+
+       if (flag == true)
+         @logger.warn "S3: found temporary file: "+name_file+", something crashed before... Prepare for upload to the bucket!"
+       end
+
+       if (!File.zero?(file))
+         write_on_bucket(file, name_file)
+
+         if (flag == true)
+           @logger.debug "S3: file: "+name_file+" restored on bucket "+@bucket
+         else
+           @logger.debug "S3: file: "+name_file+" was put on bucket "+@bucket
+         end
+       end
+
+       File.delete(file)
+     end
+   end
+
+   # Create a new empty temporary file. The flag indicates the start of a new time_file section.
+   def newFile(flag)
+     if (flag == true)
+       @current_final_path = getFinalPath
+       @sizeCounter = 0
+     end
+
+     if (@tags.size != 0)
+       @tempFile = File.new(@current_final_path+".tag_"+@tag_path+"part"+@sizeCounter.to_s+".txt", "w")
+     else
+       @tempFile = File.new(@current_final_path+".part"+@sizeCounter.to_s+".txt", "w")
+     end
+   end
+
+   public
+   def register
+     require "aws-sdk"
+     @temp_directory = "/opt/logstash/S3_temp/"
+
+     if (@tags.size != 0)
+       @tag_path = ""
+       for i in (0..@tags.size-1)
+         @tag_path += @tags[i].to_s+"."
+       end
+     end
+
+     if !(File.directory? @temp_directory)
+       @logger.debug "S3: Directory "+@temp_directory+" doesn't exist, let's make it!"
+       Dir.mkdir(@temp_directory)
+     else
+       @logger.debug "S3: Directory "+@temp_directory+" exists, nothing to do"
+     end
+
+     if (@restore == true)
+       @logger.debug "S3: attempting to recover files from previous crashes..."
+
+       upFile(true, "*.txt")
+     end
+
+     newFile(true)
+
+     if (time_file != 0)
+       first_time = true
+       @thread = time_alert(@time_file*60) do
+         if (first_time == false)
+           @logger.debug "S3: time_file triggered, let's push the file to the bucket if it isn't empty and create a new one"
+           upFile(false, File.basename(@tempFile))
+           newFile(true)
+         else
+           first_time = false
+         end
+       end
+     end
+   end
+
+   public
+   def receive(event)
+     return unless output?(event)
+
+     # Format the event as configured
+     if (@format == "plain")
+       message = self.class.format_message(event)
+     elsif (@format == "json")
+       message = event.to_json
+     else
+       message = event.to_s
+     end
+
+     if (time_file != 0)
+       @logger.debug "S3: trigger files after "+((@pass_time+60*time_file)-Time.now).to_s
+     end
+
+     # if a size limit is specified
+     if (size_file != 0)
+       if (@tempFile.size < @size_file)
+         @logger.debug "S3: File has size: "+@tempFile.size.to_s+" and size_file is: "+@size_file.to_s
+         @logger.debug "S3: put event into: "+File.basename(@tempFile)
+
+         # Put the event in the file, now!
+         File.open(@tempFile, 'a') do |file|
+           file.puts message
+           file.write "\n"
+         end
+       else
+         @logger.debug "S3: file: "+File.basename(@tempFile)+" is too large, let's push it to the bucket and create a new file"
+         upFile(false, File.basename(@tempFile))
+         @sizeCounter += 1
+         newFile(false)
+       end
+
+     # else we put everything in one file
+     else
+       @logger.debug "S3: put event into "+File.basename(@tempFile)
+       File.open(@tempFile, 'a') do |file|
+         file.puts message
+         file.write "\n"
+       end
+     end
+   end
+
+   def self.format_message(event)
+     message = "Date: #{event[LogStash::Event::TIMESTAMP]}\n"
+     message << "Source: #{event["source"]}\n"
+     message << "Tags: #{event["tags"].join(', ')}\n"
+     message << "Fields: #{event.to_hash.inspect}\n"
+     message << "Message: #{event["message"]}"
+   end
+
+ end
+
+ # Enjoy it, by Bistic:)
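
To make the naming scheme concrete, here is a minimal sketch (not part of the gem) that reproduces how register, getFinalPath and newFile compose a temporary file name; the tag list and part counter are illustrative values:

    require "socket"

    temp_directory = "/opt/logstash/S3_temp/"   # default used by register
    tags           = ["hello"]                  # illustrative tag list
    size_counter   = 0                          # @sizeCounter right after newFile(true)

    # register joins every tag with a trailing dot: "hello."
    tag_path = tags.map { |t| "#{t}." }.join

    # getFinalPath: prefix + hostname + minute-resolution timestamp
    final_path = temp_directory + "ls.s3." + Socket.gethostname + "." +
                 Time.now.strftime("%Y-%m-%dT%H.%M")

    # newFile appends the tag section (if any) and the part counter
    name = tags.empty? ? "#{final_path}.part#{size_counter}.txt" :
                         "#{final_path}.tag_#{tag_path}part#{size_counter}.txt"

    puts name
    # e.g. /opt/logstash/S3_temp/ls.s3.ip-10-228-27-95.2013-04-18T10.00.tag_hello.part0.txt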
data/logstash-output-s3.gemspec ADDED
@@ -0,0 +1,28 @@
+ Gem::Specification.new do |s|
+
+   s.name = 'logstash-output-s3'
+   s.version = '0.1.0'
+   s.licenses = ['Apache License (2.0)']
+   s.summary = "This plugin stores Logstash events in Amazon Simple Storage Service (Amazon S3)"
+   s.description = "This plugin stores Logstash events in Amazon Simple Storage Service (Amazon S3)"
+   s.authors = ["Elasticsearch"]
+   s.email = 'richard.pijnenburg@elasticsearch.com'
+   s.homepage = "http://logstash.net/"
+   s.require_paths = ["lib"]
+
+   # Files
+   s.files = `git ls-files`.split($\)+::Dir.glob('vendor/*')
+
+   # Tests
+   s.test_files = s.files.grep(%r{^(test|spec|features)/})
+
+   # Special flag to let us know this is actually a logstash plugin
+   s.metadata = { "logstash_plugin" => "true", "group" => "output" }
+
+   # Gem dependencies
+   s.add_runtime_dependency 'logstash', '>= 1.4.0', '< 2.0.0'
+
+   s.add_runtime_dependency 'aws-sdk'
+
+ end
+
data/rakelib/publish.rake ADDED
@@ -0,0 +1,9 @@
+ require "gem_publisher"
+
+ desc "Publish gem to RubyGems.org"
+ task :publish_gem do |t|
+   gem_file = Dir.glob(File.expand_path('../*.gemspec', File.dirname(__FILE__))).first
+   gem = GemPublisher.publish_if_updated(gem_file, :rubygems)
+   puts "Published #{gem}" if gem
+ end
+
data/rakelib/vendor.rake ADDED
@@ -0,0 +1,169 @@
+ require "net/http"
+ require "uri"
+ require "digest/sha1"
+
+ def vendor(*args)
+   return File.join("vendor", *args)
+ end
+
+ directory "vendor/" => ["vendor"] do |task, args|
+   mkdir task.name
+ end
+
+ def fetch(url, sha1, output)
+   puts "Downloading #{url}"
+   actual_sha1 = download(url, output)
+
+   if actual_sha1 != sha1
+     fail "SHA1 does not match (expected '#{sha1}' but got '#{actual_sha1}')"
+   end
+ end # def fetch
+
+ def file_fetch(url, sha1)
+   filename = File.basename( URI(url).path )
+   output = "vendor/#{filename}"
+   task output => [ "vendor/" ] do
+     begin
+       actual_sha1 = file_sha1(output)
+       if actual_sha1 != sha1
+         fetch(url, sha1, output)
+       end
+     rescue Errno::ENOENT
+       fetch(url, sha1, output)
+     end
+   end.invoke
+
+   return output
+ end
+
+ def file_sha1(path)
+   digest = Digest::SHA1.new
+   fd = File.new(path, "r")
+   while true
+     begin
+       digest << fd.sysread(16384)
+     rescue EOFError
+       break
+     end
+   end
+   return digest.hexdigest
+ ensure
+   fd.close if fd
+ end
+
+ def download(url, output)
+   uri = URI(url)
+   digest = Digest::SHA1.new
+   tmp = "#{output}.tmp"
+   Net::HTTP.start(uri.host, uri.port, :use_ssl => (uri.scheme == "https")) do |http|
+     request = Net::HTTP::Get.new(uri.path)
+     http.request(request) do |response|
+       # response.code is a String; fail on anything other than 200/301.
+       # (The original check was inverted and compared against integers.)
+       fail "HTTP fetch failed for #{url}. #{response}" unless ["200", "301"].include?(response.code)
+       size = (response["content-length"] || -1).to_f
+       count = 0
+       File.open(tmp, "w") do |fd|
+         response.read_body do |chunk|
+           fd.write(chunk)
+           digest << chunk
+           if size > 0 && $stdout.tty?
+             count += chunk.bytesize
+             $stdout.write(sprintf("\r%0.2f%%", count/size * 100))
+           end
+         end
+       end
+       $stdout.write("\r      \r") if $stdout.tty?
+     end
+   end
+
+   File.rename(tmp, output)
+
+   return digest.hexdigest
+ rescue SocketError => e
+   puts "Failure while downloading #{url}: #{e}"
+   raise
+ ensure
+   File.unlink(tmp) if File.exist?(tmp)
+ end # def download
+
+ def untar(tarball, &block)
+   require "archive/tar/minitar"
+   tgz = Zlib::GzipReader.new(File.open(tarball))
+   # Pull out typesdb
+   tar = Archive::Tar::Minitar::Input.open(tgz)
+   tar.each do |entry|
+     path = block.call(entry)
+     next if path.nil?
+     parent = File.dirname(path)
+
+     mkdir_p parent unless File.directory?(parent)
+
+     # Skip this file if the output file is the same size
+     if entry.directory?
+       mkdir path unless File.directory?(path)
+     else
+       entry_mode = entry.instance_eval { @mode } & 0777
+       if File.exist?(path)
+         stat = File.stat(path)
+         # TODO(sissel): Submit a patch to archive-tar-minitar upstream to
+         # expose headers in the entry.
+         entry_size = entry.instance_eval { @size }
+         # If file sizes are the same, skip writing.
+         next if stat.size == entry_size && (stat.mode & 0777) == entry_mode
+       end
+       puts "Extracting #{entry.full_name} from #{tarball} #{entry_mode.to_s(8)}"
+       File.open(path, "w") do |fd|
+         # eof? check lets us skip empty files. Necessary because the API provided by
+         # Archive::Tar::Minitar::Reader::EntryStream only mostly acts like an
+         # IO object. Something about empty files in this EntryStream causes
+         # IO.copy_stream to throw "can't convert nil into String" on JRuby
+         # TODO(sissel): File a bug about this.
+         while !entry.eof?
+           chunk = entry.read(16384)
+           fd.write(chunk)
+         end
+         #IO.copy_stream(entry, fd)
+       end
+       File.chmod(entry_mode, path)
+     end
+   end
+   tar.close
+   File.unlink(tarball) if File.file?(tarball)
+ end # def untar
+
+ def ungz(file)
+   outpath = file.gsub('.gz', '')
+   tgz = Zlib::GzipReader.new(File.open(file))
+   begin
+     File.open(outpath, "w") do |out|
+       IO::copy_stream(tgz, out)
+     end
+     File.unlink(file)
+   rescue
+     File.unlink(outpath) if File.file?(outpath)
+     raise
+   end
+   tgz.close
+ end
+
+ desc "Process any vendor files required for this plugin"
+ task "vendor" do |task, args|
+   @files.each do |file|
+     download = file_fetch(file['url'], file['sha1'])
+     if download =~ /\.tar\.gz$/
+       prefix = download.gsub('.tar.gz', '').gsub('vendor/', '')
+       untar(download) do |entry|
+         if !file['files'].nil?
+           next unless file['files'].include?(entry.full_name.gsub(prefix, ''))
+         end
+         # out must be assigned even when no 'files' filter is given
+         # (the original only assigned it inside the if, leaving it undefined otherwise).
+         out = entry.full_name.split("/").last
+         File.join('vendor', out)
+       end
+     elsif download =~ /\.gz$/
+       ungz(download)
+     end
+   end
+ end
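
For context: the "vendor" task above expects the plugin's Rakefile to fill @files (defined there as an empty array) with url/sha1 entries, plus an optional files list to select paths inside a tarball. A minimal sketch with hypothetical values; the URL and SHA1 are placeholders, not real artifacts:

    # In the plugin's Rakefile:
    @files = [
      {
        'url'   => 'https://example.org/some-dependency-1.0.tar.gz',
        'sha1'  => '0123456789abcdef0123456789abcdef01234567',
        # optional: extract only these paths (relative to the tarball prefix)
        'files' => ['/lib/some_dependency.rb']
      }
    ]
    # `rake vendor` then downloads each file into vendor/, verifies its SHA1,
    # and unpacks .tar.gz archives through untar().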
data/spec/outputs/s3_spec.rb ADDED
@@ -0,0 +1 @@
+ require 'spec_helper'
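
The spec file is only a stub that loads spec_helper. A hypothetical minimal example (not part of the gem) of what a first spec might look like, exercising plugin construction without touching AWS or the filesystem:

    require 'spec_helper'
    require 'logstash/outputs/s3'

    describe LogStash::Outputs::S3 do
      it "builds with minimal settings" do
        plugin = described_class.new(
          "access_key_id"     => "key",       # placeholder credentials
          "secret_access_key" => "secret",
          "bucket"            => "my-bucket"
        )
        expect(plugin).to be_a(LogStash::Outputs::Base)
      end
    end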
metadata ADDED
@@ -0,0 +1,91 @@
+ --- !ruby/object:Gem::Specification
+ name: logstash-output-s3
+ version: !ruby/object:Gem::Version
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - Elasticsearch
+ autorequire:
+ bindir: bin
+ cert_chain: []
+ date: 2014-11-06 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: logstash
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ! '>='
+       - !ruby/object:Gem::Version
+         version: 1.4.0
+     - - <
+       - !ruby/object:Gem::Version
+         version: 2.0.0
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ! '>='
+       - !ruby/object:Gem::Version
+         version: 1.4.0
+     - - <
+       - !ruby/object:Gem::Version
+         version: 2.0.0
+ - !ruby/object:Gem::Dependency
+   name: aws-sdk
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ! '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ! '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+ description: This plugin stores Logstash events in Amazon Simple Storage Service
+   (Amazon S3)
+ email: richard.pijnenburg@elasticsearch.com
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - .gitignore
+ - Gemfile
+ - LICENSE
+ - Rakefile
+ - lib/logstash/outputs/s3.rb
+ - logstash-output-s3.gemspec
+ - rakelib/publish.rake
+ - rakelib/vendor.rake
+ - spec/outputs/s3_spec.rb
+ homepage: http://logstash.net/
+ licenses:
+ - Apache License (2.0)
+ metadata:
+   logstash_plugin: 'true'
+   group: output
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ! '>='
+     - !ruby/object:Gem::Version
+       version: '0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ! '>='
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubyforge_project:
+ rubygems_version: 2.4.1
+ signing_key:
+ specification_version: 4
+ summary: This plugin stores Logstash events in Amazon Simple Storage Service (Amazon
+   S3)
+ test_files:
+ - spec/outputs/s3_spec.rb