smooth_s3 0.2.0 → 0.2.1
- data/CHANGELOG +4 -1
- data/README.textile +19 -5
- data/VERSION +1 -1
- data/lib/smooth_s3.rb +2 -2
- data/lib/smooth_s3/bucket.rb +4 -4
- data/lib/smooth_s3/uploader.rb +25 -5
- metadata +4 -4
data/CHANGELOG
CHANGED
@@ -1,3 +1,6 @@
+### 0.2.1 ###
+- Added :only and :except options to the 2 directory_sync methods. You can pass a Regexp or an array of Regexps.
+
 ### 0.2.0 ###
 - Refactoring run
 - Connection errors and failed uploads are now all properly rescued by appropriate logic.
@@ -9,4 +12,4 @@
 - Removed dependency to jeweler (oops!)
 
 ### 0.1.0 ###
-- First release
+- First release
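The new :only and :except options take a Regexp or an array of Regexps matched against file paths. As a minimal sketch of the intended filtering semantics (the sample paths and patterns below are made up for illustration; this does not touch S3):

```ruby
# Hypothetical patterns a caller might pass to directory_sync via :only/:except.
# Demonstrates how Regexp matching selects relative paths.
paths = ["reports/2011/dec.csv", "reports/2011/dec.tmp", "reports/readme.txt"]

only   = [/\.csv$/]   # keep only files matching any of these
except = [/readme/]   # drop files matching any of these

kept = paths.select { |p| only.any?   { |r| p =~ r } }
kept = kept.reject  { |p| except.any? { |r| p =~ r } }

puts kept.inspect  # => ["reports/2011/dec.csv"]
```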
data/README.textile
CHANGED
@@ -12,6 +12,20 @@ Smooth S3 is a user-friendly superset of the S3 gem geared towards file system b
 
 The goal with Smooth S3 is to facilitate and simplify your S3 uploads. It is a library focused on the file system, so no integration with MySQL, third-party services or anything like that. Nothing prevents you from doing a _mysqldump_ and uploading the results in the same script using Smooth S3 though ;)
 
+h2. Why Use Smooth S3?
+
+Aren't the AWS/S3 and S3 gems already good enough? Well yes, they are, but all they do is wrap the AWS S3 API. They do an awesome job at it, mind you, but if you've had to write more than a few apps and/or scripts where you need to use S3, you know that there is quite a bit of boilerplate code required even for the simplest upload operations.
+
+There was an opportunity to make all of this way simpler (or, _puts on sunglasses_, smoother), and out of that opportunity Smooth S3 was born. For the people who like bullet points, here are a few reasons why you should probably use it:
+
+* **Smart defaults** - No more boilerplate code. Once the service is initialized (a one-liner, of course), you only need to specify bucket and file(s) and they will be uploaded with a single method call on the service. The bucket doesn't exist? No problem. As long as the name you provided is unique in the S3 global namespace, it will be created. Want to overwrite existing files in the bucket? Add a bang(!) to your method. You get the gist of it.
+
+* **Few methods, lots of power** - On top of service initialization, the library currently only offers 4 methods (that could technically, and may eventually, be reduced to 2), so it is pretty darn simple to use. Options are used to arm that simplicity with some power. There is a good chance that by combining options, you will get your uploads to behave the way you want with SmoothS3. If not, fork away or suggest one in the issues! :)
+
+* **No raising, unless absolutely necessary** - The established S3 libraries are pretty trigger-happy when it comes to raising exceptions. We all love Fail Fast™ but there are times when it's simply not the right approach, such as with backups. Sure, you will eventually have rescued everything, but by then your codebase will have tripled. Smooth S3 only raises when it absolutely has no other choice (ex: wrong AWS credentials). Failed uploads are retried a few times, then skipped with a log message if they are still a no-go (permissions etc.). Bucket and object operations are also retried in case of random HTTP failures and won't throw exceptions if you look for a non-existent one.
+
+* **Dogfood is being eaten** - Smooth S3 is being used in production to back up 100+ GBs of unique data daily over here at "Needium":http://needium.com. Most options that are added are derived from our direct needs, so you can be certain that the library will remain supported and new features will keep being added.
+
 h2. Installation
 
 This library has been developed against and designed for MRI 1.9.2 in a UNIX environment. It should be compatible with 1.8.6+, as well as with other Ruby implementations, but no guarantees. Probably not compatible with Windows environments.
@@ -29,7 +43,7 @@ Once that is done you can use the following methods:
 
 * upload() - Regular file upload. Can specify multiple files at once. Directory structure is not preserved, only the file name.
 
-*
+* directory_sync() - Uploads the entire content of a directory and its subdirectories. Preserves folder directory structure inside S3.
 
 * timestamped_upload() - Like a regular upload. Files uploaded this way have a timestamp added in front of their names. Default timestamp format: YYYYmmddHHMMSS
 
@@ -60,11 +74,11 @@ require 'smooth_s3'
 @service.upload!("my_test_bucket", ["here.rb", "../parent.rb"], :prefix => "prefix/to/files")
 
 
-#
+# directory_sync()
 # Params: bucket_name, directory(, :prefix)
 
-@service.
-@service.
+@service.directory_sync("my_test_bucket", "../reports")
+@service.directory_sync!("my_test_bucket", "data", :prefix => "prefix/to/dir")
 
 
 # timestamped_upload()
@@ -85,7 +99,7 @@ require 'smooth_s3'
 
 h2. Credits
 
-* Jakub Kuźma for his great S3 library, often overshadowed by
+* Jakub Kuźma for his great S3 library, often overshadowed by AWS-S3.
 
 h2. LICENSE
 
|
data/VERSION
CHANGED
@@ -1 +1 @@
-0.2.
+0.2.1
data/lib/smooth_s3.rb
CHANGED
data/lib/smooth_s3/bucket.rb
CHANGED
@@ -31,14 +31,14 @@ module SmoothS3
       Service.new_buckets[service.aws_key] << new_bucket
     end
 
-    def self.store_file(file, remote_file, bucket, service,
+    def self.store_file(file, remote_file, bucket, service, options)
       b = service.refresh.buckets[bucket]
 
-      if prefix
-        remote_file = prefix =~ /\/$/ ? (prefix + remote_file) : prefix + "/" + remote_file
+      if options[:prefix]
+        remote_file = options[:prefix] =~ /\/$/ ? (options[:prefix] + remote_file) : options[:prefix] + "/" + remote_file
       end
 
-      unless overwrite == true
+      unless options[:overwrite] == true
        if Bucket.file_exists?(remote_file, bucket, service)
          puts "'#{remote_file}' already exists on S3 bucket named '#{bucket}'. Use the bang(!) version of the method to overwrite."
          return
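The `options[:prefix]` branch above joins the prefix onto the remote file name, adding a slash only when the prefix doesn't already end with one. A standalone sketch of that joining logic (the `apply_prefix` helper name is made up for illustration):

```ruby
# Reproduction of the prefix-joining logic inside Bucket.store_file:
# append "/" only when the prefix does not already end with one.
def apply_prefix(prefix, remote_file)
  return remote_file unless prefix
  prefix =~ /\/$/ ? (prefix + remote_file) : prefix + "/" + remote_file
end

puts apply_prefix("backups/", "db.sql")  # => backups/db.sql
puts apply_prefix("backups", "db.sql")   # => backups/db.sql
puts apply_prefix(nil, "db.sql")         # => db.sql
```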
data/lib/smooth_s3/uploader.rb
CHANGED
@@ -8,7 +8,7 @@ module SmoothS3
       valid_files = Uploader.validate_files(files)
       valid_files.each do |vf|
         remote_file_name = options[:timestamped] ? (options[:timestamp] + "_" + vf.split("/")[-1]) : vf.split("/")[-1]
-        Bucket.store_file(vf, remote_file_name, bucket, service, options
+        Bucket.store_file(vf, remote_file_name, bucket, service, options)
       end
     end
 
@@ -20,10 +20,10 @@ module SmoothS3
       [:overwrite, :timestamped].each {|s| options[s] = false unless options[s]}
       Bucket.select(bucket, service)
 
-      valid_files = Uploader.validate_files_in_directory(directory)
+      valid_files = Uploader.validate_files_in_directory(directory, options)
       valid_files.each do |vf|
         remote_file_name = options[:timestamped] ? (options[:timestamp] + "_" + vf[1]) : vf[1]
-        Bucket.store_file(vf[0], remote_file_name, bucket, service, options
+        Bucket.store_file(vf[0], remote_file_name, bucket, service, options)
       end
     end
 
@@ -78,7 +78,7 @@ module SmoothS3
       valid_files
     end
 
-    def self.validate_files_in_directory(directory)
+    def self.validate_files_in_directory(directory, options)
       valid_files = []
 
       begin
@@ -92,7 +92,27 @@ module SmoothS3
         raise SmoothS3::Error, "'#{directory}' is not a valid directory."
       end
 
-      valid_files
+      Uploader.filter_files(valid_files, options)
+    end
+
+    def self.filter_files(files, options)
+      [:except, :only].each { |o| options[o] = options[o].class == Regexp ? [options[o]] : options[o] }
+      except, only = [], []
+
+      files.each do |f|
+        options[:except].each {|r| except << f if f[0] =~ r} if options[:except]
+        options[:only].each {|r| only << f if f[0] =~ r} if options[:only]
+      end
+
+      if options[:except] && options[:only]
+        filtered_files = only - except
+      elsif options[:except]
+        filtered_files = files - except
+      elsif options[:only]
+        filtered_files = only
+      else
+        filtered_files = files
+      end
     end
 
     def self.calculate_timestamp(options)
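The new filter_files method normalizes a lone Regexp into an array, then combines the two lists: with both options set, the result is the :only matches minus the :except matches. A self-contained, simplified port of that combination logic (the sample entries below are made up; they mirror the [local_path, remote_name] pairs the uploader builds):

```ruby
# Simplified sketch of the filter_files combination rules added in 0.2.1.
# Entries mimic the [local_path, remote_name] pairs used by directory_sync.
def filter_files(files, options)
  # Normalize a single Regexp into a one-element array, as the gem does.
  [:except, :only].each { |o| options[o] = [options[o]] if options[o].is_a?(Regexp) }

  except = files.select { |f| (options[:except] || []).any? { |r| f[0] =~ r } }
  only   = files.select { |f| (options[:only]   || []).any? { |r| f[0] =~ r } }

  if options[:except] && options[:only] then only - except
  elsif options[:except]                then files - except
  elsif options[:only]                  then only
  else files
  end
end

files = [["data/a.log", "a.log"], ["data/b.csv", "b.csv"], ["data/c.csv", "c.csv"]]
p filter_files(files, :only => /\.csv$/, :except => /b\./)
# => [["data/c.csv", "c.csv"]]
```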
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: smooth_s3
 version: !ruby/object:Gem::Version
-  version: 0.2.
+  version: 0.2.1
 prerelease:
 platform: ruby
 authors:
@@ -9,11 +9,11 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2011-12-
+date: 2011-12-16 00:00:00.000000000Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: s3
-  requirement: &
+  requirement: &2153165680 !ruby/object:Gem::Requirement
     none: false
     requirements:
     - - ~>
@@ -21,7 +21,7 @@ dependencies:
         version: 0.3.9
   type: :runtime
   prerelease: false
-  version_requirements: *
+  version_requirements: *2153165680
 description: A user-friendly superset of the S3 gem geared towards file system backup
   operations. Simplifies standard actions such as basic uploads, for example allowing
   multiple files to be uploaded in one operation and adds new functionality such as
|