frahugo-s3sync 1.3.8 → 1.4.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +7 -0
- data/README.rdoc +42 -29
- data/VERSION +1 -1
- data/bin/s3cmd +11 -11
- data/bin/s3sync +23 -23
- data/lib/s3sync/S3.rb +10 -10
- data/lib/s3sync/S3encoder.rb +7 -8
- metadata +24 -41
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
+---
+SHA1:
+  metadata.gz: ec03e93ec835de16b422d1443343e91988810eff
+  data.tar.gz: b90aaf12ff814d70a54b85d8d4fbe24c3fabd80b
+SHA512:
+  metadata.gz: e42b61fe6b1f9b6800805c9ffeb3a66ae6ff18726de9a493425f102299c4ea1eb827a51a3b1fbf80910c4c18c2175f0d5ea6d64e45b495215951bdc0aff85414
+  data.tar.gz: 42068a9f5c51a1f625ae9154db7ee4c0e1fe23b4f5189ce0bd9314581b655e849dbea17a7b0e1d463691dc2b332e54a5dad624a5dbc623828b504d0f61d5c60c
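These are standard RubyGems checksums: a .gem file is a tar archive whose members include metadata.gz and data.tar.gz, and the sums above are taken over those two members. A minimal Ruby sketch for re-checking a downloaded copy (the .gem filename is an assumption; adjust to wherever `gem fetch` put it):

# Sketch: recompute the SHA512 sums above from a downloaded .gem.
require 'digest'
require 'rubygems/package'

io = File.open('frahugo-s3sync-1.4.1.gem', 'rb')   # assumed filename
Gem::Package::TarReader.new(io).each do |entry|
  next unless %w[metadata.gz data.tar.gz].include?(entry.full_name)
  puts "#{entry.full_name}: #{Digest::SHA512.hexdigest(entry.read)}"
end
io.close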
data/README.rdoc
CHANGED
@@ -1,18 +1,31 @@
+== CHANGES from version 1.3.8 for Ruby 2
+
+* Use String#encode instead of Iconv#iconv
+* Use Digest instead of Digest::Digest
+
+== CHANGES from original to be compatible with 1.9.2
+
+* require 'md5'
 
-== CHANGED from original to be compatible with 1.9.2
-* require 'md5'
   Instead require "digest/md5"
+
 * Thread.critical
+
   Thread.critical is not used since 1.9
-
+
+* Dir#collect
+
   In 1.9.2 Dir#collect is not Array but Enumerator
+
 * Array#to_s
+
   The result of [1,2].to_s is different from 1.8. Instead of to_s, used join
+
 * use Enumerator instead of thread_generator
 
 == DESCRIPTION:
 
-Welcome to s3sync.rb
+Welcome to s3sync.rb
 --------------------
 Home page, wiki, forum, bug reports, etc: http://s3sync.net
 
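The two headline 1.4.1 entries both track stdlib changes in Ruby 2: iconv left the standard library, and OpenSSL::Digest::Digest gave way to OpenSSL::Digest. A sketch of the conversion change (the old Iconv call shape is inferred from this changelog, not quoted from 1.3.8):

# Before (Ruby 1.8/1.9, iconv stdlib), shape inferred from the changelog:
#   key = Iconv.iconv($S3SYNC_NATIVE_CHARSET, "UTF-8", item.key).join
# After (Ruby 2, as in the hunks further down):
#   key = item.key.encode($S3SYNC_NATIVE_CHARSET, "UTF-8")
utf8  = "caf\u00E9"
latin = utf8.encode("ISO-8859-1", "UTF-8") # destination encoding first, then source
puts latin.encoding                        # => ISO-8859-1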
@@ -57,7 +70,7 @@ get/puts.
 
 
 About Directories, the bane of any S3 sync-er
----------------------------------------------
+---------------------------------------------
 In S3 there's no actual concept of folders, just keys and nodes. So, every tool
 uses its own proprietary way of storing dir info (my scheme being the best
 naturally) and in general the methods are not compatible.
@@ -76,7 +89,7 @@ s3sync's normal operation is to compare the file size and MD5 hash of each item
 to decide whether it needs syncing. On the S3 side, these hashes are stored and
 returned to us as the "ETag" of each item when the bucket is listed, so it's
 very easy. On the local side, the MD5 must be calculated by pushing every byte
-in the file through the MD5 algorithm. This is CPU and IO intensive!
+in the file through the MD5 algorithm. This is CPU and IO intensive!
 
 Thus you can specify the option --no-md5. This will compare the upload time on
 S3 to the "last modified" time on the local item, and not do md5 calculations
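For a plain (non-multipart) PUT, S3's ETag is the hex MD5 of the object body, which is what makes the comparison above possible without downloading anything. An illustrative check (needs_sync? is not a function from s3sync):

require 'digest/md5'

# Compare a local file against a listed S3 entry by size first, then MD5/ETag.
def needs_sync?(local_path, remote_etag, remote_size)
  return true if File.size(local_path) != remote_size
  Digest::MD5.file(local_path).hexdigest != remote_etag.delete('"')
end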
@@ -91,7 +104,7 @@ behavior.
 A word on SSL_CERT_DIR:
 -----------------------
 On my debian install I didn't find any root authority public keys. I installed
-some by running this shell archive:
+some by running this shell archive:
 http://mirbsd.mirsolutions.de/cvs.cgi/src/etc/ssl.certs.shar
 (You have to click download, and then run it wherever you want the certs to be
 placed). I do not in any way assert that these certificates are good,
@@ -114,7 +127,7 @@ using more than one CA.
 Getting started:
 ----------------
 Invoke by typing s3sync.rb and you should get a nice usage screen.
-Options can be specified in short or long form (except --delete, which has no
+Options can be specified in short or long form (except --delete, which has no
 short form)
 
 ALWAYS TEST NEW COMMANDS using --dryrun(-n) if you want to see what will be
@@ -138,10 +151,10 @@ the command line you specify is not going to do something terrible to your
 cherished and irreplaceable data.
 
 
-Updates and other discussion:
------------------------------
+Updates and other discussion:
+-----------------------------
 The latest version of s3sync should normally be at:
-http://s3.amazonaws.com/ServEdge_pub/s3sync/s3sync.tar.gz
+http://s3.amazonaws.com/ServEdge_pub/s3sync/s3sync.tar.gz
 and the Amazon S3 forums probably have a few threads going on it at any given
 time. I may not always see things posted to the threads, so if you want you can
 contact me at gbs-s3@10forward.com too.
@@ -153,7 +166,7 @@ contact me at gbs-s3@10forward.com too.
 
 == SYNOPSIS:
 
-Examples:
+Examples:
 ---------
 (using S3 bucket 'mybucket' and prefix 'pre')
 Put the local etc directory itself into S3
@@ -181,19 +194,19 @@ Create a new bucket:
 
 Create a new bucket in the EU:
   s3cmd.rb createbucket BucketName EU
-
+
 Find out the location constraint of a bucket:
   s3cmd.rb location BucketName
 
 Delete an old bucket you don't want any more:
   s3cmd.rb deletebucket BucketName
-
+
 Find out what's in a bucket, 10 lines at a time:
   s3cmd.rb list BucketName 10
-
+
 Only look in a particular prefix:
   s3cmd.rb list BucketName:startsWithThis
-
+
 Look in the virtual "directory" named foo;
 lists sub-"directories" and keys that are at this level.
 Note that if you specify a delimiter you must specify a max before it.
@@ -205,17 +218,17 @@ Delete a key:
 
 Delete all keys that match (like a combo between list and delete):
   s3cmd.rb deleteall BucketName:SomePrefix
-
-Only pretend you're going to delete all keys that match, but list them:
+
+Only pretend you're going to delete all keys that match, but list them:
   s3cmd.rb --dryrun deleteall BucketName:SomePrefix
-
+
 Delete all keys in a bucket (leaving the bucket):
   s3cmd.rb deleteall BucketName
-
+
 Get a file from S3 and store it to a local file
   s3cmd.rb get BucketName:TheFileOnS3.txt ALocalFile.txt
-
-Put a local file up to S3
+
+Put a local file up to S3
 Note we don't automatically set mime type, etc.
 NOTE that the order of the options doesn't change. S3 stays first!
   s3cmd.rb put BucketName:TheFileOnS3.txt ALocalFile.txt
@@ -250,7 +263,7 @@ gem 'frahugo-s3sync', :git => 'git://github.com/frahugo/s3sync.git'
 
 Your environment:
 -----------------
-s3sync needs to know several interesting values to work right. It looks for
+s3sync needs to know several interesting values to work right. It looks for
 them in the following environment variables -or- a s3config.yml file.
 In the yml case, the names need to be lowercase (see example file).
 Furthermore, the yml is searched for in the following locations, in order:
@@ -261,7 +274,7 @@ Furthermore, the yml is searched for in the following locations, in order:
 Required:
   AWS_ACCESS_KEY_ID
   AWS_SECRET_ACCESS_KEY
-
+
 If you don't know what these are, then s3sync is probably not the
 right tool for you to be starting out with.
 Optional:
@@ -276,19 +289,19 @@ Optional:
 AWS_CALLING_FORMAT - Defaults to REGULAR
   REGULAR    # http://s3.amazonaws.com/bucket/key
   SUBDOMAIN  # http://bucket.s3.amazonaws.com/key
-  VANITY     # http://<vanity_domain>/key
+  VANITY     # http://<vanity_domain>/key
 
 Important: For EU-located buckets you should set the calling format to SUBDOMAIN
-Important: For US buckets with CAPS or other weird traits set the calling format
+Important: For US buckets with CAPS or other weird traits set the calling format
 to REGULAR
 
-I use "envdir" from the daemontools package to set up my env
+I use "envdir" from the daemontools package to set up my env
 variables easily: http://cr.yp.to/daemontools/envdir.html
 For example:
   envdir /root/s3sync/env /root/s3sync/s3sync.rb -etc etc etc
-I know there are other similar tools out there as well.
+I know there are other similar tools out there as well.
 
-You can also just call it in a shell script where you have exported the vars
+You can also just call it in a shell script where you have exported the vars
 first such as:
   #!/bin/bash
   export AWS_ACCESS_KEY_ID=valueGoesHere
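An illustrative loader for the lookup order described above: environment variables first, then a lowercase-keyed s3config.yml (the search paths here are assumptions, not s3sync's exact list):

require 'yaml'

def aws_credentials
  key, secret = ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY']
  return [key, secret] if key && secret

  # Assumed search order; see the README section above for the real one.
  paths = ["#{ENV['HOME']}/.s3conf/s3config.yml", '/etc/s3conf/s3config.yml']
  path  = paths.find { |p| File.exist?(p) }
  raise 'no AWS credentials found' unless path
  conf = YAML.load_file(path)
  [conf['aws_access_key_id'], conf['aws_secret_access_key']]
end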
data/VERSION
CHANGED
@@ -1 +1 @@
-1.3.8
+1.4.1
data/bin/s3cmd
CHANGED
@@ -40,7 +40,7 @@ module S3sync
 --help -h --verbose -v --dryrun -n
 --ssl -s --debug -d --progress
 --expires-in=( <# of seconds> | [#d|#h|#m|#s] )
-
+
 Commands:
 #{name} listbuckets [headers]
 #{name} createbucket <bucket> [constraint (i.e. EU)]
@@ -86,7 +86,7 @@ ENDUSAGE
 # ---------- COMMAND PROCESSING ---------- #
 command, path, file = ARGV
 
-s3cmdUsage("You didn't set up your environment variables; see README.txt") if not($AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY)
+s3cmdUsage("You didn't set up your environment variables; see README.txt") if not($AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY)
 s3cmdUsage("Need a command (etc)") if not command
 
 path = '' unless path
@@ -114,7 +114,7 @@ ENDUSAGE
 res = s3cmdList(bucket, path, nil, nil, marker)
 res.entries.each do |item|
   # the s3 commands (with my modified UTF-8 conversion) expect native char encoding input
-  key =
+  key = item.key.encode($S3SYNC_NATIVE_CHARSET, "UTF-8")
   $stderr.puts "delete #{bucket}:#{key} #{headers.inspect if headers}" if $S3syncOptions['--verbose']
   S3try(:delete, bucket, key) unless $S3syncOptions['--dryrun']
 end
@@ -122,7 +122,7 @@ ENDUSAGE
 more = res.properties.is_truncated
 marker = (res.properties.next_marker)? res.properties.next_marker : ((res.entries.length > 0) ? res.entries.last.key : nil)
 # get this into local charset; when we pass it to s3 that is what's expected
-marker =
+marker = marker.encode($S3SYNC_NATIVE_CHARSET, "UTF-8") if marker
 end
 
 when "list"
@@ -138,19 +138,19 @@ ENDUSAGE
 res = s3cmdList(bucket, path, max, delim, marker, headers)
 if delim
   res.common_prefix_entries.each do |item|
-    puts "dir: " +
+    puts "dir: " + item.prefix.encode($S3SYNC_NATIVE_CHARSET, "UTF-8")
   end
   puts "--------------------"
 end
 res.entries.each do |item|
-  puts
+  puts item.key.encode($S3SYNC_NATIVE_CHARSET, "UTF-8")
 end
 if res.properties.is_truncated
   printf "More? Y/n: "
   more = (STDIN.gets.match('^[Yy]?$'))
   marker = (res.properties.next_marker)? res.properties.next_marker : ((res.entries.length > 0) ? res.entries.last.key : nil)
   # get this into local charset; when we pass it to s3 that is what's expected
-  marker =
+  marker = marker.encode($S3SYNC_NATIVE_CHARSET, "UTF-8") if marker
 else
   more = false
 end
@@ -264,7 +264,7 @@ ENDUSAGE
 res = s3cmdList(bucket, path, nil, nil, marker)
 res.entries.each do |item|
   # the s3 commands (with my modified UTF-8 conversion) expect native char encoding input
-  path =
+  path = item.key.encode($S3SYNC_NATIVE_CHARSET, "UTF-8")
 
   file = path.gsub(src_path, dest_key)
 
@@ -284,7 +284,7 @@ ENDUSAGE
 more = res.properties.is_truncated
 marker = (res.properties.next_marker)? res.properties.next_marker : ((res.entries.length > 0) ? res.entries.last.key : nil)
 # get this into local charset; when we pass it to s3 that is what's expected
-marker =
+marker = marker.encode($S3SYNC_NATIVE_CHARSET, "UTF-8") if marker
 end
 
 when "headers"
@@ -300,7 +300,7 @@ ENDUSAGE
 res = s3cmdList(bucket, path, nil, nil, marker)
 res.entries.each do |item|
   # the s3 commands (with my modified UTF-8 conversion) expect native char encoding input
-  key =
+  key = item.key.encode($S3SYNC_NATIVE_CHARSET, "UTF-8")
 
   tmpHeaders = headers.merge({
     "x-amz-copy-source" => "/#{bucket}/#{URI.escape(key)}",
@@ -322,7 +322,7 @@ ENDUSAGE
 more = res.properties.is_truncated
 marker = (res.properties.next_marker)? res.properties.next_marker : ((res.entries.length > 0) ? res.entries.last.key : nil)
 # get this into local charset; when we pass it to s3 that is what's expected
-marker =
+marker = marker.encode($S3SYNC_NATIVE_CHARSET, "UTF-8") if marker
 end
 
 
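Every `marker = ...` hunk in s3cmd above is the same S3 list-pagination idiom. Its skeleton, with s3cmdList being the script's own helper and `handle` a placeholder for the per-command work:

more, marker = true, nil
while more
  res = s3cmdList(bucket, path, nil, nil, marker)   # s3sync's list helper
  res.entries.each { |item| handle(item) }          # `handle` is hypothetical
  more   = res.properties.is_truncated
  marker = res.properties.next_marker || (res.entries.last && res.entries.last.key)
  # keep the marker in the native charset, as the hunks above do
  marker = marker.encode($S3SYNC_NATIVE_CHARSET, 'UTF-8') if marker
end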
data/bin/s3sync
CHANGED
@@ -60,18 +60,18 @@ module S3sync
 [ '--public-read','-p', GetoptLong::NO_ARGUMENT ],
 [ '--delete', GetoptLong::NO_ARGUMENT ],
 [ '--verbose', '-v', GetoptLong::NO_ARGUMENT ],
-[ '--dryrun', '-n', GetoptLong::NO_ARGUMENT ],
+[ '--dryrun', '-n', GetoptLong::NO_ARGUMENT ],
 [ '--debug', '-d', GetoptLong::NO_ARGUMENT ],
 [ '--memory', '-m', GetoptLong::NO_ARGUMENT ],
 [ '--progress', GetoptLong::NO_ARGUMENT ],
 [ '--expires', GetoptLong::REQUIRED_ARGUMENT ],
 [ '--cache-control', GetoptLong::REQUIRED_ARGUMENT ],
 [ '--exclude', GetoptLong::REQUIRED_ARGUMENT ],
-[ '--gzip', GetoptLong::REQUIRED_ARGUMENT ],
+[ '--gzip', GetoptLong::REQUIRED_ARGUMENT ],
 [ '--key', '-k', GetoptLong::REQUIRED_ARGUMENT],
 [ '--secret', GetoptLong::REQUIRED_ARGUMENT],
 [ '--make-dirs', GetoptLong::NO_ARGUMENT ],
-[ '--no-md5', GetoptLong::NO_ARGUMENT ]
+[ '--no-md5', GetoptLong::NO_ARGUMENT ]
 )
 
 def S3sync.usage(message = nil)
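The option table above is plain GetoptLong from the standard library; it is consumed roughly like this (a reduced sketch, not s3sync's actual loop):

require 'getoptlong'

opts = GetoptLong.new(
  [ '--verbose', '-v', GetoptLong::NO_ARGUMENT ],
  [ '--exclude',       GetoptLong::REQUIRED_ARGUMENT ]
)
options = {}
# For NO_ARGUMENT options GetoptLong yields "" as the argument.
opts.each { |opt, arg| options[opt] = arg.empty? ? true : arg }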
@@ -79,11 +79,11 @@ module S3sync
 name = $0.split('/').last
 $stderr.puts <<-ENDUSAGE
 #{name} [options] <source> <destination>\t\tversion #{$S3SYNC_VERSION}
---help -h --verbose -v --dryrun -n
+--help -h --verbose -v --dryrun -n
 --ssl -s --recursive -r --delete
 --public-read -p --expires="<exp>" --cache-control="<cc>"
 --exclude="<regexp>" --progress --debug -d
---key -k --secret -s --make-dirs
+--key -k --secret -s --make-dirs
 --no-md5 --gzip
 One of <source> or <destination> must be of S3 format, the other a local path.
 Reminders:
@@ -108,13 +108,13 @@ ENDUSAGE
 if $S3syncOptions['--key']
   $AWS_ACCESS_KEY_ID = $S3syncOptions['--key']
 end
-
+
 if $S3syncOptions['--secret']
   $AWS_SECRET_ACCESS_KEY = $S3syncOptions['--secret']
 end
 
 # ---------- CONNECT ---------- #
-S3sync::s3trySetup
+S3sync::s3trySetup
 
 # ---------- PREFIX PROCESSING ---------- #
 def S3sync.s3Prefix?(pre)
@@ -122,7 +122,7 @@ ENDUSAGE
 pre.include?(':') and not pre.match('^[A-Za-z]:[\\\\/]')
 end
 sourcePrefix, destinationPrefix = ARGV
-usage("You didn't set up your environment variables; see README.txt") if not($AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY)
+usage("You didn't set up your environment variables; see README.txt") if not($AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY)
 usage('Need a source and a destination') if sourcePrefix == nil or destinationPrefix == nil
 usage('Both arguments can\'t be on S3') if s3Prefix?(sourcePrefix) and s3Prefix?(destinationPrefix)
 usage('One argument must be on S3') if !s3Prefix?(sourcePrefix) and !s3Prefix?(destinationPrefix)
@@ -161,7 +161,7 @@ ENDUSAGE
 # canonicalize the local stuff
 # but that can kill a trailing slash, which we need to preserve long enough to know whether we mean "the dir" or "its contents"
 # it will get re-stripped by the local generator after expressing this knowledge
-localTrailingSlash = localPrefix.match(%r{/$})
+localTrailingSlash = localPrefix.match(%r{/$})
 localPrefix.replace(File.expand_path(localPrefix))
 localPrefix += '/' if localTrailingSlash
 debug("localPrefix #{localPrefix}")
@@ -181,7 +181,7 @@ ENDUSAGE
 debug("localTreeRecurse #{prefix} #{path}")
 #if $S3syncOptions['--memory']
 #  $stderr.puts "Starting local recurse"
-#  stats = ostats stats
+#  stats = ostats stats
 #end
 d = nil
 begin
@@ -198,7 +198,7 @@ ENDUSAGE
 # the following sleight of hand is to make the recursion match the way s3 sorts
 # take for example the directory 'foo' and the file 'foo.bar'
 # when we encounter the dir we would want to recurse into it
-# but S3 would just say 'period < slash' and sort 'foo.bar' between the dir node
+# but S3 would just say 'period < slash' and sort 'foo.bar' between the dir node
 # and the contents in that 'dir'
 #
 # so the solution is to not recurse into the directory until the point where
@@ -252,7 +252,7 @@ ENDUSAGE
 end
 #if $S3syncOptions['--memory']
 #  $stderr.puts "Ending local recurse"
-#  stats = ostats stats
+#  stats = ostats stats
 #end
 end
 # a bit of a special case for local, since "foo/" and "foo" are essentially treated the same by file systems
@@ -266,10 +266,10 @@ ENDUSAGE
 else
   # trailing slash, so ignore the root itself, and just go into the first level
   localPrefixTrim.sub!(%r{/$}, "") # strip the slash because of how we do local node slash accounting in the recurse above
-  localTreeRecurse(g, localPrefixTrim, "")
+  localTreeRecurse(g, localPrefixTrim, "")
 end
 end
-
+
 # a generator that will return the nodes in the S3 tree one by one
 # sorted and decorated for easy comparison with the local tree
 s3Tree = Enumerator.new do |g|
@@ -305,16 +305,16 @@ ENDUSAGE
 # get rid of the big s3 objects asap, just save light-weight nodes and strings
 items = tItems.collect do |item|
   if item.respond_to?('key')
-    key =
+    key = item.key.encode($S3SYNC_NATIVE_CHARSET, "UTF-8")
     Node.new(key, item.size, item.etag, item.last_modified)
   else
-
+    item.prefix.encode($S3SYNC_NATIVE_CHARSET, "UTF-8")
   end
 end
 nextPage = d.properties.is_truncated
 marker = (d.properties.next_marker)? d.properties.next_marker : ((d.entries.length > 0)? d.entries.last.key : '')
 # get this into native char set (because when we feed it back to s3 that's what it will expect)
-marker =
+marker = marker.encode($S3SYNC_NATIVE_CHARSET, "UTF-8")
 tItems = nil
 d = nil # get rid of this before recursing; it's big
 item = nil
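This hunk sits inside the s3Tree generator; as the changelog notes, the fork replaces the old thread_generator with Enumerator, so both tree walkers are lazy producers that the sync loop pulls with .next. The pattern in miniature (not s3sync's real recursion):

# A lazy depth-first file walker in the style of s3sync's tree generators.
tree = Enumerator.new do |g|
  walk = lambda do |dir|
    (Dir.entries(dir) - %w[. ..]).sort.each do |name|
      path = File.join(dir, name)
      g << path                          # hand one node to the consumer
      walk.call(path) if File.directory?(path)
    end
  end
  walk.call('.')
end

puts tree.next # raises StopIteration when exhausted, hence the `rescue nil`s in later hunks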
@@ -338,7 +338,7 @@ ENDUSAGE
 debug("skipping prefix #{excludePath} due to --exclude")
 else
   debug("prefix found: #{partialPath}")
-  s3TreeRecurse(g, bucket, prefix, partialPath) if $S3syncOptions['--recursive']
+  s3TreeRecurse(g, bucket, prefix, partialPath) if $S3syncOptions['--recursive']
 end
 end
 end
@@ -386,7 +386,7 @@ ENDUSAGE
 if $S3syncOptions['--delete']
   if destinationNode.directory?
     # have to wait
-    nodesToDelete.push(destinationNode)
+    nodesToDelete.push(destinationNode)
   else
     puts "Remove node #{destinationNode.name}" if $S3syncOptions['--verbose']
     destinationNode.delete unless $S3syncOptions['--dryrun']
@@ -398,7 +398,7 @@ ENDUSAGE
   puts "Update node #{sourceNode.name}" if $S3syncOptions['--verbose']
   destinationNode.updateFrom(sourceNode) unless $S3syncOptions['--dryrun']
 elsif $S3syncOptions['--debug']
-  $stderr.puts "Node #{sourceNode.name} unchanged"
+  $stderr.puts "Node #{sourceNode.name} unchanged"
 end
 sourceNode = sourceNode.nil? ? nil : sourceTree.next rescue nil
 destinationNode = destinationNode.nil? ? nil : destinationTree.next rescue nil
@@ -509,7 +509,7 @@ ENDUSAGE
 headers['Cache-Control'] = $S3syncOptions['--cache-control'] if $S3syncOptions['--cache-control']
 fType = @path.split('.').last
 if ($S3syncOptions['--gzip'] || "gz").split(",").include? fType
-  headers['Content-Encoding'] = "gzip"
+  headers['Content-Encoding'] = "gzip"
   fType = @path.split('.')[-2]
 end
 debug("File extension: #{fType}")
@@ -644,12 +644,12 @@ ENDUSAGE
 f = File.open(fName, 'wb')
 f = ProgressStream.new(f, fromNode.size) if $S3syncOptions['--progress']
 
-fromNode.to_stream(f)
+fromNode.to_stream(f)
 f.close
 end
 # get original item out of the way
 File.unlink(@path) if File.exist?(@path)
-if fromNode.symlink?
+if fromNode.symlink?
   linkTo = ''
   File.open(fName, 'rb'){|f| linkTo = f.read}
   debug("#{@path} will be a symlink to #{linkTo}")
data/lib/s3sync/S3.rb
CHANGED
@@ -78,7 +78,7 @@ module S3
 if not bucket.empty?
   buf << "/#{bucket}"
 end
-# append the key (it might be empty string)
+# append the key (it might be empty string)
 # append a slash regardless
 buf << "/#{path}"
 
@@ -102,7 +102,7 @@ module S3
 # url encode the result of that to protect the string if it's going to
 # be used as a query string parameter.
 def S3.encode(aws_secret_access_key, str, urlencode=false)
-  digest = OpenSSL::Digest
+  digest = OpenSSL::Digest.new('sha1')
   b64_hmac =
     Base64.encode64(
       OpenSSL::HMAC.digest(digest, aws_secret_access_key, str)).strip
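The one real change here swaps the removed OpenSSL::Digest::Digest for OpenSSL::Digest; the surrounding code is AWS signature version 2, i.e. Base64(HMAC-SHA1(secret, string-to-sign)). The same computation stand-alone (the example inputs are made up):

require 'openssl'
require 'base64'

# AWS signature v2, as in S3.encode above.
def sign_v2(aws_secret_access_key, string_to_sign)
  digest = OpenSSL::Digest.new('sha1')
  Base64.encode64(
    OpenSSL::HMAC.digest(digest, aws_secret_access_key, string_to_sign)).strip
end

puts sign_v2('secretKeyGoesHere', "GET\n\n\n1437609600\n/mybucket/file.txt")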
@@ -235,9 +235,9 @@ module S3
 # does not make sense for vanity domains
 server = @server
 elsif @calling_format == CallingFormat::SUBDOMAIN
-  server = "#{bucket}.#{@server}"
+  server = "#{bucket}.#{@server}"
 elsif @calling_format == CallingFormat::VANITY
-  server = bucket
+  server = bucket
 else
   server = @server
 end
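Condensed, the host selection this hunk touches is as follows (the CallingFormat constants are from the file; the wrapper method is illustrative):

def server_for(calling_format, bucket, default_server)
  case calling_format
  when CallingFormat::SUBDOMAIN then "#{bucket}.#{default_server}" # bucket.s3.amazonaws.com
  when CallingFormat::VANITY    then bucket                        # bucket is the vanity domain
  else default_server                                              # REGULAR: path-style URLs
  end
end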
@@ -252,10 +252,10 @@ module S3
 path << "/#{key}"
 
 # build the path_argument string
-# add the ? in all cases since
+# add the ? in all cases since
 # signature and credentials follow path args
 path << '?'
-path << S3.path_args_hash_to_string(path_args)
+path << S3.path_args_hash_to_string(path_args)
 
 http = Net::HTTP.new(server, @port)
 http.use_ssl = @is_secure
@@ -329,15 +329,15 @@ module S3
 # by default, expire in 1 minute
 DEFAULT_EXPIRES_IN = 60
 
-def initialize(aws_access_key_id, aws_secret_access_key, is_secure=true,
-               server=DEFAULT_HOST, port=PORTS_BY_SECURITY[is_secure],
+def initialize(aws_access_key_id, aws_secret_access_key, is_secure=true,
+               server=DEFAULT_HOST, port=PORTS_BY_SECURITY[is_secure],
                format=CallingFormat::REGULAR)
   @aws_access_key_id = aws_access_key_id
   @aws_secret_access_key = aws_secret_access_key
   @protocol = is_secure ? 'https' : 'http'
   @server = server
   @port = port
-  @calling_format = format
+  @calling_format = format
   # by default expire
   @expires_in = DEFAULT_EXPIRES_IN
 end
@@ -443,7 +443,7 @@ module S3
 path_args["Signature"] = encoded_canonical.to_s
 path_args["Expires"] = expires.to_s
 path_args["AWSAccessKeyId"] = @aws_access_key_id.to_s
-arg_string = S3.path_args_hash_to_string(path_args)
+arg_string = S3.path_args_hash_to_string(path_args)
 
 return "#{url}/#{key}?#{arg_string}"
 end
data/lib/s3sync/S3encoder.rb
CHANGED
@@ -1,9 +1,9 @@
-# This software code is made available "AS IS" without warranties of any
-# kind. You may copy, display, modify and redistribute the software
-# code either by itself or as incorporated into your code; provided that
-# you do not remove any proprietary notices. Your use of this software
+# This software code is made available "AS IS" without warranties of any
+# kind. You may copy, display, modify and redistribute the software
+# code either by itself or as incorporated into your code; provided that
+# you do not remove any proprietary notices. Your use of this software
 # code is at your own risk and you waive any claim against the author
-# with respect to your use of this software code.
+# with respect to your use of this software code.
 # (c) 2007 s3sync.net
 #
 
@@ -14,7 +14,6 @@
 # to the underlying lib this stuff will need updating.
 
 require 'cgi'
-require 'iconv' # for UTF-8 conversion
 
 # thanks to http://www.redhillconsulting.com.au/blogs/simon/archives/000326.html
 module S3ExtendCGI
@@ -36,10 +35,10 @@ module S3ExtendCGI
 attr_writer :nativeCharacterEncoding
 @@useUTF8InEscape = false
 attr_writer :useUTF8InEscape
-
+
 def S3Extend_escape(string)
   result = string
-  result =
+  result = string.encode("UTF-8", @nativeCharacterEncoding) if @useUTF8InEscape
   result = S3Extend_escape_orig(result)
   result.gsub!(/%2f/i, "/") if @exemptSlashesInEscape
   result.gsub!("+", "%20") if @usePercent20InEscape
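The patched S3Extend_escape reduces to: convert to UTF-8 (now via String#encode), CGI-escape, then exempt slashes and prefer %20 over +. As one standalone function (illustrative, not the module's API):

require 'cgi'

def s3_escape(str, native_charset = 'UTF-8')
  out = CGI.escape(str.encode('UTF-8', native_charset))
  out.gsub(/%2f/i, '/').gsub('+', '%20')
end

puts s3_escape('dir name/file ü.txt') # => dir%20name/file%20%C3%BC.txt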
metadata
CHANGED
@@ -1,33 +1,24 @@
---- !ruby/object:Gem::Specification
+--- !ruby/object:Gem::Specification
 name: frahugo-s3sync
-version: !ruby/object:Gem::Version
-
-  segments:
-  - 1
-  - 3
-  - 8
-  version: 1.3.8
+version: !ruby/object:Gem::Version
+  version: 1.4.1
 platform: ruby
-authors:
+authors:
 - frahugo
 autorequire:
 bindir: bin
 cert_chain: []
-
-date: 2010-12-16 00:00:00 -05:00
-default_executable:
+date: 2015-07-23 00:00:00.000000000 Z
 dependencies: []
-
 description:
 email: hugo@cekoya.com
-executables:
+executables:
 - s3sync
 - s3cmd
 extensions: []
-
-extra_rdoc_files:
+extra_rdoc_files:
 - README.rdoc
-files:
+files:
 - History.txt
 - Manifest.txt
 - PostInstall.txt
@@ -51,38 +42,30 @@ files:
 - script/generate
 - test/test_helper.rb
 - test/test_s3sync.rb
-
-homepage: http://s3sync.net
+homepage: https://github.com/frahugo/s3sync
 licenses: []
-
+metadata: {}
 post_install_message:
-rdoc_options:
-- --charset=UTF-8
-require_paths:
+rdoc_options:
+- "--charset=UTF-8"
+require_paths:
 - lib
-required_ruby_version: !ruby/object:Gem::Requirement
-
-  requirements:
+required_ruby_version: !ruby/object:Gem::Requirement
+  requirements:
   - - ">="
-    - !ruby/object:Gem::Version
-
-
-
-required_rubygems_version: !ruby/object:Gem::Requirement
-  none: false
-  requirements:
+    - !ruby/object:Gem::Version
+      version: '0'
+required_rubygems_version: !ruby/object:Gem::Requirement
+  requirements:
   - - ">="
-    - !ruby/object:Gem::Version
-
-      - 0
-      version: "0"
+    - !ruby/object:Gem::Version
+      version: '0'
 requirements: []
-
 rubyforge_project:
-rubygems_version:
+rubygems_version: 2.4.6
 signing_key:
 specification_version: 3
-summary: Fork of s3sync to be compatible with ruby 1.9
+summary: Fork of s3sync to be compatible with ruby 1.9 & 2
-test_files:
+test_files:
 - test/test_helper.rb
 - test/test_s3sync.rb