backup 3.0.8 → 3.0.9
- data/Gemfile.lock +3 -3
- data/README.md +10 -10
- data/bin/backup +11 -14
- data/lib/backup/dependency.rb +1 -1
- data/lib/backup/version.rb +1 -1
- metadata +2 -18
data/Gemfile.lock
CHANGED
@@ -17,10 +17,10 @@ GEM
      rack (< 2, >= 1.1.0)
    faraday_middleware (0.3.2)
      faraday (~> 0.5.4)
-    fog (0.
+    fog (0.7.0)
      builder
      excon (>= 0.5.5)
-      formatador (>= 0.
+      formatador (>= 0.1.1)
      json
      mime-types
      net-ssh (>= 2.0.23)
@@ -88,7 +88,7 @@ PLATFORMS

DEPENDENCIES
  dropbox (~> 1.2.3)
-  fog (~> 0.
+  fog (~> 0.7.0)
  fuubar
  infinity_test
  mail (~> 2.2.15)
data/README.md
CHANGED
@@ -165,14 +165,14 @@ Below you see a sample configuration file you could create for Backup 3. Just re
    s3.keep = 20
  end

-  sync_with
-
-
-
-
-
-
-
+  sync_with S3 do |s3|
+    s3.access_key_id = "my_access_key_id"
+    s3.secret_access_key = "my_secret_access_key"
+    s3.bucket = "my-bucket"
+    s3.path = "/backups"
+    s3.mirror = true
+
+    s3.directories do |directory|
      directory.add "/var/apps/my_app/public/videos"
      directory.add "/var/apps/my_app/public/music"
    end
@@ -192,13 +192,13 @@ Below you see a sample configuration file you could create for Backup 3. Just re

### Explanation for the above example

-First it dumps all the tables inside the MySQL database "my_sample_mysql_db", except for the "logs" table. It also dumps the MongoDB database "my_sample_mongo_db", but only the collections "users", "events" and "posts". After that it'll create a "user_avatars.tar" archive with all the uploaded avatars of the users. After that it'll create a "logs.tar" archive with the "production.log", "newrelic_agent.log" and "other.log" logs. After that it'll compress the backup file using Gzip (with the mode set to "best", rather than "fast" for best compression). After that it'll encrypt the whole backup file (everything included: databases, archives) using "OpenSSL". Now the Backup can only be extracted when you know the password to decrypt it ("my_secret_password" in this case). Then it'll store the backup file to Amazon S3 in to 'my_bucket/backups'. Next, we're going to use the
+First it dumps all the tables inside the MySQL database "my_sample_mysql_db", except for the "logs" table. It also dumps the MongoDB database "my_sample_mongo_db", but only the collections "users", "events" and "posts". After that it'll create a "user_avatars.tar" archive with all the uploaded avatars of the users. After that it'll create a "logs.tar" archive with the "production.log", "newrelic_agent.log" and "other.log" logs. After that it'll compress the backup file using Gzip (with the mode set to "best", rather than "fast" for best compression). After that it'll encrypt the whole backup file (everything included: databases, archives) using "OpenSSL". Now the Backup can only be extracted when you know the password to decrypt it ("my_secret_password" in this case). Then it'll store the backup file to Amazon S3 in to 'my_bucket/backups'. Next, we're going to use the S3 Syncer to create a mirror of the `/var/apps/my_app/public/videos` and `/var/apps/my_app/public/music` directories on Amazon S3. (This will not package, compress, encrypt - but will directly sync the specified directories "as is" to your S3 bucket). Finally, it'll notify me by email if the backup raises an error/exception during the process, indicating that something went wrong. However, it does not notify me by email when successful backups occur because I set `mail.on_success` to `false`. It'll also notify me by Twitter when failed backups occur, but also when successful ones occur because I set the `tweet.on_success` to `true`.

### Things to note

The __keep__ option I passed in to the S3 storage location enables "Backup Cycling". In this case, after the 21st backup file gets pushed, it'll exceed the 20 backup limit, and remove the oldest backup from the S3 bucket.

-The
+The __S3__ Syncer ( `sync_with` ) is a different kind of __Storage__ method. As mentioned above, it does not follow the same procedure as the __Storage__ ( `store_with` ) method. A Storage method stores the final result of a copied/organized/packaged/compressed/encrypted file to the desired remote location. A Syncer directly syncs the specified directories and **completely bypasses** the copy/organize/package/compress/encrypt process. This is especially good for backing up directories containing gigabytes of data, such as images, music, videos, and similar large formats. Also, rather than transferring the whole directory every time, it'll only transfer files in all these directories that have been modified or added, thus, saving huge amounts of bandwidth, cpu load and time. You're also not bound to the 5GB per file restriction like the **Storage** method, unless you actually have files in these directories that are >= 5GB, which often is unlikely. Even if the whole directory (and sub-directories) are > 5GB (split over multiple files), it shouldn't be a problem as long as you don't have any *single* file that is 5GB in size. Also, in the above example you see `s3.mirror = true`, this tells the S3 Syncer to keep a "mirror" of the local directories in the S3 bucket. This means that if you delete a file locally, the next time it syncs, it'll also delete the file from the S3 bucket, keeping the local filesystem 1:1 with the S3 bucket.

The __Mail__ notifier. I have not provided the SMTP options to use my Gmail account to notify myself when exceptions are raised during the process. So this won't work, check out the wiki on how to configure this. I left it out in this example.

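The __Mail__ notifier paragraph above deliberately leaves the SMTP settings to the wiki. For orientation only, here is a hypothetical sketch of what a filled-in notifier block could look like in a Backup 3 configuration; the `notify_by Mail` form and every SMTP setting name below are assumptions (only `mail.on_success` appears in the README explanation), so consult the project wiki for the actual DSL:

```ruby
# Hypothetical sketch -- setting names are assumed, not taken from this diff.
notify_by Mail do |mail|
  mail.on_success = false            # mentioned in the README explanation above
  mail.from       = "backup@example.com"
  mail.to         = "me@example.com"
  mail.address    = "smtp.gmail.com" # assumed SMTP option names; see the wiki
  mail.port       = 587
  mail.user_name  = "me@gmail.com"
  mail.password   = "my_secret_password"
end
```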
data/bin/backup
CHANGED
@@ -160,21 +160,17 @@ class BackupCLI < Thor

    temp_file << "end\n\n"
    temp_file.close
-
-
-
-
-
-
-
-    else
-      FileUtils.mkdir_p(Backup::PATH)
-      overwrite?(File.join(Backup::PATH, 'config.rb'))
-      File.open(File.join(Backup::PATH, 'config.rb'), 'w') do |file|
+
+    path = options[:path] || Backup::PATH
+    config = File.join(path, 'config.rb')
+
+    if overwrite?(config)
+      FileUtils.mkdir_p(path)
+      File.open(config, 'w') do |file|
        file.write( File.read(temp_file.path) )
-        puts "Generated configuration file in '#{
+        puts "Generated configuration file in '#{ config }'"
      end
-    end
+    end
    temp_file.unlink
  end

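Reassembled from the hunk above and stripped of diff markers, the generator's new write path reads roughly as follows. `options`, `Backup::PATH`, `overwrite?`, and `temp_file` all come from the surrounding bin/backup code and are assumed here:

```ruby
# After the temporary config has been written and closed:
path = options[:path] || Backup::PATH   # use the :path option when given, else the default
config = File.join(path, 'config.rb')

if overwrite?(config)                   # proceed only if absent, or the user said yes
  FileUtils.mkdir_p(path)
  File.open(config, 'w') do |file|
    file.write( File.read(temp_file.path) )
    puts "Generated configuration file in '#{ config }'"
  end
end
temp_file.unlink
```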
@@ -252,8 +248,9 @@ private

  # Helper method for asking the user if he/she wants to overwrite the file
  def overwrite?(path)
    if File.exist?(path)
-
+      return yes? "A configuration file already exists in #{ path }. Do you want to overwrite? [y/n]"
    end
+    true
  end

end
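Pieced together from the `+` and context lines, the updated helper now returns `true` when no config file exists yet and otherwise returns the user's answer to the prompt, which is what lets the `if overwrite?(config)` guard in the previous hunk skip the write when the user answers no:

```ruby
# Helper method for asking the user if he/she wants to overwrite the file
def overwrite?(path)
  if File.exist?(path)
    return yes? "A configuration file already exists in #{ path }. Do you want to overwrite? [y/n]"
  end
  true
end
```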
data/lib/backup/dependency.rb
CHANGED
data/lib/backup/version.rb
CHANGED
metadata
CHANGED
@@ -1,13 +1,8 @@
--- !ruby/object:Gem::Specification
name: backup
version: !ruby/object:Gem::Version
-  hash: 23
  prerelease:
-
-  - 3
-  - 0
-  - 8
-  version: 3.0.8
+  version: 3.0.9
platform: ruby
authors:
- Michael van Rooijen
@@ -15,7 +10,7 @@ autorequire:
bindir: bin
cert_chain: []

-date: 2011-03-
+date: 2011-03-18 00:00:00 +01:00
default_executable:
dependencies:
- !ruby/object:Gem::Dependency
@@ -26,11 +21,6 @@ dependencies:
  requirements:
  - - ~>
    - !ruby/object:Gem::Version
-      hash: 43
-      segments:
-      - 0
-      - 14
-      - 6
      version: 0.14.6
  type: :runtime
  version_requirements: *id001
@@ -192,18 +182,12 @@ required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
-      hash: 3
-      segments:
-      - 0
      version: "0"
required_rubygems_version: !ruby/object:Gem::Requirement
  none: false
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
-      hash: 3
-      segments:
-      - 0
      version: "0"
requirements: []
