s3-tar-backup 1.1.2 → 1.1.3

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA1:
3
- metadata.gz: bedb4a33137d087e7fac1a4cafdb2e468775eda7
4
- data.tar.gz: cef9b811aae9a5b8d258c07cf8513fccc71104c4
3
+ metadata.gz: dcc3e740ebd672744c213fe89db2c3edc5f7b0cf
4
+ data.tar.gz: 3f987db293940781f1182f3ddc97cb0e6358e8b6
5
5
  SHA512:
6
- metadata.gz: a94533cb57bb687c5b1a56e6500190621c6e45f106e35c1bf1f02d612042072b1d09817021f44a39f3b06fe145283f7f79408c5891b3dc4be848904175411bac
7
- data.tar.gz: cfb9b23fed51994215573effc25273587d45ce9046d0d1236951e8aa54a0bfc6b88e64f99b4c23c7468bdcaaaf851226fd98a09a7ee5a96e0637f511f8776e3f
6
+ metadata.gz: e985752f6a8f2f27e666562a8a437a4dc264972754015dfd4fba3b78d8d6d30427038b3c811b858ad3389b1f219d61e529602ee94dd9dc27d28772fecd4f65b4
7
+ data.tar.gz: 296d49a7ec8f687b0eceedc932a8b3bf7cc976a47cab29bbd3be5653f37a5b45b1921690e6107846654ce15ed6bc4843ce213379f82f4159192ef67c1fd1f750
data/README.md CHANGED
@@ -9,7 +9,7 @@ You can then restore the files at a later date.
9
9
 
10
10
  This tool was built as a replacement for duplicity, after duplicity started throwing errors and generally being a bit unreliable.
11
11
  It uses command-line tar to create incremental backup snapshots, and the aws-s3 gem to upload these to S3.
12
- It can also optionally use command-ling gpg to encrypt backups.
12
+ It can also optionally use command-line gpg to encrypt backups.
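
As a rough illustration of the incremental mechanism (a hand-written sketch, not the gem's actual `Backup` class, and with made-up file names):

```ruby
# Sketch of the kind of tar command used for incremental snapshots.
# tar records file metadata in the .snar file; on the next run it only
# archives what has changed since that snapshot.
snar    = 'backup-home.snar'              # hypothetical snapshot file
archive = 'backup-home-20110406.tar.bz2'  # hypothetical archive name
sources = ['/home']

system("tar cjf #{archive} --listed-incremental=#{snar} #{sources.join(' ')}")
```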
13
13
 
14
14
  In practice, it turns out that this tool has far lower bandwidth and CPU requirements, and can restore a backup in a fraction of the time that duplicity would take.
15
15
 
@@ -71,8 +71,9 @@ compression = <compression_type>
71
71
  ; Optional: defaults to false
72
72
  always_full = <bool>
73
73
 
74
- ; Optional: defaults to no key
75
- gpg_key = <key ID>
74
+ ; You may choose one of the following two settings
75
+ gpg_key = <key ID> ; Asymmetric encryption
76
+ password_file = <password file> ; Symmetric encryption
76
77
 
77
78
  ; You may have multiple lines of the following types.
78
79
  ; Values from here and from your profiles will be combined
@@ -117,6 +118,11 @@ This is used to say that incremental backups should never be used.
117
118
  `gpg_key` is an optional GPG Key ID to use to encrypt backups.
118
119
  This key must exist in your keyring.
119
120
  By default, no key is used and backups are not encrypted.
121
+ This may not be used at the same time as `password_file`.
122
+
123
+ `password_file` is an optional path to a file containing the password to use to encrypt backups.
124
+ By default, backups are not encrypted.
125
+ This may not be used at the same time as `gpg_key`.
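
If you want a quick way to generate such a password file, something like the following works (an illustrative snippet; the path is hypothetical and any method of writing a passphrase to a file is fine):

```ruby
# Generate a random passphrase and store it next to the config file.
require 'securerandom'

path = 'password.txt'                      # hypothetical, relative to the config file
File.write(path, SecureRandom.base64(32))
File.chmod(0600, path)                     # keep the passphrase private
```

Point `password_file` at this path from `[settings]` or from a profile section.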
120
126
 
121
127
  `source` contains the folders to be backed up.
122
128
 
@@ -144,23 +150,35 @@ dest = <bucket_name>/<path>
144
150
  exclude = </some/dir>
145
151
  pre-backup = <some command>
146
152
  post-backup = <some command>
153
+
154
+ ; You may optionally specify one of the two following keys
147
155
  gpg_key = <key ID>
156
+ password_file = <password file>
148
157
  ```
149
158
 
150
159
  `profile_name` is the name of the profile. You'll use this later.
151
160
 
152
161
  ### Encryption
153
162
 
163
+ #### Asymmetric Encryption
164
+
154
165
  `s3-tar-backup` will encrypt your backups if you specify the config key `gpg_key`, which is the ID of the key to use for encrypting backups.
155
166
  In order to create an encrypted backup, the public key with this ID must exist in your keyring: it doesn't matter if it has a passphrase or not.
156
167
  In order to restore an encrypted backup, the private key corresponding to the public key which encrypted the backup must exist in your keyring: your `gpg-agent` will prompt you for the passphrase if required.
157
168
  The `gpg_key` option is not used when restoring from backup (instead gpg works out which key to use to decrypt the backup by looking at the backup itself), which means that you can safely change the key that `s3-tar-backup` uses to encrypt backups without losing access to older backups.
158
169
 
159
- `s3-tar-backup` works out whether or not to try and decrypt a backup by looking at its file extension, which means you can safely enable or disable encryption without losing access to older backups.
170
+ `s3-tar-backup` works out whether or not to try and decrypt a backup (and whether symmetric or asymmetric encryption is used) by looking at its file extension, which means you can safely enable or disable encryption without losing access to older backups.
160
171
 
161
172
  To create a key, run `gpg --gen-key`, and follow the prompts.
162
173
  Make sure you create a backup of the private key using `gpg -a --export-secret-keys <key ID> > s3-tar-backup-secret-key.asc`.
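
For reference, key-based encryption with command-line gpg looks roughly like this (a sketch of the moving parts, not the exact command `s3-tar-backup` builds; the key ID and file names are made up):

```ruby
# Encrypt an archive to a public key; only the matching private key can decrypt it.
gpg_key = 'DEADBEEF'                          # hypothetical key ID from your keyring
archive = 'backup-home-20110406.tar.bz2'      # hypothetical archive name

system("gpg --batch --encrypt --recipient #{gpg_key} --output #{archive}.gpg #{archive}")

# To decrypt later, gpg picks the right private key from the keyring by itself:
#   gpg --output backup-home-20110406.tar.bz2 --decrypt backup-home-20110406.tar.bz2.gpg
```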
163
174
 
175
+ #### Symmetric Encryption
176
+
177
+ `s3-tar-backup` will encrypt your backups using symmetric encryption if the config key `password_file` is specified: this is the path (relative to the config file) to a file containing the passphrase to use to encrypt the backup.
178
+ This option is used when both encrypting and decrypting backups, which means that `s3-tar-backup` will not be able to decrypt backups it previously created if you change the encryption key. To work around this, you can specify the `--password-file path/to/file` command-line option: if given, this will override the password file specified in your configuration file.
179
+ If you specify an empty password file (`--password-file ''`), then gpg will prompt you for a password on every file it tries to decrypt.
180
+ To avoid having to juggle old passphrases like this, you should run a full backup whenever you change the encryption key.
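
The symmetric case looks roughly like this at the gpg level (again a sketch, not the exact command the gem builds; file names are made up):

```ruby
# Encrypt and decrypt an archive with a passphrase read from a file.
password_file = 'password.txt'                 # the file named by password_file
archive       = 'backup-home-20110406.tar.bz2' # hypothetical archive name

system("gpg --batch --symmetric --passphrase-file #{password_file} " \
       "--output #{archive}.gpg #{archive}")

# Decryption needs the same passphrase, which is why the --password-file
# command-line override exists for when the configured file has changed
# since the backup was made:
system("gpg --batch --passphrase-file #{password_file} " \
       "--output #{archive} --decrypt #{archive}.gpg")
```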
181
+
164
182
  ### Example config file
165
183
 
166
184
  ```ini
@@ -189,6 +207,8 @@ source = /root
189
207
  exclude = .backup
190
208
  ; Do full backups less often
191
209
  full_if_older_than = 4W
210
+ ; Use symmetric encryption for this profile
211
+ password_file = password.txt
192
212
 
193
213
  [profile "mysql"]
194
214
  pre-backup = mysqldump -uuser -ppassword --all-databases > /tmp/mysql_dump.sql
@@ -216,7 +236,7 @@ You can also specify multiple profiles.
216
236
 
217
237
  If no profile is specified, all profiles are backed up.
218
238
 
219
- `--full` will force s3-tar-backup to do a full backup (instead of an incremental one), regardless of which it thinks it should do based on your cofnig file.
239
+ `--full` will force s3-tar-backup to do a full backup (instead of an incremental one), regardless of which it thinks it should do based on your config file.
220
240
 
221
241
  `--verbose` will get tar to list the files that it is backing up.
222
242
 
@@ -239,7 +259,7 @@ s3-tar-backup will go through all old backups, and remove those specified by `re
239
259
  ### Restore
240
260
 
241
261
  ```
242
- s3-tar-backup --config <config_file> [--profile <profile>] --restore <restore_dir> [--restore_date <restore_date>] [--verbose]
262
+ s3-tar-backup --config <config_file> [--profile <profile>] --restore <restore_dir> [--restore_date <restore_date>] [--password-file <password_file>] [--verbose]
243
263
  ```
244
264
 
245
265
  This command will get s3-tar-backup to fetch all the necessary data to restore the latest version of your backup (or an older one if you use `--restore-date`), and stick it into `<restore_dir>`.
@@ -247,6 +267,8 @@ This command will get s3-tar-backup to fetch all the necessary data to restore t
247
267
  Using `<restore_date>`, you can tell s3-tar-backup to restore the first backup before the specified date.
248
268
  The date format to use is `YYYYMM[DD[hh[mm[ss]]]]`, for example `20110406` means `2011-04-06 00:00:00`, while `201104062143` means `2011-04-06 21:43:00`.
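
A small Ruby sketch of how such a date string is interpreted; this mirrors the parsing done by the restore code in this release:

```ruby
# Parse a YYYYMM[DD[hh[mm[ss]]]] string into a Time; omitted fields default
# to the start of that period (day 1, hour 0, and so on).
def parse_restore_date(str)
  m = str.match(/(\d{4})(\d\d)(\d\d)?(\d\d)?(\d\d)?(\d\d)?/)
  raise 'Unknown date format' if m.nil?
  Time.new(*m[1..-1].map { |s| s.to_i if s })
end

parse_restore_date('20110406')      # => 2011-04-06 00:00:00 (local time)
parse_restore_date('201104062143')  # => 2011-04-06 21:43:00 (local time)
```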
249
269
 
270
+ Use `--password-file` to override the file containing the symmetric encryption key to use to decrypt the backup (or `--password-file ''` to ask GPG to prompt for the password).
271
+
250
272
  `--verbose` makes tar spit out the files that it restores.
251
273
 
252
274
  Examples:
@@ -1,319 +1,353 @@
1
- require 'aws-sdk'
2
1
  require 'trollop'
3
2
  require 's3_tar_backup/ini_parser'
4
3
  require 's3_tar_backup/backup'
5
4
  require 's3_tar_backup/version'
5
+ require 's3_tar_backup/backend/s3_backend'
6
+ require 's3_tar_backup/backend/file_backend'
6
7
 
7
8
  module S3TarBackup
8
- class Main
9
- UPLOAD_TRIES = 5
10
-
11
- def run
12
- opts = Trollop::options do
13
- version VERSION
14
- banner "Backs up files to, and restores files from, Amazon's S3 storage, using tar incremental backups\n\n" \
15
- "Usage:\ns3-tar-backup -c config.ini [-p profile] --backup [--full] [-v]\n" \
16
- "s3-tar-backup -c config.ini [-p profile] --cleanup [-v]\n" \
17
- "s3-tar-backup -c config.ini [-p profile] --restore restore_dir\n\t[--restore_date date] [-v]\n" \
18
- "s3-tar-backup -c config.ini [-p profile] --backup-config [--verbose]\n" \
19
- "s3-tar-backup -c config.ini [-p profile] --list-backups\n\n" \
20
- "Option details:\n"
21
- opt :config, "Configuration file", :short => 'c', :type => :string, :required => true
22
- opt :backup, "Make an incremental backup"
23
- opt :full, "Make the backup a full backup"
24
- opt :profile, "The backup profile(s) to use (default all)", :short => 'p', :type => :strings
25
- opt :cleanup, "Clean up old backups"
26
- opt :restore, "Restore a backup to the specified dir", :type => :string
27
- opt :restore_date, "Restore a backup from the specified date. Format YYYYMM[DD[hh[mm[ss]]]]", :type => :string
28
- opt :backup_config, "Backs up the specified configuration file"
29
- opt :list_backups, "List the stored backup info for one or more profiles"
30
- opt :verbose, "Show verbose output", :short => 'v'
31
- conflicts :backup, :cleanup, :restore, :backup_config, :list_backups
32
- end
33
-
34
-
35
- Trollop::die "--full requires --backup" if opts[:full] && !opts[:backup]
36
- Trollop::die "--restore-date requires --restore" if opts[:restore_date_given] && !opts[:restore_given]
37
- unless opts[:backup] || opts[:cleanup] || opts[:restore_given] || opts[:backup_config] || opts[:list_backups]
38
- Trollop::die "Need one of --backup, --cleanup, --restore, --backup-config, --list-backups"
39
- end
40
-
41
- begin
42
- raise "Config file #{opts[:config]} not found" unless File.exists?(opts[:config])
43
- config = IniParser.new(opts[:config]).load
44
- profiles = opts[:profile] || config.find_sections(/^profile\./).keys.map{ |k| k.to_s.split('.', 2)[1] }
45
- @s3 = connect_s3(
46
- ENV['AWS_ACCESS_KEY_ID'] || config['settings.aws_access_key_id'],
47
- ENV['AWS_SECRET_ACCESS_KEY'] || config['settings.aws_secret_access_key'],
48
- config.get('settings.aws_region', false)
49
- )
50
-
51
- # This is a bit of a special case
52
- if opts[:backup_config]
53
- dest = config.get('settings.dest', false)
54
- raise "You must specify a single profile (used to determine the location to back up to) " \
55
- "if backing up config and dest key is not in [settings]" if !dest && profiles.count != 1
56
- dest ||= config["profile.#{profiles[0]}.dest"]
57
- puts "===== Backing up config file #{opts[:config]} ====="
58
- bucket, prefix = (config.get('settings.dest', false) || config["profile.#{profiles[0]}.dest"]).split('/', 2)
59
- puts "Uploading #{opts[:config]} to #{bucket}/#{prefix}/#{File.basename(opts[:config])}"
60
- upload(opts[:config], bucket, "#{prefix}/#{File.basename(opts[:config])}")
61
- return
62
- end
63
-
64
- profiles.dup.each do |profile|
65
- raise "No such profile: #{profile}" unless config.has_section?("profile.#{profile}")
66
- opts[:profile] = profile
67
- backup_config = gen_backup_config(opts[:profile], config)
68
- prev_backups = get_objects(backup_config[:bucket], backup_config[:dest_prefix], opts[:profile])
69
- perform_backup(opts, prev_backups, backup_config) if opts[:backup]
70
- perform_cleanup(prev_backups, backup_config) if opts[:backup] || opts[:cleanup]
71
- perform_restore(opts, prev_backups, backup_config) if opts[:restore_given]
72
- perform_list_backups(prev_backups, backup_config) if opts[:list_backups]
73
- end
74
- rescue Exception => e
75
- raise e
76
- Trollop::die e.to_s
77
- end
78
- end
79
-
80
- def connect_s3(access_key, secret_key, region)
81
- warn "No AWS region specified (config key settings.s3_region). Assuming eu-west-1" unless region
82
- AWS::S3.new(access_key_id: access_key, secret_access_key: secret_key, region: region || 'eu-west-1')
83
- end
84
-
85
- def gen_backup_config(profile, config)
86
- bucket, dest_prefix = (config.get("profile.#{profile}.dest", false) || config['settings.dest']).split('/', 2)
87
- gpg_key = config.get("profile.#{profile}.gpg_key", false) || config['settings.gpg_key']
88
- backup_config = {
89
- :backup_dir => config.get("profile.#{profile}.backup_dir", false) || config['settings.backup_dir'],
90
- :name => profile,
91
- :gpg_key => gpg_key && !gpg_key.empty? ? gpg_key : nil,
92
- :sources => [*config.get("profile.#{profile}.source", [])] + [*config.get("settings.source", [])],
93
- :exclude => [*config.get("profile.#{profile}.exclude", [])] + [*config.get("settings.exclude", [])],
94
- :bucket => bucket,
95
- :dest_prefix => dest_prefix.chomp('/'),
96
- :pre_backup => [*config.get("profile.#{profile}.pre-backup", [])] + [*config.get('settings.pre-backup', [])],
97
- :post_backup => [*config.get("profile.#{profile}.post-backup", [])] + [*config.get('settings.post-backup', [])],
98
- :full_if_older_than => config.get("profile.#{profile}.full_if_older_than", false) || config['settings.full_if_older_than'],
99
- :remove_older_than => config.get("profile.#{profile}.remove_older_than", false) || config.get('settings.remove_older_than', false),
100
- :remove_all_but_n_full => config.get("profile.#{profile}.remove_all_but_n_full", false) || config.get('settings.remove_all_but_n_full', false),
101
- :compression => (config.get("profile.#{profile}.compression", false) || config.get('settings.compression', 'bzip2')).to_sym,
102
- :always_full => config.get('settings.always_full', false) || config.get("profile.#{profile}.always_full", false),
103
- }
104
- backup_config
105
- end
106
-
107
- def perform_backup(opts, prev_backups, backup_config)
108
- puts "===== Backing up profile #{backup_config[:name]} ====="
109
- backup_config[:pre_backup].each_with_index do |cmd, i|
110
- puts "Executing pre-backup hook #{i+1}"
111
- exec(cmd)
112
- end
113
- full_required = full_required?(backup_config[:full_if_older_than], prev_backups)
114
- puts "Last full backup is too old. Forcing a full backup" if full_required && !opts[:full] && backup_config[:always_full]
115
- if full_required || opts[:full] || backup_config[:always_full]
116
- backup_full(backup_config, opts[:verbose])
117
- else
118
- backup_incr(backup_config, opts[:verbose])
119
- end
120
- backup_config[:post_backup].each_with_index do |cmd, i|
121
- puts "Executing post-backup hook #{i+1}"
122
- exec(cmd)
123
- end
124
- end
125
-
126
- def perform_cleanup(prev_backups, backup_config)
127
- puts "===== Cleaning up profile #{backup_config[:name]} ====="
128
- remove = []
129
- if age_str = backup_config[:remove_older_than]
130
- age = parse_interval(age_str)
131
- remove = prev_backups.select{ |o| o[:date] < age }
132
- # Don't want to delete anything before the last full backup
133
- unless remove.empty?
134
- kept = remove.slice!(remove.rindex{ |o| o[:type] == :full }..-1).count
135
- puts "Keeping #{kept} old backups as part of a chain" if kept > 1
136
- end
137
- elsif keep_n = backup_config[:remove_all_but_n_full]
138
- keep_n = keep_n.to_i
139
- # Get the date of the last full backup to keep
140
- if last_full_to_keep = prev_backups.select{ |o| o[:type] == :full }[-keep_n]
141
- # If there is a last full one...
142
- remove = prev_backups.select{ |o| o[:date] < last_full_to_keep[:date] }
143
- end
144
- end
145
-
146
- if remove.empty?
147
- puts "Nothing to do"
148
- else
149
- puts "Removing #{remove.count} old backup files"
150
- end
151
- remove.each do |object|
152
- @s3.buckets[backup_config[:bucket]].objects["#{backup_config[:dest_prefix]}/#{object[:name]}"].delete
153
- end
154
- end
155
-
156
- # Config should have the keys
157
- # backup_dir, name, soruces, exclude, bucket, dest_prefix
158
- def backup_incr(config, verbose=false)
159
- puts "Starting new incremental backup"
160
- backup = Backup.new(config[:backup_dir], config[:name], config[:sources], config[:exclude], config[:compression], config[:gpg_key])
161
-
162
- # Try and get hold of the snar file
163
- unless backup.snar_exists?
164
- puts "Failed to find snar file. Attempting to download..."
165
- s3_snar = "#{config[:dest_prefix]}/#{backup.snar}"
166
- object = @s3.buckets[config[:bucket]].objects[s3_snar]
167
- if object.exists?
168
- puts "Found file on S3. Downloading"
169
- open(backup.snar_path, 'wb') do |f|
170
- object.read do |chunk|
171
- f.write(chunk)
172
- end
173
- end
174
- else
175
- puts "Failed to download snar file. Defaulting to full backup"
176
- end
177
- end
178
-
179
- backup(config, backup, verbose)
180
- end
181
-
182
- def backup_full(config, verbose=false)
183
- puts "Starting new full backup"
184
- backup = Backup.new(config[:backup_dir], config[:name], config[:sources], config[:exclude], config[:compression], config[:gpg_key])
185
- # Nuke the snar file -- forces a full backup
186
- File.delete(backup.snar_path) if File.exists?(backup.snar_path)
187
- backup(config, backup, verbose)
188
- end
189
-
190
- def backup(config, backup, verbose=false)
191
- exec(backup.backup_cmd(verbose))
192
- puts "Uploading #{config[:bucket]}/#{config[:dest_prefix]}/#{File.basename(backup.archive)} (#{bytes_to_human(File.size(backup.archive))})"
193
- upload(backup.archive, config[:bucket], "#{config[:dest_prefix]}/#{File.basename(backup.archive)}")
194
- puts "Uploading snar (#{bytes_to_human(File.size(backup.snar_path))})"
195
- upload(backup.snar_path, config[:bucket], "#{config[:dest_prefix]}/#{File.basename(backup.snar)}")
196
- File.delete(backup.archive)
197
- end
198
-
199
- def upload(source, bucket, dest_name)
200
- tries = 0
201
- begin
202
- @s3.buckets[bucket].objects.create(dest_name, Pathname.new(source))
203
- rescue Errno::ECONNRESET => e
204
- tries += 1
205
- if tries <= UPLOAD_TRIES
206
- puts "Upload Exception: #{e}"
207
- puts "Retrying #{tries}/#{UPLOAD_TRIES}..."
208
- retry
209
- else
210
- raise e
211
- end
212
- end
213
- puts "Succeeded" if tries > 0
214
- end
215
-
216
- def perform_restore(opts, prev_backups, backup_config)
217
- puts "===== Restoring profile #{backup_config[:name]} ====="
218
- # If restore date given, parse
219
- if opts[:restore_date_given]
220
- m = opts[:restore_date].match(/(\d\d\d\d)(\d\d)(\d\d)?(\d\d)?(\d\d)?(\d\d)?/)
221
- raise "Unknown date format in --restore-to" if m.nil?
222
- restore_to = Time.new(*m[1..-1].map{ |s| s.to_i if s })
223
- else
224
- restore_to = Time.now
225
- end
226
-
227
- # Find the index of the first backup, incremental or full, before that date
228
- restore_end_index = prev_backups.rindex{ |o| o[:date] < restore_to }
229
- raise "Failed to find a backup for that date" unless restore_end_index
230
-
231
- # Find the first full backup before that one
232
- restore_start_index = prev_backups[0..restore_end_index].rindex{ |o| o[:type] == :full }
233
-
234
- restore_dir = opts[:restore].chomp('/') << '/'
235
-
236
- Dir.mkdir(restore_dir) unless Dir.exists?(restore_dir)
237
- raise "Destination dir is not a directory" unless File.directory?(restore_dir)
238
-
239
- prev_backups[restore_start_index..restore_end_index].each do |object|
240
- puts "Fetching #{backup_config[:bucket]}/#{backup_config[:dest_prefix]}/#{object[:name]} (#{bytes_to_human(object[:size])})"
241
- dl_file = "#{backup_config[:backup_dir]}/#{object[:name]}"
242
- open(dl_file, 'wb') do |f|
243
- @s3.buckets[backup_config[:bucket]].objects["#{backup_config[:dest_prefix]}/#{object[:name]}"].read do |chunk|
244
- f.write(chunk)
245
- end
246
- end
247
- puts "Extracting..."
248
- exec(Backup.restore_cmd(restore_dir, dl_file, opts[:verbose]))
249
-
250
- File.delete(dl_file)
251
- end
252
- end
253
-
254
- def perform_list_backups(prev_backups, backup_config)
255
- # prev_backups alreays contains just the files for the current profile
256
- puts "===== Backups list for #{backup_config[:name]} ====="
257
- puts "Type: N: Date:#{' '*18}Size: Chain Size: Format: Encrypted:\n\n"
258
- prev_type = ''
259
- total_size = 0
260
- chain_length = 0
261
- chain_cum_size = 0
262
- prev_backups.each do |object|
263
- type = object[:type] == prev_type && object[:type] == :incr ? " -- " : object[:type].to_s.capitalize
264
- prev_type = object[:type]
265
- chain_length += 1
266
- chain_length = 0 if object[:type] == :full
267
- chain_cum_size = 0 if object[:type] == :full
268
- chain_cum_size += object[:size]
269
-
270
- chain_length_str = (chain_length == 0 ? '' : chain_length.to_s).ljust(3)
271
- chain_cum_size_str = (object[:type] == :full ? '' : bytes_to_human(chain_cum_size)).ljust(8)
272
- puts "#{type} #{chain_length_str} #{object[:date].strftime('%F %T')} #{bytes_to_human(object[:size]).ljust(8)} " \
273
- "#{chain_cum_size_str} #{object[:compression].to_s.ljust(7)} #{object[:encryption] ? 'Y' : 'N'}"
274
- total_size += object[:size]
275
- end
276
- puts "\n"
277
- puts "Total size: #{bytes_to_human(total_size)}"
278
- puts "\n"
279
- end
280
-
281
- def get_objects(bucket, prefix, profile)
282
- objects = @s3.buckets[bucket].objects.with_prefix(prefix).map do |object|
283
- Backup.parse_object(object, profile)
284
- end
285
- objects.compact.sort_by{ |o| o[:date] }
286
- end
287
-
288
- def parse_interval(interval_str)
289
- time = Time.now
290
- time -= $1.to_i if interval_str =~ /(\d+)s/
291
- time -= $1.to_i*60 if interval_str =~ /(\d+)m/
292
- time -= $1.to_i*3600 if interval_str =~ /(\d+)h/
293
- time -= $1.to_i*86400 if interval_str =~ /(\d+)D/
294
- time -= $1.to_i*604800 if interval_str =~ /(\d+)W/
295
- time -= $1.to_i*2592000 if interval_str =~ /(\d+)M/
296
- time -= $1.to_i*31536000 if interval_str =~ /(\d+)Y/
297
- time
298
- end
299
-
300
- def full_required?(interval_str, objects)
301
- time = parse_interval(interval_str)
302
- objects.select{ |o| o[:type] == :full && o[:date] > time }.empty?
303
- end
304
-
305
- def bytes_to_human(n)
306
- count = 0
307
- while n >= 1014 && count < 4
308
- n /= 1024.0
309
- count += 1
310
- end
311
- format("%.2f", n) << %w(B KB MB GB TB)[count]
312
- end
313
-
314
- def exec(cmd)
315
- puts "Executing: #{cmd}"
316
- system(cmd)
317
- end
318
- end
9
+ class Main
10
+ UPLOAD_TRIES = 5
11
+
12
+ def run
13
+ opts = Trollop::options do
14
+ version VERSION
15
+ banner "Backs up files to, and restores files from, Amazon's S3 storage, using tar incremental backups\n\n" \
16
+ "Usage:\ns3-tar-backup -c config.ini [-p profile] --backup [--full] [-v]\n" \
17
+ "s3-tar-backup -c config.ini [-p profile] --cleanup [-v]\n" \
18
+ "s3-tar-backup -c config.ini [-p profile] --restore restore_dir\n\t[--restore_date date] [-v]\n" \
19
+ "s3-tar-backup -c config.ini [-p profile] --backup-config [--verbose]\n" \
20
+ "s3-tar-backup -c config.ini [-p profile] --list-backups\n\n" \
21
+ "Option details:\n"
22
+ opt :config, "Configuration file", :short => 'c', :type => :string
23
+ opt :backup, "Make an incremental backup"
24
+ opt :full, "Make the backup a full backup"
25
+ opt :profile, "The backup profile(s) to use (default all)", :short => 'p', :type => :strings
26
+ opt :cleanup, "Clean up old backups"
27
+ opt :restore, "Restore a backup to the specified dir", :type => :string
28
+ opt :restore_date, "Restore a backup from the specified date. Format YYYYMM[DD[hh[mm[ss]]]]", :type => :string
29
+ opt :backup_config, "Backs up the specified configuration file"
30
+ opt :list_backups, "List the stored backup info for one or more profiles"
31
+ opt :password_file, "Override the password file used to decrypt backups", :type => :string
32
+ opt :verbose, "Show verbose output", :short => 'v'
33
+ conflicts :backup, :cleanup, :restore, :backup_config, :list_backups
34
+ end
35
+
36
+
37
+ Trollop::die "--full requires --backup" if opts[:full] && !opts[:backup]
38
+ Trollop::die "--restore-date requires --restore" if opts[:restore_date_given] && !opts[:restore_given]
39
+ Trollop::die "--password-file requires --restore" if opts[:password_file_given] && !opts[:restore_given]
40
+ unless opts[:backup] || opts[:cleanup] || opts[:restore_given] || opts[:backup_config] || opts[:list_backups]
41
+ Trollop::die "Need one of --backup, --cleanup, --restore, --backup-config, --list-backups"
42
+ end
43
+
44
+ config_file = opts[:config] || '~/.s3-tar-backup/config.ini'
45
+
46
+ begin
47
+ raise "Config file #{config_file} not found.#{opts[:config] ? '' : ' You can specify a config file to use with --config'}" unless File.exists?(config_file)
48
+ config = IniParser.new(config_file).load
49
+ profiles = opts[:profile] || config.find_sections(/^profile\./).keys.map{ |k| k.to_s.split('.', 2)[1] }
50
+
51
+ # This is a bit of a special case
52
+ if opts[:backup_config]
53
+ dest = config.get('settings.dest', false)
54
+ raise "You must specify a single profile (used to determine the location to back up to) " \
55
+ "if backing up config and dest key is not in [settings]" if !dest && profiles.count != 1
56
+ dest ||= config["profile.#{profiles[0]}.dest"]
57
+ puts "===== Backing up config file #{config_file} ====="
58
+ prefix = config.get('settings.dest', false) || config["profile.#{profiles[0]}.dest"]
59
+ puts "Uploading #{config_file} to #{prefix}/#{File.basename(config_file)}"
60
+ backend = create_backend(config, prefix)
61
+ upload(backend, config_file, File.basename(config_file), false)
62
+ return
63
+ end
64
+
65
+ profiles.dup.each do |profile|
66
+ raise "No such profile: #{profile}" unless config.has_section?("profile.#{profile}")
67
+ opts[:profile] = profile
68
+ backup_config = gen_backup_config(opts[:profile], config)
69
+ prev_backups = get_objects(backup_config, opts[:profile])
70
+ perform_backup(opts, prev_backups, backup_config) if opts[:backup]
71
+ perform_cleanup(prev_backups, backup_config) if opts[:backup] || opts[:cleanup]
72
+ perform_restore(opts, prev_backups, backup_config) if opts[:restore_given]
73
+ perform_list_backups(prev_backups, backup_config) if opts[:list_backups]
74
+ end
75
+ rescue Exception => e
76
+ raise e
77
+ Trollop::die e.to_s
78
+ end
79
+ end
80
+
81
+ def absolute_path_from_config_file(config, path)
82
+ File.expand_path(File.join(File.expand_path(File.dirname(config.file_path)), path))
83
+ end
84
+
85
+ def create_backend(config, dest_prefix)
86
+ if dest_prefix.start_with?('file://')
87
+ Backend::FileBackend.new(dest_prefix['file://'.length..-1])
88
+ elsif dest_prefix.start_with?('/')
89
+ Backend::FileBackend.new(dest_prefix)
90
+ else
91
+ Backend::S3Backend.new(
92
+ ENV['AWS_ACCESS_KEY_ID'] || config['settings.aws_access_key_id'],
93
+ ENV['AWS_SECRET_ACCESS_KEY'] || config['settings.aws_secret_access_key'],
94
+ config.get('settings.aws_region', false),
95
+ (config.get('settings.dest', false) || config["profile.#{profiles[0]}.dest"]).sub(%r{^s3://}, '')
96
+ )
97
+ end
98
+ end
99
+
100
+ def gen_backup_config(profile, config)
101
+ top_gpg_key = config.get('settings.gpg_key', false)
102
+ profile_gpg_key = config.get("profile.#{profile}.gpg_key", false)
103
+ top_password_file = config.get('settings.password_file', false)
104
+ profile_password_file = config.get("profile.#{profile}.password_file", false)
105
+ raise "Cannot specify gpg_key and password_file together at the top level" if top_gpg_key && top_password_file
106
+ raise "Cannot specify both gpg_key and password_file for profile #{profile}" if profile_gpg_key && profile_password_file
107
+
108
+ encryption = nil
109
+ if profile_password_file
110
+ encryption = profile_password_file.empty? ? nil : { :type => :password_file, :password_file => absolute_path_from_config_file(config, profile_password_file) }
111
+ elsif profile_gpg_key
112
+ encryption = profile_gpg_key.empty? ? nil : { :type => :gpg_key, :gpg_key => profile_gpg_key }
113
+ elsif top_password_file
114
+ encryption = top_password_file.empty? ? nil : { :type => :password_file, :password_file => absolute_path_from_config_file(config, top_password_file) }
115
+ elsif top_gpg_key
116
+ encryption = top_gpg_key.empty? ? nil : { :type => :gpg_key, :gpg_key => top_gpg_key }
117
+ end
118
+
119
+ backup_config = {
120
+ :backup_dir => config.get("profile.#{profile}.backup_dir", false) || config.get('settings.backup_dir', '~/.s3-tar-backup/tmp'),
121
+ :name => profile,
122
+ :encryption => encryption,
123
+ :password_file => profile_password_file || top_password_file || '',
124
+ :sources => [*config.get("profile.#{profile}.source", [])] + [*config.get("settings.source", [])],
125
+ :exclude => [*config.get("profile.#{profile}.exclude", [])] + [*config.get("settings.exclude", [])],
126
+ :pre_backup => [*config.get("profile.#{profile}.pre-backup", [])] + [*config.get('settings.pre-backup', [])],
127
+ :post_backup => [*config.get("profile.#{profile}.post-backup", [])] + [*config.get('settings.post-backup', [])],
128
+ :full_if_older_than => config.get("profile.#{profile}.full_if_older_than", false) || config['settings.full_if_older_than'],
129
+ :remove_older_than => config.get("profile.#{profile}.remove_older_than", false) || config.get('settings.remove_older_than', false),
130
+ :remove_all_but_n_full => config.get("profile.#{profile}.remove_all_but_n_full", false) || config.get('settings.remove_all_but_n_full', false),
131
+ :compression => (config.get("profile.#{profile}.compression", false) || config.get('settings.compression', 'none')).to_sym,
132
+ :always_full => config.get('settings.always_full', false) || config.get("profile.#{profile}.always_full", false),
133
+ :backend => create_backend(config,config.get("profile.#{profile}.dest", false) || config['settings.dest']),
134
+ }
135
+ backup_config
136
+ end
137
+
138
+ def perform_backup(opts, prev_backups, backup_config)
139
+ puts "===== Backing up profile #{backup_config[:name]} ====="
140
+ backup_config[:pre_backup].each_with_index do |cmd, i|
141
+ puts "Executing pre-backup hook #{i+1}"
142
+ exec(cmd)
143
+ end
144
+ full_required = full_required?(backup_config[:full_if_older_than], prev_backups)
145
+ puts "Last full backup is too old. Forcing a full backup" if full_required && !opts[:full] && backup_config[:always_full]
146
+ if full_required || opts[:full] || backup_config[:always_full]
147
+ backup_full(backup_config, opts[:verbose])
148
+ else
149
+ backup_incr(backup_config, opts[:verbose])
150
+ end
151
+ backup_config[:post_backup].each_with_index do |cmd, i|
152
+ puts "Executing post-backup hook #{i+1}"
153
+ exec(cmd)
154
+ end
155
+ end
156
+
157
+ def perform_cleanup(prev_backups, backup_config)
158
+ puts "===== Cleaning up profile #{backup_config[:name]} ====="
159
+ remove = []
160
+ if age_str = backup_config[:remove_older_than]
161
+ age = parse_interval(age_str)
162
+ remove = prev_backups.select{ |o| o[:date] < age }
163
+ # Don't want to delete anything before the last full backup
164
+ unless remove.empty?
165
+ kept = remove.slice!(remove.rindex{ |o| o[:type] == :full }..-1).count
166
+ puts "Keeping #{kept} old backups as part of a chain" if kept > 1
167
+ end
168
+ elsif keep_n = backup_config[:remove_all_but_n_full]
169
+ keep_n = keep_n.to_i
170
+ # Get the date of the last full backup to keep
171
+ if last_full_to_keep = prev_backups.select{ |o| o[:type] == :full }[-keep_n]
172
+ # If there is a last full one...
173
+ remove = prev_backups.select{ |o| o[:date] < last_full_to_keep[:date] }
174
+ end
175
+ end
176
+
177
+ if remove.empty?
178
+ puts "Nothing to do"
179
+ else
180
+ puts "Removing #{remove.count} old backup files"
181
+ end
182
+ remove.each do |object|
183
+ backup_config[:backend].remove_item(object[:name])
184
+ end
185
+ end
186
+
187
+ # Config should have the keys
188
+ # backup_dir, name, sources, exclude
189
+ def backup_incr(config, verbose=false)
190
+ puts "Starting new incremental backup"
191
+ backup = Backup.new(config[:backup_dir], config[:name], config[:sources], config[:exclude], config[:compression], config[:encryption])
192
+
193
+ # Try and get hold of the snar file
194
+ unless backup.snar_exists?
195
+ puts "Failed to find snar file. Attempting to download..."
196
+ if config[:backend].item_exists?(backup.snar)
197
+ puts "Found file on S3. Downloading"
198
+ config[:backend].download_item(backup.snar, backup.snar_path)
199
+ else
200
+ puts "Failed to download snar file. Defaulting to full backup"
201
+ end
202
+ end
203
+
204
+ backup(config, backup, verbose)
205
+ end
206
+
207
+ def backup_full(config, verbose=false)
208
+ puts "Starting new full backup"
209
+ backup = Backup.new(config[:backup_dir], config[:name], config[:sources], config[:exclude], config[:compression], config[:encryption])
210
+ # Nuke the snar file -- forces a full backup
211
+ File.delete(backup.snar_path) if File.exists?(backup.snar_path)
212
+ backup(config, backup, verbose)
213
+ end
214
+
215
+ def backup(config, backup, verbose=false)
216
+ FileUtils.rm(backup.tmp_snar_path) if File.exists?(backup.tmp_snar_path)
217
+ FileUtils.cp(backup.snar_path, backup.tmp_snar_path) if backup.snar_exists?
218
+ exec(backup.backup_cmd(verbose))
219
+ puts "Uploading #{config[:backend].prefix}/#{File.basename(backup.archive)} (#{bytes_to_human(File.size(backup.archive))})"
220
+ upload(config[:backend], backup.archive, File.basename(backup.archive), true)
221
+ FileUtils.mv(backup.tmp_snar_path, backup.snar_path, :force => true)
222
+ puts "Uploading snar (#{bytes_to_human(File.size(backup.snar_path))})"
223
+ upload(config[:backend], backup.snar_path, File.basename(backup.snar), false)
224
+ end
225
+
226
+ def upload(backend, source, dest_name, remove_original)
227
+ tries = 0
228
+ begin
229
+ backend.upload_item(dest_name, source, remove_original)
230
+ rescue Backend::UploadItemFailedError => e
231
+ tries += 1
232
+ if tries <= UPLOAD_TRIES
233
+ puts "Upload Exception: #{e}"
234
+ puts "Retrying #{tries}/#{UPLOAD_TRIES}..."
235
+ retry
236
+ else
237
+ raise e
238
+ end
239
+ end
240
+ puts "Succeeded" if tries > 0
241
+ end
242
+
243
+ def perform_restore(opts, prev_backups, backup_config)
244
+ puts "===== Restoring profile #{backup_config[:name]} ====="
245
+ # If restore date given, parse
246
+ if opts[:restore_date_given]
247
+ m = opts[:restore_date].match(/(\d\d\d\d)(\d\d)(\d\d)?(\d\d)?(\d\d)?(\d\d)?/)
248
+ raise "Unknown date format in --restore-to" if m.nil?
249
+ restore_to = Time.new(*m[1..-1].map{ |s| s.to_i if s })
250
+ else
251
+ restore_to = Time.now
252
+ end
253
+
254
+ # Find the index of the first backup, incremental or full, before that date
255
+ restore_end_index = prev_backups.rindex{ |o| o[:date] < restore_to }
256
+ raise "Failed to find a backup for that date" unless restore_end_index
257
+
258
+ # Find the first full backup before that one
259
+ restore_start_index = prev_backups[0..restore_end_index].rindex{ |o| o[:type] == :full }
260
+
261
+ restore_dir = opts[:restore].chomp('/') << '/'
262
+
263
+ Dir.mkdir(restore_dir) unless Dir.exists?(restore_dir)
264
+ raise "Destination dir is not a directory" unless File.directory?(restore_dir)
265
+
266
+ prev_backups[restore_start_index..restore_end_index].each do |object|
267
+ puts "Fetching #{backup_config[:backend].prefix}/#{object[:name]} (#{bytes_to_human(object[:size])})"
268
+ dl_file = "#{backup_config[:backup_dir]}/#{object[:name]}"
269
+ backup_config[:backend].download_item(object[:name], dl_file)
270
+ puts "Extracting..."
271
+ exec(Backup.restore_cmd(restore_dir, dl_file, opts[:verbose], opts[:password_file] || backup_config[:password_file]))
272
+ File.delete(dl_file)
273
+ end
274
+ end
275
+
276
+ def perform_list_backups(prev_backups, backup_config)
277
+ # prev_backups always contains just the files for the current profile
278
+ puts "===== Backups list for #{backup_config[:name]} ====="
279
+ puts "Type: N: Date:#{' '*18}Size: Chain Size: Compression: Encryption:\n\n"
280
+ prev_type = ''
281
+ total_size = 0
282
+ chain_length = 0
283
+ chain_cum_size = 0
284
+ prev_backups.each do |object|
285
+ type = object[:type] == prev_type && object[:type] == :incr ? " -- " : object[:type].to_s.capitalize
286
+ prev_type = object[:type]
287
+ chain_length += 1
288
+ chain_length = 0 if object[:type] == :full
289
+ chain_cum_size = 0 if object[:type] == :full
290
+ chain_cum_size += object[:size]
291
+
292
+ chain_length_str = (chain_length == 0 ? '' : chain_length.to_s).ljust(3)
293
+ chain_cum_size_str = (object[:type] == :full ? '' : bytes_to_human(chain_cum_size)).ljust(8)
294
+ encryption = case object[:encryption]
295
+ when :gpg_key
296
+ 'Key'
297
+ when :password_file
298
+ 'Password'
299
+ else
300
+ 'None'
301
+ end
302
+ puts "#{type} #{chain_length_str} #{object[:date].strftime('%F %T')} #{bytes_to_human(object[:size]).ljust(8)} " \
303
+ "#{chain_cum_size_str} #{object[:compression].to_s.ljust(12)} #{encryption}"
304
+ total_size += object[:size]
305
+ end
306
+ puts "\n"
307
+ puts "Total size: #{bytes_to_human(total_size)}"
308
+ puts "\n"
309
+ end
310
+
311
+ def get_objects(config, profile)
312
+ objects = config[:backend].list_items.map do |object|
313
+ Backup.parse_object(object, profile)
314
+ end
315
+ objects.compact.sort_by{ |o| o[:date] }
316
+ end
317
+
318
+ def parse_interval(interval_str)
319
+ time = Time.now
320
+ time -= $1.to_i if interval_str =~ /(\d+)s/
321
+ time -= $1.to_i*60 if interval_str =~ /(\d+)m/
322
+ time -= $1.to_i*3600 if interval_str =~ /(\d+)h/
323
+ time -= $1.to_i*86400 if interval_str =~ /(\d+)D/
324
+ time -= $1.to_i*604800 if interval_str =~ /(\d+)W/
325
+ time -= $1.to_i*2592000 if interval_str =~ /(\d+)M/
326
+ time -= $1.to_i*31536000 if interval_str =~ /(\d+)Y/
327
+ time
328
+ end
329
+
330
+ def full_required?(interval_str, objects)
331
+ time = parse_interval(interval_str)
332
+ objects.select{ |o| o[:type] == :full && o[:date] > time }.empty?
333
+ end
334
+
335
+ def bytes_to_human(n)
336
+ count = 0
337
+ while n >= 1024 && count < 4
338
+ n /= 1024.0
339
+ count += 1
340
+ end
341
+ fmt = (count == 0) ? '%i' : '%.2f'
342
+ format(fmt, n) << %w(B KB MB GB TB)[count]
343
+ end
344
+
345
+ def exec(cmd)
346
+ puts "Executing: #{cmd}"
347
+ result = system(cmd)
348
+ unless result
349
+ raise "Unable to run command. See above output for clues."
350
+ end
351
+ end
352
+ end
319
353
  end