lvmsync 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/README.md ADDED
@@ -0,0 +1,310 @@
+ # lvmsync
+
+ Have you ever wanted to do a partial sync on a block device, possibly over a
+ network, but were stymied by the fact that rsync just didn't work?
+
+ Well, fret no longer. As long as you use LVM for your block devices, you
+ too can have efficient delta-transfer of changed blocks.
+
+
+ ## What is it good for?
+
+ Mostly, transferring entire block devices from one machine to another, with
+ minimal downtime. Until now, you had to shut down your service/VM/whatever,
+ do a big cross-network dd (using netcat or something), and wait while all
+ that transferred.
+
+ `lvmsync` allows you to use the following workflow to transfer a block
+ device "mostly live" to another machine:
+
+ 1. Take a snapshot of an existing LV.
+ 1. Transfer the entire snapshot over the network, while whatever uses the
+    block device itself keeps running.
+ 1. When the initial transfer is finished, you shut down/unmount/whatever
+    the initial block device.
+ 1. Run lvmsync on the snapshot to transfer the changed blocks.
+     * The only data transferred over the network is the blocks that have
+       changed (which, hopefully, will be minimal).
+ 1. If you're paranoid, you can md5sum the content of the source and
+    destination block devices to make sure everything's OK -- see the sketch
+    after this list (although this will destroy any performance benefit you
+    got by running lvmsync in the first place).
+ 1. Bring the service/VM/whatever back up in its new home in a *much*
+    shorter (as in, "orders of magnitude") time than was previously possible.
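+
+ If you do want that paranoia check, here's a minimal sketch (the device
+ paths are the ones used in the example further down; any checksum tool
+ will do):
+
+     # on the source machine, once I/O to the LV has stopped
+     md5sum /dev/vmsrv1/somevm
+     # on the destination (run via SSH from the source, for convenience)
+     ssh root@vmsrv2 md5sum /dev/vmsrv2/somevm
+     # the two digests should be identical
+     # (this assumes the two LVs are exactly the same size)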
+
+ `lvmsync` also has a basic "snapshot-and-rollback" feature, where it can
+ save a copy of the data in the LV that you're overwriting to a file, for
+ later application if you need to roll back. See "Snapback support" under
+ "How do I use it?" for more details.
+
+
+ ## How does it work?
+
+ By the magic of LVM snapshots. `lvmsync` is able to read the metadata that
+ device-mapper uses to keep track of what parts of the block device have
+ changed, and use that information to send only those modified blocks over
+ the network.
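+
+ If you want to peek at those moving parts yourself, `dmsetup` will show
+ you the snapshot's origin and metadata (COW) devices. A rough illustration
+ -- the device name matches the example further down, and the numbers will
+ differ on your system:
+
+     # the snapshot target line names the origin and COW devices:
+     dmsetup table vmsrv1-somevm--lvmsync
+     # => 0 20971520 snapshot 254:2 254:3 P 8
+     #    (start, length, target, origin device, COW device,
+     #     persistence flag, chunk size in 512-byte sectors)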
+
+ If you're really interested in the gory details, there's a brief "Theory of
+ Operation" section at the bottom of this README, or else you can just head
+ straight for the source code.
+
+
+ ## Installation
+
+ On the machine you're transferring from, you'll need to have `dmsetup` and
+ `ssh` installed and available on the PATH, and an installation of Ruby 1.8
+ (or later). Then just copy the `lvmsync` script to somewhere in root's
+ PATH.
+
+ On the machine you're transferring *to*, you'll need `sshd` installed and
+ available for connection, and an installation of Ruby 1.8 (or later). Then
+ just copy the `lvmsync` script to somewhere in root's PATH.
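+
+ One way to do that copy, as a sketch (any directory on root's PATH will do
+ just as well):
+
+     install -m 0755 lvmsync /usr/local/sbin/lvmsync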
+
+
+ ## How do I use it?
+
+ For an overview of all available options, run `lvmsync -h`.
+
+
+ ### Efficient block device transfer
+
+ At present, the only part of the block device syncing process that is
+ automated is the actual transfer of the snapshot changes -- the rest (making
+ the snapshot, doing the initial transfer, and stopping all writes to the LV)
+ you'll have to do yourself. Those other steps aren't difficult, though, and
+ are trivial to script to suit your local environment (see the example
+ below).
+
+ Once you've got the snapshot installed, done the initial sync, and stopped
+ I/O, you just call `lvmsync` like this:
+
+     lvmsync <snapshot LV device> <destserver>:<destblock>
+
+ This requires that `lvmsync` is installed on `<destserver>`, and that you
+ have the ability to SSH into `<destserver>` as root. All data transfer
+ takes place over SSH, because we don't trust any network, and it simplifies
+ so many things (such as link-level compression, if you want it). If CPU is
+ an issue, you shouldn't be running LVM on your phone to begin with.
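+
+ Since `lvmsync` invokes plain `ssh` under the hood, options like
+ compression live in your SSH client configuration rather than on the
+ `lvmsync` command line. A sketch, using the destination host from the
+ example below:
+
+     # ~/.ssh/config on the source machine
+     Host vmsrv2
+         Compression yes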
+
+ The reason why `lvmsync` needs you to specify the snapshot you want to sync,
+ and not the base LV, is that you might have more than one snapshot of a
+ given LV, and while the base LV can be determined from a snapshot, there's
+ no way to work out which snapshot to sync given only a base LV. Remember to
+ always specify the full device path, not just the LV name.
+
+
+ #### Example
+
+ Let's say you've got an LV, named `vmsrv1/somevm`, and you'd like to
+ synchronise it to a new VM server, named `vmsrv2`. Assuming that `lvmsync` is
+ installed on `vmsrv2` and `vmsrv2` has an LV named `vmsrv2/somevm` large
+ enough to take the data, the following will do the trick rather nicely (all
+ commands should be run on `vmsrv1`):
+
+     # Take a snapshot before we do anything, so LVM will record all changes
+     # made while we're doing the initial sync
+     lvcreate --snapshot -L10G -n somevm-lvmsync vmsrv1/somevm
+
+     # Pre-sync all data across -- this will take some time, but while it's
+     # happening the VM is still serving traffic. pv is a great tool for
+     # showing you how fast your data's moving, but you can leave it out of
+     # the pipeline if you don't have it installed.
+     dd if=/dev/vmsrv1/somevm-lvmsync bs=1M | pv -ptrb | ssh root@vmsrv2 dd of=/dev/vmsrv2/somevm bs=1M
+
+     # Shut down the VM -- the command you use will probably vary
+     virsh shutdown somevm
+
+     # Once it's shut down and the block device isn't going to be written to
+     # any more, you can run lvmsync
+     lvmsync /dev/vmsrv1/somevm-lvmsync vmsrv2:/dev/vmsrv2/somevm
+
+     # You can now start up the VM on vmsrv2, after a fairly small period of
+     # downtime. Once you're done, you can remove the snapshot and,
+     # presumably, the LV itself, from vmsrv1
+
+
+ ### Snapback support
+
+ In addition to being able to efficiently transfer the changes to an LV
+ across a network, `lvmsync` now supports a simple form of point-in-time
+ recovery, which I've called 'snapback'.
+
+ The way this works is startlingly simple: as `lvmsync` writes the changed
+ blocks out to the destination block device, it reads the data that is being
+ overwritten, and stores it to a file (specified with the `--snapback`
+ option). The format of this file is the same as the wire protocol that
+ `lvmsync` uses to transfer changed blocks over the network. This means
+ that, in the event that you need to roll a block device back to an earlier
+ state, you can do so by simply applying the saved snapback files, most
+ recent first, until you get to the desired state.
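+
+ The format itself is simple enough to inspect by hand: a text handshake
+ line ("lvmsync PROTO[2]"), followed by a series of records, each one an
+ 8-byte big-endian byte offset, a 4-byte big-endian chunk length, and then
+ that many bytes of the overwritten data. A quick way to peek at a snapback
+ file (the path is borrowed from the cron example below):
+
+     # the 17-byte handshake line, including its trailing newline:
+     head -c 17 /var/snapbacks/somevm.20120119-1000
+     # the 12-byte header of the first record, in hex:
+     dd if=/var/snapbacks/somevm.20120119-1000 bs=1 skip=17 count=12 2>/dev/null | od -A d -t x1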
+
+
+ #### Example
+
+ To set up a snapback process, you need to have a local LV, with a snapshot,
+ whose contents have been sent to a remote server, perhaps something like
+ this:
+
+     lvcreate --snapshot -L10G -n somevm-snapback vmsrv1/somevm
+     dd if=/dev/vmsrv1/somevm-snapback bs=1M | pv -ptrb | \
+         ssh root@vmsrv2 dd of=/dev/vmsrv2/somevm
+
+ Now, you can run something like the following periodically (say, out of cron
+ each hour):
+
+     lvcreate --snapshot -L10G -n somevm-snapback-new vmsrv1/somevm
+     lvmsync /dev/vmsrv1/somevm-snapback vmsrv2:/dev/vmsrv2/somevm --snapback \
+         /var/snapbacks/somevm.$(date +%Y%m%d-%H%M)
+     lvremove -f vmsrv1/somevm-snapback
+     lvrename vmsrv1/somevm-snapback-new somevm-snapback
+
+ This will produce files in `/var/snapbacks` named `somevm.<date-time>`. You
+ need to create the `somevm-snapback-new` snapshot before you start
+ `lvmsync`, so that no writes can sneak in unnoticed between the sync and
+ the new snapshot.
+
+ There are some fairly large caveats to this method -- the LV will still be
+ collecting writes while you're transferring the snapshots, so you won't get
+ a consistent snapshot (in the event you have to roll back, it's almost
+ certain you'll need to fsck). You'll almost certainly want to incorporate
+ some sort of I/O freezing into the process; the exact execution of that is
+ system-specific and largely left as an exercise for the reader, but one
+ possible approach is sketched below.
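+
+ For instance, if the LV holds a filesystem mounted at a (hypothetical)
+ `/mnt/somevm`, you could freeze it around the snapshot creation using
+ `fsfreeze` from util-linux:
+
+     fsfreeze --freeze /mnt/somevm
+     lvcreate --snapshot -L10G -n somevm-snapback-new vmsrv1/somevm
+     fsfreeze --unfreeze /mnt/somevm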
+
+ Restoring data from a snapback setup is straightforward -- just take each
+ snapback **in reverse order** and run it through `lvmsync --apply` on the
+ destination machine (`vmsrv2` in our example). Say that at 1145 `vmsrv1`
+ crashed, and it was determined that you needed to roll back to the state of
+ the system at 8am. You could do this:
+
+     lvmsync --apply /var/snapbacks/somevm.20120119-1100 /dev/vmsrv2/somevm
+     lvmsync --apply /var/snapbacks/somevm.20120119-1000 /dev/vmsrv2/somevm
+     lvmsync --apply /var/snapbacks/somevm.20120119-0900 /dev/vmsrv2/somevm
+
+ And you're done -- `/dev/vmsrv2/somevm` is now in the state it was in at
+ 8am. A whole pile of fsck will no doubt be required, but hopefully you'll
+ still be able to salvage *something*.
+
+ If you're wondering why I only restored back to the 0900 snapback, and not
+ the 0800 one, it's because each snapback file contains the data that was
+ overwritten by that hour's sync: the 0900 file holds the blocks as they
+ stood after the 0800 sync -- that is, the LV's 8am state. Confused much?
+ Good.
+
+
+ ### Transferring snapshots on the same machine
+
+ If you need to transfer an LV between different VGs on the same machine,
+ then running everything through SSH is just unnecessary overhead. If you
+ instead run `lvmsync` without the `<destserver>:` in the destination
+ specification, everything runs locally, like this:
+
+     lvmsync /dev/vg0/srclv-snapshot /dev/vg1/destlv
+
+ All other parts of the process (creating the snapshot, doing the initial
+ data move with `dd`, and so on) are unchanged.
+
+ As an aside, if you're trying to move LVs between PVs in the same VG, then
+ you don't need `lvmsync`, you need `pvmove` -- see the sketch below.
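+
+ For completeness, a `pvmove` sketch (the VG, LV, and device names here are
+ all illustrative):
+
+     # move somelv's extents off /dev/sdb1 and onto /dev/sdc1, within vg0
+     pvmove --name vg0/somelv /dev/sdb1 /dev/sdc1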
+
+
+ ### Taking a space- and IO-efficient snapshot of an LV
+
+ But wait, there's more! `lvmsync` also has the ability to dump out the
+ snapshot data to disk, rather than immediately applying it to another block
+ device.
+
+ To do this, use the `--stdout` option when you're running `lvmsync`, and
+ instead of writing the changes to another block device, it'll instead dump
+ the "change stream" to stdout (so redirect somewhere useful). This allows
+ you to dump the changes to a file, or do some sort of fancy footwork to
+ transfer the data to another lvmsync process to apply the changes to a block
+ device.
+
+ For example, if you just wanted to take a copy of the contents of a
+ snapshot, you could do something like this:
+
+     lvmsync --stdout /dev/somevg/somelv-snapshot >~/somechanges
+
+ At a later date, if you wanted to apply those writes to a block device,
+ you'd do it like this:
+
+     lvmsync --apply ~/somechanges /dev/somevg/someotherlv
+
+ You can also run an lvmsync *from* the destination -- this is useful if
+ (for example) you can SSH from the destination to the source machine, but
+ not the other way around (fkkn firewalls, how do they work?). You could do
+ this by running something like the following on the destination machine:
+
+     ssh srcmachine lvmsync --stdout /dev/srcvg/srclv-snap | lvmsync --apply - /dev/destvg/destlv
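+
+ As an example of that fancy footwork: the change stream is just bytes on
+ stdout, so you can, say, compress it for storage and feed it back through
+ `--apply -` later. A sketch:
+
+     lvmsync --stdout /dev/somevg/somelv-snapshot | gzip > ~/somechanges.gz
+     # ...and later:
+     zcat ~/somechanges.gz | lvmsync --apply - /dev/somevg/someotherlv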
+
+
+ ## Theory of Operation
+
+ This section is for those people who can't sleep well at night without
+ knowing the magic behind the curtain (and to remind myself occasionally how
+ this stuff works). It is completely unnecessary to read this section in
+ order to use lvmsync.
+
+ First, a little bit of background about how snapshot LVs work, before I
+ describe how lvmsync makes use of them.
+
+ An LVM snapshot "device" is actually not a block device in the usual sense.
+ It isn't just a big area of disk space where you write things. Instead, it
+ is a "meta" device, which points to both an "origin" LV, which is the LV
+ from which the snapshot was made, and a "metadata" LV, which is where the
+ magic happens.
+
+ The "metadata" LV is a list of "chunks" of the origin LV which have been
+ modified, along with the original contents of those chunks. In a way, you
+ can think of it as a sort of "binary diff", which says "these are the ways
+ in which this snapshot LV differs from the origin LV". When a write happens
+ to the origin LV, this "diff" is potentially modified to maintain the
+ original "view" from the time the snapshot was taken.
+
+ (Sidenote: this is why you can write to snapshots -- if you write to a
+ snapshot, some more entries are written to the "diff", saying "here are
+ some more differences between the origin and the snapshot".)
+
+ From here, it shouldn't be hard to work out how LVM uses the combination of
+ the origin and metadata LVs to give you a consistent snapshot view -- when
+ you ask to read a chunk, LVM looks in the metadata LV to see if it has the
+ chunk in there, and if not it can be sure that the chunk hasn't changed, so
+ it just reads it from the origin LV. Miiiiighty clever!
+
+ In lvmsync, we only make use of a tiny fraction of the data stored in the
+ metadata LV for the snapshot. We don't care what the original contents were
+ (they're what we're trying to get *away* from). What we want is the list of
+ which chunks have been modified, because that's what we use to work out
+ which blocks on the origin LV we need to copy across. lvmsync never
+ *actually* reads any disk data from the snapshot block device itself -- all
+ it reads is the list of changed blocks, then it reads the changed data from
+ the origin LV (which is where the modified blocks are stored).
+
+ By specifying a snapshot to lvmsync, you're telling it "this is the list of
+ changes I want you to copy" -- it already knows which origin LV it needs
+ to copy from (the snapshot metadata has that info available).
+
+
+ ## See Also
+
+ Whilst I think `lvmsync` is awesome (and I hope you will too), here are some
+ other tools that might be of use to you if `lvmsync` doesn't float your
+ mustard:
+
+ * [`blocksync.py`](http://www.bouncybouncy.net/programs/blocksync.py) --
+   Implements the "hash the chunks and send the ones that don't match"
+   strategy of block device syncing. It needs to read the entire block
+   device at each end to work out what to send, so it's not as efficient,
+   but on the other hand it doesn't require LVM.
+
+ * [`bdsync`](http://bdsync.rolf-fokkens.nl/) -- Another "hash the chunks"
+   implementation, with the same limitations and advantages as
+   `blocksync.py`.
+
+ * [`ddsnap`](http://zumastor.org/man/ddsnap.8.html) -- Part of the
+   "Zumastor" project, appears to provide some sort of network-aware block
+   device snapshotting (I'm not sure; the Zumastor homepage includes the word
+   "Enterprise", so I fell asleep before finishing reading). Seems to
+   require kernel patches, so there's a non-trivial barrier to entry, but
+   probably not such a big deal if you're after network-aware snapshots as
+   part of your core infrastructure.
data/bin/lvmsync ADDED
@@ -0,0 +1,256 @@
+ #!/usr/bin/ruby
+
+ # Transfer a set of changes made to the origin of a snapshot LV to another
+ # block device, possibly using SSH to send to a remote system.
+ #
+ # Usage: Start with lvmsync --help, or read the README for all the gory
+ # details.
+ #
+ # Copyright (C) 2011-2014 Matt Palmer <matt@hezmatt.org>
+ #
+ # This program is free software: you can redistribute it and/or modify it
+ # under the terms of the GNU General Public License version 3, as published
+ # by the Free Software Foundation.
+ #
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # `LICENCE` file for more details.
+ #
+ require 'optparse'
+ require 'lvm'
+
+ PROTOCOL_VERSION = "lvmsync PROTO[2]"
+
+ include LVM::Helpers
+
+ def main()
+   # Parse me some options
+   options = {}
+   OptionParser.new do |opts|
+     opts.banner = "Usage: lvmsync [options]"
+     opts.separator ""
+     opts.separator "    lvmsync [--snapback <file>] <snapshot device> [--stdout | [<desthost>:]<destdevice>]"
+     opts.separator "    lvmsync [--snapback <file>] --apply <changes file> <destdevice>"
+     opts.separator ""
+
+     opts.on("--server", "Run in server mode (deprecated; use '--apply -' instead)") do |v|
+       options[:server] = true
+     end
+     opts.on("-v", "--[no-]verbose",
+             "Run verbosely") { |v| options[:verbose] = true }
+     opts.on("-b <file>", "--snapback <file>",
+             "Make a backup snapshot file on the destination") do |v|
+       options[:snapback] = v
+     end
+     opts.on("-a", "--apply <file>",
+             "Apply mode: write the contents of a snapback file to a device") do |v|
+       options[:apply] = v
+     end
+     opts.on("-s", "--stdout", "Write output data to stdout rather than another lvmsync process") do |v|
+       options[:stdout] = true
+     end
+   end.parse!
+
+   if options[:apply]
+     if ARGV[0].nil?
+       $stderr.puts "No destination device specified."
+       exit 1
+     end
+     options[:device] = ARGV[0]
+     run_apply(options)
+   elsif options[:server]
+     $stderr.puts "--server is deprecated; please use '--apply -' instead"
+     if ARGV[0].nil?
+       $stderr.puts "No destination block device specified. WTF?"
+       exit 1
+     end
+     options[:apply] = '-'
+     options[:device] = ARGV[0]
+     run_apply(options)
+   else
+     if ARGV[0].nil?
+       $stderr.puts "ERROR: No snapshot specified. Exiting."
+       exit 1
+     end
+     options[:snapdev] = ARGV[0]
+
+     if options[:stdout] and options[:snapback]
+       $stderr.puts "--snapback cannot be used with --stdout"
+       exit 1
+     end
+
+     if options[:stdout].nil? and ARGV[1].nil?
+       $stderr.puts "No destination specified."
+       exit 1
+     end
+     if options[:stdout].nil?
+       # "host:dev" or plain "dev"; the reverse makes host nil when local
+       dev, host = ARGV[1].split(':', 2).reverse
+       options[:desthost] = host
+       options[:destdev] = dev
+     end
+
+     run_client(options)
+   end
+ end
+
+ # Apply a change stream (from a file, or stdin if the file name is '-') to
+ # a local block device, optionally saving the overwritten data to a
+ # snapback file.
+ def run_apply(opts)
+   snapfile = opts[:snapback] ? File.open(opts[:snapback], 'w') : nil
+   infile   = opts[:apply] == '-' ? $stdin : File.open(opts[:apply], 'r')
+   destdev  = opts[:device]
+
+   process_dumpdata(infile, destdev, snapfile)
+ ensure
+   snapfile.close unless snapfile.nil?
+   infile.close unless infile.nil? or infile == $stdin
+ end
+
+ def process_dumpdata(instream, destdev, snapback = nil)
+   handshake = instream.readline.chomp
+   unless handshake == PROTOCOL_VERSION
+     $stderr.puts "Handshake failed; protocol mismatch? (saw '#{handshake}', expected '#{PROTOCOL_VERSION}')"
+     exit 1
+   end
+
+   File.open(destdev, 'w+') do |dest|
+     # Each record is a 12-byte header (8-byte byte offset, 4-byte chunk
+     # length, both network byte order) followed by that many bytes of data.
+     while header = instream.read(12)
+       offset, chunksize = header.unpack("QN")
+       offset = ntohq(offset)
+
+       begin
+         dest.seek offset
+       rescue Errno::EINVAL
+         # In certain rare circumstances, we want to transfer a block
+         # device where the destination is smaller than the source (DRBD
+         # volumes is the canonical use case). So, we ignore attempts to
+         # seek past the end of the device. Yes, this may lose data, but
+         # if you didn't notice that your dd shit itself, it's unlikely
+         # you're going to notice now.
+
+         # Skip the chunk of data
+         instream.read(chunksize)
+         # Go to the next chunk
+         next
+       end
+
+       if snapback
+         # Save the data we're about to overwrite, in the same record
+         # format, then seek back to where it was read from.
+         snapback.write(header)
+         snapback.write dest.read(chunksize)
+         dest.seek offset
+       end
+       dest.write instream.read(chunksize)
+     end
+   end
+ end
+
+ def run_client(opts)
+   snapshot = opts[:snapdev]
+   desthost = opts[:desthost]
+   destdev  = opts[:destdev]
+   outfd    = nil
+
+   vg, lv = parse_snapshot_name(snapshot)
+
+   vgconfig = LVM::VGConfig.new(vg)
+
+   if vgconfig.logical_volumes[lv].nil?
+     $stderr.puts "#{snapshot}: Could not find logical volume"
+     exit 1
+   end
+
+   snap = if vgconfig.logical_volumes[lv].snapshot?
+     if vgconfig.logical_volumes[lv].thin?
+       LVM::ThinSnapshot.new(vg, lv)
+     else
+       LVM::Snapshot.new(vg, lv)
+     end
+   else
+     $stderr.puts "#{snapshot}: Not a snapshot device"
+     exit 1
+   end
+
+   $stderr.puts "Origin device: #{vg}/#{snap.origin}" if opts[:verbose]
+
+   # Since, in principle, we're not supposed to be reading from snapshot
+   # devices directly, the kernel makes no attempt to make the device's read
+   # cache stay in sync with the actual state of the device. As a result,
+   # we have to manually drop all caches before the data looks consistent.
+   # PERFORMANCE WIN!
+   File.open("/proc/sys/vm/drop_caches", 'w') { |fd| fd.print "3" }
+
+   snapback = opts[:snapback] ? "--snapback #{opts[:snapback]}" : ''
+
+   if opts[:stdout]
+     outfd = $stdout
+   else
+     server_cmd = if desthost
+       "ssh #{desthost} lvmsync --apply - #{snapback} #{destdev}"
+     else
+       "lvmsync --apply - #{snapback} #{destdev}"
+     end
+
+     outfd = IO.popen(server_cmd, 'w')
+   end
+
+   outfd.puts PROTOCOL_VERSION
+
+   start_time = Time.now
+   xfer_count = 0
+   xfer_size  = 0
+   total_size = 0
+
+   # dm device nodes double any hyphens in the VG and LV names
+   originfile = "/dev/mapper/#{vg.gsub('-', '--')}-#{snap.origin.gsub('-', '--')}"
+   File.open(originfile, 'r') do |origindev|
+     snap.differences.each do |r|
+       xfer_count += 1
+       chunk_size = r.last - r.first + 1
+       xfer_size += chunk_size
+
+       $stderr.puts "Sending chunk #{r.to_s}..." if opts[:verbose]
+       $stderr.puts "Seeking to #{r.first} in #{originfile}" if opts[:verbose]
+
+       origindev.seek(r.first, IO::SEEK_SET)
+
+       outfd.print [htonq(r.first), chunk_size].pack("QN")
+       outfd.print origindev.read(chunk_size)
+
+       # Progress bar!
+       if xfer_count % 100 == 50
+         $stderr.printf "\e[2K\rSending chunk %i of %i, %.2fMB/s",
+                        xfer_count,
+                        snap.differences.length,
+                        xfer_size / (Time.now - start_time) / 1048576
+         $stderr.flush
+       end
+     end
+
+     origindev.seek(0, IO::SEEK_END)
+     total_size = origindev.tell
+   end
+
+   $stderr.printf "\rTransferred %i bytes in %.2f seconds\n",
+                  xfer_size, Time.now - start_time
+
+   $stderr.printf "You transferred your changes %.2fx faster than a full dd!\n",
+                  total_size.to_f / xfer_size
+ ensure
+   outfd.close unless outfd.nil? or outfd == $stdout
+ end
+
+ # Take a device name in any number of different formats and return a
+ # [VG, LV] pair. Raises ArgumentError if the name couldn't be parsed.
+ def parse_snapshot_name(origname)
+   case origname
+   when %r{^/dev/mapper/(.*[^-])-([^-].*)$} then
+     [$1, $2]
+   when %r{^/dev/([^/]+)/(.+)$} then
+     [$1, $2]
+   when %r{^([^/]+)/(.*)$} then
+     [$1, $2]
+   else
+     raise ArgumentError,
+           "Could not determine snapshot name and VG from #{origname.inspect}"
+   end
+ end
+
+ main