pxcbackup 0.0.1
- checksums.yaml +7 -0
- data/.editorconfig +9 -0
- data/.gitignore +22 -0
- data/LICENSE +21 -0
- data/README.md +76 -0
- data/bin/pxcbackup +10 -0
- data/lib/pxcbackup/application.rb +95 -0
- data/lib/pxcbackup/array.rb +11 -0
- data/lib/pxcbackup/backup.rb +80 -0
- data/lib/pxcbackup/backupper.rb +409 -0
- data/lib/pxcbackup/mysql.rb +50 -0
- data/lib/pxcbackup/path_resolver.rb +18 -0
- data/lib/pxcbackup/remote_repo.rb +39 -0
- data/lib/pxcbackup/repo.rb +42 -0
- data/lib/pxcbackup/version.rb +3 -0
- data/lib/pxcbackup.rb +2 -0
- data/pxcbackup.gemspec +18 -0
- metadata +60 -0
checksums.yaml
ADDED
```yaml
---
SHA1:
  metadata.gz: c6bd1e6689acf97d87e4c7afa3ee9096fb730ea1
  data.tar.gz: 182605c2e2341e9b453a667ad399441c27611235
SHA512:
  metadata.gz: 229b9f667f762529c9cd63e95a2f77bb79fa7e53fae577d4cde0fe2fdb5cfea711436981e11ce8da1655009abca45055640181a7234c53b32ca5aa8744a1277c
  data.tar.gz: 3ebb735a23741bbff57790c9900855ab51277c7d931bf7ce55ade393fc0e86e8e3178a2d47a9712533be8506dd81c90d015d4519e0f48a2634962306950677c4
```
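These checksums are hex digests (SHA1 and SHA512) of the two archives packed inside the `.gem` file. A minimal sketch of the digest scheme using Ruby's stdlib, where `'sample bytes'` stands in for an actual archive:

```ruby
require 'digest'

# Hex-digest some bytes the same way RubyGems checksums metadata.gz
# and data.tar.gz ('sample bytes' is a stand-in, not real archive data).
data = 'sample bytes'
sha1   = Digest::SHA1.hexdigest(data)
sha512 = Digest::SHA512.hexdigest(data)

puts sha1.length    # 40 hex characters (160 bits)
puts sha512.length  # 128 hex characters (512 bits)
```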
data/.editorconfig
ADDED
data/.gitignore
ADDED
```
*.gem
*.rbc
.bundle
.config
.yardoc
Gemfile.lock
InstalledFiles
_yardoc
coverage
doc/
lib/bundler/man
pkg
rdoc
spec/reports
test/tmp
test/version_tmp
tmp
*.bundle
*.so
*.o
*.a
mkmf.log
```
data/LICENSE
ADDED
```
The MIT License

Copyright (c) 2014 Robbert Klarenbeek

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
```
data/README.md
ADDED
# PXCBackup

PXCBackup is a database backup tool for [Percona XtraDB Cluster](http://www.percona.com/software/percona-xtradb-cluster) (PXC). It can also be used on related systems, such as a [MariaDB](https://mariadb.org) [Galera](http://galeracluster.com/products/) cluster that uses [XtraBackup](http://www.percona.com/software/percona-xtrabackup).

The `innobackupex` script provided by Percona makes it very easy to create backups. Restoring them, however, can become quite complicated: backups might need to be extracted, uncompressed and decrypted; they need to be prepared before restoring; incremental backups need to be applied on top of full backups; indexes might need to be rebuilt for compact backups; and so on. Backups are usually restored in stressful emergency situations, where each of these steps can slow you down considerably.

PXCBackup does all of this for you! As a bonus, it can sync backups to [Amazon S3](http://aws.amazon.com/s3/) and even restore straight from S3.

Since PXCBackup is meant for Galera clusters, it does a few additional things:

* It runs `innobackupex` with `--galera-info` and reconstructs `grastate.dat` when restoring a backup. This preserves the local node state, allowing new nodes to be added from a backup with just an IST!

* It turns on `wsrep_desync` before a backup, and turns it off again once `wsrep_local_recv_queue` is empty. The reason for this is twofold:
  * It prevents flow control from kicking in when the backup node takes a performance hit because of the increased disk load (similar to what happens on a donor node during an SST).
  * It makes `clustercheck` report this node as unavailable, which can be very useful to let your load balancer(s) skip this node during the backup. This behavior can be turned off by setting `available_when_donor` to `1` in `clustercheck`.

PXCBackup is primarily a server command line tool, which led to the following constraints:

* Support Ruby >= 1.8.7. Yes, 1.8.7 is EOL, but many cloud provider OS images still ship with it.
* Have no external gem dependencies: the tool should be completely stand-alone and only require certain command line tools.
* Shell out to command line tools instead; for example, it uses `mysql` and `s3cmd` rather than modules / gems.

## Installation

Simply install the gem:

```shell
$ gem install pxcbackup
```

Of course, you need to have PXC (or similar) running, which provides most of the required tools (`innobackupex`, `xtrabackup`, `xbstream`, `xbcrypt`).

To sync to Amazon S3, make sure you have [S3cmd](http://s3tools.org/s3cmd) installed and configured (`s3cmd --configure`, which creates the file `~/.s3cfg`).

## Usage

Just check the built-in command line help:

```shell
$ pxcbackup help
```

Aside from command line flags, you can specify additional options in `~/.pxcbackup`, or in another config file given with `-c`. Some commonly used settings are:

```yaml
backup_dir: /path/to/local/backups/
remote: s3://my-aws-bucket/
mysql_username: root
mysql_password:
compact: false
compress: true
encrypt: AES256
encrypt_key: <secret-key>
retention: 100
desync_wait: 30
threads: 4
memory: 1G
```

## Wishlist

* More complex rotation schemes
* Separate rotation scheme for remote
* Better error handling for shell commands
* Code documentation (RDoc?)
* Tests (RSpec?)
* Different remote providers

## Authors

* Robbert Klarenbeek, <robbertkl@renbeek.nl>

## License

PXCBackup is published under the [MIT License](http://www.opensource.org/licenses/mit-license.php).
data/lib/pxcbackup/application.rb
ADDED
```ruby
require 'optparse'
require 'time'
require 'yaml'

module PXCBackup
  class Application
    def initialize(argv)
      parse_options(argv)

      config = File.join(ENV['HOME'], '.pxcbackup')
      if @options[:config]
        config = @options[:config]
        raise 'cannot find given config file' unless File.file?(config)
      end
      if File.file?(config)
        config_options = YAML.load_file(config)
        config_options = config_options.inject({}) { |hash, (k, v)| hash[k.to_sym] = v; hash }
        @options = config_options.merge(@options)
      end
    end

    def run
      backupper = Backupper.new(@options)

      case @command
      when 'create'
        backupper.make_backup(@options)
      when 'list'
        backupper.list_backups
      when 'restore'
        time = @arguments.any? ? Time.parse(@arguments.first) : Time.now
        backupper.restore_backup(time, !!@options[:skip_confirmation])
      end
    end

    def parse_options(argv)
      @options ||= {}
      parser = OptionParser.new do |opt|
        opt.banner = "Usage: #{$0} COMMAND [OPTIONS]"
        opt.separator ''
        opt.separator 'Commands'
        opt.separator '    create             create a new backup'
        opt.separator '    help               show this help'
        opt.separator '    list               list available backups'
        opt.separator '    restore [time]     restore to a point in time'
        opt.separator ''
        opt.separator 'Options'

        opt.on('-c', '--config', '=CONFIG_FILE', 'config file to use instead of ~/.pxcbackup') do |config_file|
          @options[:config] = config_file
        end

        opt.on('-d', '--dir', '=BACKUP_DIR', 'local repository to store backups') do |backup_dir|
          @options[:backup_dir] = backup_dir
        end

        opt.on('-f', '--full', 'create a full backup') do
          @options[:type] = :full
        end

        opt.on('-i', '--incremental', 'create an incremental backup') do
          @options[:type] = :incremental
        end

        opt.on('-l', '--local', 'stay local, i.e. do not communicate with S3') do
          @options[:local] = true
        end

        opt.on('-r', '--remote', '=REMOTE_URI', 'remote URI to sync backups to, e.g. s3://my-aws-bucket/') do |remote|
          @options[:remote] = remote
        end

        opt.on('-v', '--verbose', 'verbose output') do
          @options[:verbose] = true
        end

        opt.on('-y', '--yes', 'skip confirmation on backup restore') do
          @options[:skip_confirmation] = true
        end
      end

      begin
        @command, *@arguments = parser.parse(argv)
        if @command == 'help'
          puts parser
          exit
        end
        raise 'no command given' if @command.to_s == ''
        raise "invalid command #{@command}" unless ['create', 'list', 'restore'].include?(@command)
      rescue => e
        abort "#{$0}: #{e.message}\n#{parser}"
      end
    end
  end
end
```
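The config handling in `Application#initialize` — YAML keys symbolized, then command line options taking precedence on merge — can be sketched in isolation. The YAML string below is a hypothetical `~/.pxcbackup`:

```ruby
require 'yaml'

# Hypothetical config file contents; YAML hands back string keys.
yaml = "backup_dir: /path/to/local/backups/\nretention: 100\nverbose: false\n"
config_options = YAML.load(yaml)
# Symbolize the keys, exactly as Application#initialize does.
config_options = config_options.inject({}) { |hash, (k, v)| hash[k.to_sym] = v; hash }

cli_options = { :verbose => true }            # as parsed from ARGV
options = config_options.merge(cli_options)   # CLI flags win on conflict

puts options[:backup_dir]  # "/path/to/local/backups/"
puts options[:verbose]     # true (CLI overrides the config file)
```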
data/lib/pxcbackup/backup.rb
ADDED
```ruby
module PXCBackup
  class Backup
    attr_reader :repo, :path

    def initialize(repo, path)
      @repo = repo
      @path = path
      raise 'invalid backup name' unless match
    end

    def self.regexp
      /\/(\d+)_(full|incr)\.(xbstream|tar)(\.xbcrypt)?$/
    end

    def ==(other)
      @path == other.path && @repo == other.repo
    end

    def <=>(other)
      compare = time <=> other.time
      compare = remote? ? -1 : 1 if compare == 0 && remote? != other.remote?
      compare
    end

    def to_s
      time.to_s
    end

    def time
      Time.at(match[:timestamp].to_i)
    end

    def type
      type = match[:type]
      type = 'incremental' if type == 'incr'
      type.to_sym
    end

    def stream
      match[:stream].to_sym
    end

    def encrypted?
      match[:encrypted]
    end

    def full?
      type == :full
    end

    def incremental?
      type == :incremental
    end

    def remote?
      @repo.is_a? RemoteRepo
    end

    def delete
      @repo.delete(self)
    end

    def stream_command
      @repo.stream_command(self)
    end

    private

    def match
      match = self.class.regexp.match(@path)
      return nil unless match
      {
        :timestamp => match[1],
        :type => match[2],
        :stream => match[3],
        :encrypted => !!match[4],
      }
    end
  end
end
```
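The backup naming scheme matched by `Backup.regexp` can be exercised standalone. The path below is a hypothetical encrypted incremental backup:

```ruby
# Same pattern as Backup.regexp: <unix timestamp>_<full|incr>.<stream>[.xbcrypt]
regexp = /\/(\d+)_(full|incr)\.(xbstream|tar)(\.xbcrypt)?$/
path = '/backups/1399543200_incr.xbstream.xbcrypt'

match = regexp.match(path)
puts Time.at(match[1].to_i).utc  # backup time, from the unix timestamp
puts match[2]                    # "incr"
puts match[3]                    # "xbstream"
puts !!match[4]                  # true => the .xbcrypt suffix marks it encrypted
```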
data/lib/pxcbackup/backupper.rb
ADDED
```ruby
require 'fileutils'
require 'open3'
require 'tmpdir'

require 'pxcbackup/array'
require 'pxcbackup/backup'
require 'pxcbackup/mysql'
require 'pxcbackup/path_resolver'
require 'pxcbackup/remote_repo'
require 'pxcbackup/repo'

module PXCBackup
  class Backupper
    def initialize(options)
      @verbose = options[:verbose] || false
      @threads = options[:threads] || 1
      @memory = options[:memory] || '100M'
      @throttle = options[:throttle] || nil
      @encrypt = options[:encrypt] || nil
      @encrypt_key = options[:encrypt_key] || nil

      @which = PathResolver.new(options)

      local_repo_path = options[:backup_dir]
      @local_repo = local_repo_path ? Repo.new(local_repo_path, options) : nil

      remote_repo_path = options[:remote]
      @remote_repo = remote_repo_path && !options[:local] ? RemoteRepo.new(remote_repo_path, options) : nil

      @mysql = MySQL.new(options)
    end

    def make_backup(options = {})
      type = options[:type] || :full
      stream = options[:stream] || :xbstream
      compress = options[:compress] || false
      compact = options[:compact] || false
      desync_wait = options[:desync_wait] || 60
      retention = options[:retention] || 100

      raise 'cannot find backup dir' unless @local_repo && File.directory?(@local_repo.path)
      raise 'cannot enable encryption without encryption key' if @encrypt && !@encrypt_key

      arguments = [
        @mysql.auth,
        '--no-timestamp',
        "--extra-lsndir=#{@local_repo.path}",
        "--stream=#{stream.to_s}",
        '--galera-info',
      ]

      if compress
        arguments << '--compress'
      end

      if compact
        arguments << '--compact'
      end

      if @encrypt
        arguments << "--encrypt=#{@encrypt.shellescape}"
        arguments << "--encrypt-key=#{@encrypt_key.shellescape}"
      end

      filename = "#{Time.now.to_i}"
      if type == :incremental
        last_info = read_backup_info(File.join(@local_repo.path, 'xtrabackup_checkpoints'))
        arguments << '--incremental'
        arguments << "--incremental-lsn=#{last_info[:to_lsn]}"
        filename << '_incr'
      else
        filename << '_full'
      end
      filename << ".#{stream.to_s}"
      filename << '.xbcrypt' if @encrypt

      desync_enable(desync_wait)

      Dir.mktmpdir('pxcbackup-') do |dir|
        arguments << dir.shellescape
        log_action "Creating backup #{filename}" do
          innobackupex(arguments, File.join(@local_repo.path, filename))
        end
      end

      desync_disable
      rotate(retention)

      @remote_repo.sync(@local_repo) if @remote_repo
    end

    def restore_backup(time, skip_confirmation = false)
      incremental_backups = []
      all_backups.reverse_each do |backup|
        incremental_backups.unshift(backup) if backup.time <= time
        break if incremental_backups.any? && backup.full?
      end
      raise "cannot find any backup before #{time}" if incremental_backups.empty?
      raise "cannot find a full backup before #{time}" unless incremental_backups.first.full?
      restore_time = incremental_backups.last.time

      full_backup = incremental_backups.shift

      log "[1/#{incremental_backups.size + 1}] Processing #{full_backup.type.to_s} backup from #{full_backup}"
      with_extracted_backup(full_backup) do |full_backup_path, full_backup_info|
        raise 'unexpected backup type' unless full_backup_info[:backup_type] == full_backup.type
        raise 'unexpected start LSN' unless full_backup_info[:from_lsn] == 0

        compact = full_backup_info[:compact]

        if full_backup_info[:compress]
          log_action '  Decompressing' do
            innobackupex(['--decompress', full_backup_path.shellescape])
          end
        end

        if incremental_backups.any?
          log_action "  Preparing base backup (LSN #{full_backup_info[:to_lsn]})" do
            innobackupex(['--apply-log', '--redo-only', full_backup_path.shellescape])
          end

          current_lsn = full_backup_info[:to_lsn]

          index = 2
          incremental_backups.each do |incremental_backup|
            log "[#{index}/#{incremental_backups.size + 1}] Processing #{incremental_backup.type.to_s} backup from #{incremental_backup}"
            index += 1
            with_extracted_backup(incremental_backup) do |incremental_backup_path, incremental_backup_info|
              raise 'unexpected backup type' unless incremental_backup_info[:backup_type] == incremental_backup.type
              raise 'unexpected start LSN' unless incremental_backup_info[:from_lsn] == current_lsn

              compact ||= incremental_backup_info[:compact]

              if incremental_backup_info[:compress]
                log_action '  Decompressing' do
                  innobackupex(['--decompress', incremental_backup_path.shellescape])
                end
              end

              log_action "  Applying increment (LSN #{incremental_backup_info[:from_lsn]} -> #{incremental_backup_info[:to_lsn]})" do
                innobackupex(['--apply-log', '--redo-only', full_backup_path.shellescape, "--incremental-dir=#{incremental_backup_path.shellescape}"])
              end

              current_lsn = incremental_backup_info[:to_lsn]
            end
          end
        end

        action = 'Final prepare'
        arguments = [
          '--apply-log',
        ]

        if compact
          action << ' + rebuild indexes'
          arguments << '--rebuild-indexes'
        end

        log_action "#{action}" do
          arguments << full_backup_path.shellescape
          innobackupex(arguments)
        end

        log_action 'Attempting to restore Galera info' do
          restore_galera_info(full_backup_path)
        end

        mysql_datadir = @mysql.datadir.chomp('/')
        mysql_datadir_old = mysql_datadir + '_YYYYMMDDhhmmss'

        unless skip_confirmation
          puts
          puts '  BACKUP IS NOW READY TO BE RESTORED'
          puts "  BACKUP TIMESTAMP: #{restore_time}"
          puts '  PLEASE CONFIRM THIS ACTION'
          puts
          puts '  This will:'
          puts '  - stop the MySQL server'
          puts "  - move the current datadir to #{mysql_datadir_old}"
          puts "  - restore the backup to #{mysql_datadir}"
          puts '  - start the MySQL server'
          puts
          puts '  Afterwards you will have to:'
          puts '  - confirm everything is working and synced correctly'
          puts '  - manually create a new full backup (to re-allow incremental backups)'
          puts
          puts '  If MySQL server cannot be started, this might be because this is the'
          puts '  only (remaining) Galera node. If so, manually bootstrap the cluster:'
          puts '  # service mysql bootstrap-pxc'
          puts
          print '  Please type "yes" to continue: '
          confirmation = STDIN.gets.chomp
          puts
          raise 'did not confirm restore' unless confirmation == 'yes'
        end

        log_action 'Stopping MySQL server' do
          system("#{@which.service.shellescape} mysql stop")
        end

        stat = File.stat(mysql_datadir)
        uid = stat.uid
        gid = stat.gid

        mysql_datadir_old = mysql_datadir + '_' + Time.now.strftime('%Y%m%d%H%M%S')
        log_action "Moving current datadir to #{mysql_datadir_old}" do
          File.rename(mysql_datadir, mysql_datadir_old)
        end

        log_action "Restoring backup to #{mysql_datadir}" do
          Dir.mkdir(mysql_datadir)
          innobackupex(['--move-back', full_backup_path.shellescape])
        end

        log_action "Chowning #{mysql_datadir}" do
          FileUtils.chown_R(uid, gid, mysql_datadir)
        end

        if @local_repo
          log_action 'Removing last backup info' do
            File.delete(File.join(@local_repo.path, 'xtrabackup_checkpoints'))
          end
        end

        log_action 'Starting MySQL server' do
          system("#{@which.service.shellescape} mysql start")
        end
      end
    end

    def list_backups
      all_backups.each do |backup|
        if @verbose
          puts "#{backup} - #{backup.type.to_s[0..3]} (#{backup.remote? ? 'remote' : 'local'})"
        else
          puts backup
        end
      end
    end

    private

    def all_backups
      backups = []
      backups += @local_repo.backups if @local_repo
      backups += @remote_repo.backups if @remote_repo
      backups = backups.uniq_by { |backup| backup.time }
      backups.sort
    end

    def log(text)
      return unless @verbose
      previous_stdout = $stdout
      $stdout = STDOUT
      puts text if @verbose
      $stdout = previous_stdout
    end

    def log_action(text)
      return yield unless @verbose

      begin
        print "#{text}... "
        previous_stdout, previous_stderr = $stdout, $stderr
        begin
          $stdout = $stderr = File.new('/dev/null', 'w')
          t1 = Time.now
          yield
          t2 = Time.now
        ensure
          $stdout, $stderr = previous_stdout, previous_stderr
        end
      rescue => e
        puts 'fail'
        raise e
      else
        puts 'done (%.1fs)' % (t2 - t1)
      end
    end

    def desync_enable(wait = 60)
      log "Setting wsrep_desync=ON and waiting for #{wait} seconds"
      @mysql.set_variable('wsrep_desync', 'ON')
      sleep(wait)
    end

    def desync_disable
      log 'Waiting until wsrep_local_recv_queue is empty'
      sleep(2) until @mysql.get_status('wsrep_local_recv_queue') == '0'
      log 'Setting wsrep_desync=OFF'
      @mysql.set_variable('wsrep_desync', 'OFF')
    end

    def rotate(retention)
      log 'Checking if we have old backups to remove'
      @local_repo.backups.each do |backup|
        days = (Time.now - backup.time) / 86400
        break if days < retention && backup.full?
        log "Deleting backup #{backup}"
        backup.delete
      end
    end

    def innobackupex(arguments, output_file = nil)
      command = @which.innobackupex.shellescape
      arguments += [
        "--ibbackup=#{@which.xtrabackup.shellescape}",
        "--parallel=#{@threads}",
        "--compress-threads=#{@threads}",
        "--rebuild-threads=#{@threads}",
        "--use-memory=#{@memory}",
        "--tmpdir=#{Dir.tmpdir.shellescape}",
      ]
      arguments << "--throttle=#{@throttle.shellescape}" if @throttle

      command << ' ' + arguments.join(' ')
      command << " > #{output_file.shellescape}" if output_file
      log = Open3.popen3(command) do |stdin, stdout, stderr|
        stderr.read
      end
      exit_status = $?
      raise 'something went wrong with innobackupex' unless exit_status.success? && log.lines.to_a.last.match(/: completed OK!$/)
    end

    def read_backup_info(file)
      raise "cannot open #{file}" unless File.file?(file)
      result = {}
      File.open(file, 'r') do |file|
        file.each_line do |line|
          key, value = line.chomp.split(/\s*=\s*/, 2)
          case key
          when 'backup_type'
            value = 'full' if value == 'full-backuped'
            value = value.to_sym
          when /_lsn$/
            value = value.to_i
          when 'compact'
            value = (value == '1')
          end
          result[key.to_sym] = value
        end
      end
      result
    end

    def with_extracted_backup(backup)
      Dir.mktmpdir('pxcbackup-') do |dir|
        command = backup.stream_command
        action = 'Extracting'
        if backup.encrypted?
          raise 'need encryption algorithm and key to decrypt this backup' unless @encrypt && @encrypt_key
          command << " | #{@which.xbcrypt.shellescape} -d --encrypt-algo=#{@encrypt.shellescape} --encrypt-key=#{@encrypt_key.shellescape}"
          action << ' + decrypting'
        end
        command <<
          case backup.stream
          when :xbstream
            " | #{@which.xbstream.shellescape} -x -C #{dir.shellescape}"
          when :tar
            " | #{@which.tar.shellescape} -ixf - -C #{dir.shellescape}"
          end
        log_action "  #{action}" do
          system(command)
        end

        info = read_backup_info(File.join(dir, 'xtrabackup_checkpoints'))
        info[:compress] = Dir.glob(File.join(dir, '**', '*.qp')).any?

        yield(dir, info)
      end
    end

    def restore_galera_info(dir)
      galera_info_file = File.join(dir, 'xtrabackup_galera_info')
      return unless File.file?(galera_info_file)
      uuid, seqno = nil
      File.open(galera_info_file, 'r') do |file|
        uuid, seqno = file.gets.chomp.split(':')
      end

      version = @mysql.get_status('wsrep_provider_version')
      if version
        version = version.split('(').first
      else
        current_grastate_file = File.join(@mysql.datadir, 'grastate.dat')
        if File.file?(current_grastate_file)
          File.open(current_grastate_file, 'r') do |file|
            file.each_line do |line|
              match = line.match(/^version:\s+(.*)$/)
              if match
                version = match[1]
                break
              end
            end
          end
        end
      end
      return unless version

      File.open(File.join(dir, 'grastate.dat'), 'w') do |file|
        file.write("# GALERA saved state\n")
        file.write("version: #{version}\n")
        file.write("uuid: #{uuid}\n")
        file.write("seqno: #{seqno}\n")
        file.write("cert_index:\n")
      end
    end
  end
end
```
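The backward scan in `restore_backup` — collect backups at or before the target time until the underlying full backup is reached — is the trickiest part of the restore path. A self-contained sketch with dummy entries (all times and types below are made up):

```ruby
# Stand-in for Backup: just a time and a type.
Entry = Struct.new(:time, :type) do
  def full?; type == :full; end
end

# Hypothetical timeline, sorted ascending: full, incr, incr, full, incr.
backups = [
  Entry.new(Time.at(100), :full),
  Entry.new(Time.at(200), :incremental),
  Entry.new(Time.at(300), :incremental),
  Entry.new(Time.at(400), :full),
  Entry.new(Time.at(500), :incremental),
]

# Same walk as Backupper#restore_backup: scan backwards from the target
# time, collecting candidates until a full backup anchors the chain.
target = Time.at(350)
chain = []
backups.reverse_each do |backup|
  chain.unshift(backup) if backup.time <= target
  break if chain.any? && backup.full?
end
raise "cannot find a full backup before #{target}" unless chain.first && chain.first.full?

puts chain.map { |b| b.time.to_i }.inspect  # [100, 200, 300]
```

The full backup at 400 is skipped because it is newer than the target, so the chain is anchored on the full backup at 100 plus the two increments before the target.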
data/lib/pxcbackup/mysql.rb
ADDED
```ruby
require 'shellwords'

require 'pxcbackup/path_resolver'

module PXCBackup
  class MySQL
    attr_reader :datadir

    def initialize(options = {})
      @which = PathResolver.new(options)
      @username = options[:mysql_user] || 'root'
      @password = options[:mysql_pass] || ''
      @datadir = options[:mysql_datadir] || get_variable('datadir') || '/var/lib/mysql'
      raise 'could not find mysql data dir' unless File.directory?(@datadir)
    end

    def auth
      "--user=#{@username.shellescape} --password=#{@password.shellescape}"
    end

    def exec(query)
      lines = `echo #{query.shellescape} | #{@which.mysql.shellescape} #{auth} 2> /dev/null`.lines.to_a
      return nil if lines.empty?

      keys = lines.shift.chomp.split("\t")
      rows = []
      lines.each do |line|
        values = line.chomp.split("\t")
        row = {}
        keys.each_with_index do |key, index|
          row[key] = values[index]
        end
        rows << row
      end
      rows
    end

    def get_variable(variable, scope = 'GLOBAL')
      result = exec("SHOW #{scope} VARIABLES LIKE '#{variable}'")
      result ? result.first['Value'] : nil
    end

    def set_variable(variable, value, scope = 'GLOBAL')
      exec("SET #{scope} #{variable}=#{value}")
    end

    def get_status(variable)
      result = exec("SHOW STATUS LIKE '#{variable}'")
      result ? result.first['Value'] : nil
    end
  end
end
```
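`MySQL#exec` parses the mysql client's tab-separated output (a header row, then value rows) into an array of row hashes. The same parsing applied to a hypothetical `SHOW STATUS` result:

```ruby
# Hypothetical mysql client output for SHOW STATUS LIKE 'wsrep_local_recv_queue'.
lines = "Variable_name\tValue\nwsrep_local_recv_queue\t0\n".lines.to_a

# First line is the header; remaining lines become hashes keyed by it.
keys = lines.shift.chomp.split("\t")
rows = lines.map do |line|
  values = line.chomp.split("\t")
  row = {}
  keys.each_with_index { |key, index| row[key] = values[index] }
  row
end

puts rows.first['Value']  # "0"
```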
data/lib/pxcbackup/path_resolver.rb
ADDED
```ruby
require 'shellwords'

module PXCBackup
  class PathResolver
    def initialize(options = {})
      @options = options
      @paths = {}
    end

    def method_missing(name, *arguments)
      unless @paths[name]
        @paths[name] = @options["#{name.to_s}_path".to_sym] || `which #{name.to_s.shellescape}`.strip
        raise "cannot find path for #{name.to_s}" unless File.file?(@paths[name])
      end
      @paths[name]
    end
  end
end
```
data/lib/pxcbackup/remote_repo.rb
ADDED
```ruby
require 'shellwords'

require 'pxcbackup/backup'
require 'pxcbackup/repo'

module PXCBackup
  class RemoteRepo < Repo
    def initialize(path, options = {})
      super(path, options)
      @which.s3cmd
    end

    def backups
      backups = []
      `#{@which.s3cmd.shellescape} ls #{@path.shellescape}`.lines.to_a.each do |line|
        path = line.chomp.split[3]
        next unless Backup.regexp.match(path)
        backups << Backup.new(self, path)
      end
      backups.sort
    end

    def sync(local_repo)
      source = File.join(local_repo.path, '/')
      target = File.join(path, '/')
      system("#{@which.s3cmd.shellescape} sync --no-progress --delete-removed #{source.shellescape} #{target.shellescape} > /dev/null")
    end

    def delete(backup)
      verify(backup)
      system("#{@which.s3cmd.shellescape} del #{backup.path.shellescape} > /dev/null")
    end

    def stream_command(backup)
      verify(backup)
      "#{@which.s3cmd.shellescape} get #{backup.path.shellescape} -"
    end
  end
end
```
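`RemoteRepo#backups` relies on the object URI being the fourth whitespace-separated column of `s3cmd ls` output. The sample line below is illustrative:

```ruby
# Hypothetical `s3cmd ls` line: date, time, size, then the object URI.
line = "2014-05-08 12:00   1048576   s3://my-aws-bucket/1399543200_full.xbstream\n"

# split with no arguments collapses runs of whitespace; column 3 is the URI.
path = line.chomp.split[3]
puts path  # "s3://my-aws-bucket/1399543200_full.xbstream"
```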
data/lib/pxcbackup/repo.rb
ADDED
```ruby
require 'shellwords'

require 'pxcbackup/backup'
require 'pxcbackup/path_resolver'

module PXCBackup
  class Repo
    attr_reader :path

    def initialize(path, options = {})
      @path = path
      @which = PathResolver.new(options)
    end

    def backups
      backups = []
      Dir.foreach(@path) do |file|
        path = File.join(@path, file)
        next unless File.file?(path)
        next unless Backup.regexp.match(path)
        backups << Backup.new(self, path)
      end
      backups.sort
    end

    def delete(backup)
      verify(backup)
      system("#{@which.rm.shellescape} #{backup.path.shellescape}")
    end

    def stream_command(backup)
      verify(backup)
      "#{@which.cat.shellescape} #{backup.path.shellescape}"
    end

    private

    def verify(backup)
      raise 'backup does not belong to this repo' if backup.repo != self
    end
  end
end
```
data/lib/pxcbackup.rb
ADDED
data/pxcbackup.gemspec
ADDED
```ruby
# coding: utf-8
lib = File.expand_path('../lib', __FILE__)
$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
require 'pxcbackup/version'

Gem::Specification.new do |spec|
  spec.name        = 'pxcbackup'
  spec.version     = PXCBackup::VERSION
  spec.author      = 'Robbert Klarenbeek'
  spec.email       = 'robbertkl@renbeek.nl'
  spec.summary     = 'Backup tool for Percona XtraDB Cluster'
  spec.description = spec.summary
  spec.homepage    = 'https://github.com/robbertkl/pxcbackup'
  spec.license     = 'MIT'

  spec.files       = `git ls-files -z`.split("\x0")
  spec.executables = spec.files.grep(%r{^bin/}) { |f| File.basename(f) }
end
```
metadata
ADDED
```yaml
--- !ruby/object:Gem::Specification
name: pxcbackup
version: !ruby/object:Gem::Version
  version: 0.0.1
platform: ruby
authors:
- Robbert Klarenbeek
autorequire:
bindir: bin
cert_chain: []
date: 2014-05-08 00:00:00.000000000 Z
dependencies: []
description: Backup tool for Percona XtraDB Cluster
email: robbertkl@renbeek.nl
executables:
- pxcbackup
extensions: []
extra_rdoc_files: []
files:
- ".editorconfig"
- ".gitignore"
- LICENSE
- README.md
- bin/pxcbackup
- lib/pxcbackup.rb
- lib/pxcbackup/application.rb
- lib/pxcbackup/array.rb
- lib/pxcbackup/backup.rb
- lib/pxcbackup/backupper.rb
- lib/pxcbackup/mysql.rb
- lib/pxcbackup/path_resolver.rb
- lib/pxcbackup/remote_repo.rb
- lib/pxcbackup/repo.rb
- lib/pxcbackup/version.rb
- pxcbackup.gemspec
homepage: https://github.com/robbertkl/pxcbackup
licenses:
- MIT
metadata: {}
post_install_message:
rdoc_options: []
require_paths:
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
requirements: []
rubyforge_project:
rubygems_version: 2.2.2
signing_key:
specification_version: 4
summary: Backup tool for Percona XtraDB Cluster
test_files: []
```