jetpants 0.8.0 → 0.8.2

Files changed (41)
  1. checksums.yaml +7 -0
  2. data/README.rdoc +4 -9
  3. data/bin/jetpants +7 -6
  4. data/doc/capacity_plan.rdoc +77 -0
  5. data/doc/commands.rdoc +1 -1
  6. data/doc/jetpants_collins.rdoc +2 -1
  7. data/doc/online_schema_change.rdoc +45 -0
  8. data/doc/plugins.rdoc +7 -1
  9. data/doc/requirements.rdoc +1 -1
  10. data/doc/upgrade_helper.rdoc +68 -0
  11. data/lib/jetpants/db/client.rb +2 -1
  12. data/lib/jetpants/db/import_export.rb +12 -3
  13. data/lib/jetpants/db/replication.rb +6 -2
  14. data/lib/jetpants/db/schema.rb +40 -0
  15. data/lib/jetpants/db/server.rb +2 -2
  16. data/lib/jetpants/host.rb +12 -1
  17. data/lib/jetpants/pool.rb +41 -0
  18. data/lib/jetpants/shard.rb +201 -124
  19. data/lib/jetpants/table.rb +80 -10
  20. data/plugins/capacity_plan/capacity_plan.rb +353 -0
  21. data/plugins/capacity_plan/commandsuite.rb +19 -0
  22. data/plugins/capacity_plan/monkeypatch.rb +20 -0
  23. data/plugins/jetpants_collins/db.rb +45 -6
  24. data/plugins/jetpants_collins/jetpants_collins.rb +32 -21
  25. data/plugins/jetpants_collins/pool.rb +22 -1
  26. data/plugins/jetpants_collins/shard.rb +9 -2
  27. data/plugins/jetpants_collins/topology.rb +8 -9
  28. data/plugins/online_schema_change/commandsuite.rb +56 -0
  29. data/plugins/online_schema_change/db.rb +33 -0
  30. data/plugins/online_schema_change/online_schema_change.rb +5 -0
  31. data/plugins/online_schema_change/pool.rb +105 -0
  32. data/plugins/online_schema_change/topology.rb +56 -0
  33. data/plugins/simple_tracker/shard.rb +1 -1
  34. data/plugins/upgrade_helper/commandsuite.rb +212 -0
  35. data/plugins/upgrade_helper/db.rb +78 -0
  36. data/plugins/upgrade_helper/host.rb +22 -0
  37. data/plugins/upgrade_helper/pool.rb +259 -0
  38. data/plugins/upgrade_helper/shard.rb +61 -0
  39. data/plugins/upgrade_helper/upgrade_helper.rb +21 -0
  40. data/scripts/global_rowcount.rb +75 -0
  41. metadata +28 -15
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: 942f23ed6af75e4ee9536e0833bdb5ff733ee947
+   data.tar.gz: 9938e5eccd30352d14a8871ea7aa28513413d100
+ SHA512:
+   metadata.gz: 42ab57cd08c826101ed1c3416e50bd6f50c48f00cbd8120fd6c6db521a23e0e66f66d7b8f5fcf5041d757f1531dcdb67cb2c273efdc8354b5df9e15284518b69
+   data.tar.gz: ed6db33c0b814c284e59e5528c8e456dbe699b4657b5dc25ec4bcbe77eaf5202eee29396a08f69ae1f79520eb87d8e98d7351dc6ce5afb5c83813c4d52d25b8e
@@ -8,16 +8,11 @@
 
   == MOTIVATION:
 
 - \Jetpants was created by {Tumblr}[http://www.tumblr.com/] to help manage our database infrastructure. It handles automation tasks for our entire database topology, which as of October 2012 consists of approximately:
 - * 200 dedicated database servers
 - * 5 global (unsharded) functional pools
 - * 58 shard pools
 - * 28 terabytes total of unique relational data on masters
 - * 100 billion total unique relational rows on masters
 + \Jetpants was created by {Tumblr}[http://www.tumblr.com/] to help manage our database infrastructure. It handles automation tasks for our entire database topology, which as of May 2013 consists of over 215 dedicated database servers and nearly 200 billion total distinct relational rows.
 
 - One of the primary requirements for \Jetpants was speed. On our hardware, <b>\Jetpants can divide a 750GB, billion-row shard in half in about six hours</b> -- or even faster if you're dividing into thirds or fourths. It can also <b>clone slaves at line speed on gigabit ethernet</b>, including to multiple destinations at once, using a novel "chained copy" approach.
 + One of the primary requirements for \Jetpants was speed. On our hardware, <b>\Jetpants can divide a 750GB, billion-row shard in half in about six hours</b> -- or even faster if you're dividing into thirds or fourths.
 
 - For more background on the initial motivations behind \Jetpants, please see {Evan Elias's presentation at Percona Live NYC 2012}[https://github.com/tumblr/jetpants/blob/master/doc/PerconaLiveNYC2012Presentation.pdf?raw=true].
 + For more background on the motivations behind \Jetpants, please see {Evan Elias's presentation at Percona Live 2013}[https://github.com/tumblr/jetpants/blob/master/doc/PerconaLive2013Presentation.pdf?raw=true].
 
   == COMMAND SUITE FEATURES:
 
@@ -76,7 +71,7 @@ If you have a question that isn't covered here, please feel free to email the au
 
   == CREDITS:
 
 - * <b>Evan Elias</b>: Lead developer
 + * <b>Evan Elias</b>: Creator and developer
   * <b>Dallas Marlow</b>: Developer
   * <b>Bob Patterson Jr</b>: Developer
   * <b>Tom Christ</b>: Developer
@@ -286,7 +286,7 @@ module Jetpants
   end
   raise "Aborting" unless ask('Please type YES in all capital letters to confirm removing node from its pool: ') == 'YES'
   node.pool.remove_slave!(node)
 - node.revoke_all_access!
 + node.revoke_all_access! if node.running? && node.available?
   end
 
 
@@ -367,10 +367,9 @@ module Jetpants
   end
   children = supply_ranges.count
   raise "Supplied range does not cover parent completely!" if supply_ranges.first[0] != shard_min.to_i || supply_ranges.last[1] != shard_max.to_i
 - s.init_children(children, supply_ranges)
 - s.split!
 + s.split!(children, supply_ranges)
   else
 - children = options[:count] || ask('Optionally enter how many children to split into [default=2]: ')
 + children = options[:count] || ask('Optionally enter how many children to split into [default=2]: ')
   children = 2 if children == ''
   s.split!(children.to_i)
   end
@@ -450,7 +449,7 @@ module Jetpants
   raise "Specified cutover ID is too low!" unless cutover_id > last_shard.min_id
 
   # Ensure enough spare nodes are available before beginning.
 - # We supply the *previous* last shard as context for counting/claiming spares
 + # We supply the *previous* last shard as context for counting spares
   # because the new last shard can't be created yet (chicken-and-egg problem -- master
   # must exist before we create the pool). The assumption is the hardware spec
   # of the new last shard and previous last shard will be the same.
@@ -472,7 +471,9 @@ module Jetpants
   new_last_shard_master = Jetpants.topology.claim_spare(role: :master, like: last_shard_master)
   new_last_shard_master.disable_read_only! if new_last_shard_master.running?
   if Jetpants.standby_slaves_per_pool > 0
 - new_last_shard_slaves = Jetpants.topology.claim_spares(Jetpants.standby_slaves_per_pool, role: :standby_slave, like: last_shard_master)
 + # Verify spare count again, now that we can actually supply the new master as the :like context
 + raise "Not enough standby_slave role spare machines!" unless Jetpants.topology.count_spares(role: :standby_slave, like: new_last_shard_master) >= Jetpants.standby_slaves_per_pool
 + new_last_shard_slaves = Jetpants.topology.claim_spares(Jetpants.standby_slaves_per_pool, role: :standby_slave, like: new_last_shard_master)
   new_last_shard_slaves.each do |x|
   x.change_master_to new_last_shard_master
   x.resume_replication
@@ -0,0 +1,77 @@
+ = capacity_plan
+
+ == OVERVIEW:
+
+ This \Jetpants plugin gathers and processes space-usage data from your machines and then creates a report with the categories 'Usage and time left', 'Day over day usage', 'Data outliers', and 'Hardware status'.
+
+ == CONFIGURATION:
+
+ Configuration for this plugin takes several steps.
+
+ First, run the following CREATE TABLE statement on the database where you would like to store your capacity data:
+
+   CREATE TABLE `storage` (
+     `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
+     `timestamp` int(11) NOT NULL,
+     `pool` varchar(255) NOT NULL,
+     `total` BIGINT UNSIGNED NOT NULL,
+     `used` BIGINT UNSIGNED NOT NULL,
+     `available` BIGINT UNSIGNED NOT NULL,
+     `db_sizes` BIGINT UNSIGNED NOT NULL,
+     PRIMARY KEY (`id`),
+     INDEX (`timestamp`)
+   ) ENGINE=InnoDB;
+
+ Next, fill out the \Jetpants configuration file (either <tt>/etc/jetpants.yaml</tt> or <tt>~/.jetpants.yaml</tt>). For example, your configuration might look like this:
+
+   # ... rest of Jetpants config here
+
+   plugins:
+     capacity_plan:
+       critical_mount: 0.85
+       warning_mount: 0.80
+       pool_name: platform
+       user: jetpants_cap
+       schema: capacity_plan
+       pass: xxxxxxxxxxxxxxx
+     # ... other plugins configured here
+
+ critical_mount:: the fraction of disk space used (e.g. 0.85 = 85%) at which a machine will be marked as critical
+
+ warning_mount:: the fraction of disk space used at which a machine will be marked as warning
+
+ pool_name:: pool you will be writing to for the historical data
+
+ user:: user to connect to MySQL with for the historical data
+
+ schema:: table name for the historical data
+
+ pass:: password for that user
+
+
+ Next, create a cron job to capture the historical data:
+
+   0 * * * * /your_bin_path/jetpants capacity_snapshot 2>&1 > /dev/null
+
+ Then, if you would like the report emailed to you every day, create a cron job for that as well:
+
+   0 10 * * * /your_bin_path/jetpants capacity_plan --email=your_email@example.com 2>&1 > /dev/null
+
+ If you want the hardware stats section of the email, you must implement Jetpants.topology.machine_status_counts so that it returns a hash, which is used to build that part of the report (see the sketch below).
+
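As an illustrative aside (not part of the gem's files above), a minimal sketch of such a hook might look like the following. The hash keys shown are assumptions, since the exact format the report expects is not documented in this diff:

  # Hypothetical custom plugin file, loaded via the "plugins" section of jetpants.yaml.
  module Jetpants
    class Topology
      # Counts of machines by state, shown in the report's "Hardware status" section.
      # The key names and values here are illustrative only.
      def machine_status_counts
        {
          'allocated'   => 250,  # hypothetical numbers -- query your asset tracker instead
          'provisioned' => 40,
          'maintenance' => 3,
        }
      end
    end
  end
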
+ == ASSUMPTIONS AND REQUIREMENTS:
+
+ Use of this plugin assumes that you are using \Collins and the jetpants_collins plugin.
+
+ You should also have the pony gem installed.
+
+ == USAGE:
+
+ To run the capacity plan:
+
+   jetpants capacity_plan
+
+ To capture a one-off snapshot of your data usage:
+
+   jetpants capacity_snapshot
+
@@ -75,7 +75,7 @@ Or if you're deploying a brand new pool in an existing topology:
   5. Stop replicating writes from the parent shard, and then take the parent pool offline entirely.
   6. Remove rows that replicated to the wrong child shard. This data will be sparse, since it's only the writes that were made since the shard split process started.
 
 - For more information, including diagrams of each step, please see {our presentation at Percona Live NYC 2012}[https://github.com/tumblr/jetpants/blob/master/doc/PerconaLiveNYC2012Presentation.pdf?raw=true].
 + For more information, including diagrams of each step, please see {our presentation at Percona Live 2013}[https://github.com/tumblr/jetpants/blob/master/doc/PerconaLive2013Presentation.pdf?raw=true].
 
   Separately, \Jetpants also allows you to alter the range of the last shard in your topology. In a range-based sharding scheme, the last shard has a range of X to infinity; eventually this will be too large of a range, so you need to truncate that shard range and create a "new" last shard after it. We call this process "shard cutover".
 
@@ -38,9 +38,10 @@ This plugin also makes some assumptions about the way in which you use \Collins,
   * All MySQL database server hosts that are in-use will have a STATUS of either ALLOCATED or MAINTENANCE.
   * All MySQL database server hosts that are in-use will have a POOL set matching the name of their pool/shard, and a SECONDARY_ROLE set matching their \Jetpants role within the pool (MASTER, ACTIVE_SLAVE, STANDBY_SLAVE, or BACKUP_SLAVE).
   * You can initially assign PRIMARY_ROLE, STATUS, POOL, and SECONDARY_ROLE to database servers somewhat automatically; see GETTING STARTED, below.
 - * All database server hosts that are "spares" (not yet in use, but ready for use in shard splits, shard cutover, or slave cloning) need to have a STATUS of PROVISIONED. These nodes must meet the requirements of spares as defined by the REQUIREMENTS doc that comes with \Jetpants. They should NOT have a POOL or SECONDARY_ROLE set in advance; if they do, it will be ignored -- we treat all spares as identical. That said, you can implement custom logic to respect POOL or SECONDARY_ROLE (or any other Collins attribute) by overriding Topology#process_spare_selector_options in a custom plugin loaded after jetpants_collins.
 + * All database server hosts that are "spares" (not yet in use, but ready for use in shard splits, shard cutover, or slave cloning) need to have a STATUS of ALLOCATED and a STATE of SPARE. These nodes must meet the requirements of spares as defined by the REQUIREMENTS doc that comes with \Jetpants. They should NOT have a POOL or SECONDARY_ROLE set in advance; if they do, it will be ignored -- we treat all spares as identical. That said, you can implement custom logic to respect POOL or SECONDARY_ROLE (or any other Collins attribute) by overriding Topology#process_spare_selector_options in a custom plugin loaded after jetpants_collins.
   * Database server hosts may optionally have an attribute called SLAVE_WEIGHT. The default weight, if omitted, is 100. This field has no effect in \Jetpants, but can be used by your custom configuration generator as needed, if your application supports a notion of different weights for slave selection.
   * Arbitrary metadata regarding pools and shards will be stored in assets with a TYPE of CONFIGURATION. These assets will have a POOL matching the pool's name, a TAG matching the pool's name but prefixed with 'mysql-', a STATUS reflecting the pool's state, and a PRIMARY_ROLE of either MYSQL_POOL or MYSQL_SHARD depending on the type of pool. You can make jetpants_collins create these automatically; see GETTING STARTED, below.
 + * Your \Jetpants user in \Collins must also have permission to set asset state; otherwise you will have to set the STATUS:ALLOCATED / STATE:SPARE and STATUS:ALLOCATED / STATE:CLAIMED combinations manually.
 
   Please note that jetpants_collins does not generate application configuration files, because every web app/framework uses a different format. You will need to write a custom plugin to generate a configuration file for your application as needed, by overriding the Topology#write_config method.
 
@@ -0,0 +1,45 @@
+ = online_schema_change
+
+ == OVERVIEW:
+
+ This \Jetpants plugin allows the use of pt-online-schema-change in the \Jetpants suite of tools. With this plugin, \Jetpants checks whether an online schema change is already in progress, and you can run an online schema change against all of your shards.
+
+ == CONFIGURATION:
+
+ This plugin has no extra options; just add its name to your plugins section and that is it.
+
+ To enable this plugin, add it to your \Jetpants configuration file (either <tt>/etc/jetpants.yaml</tt> or <tt>~/.jetpants.yaml</tt>). For example, your configuration might look like this:
+
+   # ... rest of Jetpants config here
+
+   plugins:
+     online_schema_change:
+     # ... other plugins configured here
+
+ == ASSUMPTIONS AND REQUIREMENTS:
+
+ Use of this plugin assumes that you already have pt-online-schema-change installed.
+
+ You should also be using \Collins and the jetpants_collins plugin.
+
+ == EXAMPLES:
+
+ Dry run of an alter on a single pool:
+   jetpants alter_table --database=allmydata --table=somedata --pool=users --dry-run --alter='ADD COLUMN c1 INT'
+
+ Alter a single pool:
+   jetpants alter_table --database=allmydata --table=somedata --pool=users --alter='ADD COLUMN c1 INT'
+
+ Dry run of an alter on all the shards of your topology:
+   jetpants alter_table --database=allmydata --table=somedata --all_shards --dry-run --alter='ADD COLUMN c1 INT'
+
+ Alter all the shards of your topology:
+   jetpants alter_table --database=allmydata --table=somedata --all_shards --alter='ADD COLUMN c1 INT'
+
+
+ The alter does not drop the old table automatically, so to remove the old table on a single pool:
+   jetpants alter_table_drop --database=allmydata --table=somedata --pool=users
+
+ To drop the old tables on all your shards:
+   jetpants alter_table_drop --database=allmydata --table=somedata --all_shards
+
@@ -104,4 +104,10 @@ Callbacks may interrupt the control flow by raising a Jetpants::CallbackAbortErr
 
   * If this is raised in a before_foo method, all lower-priority before_foo stacked callbacks will be skipped, as will the call to foo and any after_foo callbacks. The exception is automatically caught and is not fatal. The return value of foo will be nil.
 
 - * If this is raised in an after_foo method, all subsequent lower-priority after_foo stacked callbacks will be skipped. The return value of foo will be unaffected.
 + * If this is raised in an after_foo method, all subsequent lower-priority after_foo stacked callbacks will be skipped. The return value of foo will be unaffected.
 +
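As an editorial aside (not part of this diff), a minimal sketch of a plugin callback that aborts an operation might look like the following. It assumes, as the bundled plugins do, that defining a before_* method on a reopened core class registers it as a callback; the maintenance check is purely hypothetical:

  module Jetpants
    class DB
      # Runs before DB#start_mysql. Raising Jetpants::CallbackAbortError here skips
      # any lower-priority before_start_mysql callbacks, the start_mysql call itself,
      # and all after_start_mysql callbacks; start_mysql then returns nil.
      def before_start_mysql(*options)
        raise CallbackAbortError.new("refusing to start MySQL on this host") if @frozen_for_maintenance  # hypothetical flag
      end
    end
  end
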
 + == Other bundled plugins
 +
 + === upgrade_helper
 +
 + This plugin adds additional Jetpants commands to aid in performing a major MySQL upgrade. For full documentation, please see upgrade_helper.rdoc ({view on GitHub}[https://github.com/tumblr/jetpants/blob/master/doc/upgrade_helper.rdoc]).
@@ -64,4 +64,4 @@ By default, the <tt>standby_slaves_per_pool</tt> config option is set to 2. This
 
   A "spare" machine in \Jetpants should be in a clean-slate state: MySQL should be installed and have the proper grants and root password, but there should be no data on these machines, and they should not be slaving. \Jetpants will set up replication appropriately when it assigns the nodes to their appropriate pools.
 
 - For more information on the shard split process implemented by \Jetpants, including diagrams of each stage of the process, please see {Evan Elias's presentation at Percona Live NYC 2012}[https://github.com/tumblr/jetpants/blob/master/doc/PerconaLiveNYC2012Presentation.pdf?raw=true], starting at slide 20.
 + For more information on the shard split process implemented by \Jetpants, including diagrams of each stage of the process, please see {Evan Elias's presentation at Percona Live 2013}[https://github.com/tumblr/jetpants/blob/master/doc/PerconaLive2013Presentation.pdf?raw=true], starting at slide 38.
@@ -0,0 +1,68 @@
+ = upgrade_helper
+
+ == OVERVIEW:
+
+ This is a plugin to aid in performing a major MySQL upgrade, such as from 5.1 to 5.6. It includes the following functionality:
+
+ * New Jetpants command to clone a lower-version standby slave to a higher-version spare, and perform the upgrade process on the spare
+ * New Jetpants commands to do a master promotion that evicts the final lower-version node from a pool. You can choose to do either a standard locking master promotion (with a read_only period), or on shards you may do a promotion that "ejects" the old master in a multi-step process without any locking.
+ * New Jetpants commands that wrap two Percona Toolkit tools (pt-upgrade and pt-query-digest) to verify that the upgrade process did not cause any data drift or major performance degradation.
+
+ == CONFIGURATION:
+
+ This plugin has only one configuration option, which is mandatory:
+
+ new_version:: major.minor string of the MySQL version being upgraded to, for example "5.5". Be sure to wrap it in quotes so that it is not misinterpreted as a float. (required)
+
+ Example usage:
+
+   # ... rest of Jetpants config here
+
+   plugins:
+     jetpants_collins:
+       # config for jetpants_collins here
+
+     upgrade_helper:
+       new_version: "5.5"
+
+     # ... other plugins configured here
+
+
+ == ASSUMPTIONS AND REQUIREMENTS:
+
+ This plugin currently requires that you are using the jetpants_collins plugin and \Collins as your asset tracker. This may be made more generic in a future release.
+
+ This plugin never upgrades nodes in-place; instead, it operates on spare nodes. The plugin must have the ability to grab spare nodes which already have the newer version of MySQL installed, as opposed to regular spares which have your old/existing version. In order for this to be possible, a custom plugin loaded AFTER jetpants_collins must override its method Topology#process_spare_selector_options to support these options (a sketch follows the list below):
+
+ :version:: if supplied, will be a major.minor string ('5.1', '5.5', etc). Spares should use the exact specified version.
+ :like:: if supplied, will be a Jetpants::DB. Spares should use the same version as this node, unless :version was also supplied, in which case :like is ignored and :version takes precedence.
+
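As an editorial aside (not part of the gem's files), here is a minimal sketch of such an override. The method name comes from the documentation above, but its exact signature and the Collins attribute names are assumptions that should be verified against your jetpants_collins version:

  # Hypothetical custom plugin, loaded after jetpants_collins in jetpants.yaml.
  # Assumed signature: jetpants_collins builds a Collins asset-search selector hash
  # and passes through the options given to claim_spare(s)/count_spares.
  module Jetpants
    class Topology
      def process_spare_selector_options(selector, options)
        if options[:version]
          # Prefer spares running the exact requested MySQL major.minor version.
          # :mysql_version is a hypothetical Collins attribute -- use whichever
          # attribute your assets actually carry.
          selector[:mysql_version] = options[:version]
        end
        # Handling of options[:like] (match the version of an existing Jetpants::DB)
        # is omitted in this sketch.
        selector
      end
    end
  end
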
+ In order to use the new commands "jetpants checksum_pool" and "jetpants upgrade_check_pool", Percona Toolkit must be installed on the same machine as Jetpants, and be in your PATH. This plugin has been tested most extensively with Percona Toolkit 2.1.7. It may require some modifications to fully work with Percona Toolkit 2.2.x, since pt-upgrade has been completely rewritten.
+
+ In order to use "jetpants upgrade_check_pool", tcpdump must be installed on all database nodes, and be in root's PATH.
+
+ == USAGE:
+
+ === Upgrading functional partitions
+
+ 1. Use "jetpants upgrade_clone_slave" to clone an existing (older-MySQL-version) standby to a machine that already has your newer version of MySQL. This task will then run mysql_upgrade properly on the node to complete the upgrade process.
+ 2. Use "jetpants check_pool_queries" to verify query performance. This will collect a read query log from the master (and from an active slave, if any exist) and replay them against an older-version and newer-version standby slave using pt-upgrade. We automatically run pt-upgrade twice, ignoring the results the first time. The first run is to help populate the buffer pool.
+ 3. After the upgraded slave has been replicating for some time (we recommend 24 hours), use "jetpants checksum_pool" to detect data drift on the upgraded slave. This command uses pt-table-checksum.
+ 4. Use "jetpants clone_slave" repeatedly to clone the upgraded slave as needed. If the pool has active slaves, you will eventually want to use "jetpants activate_slave" and "jetpants pull_slave" to convert upgraded standbys to actives, and vice versa to pull out the older-version actives. Then use "jetpants destroy_slave" as needed to eliminate older-version standby slaves. Your end result of this step should be that all nodes except the master are upgraded, and you have one extra upgraded standby slave in the pool. (So if you normally have N standbys per pool, you should now have N+1 instead; and all slaves -- standby or active -- are running the newer version of MySQL.)
+ 5. Use "jetpants upgrade_promotion" to perform a master promotion. This is identical to a normal "jetpants promotion" except that it eliminates the old master, instead of enslaving it. This is necessary because older-version nodes cannot have a higher-version master.
+
+ Repeat this entire process for each functional partition.
+
+
+ === Upgrading shards
+
+ The above process for functional partitions also works perfectly fine for shards. The first time you upgrade a shard, you should use the above process, including the steps to verify query performance and confirm lack of data drift.
+
+ For subsequent shard upgrades, you may optionally use this simplified process.
+
+ 1. Use "jetpants shard_upgrade" to build new upgraded standby slaves of the shard, in a hierarchical replication setup. One new standby slave will directly replicate from the master, and the others will be slaves of that one. In total, if a shard in your environment normally has N standby slaves, this command will create N+1 upgraded standby slaves. (It's essentially creating an upgraded mirror of the shard, except the mirror's master is actually a slave of the true top-level master.)
+ 2. Use "jetpants shard_upgrade --reads" to regenerate your application configuration in a way that moves read queries to the upgraded mirror shard's master, but keeps write queries going to the true master. (Your custom app config generator plugin must already do this for shards in the :child state, in order for "jetpants shard_split_child_reads" to work.)
+ 3. Use "jetpants shard_upgrade --writes" to regenerate your application configuration in a way that moves read AND write queries to the upgraded mirror shard's master.
+ 4. Use "jetpants shard_upgrade --cleanup" to eject all non-upgraded nodes from the pool entirely. This will tear down replication between the upgraded version of the shard and the old version.
+
+ Using a custom Ruby script, this process can be automated to perform each step on several shards at once.
@@ -67,6 +67,7 @@ module Jetpants
   :after_connect => options[:after_connect] )
   @user = options[:user]
   @schema = options[:schema]
 + @db.convert_tinyint_to_bool = false
   @db
   end
 
@@ -147,4 +148,4 @@ module Jetpants
   end
 
   end
 - end
 + end
@@ -192,13 +192,18 @@ module Jetpants
   # MUCH faster than doing a single count, but far more I/O intensive, so
   # don't use this on a master or active slave.
   def row_counts(tables, min_id, max_id)
 + tables = [tables] unless tables.is_a? Array
   lock = Mutex.new
   row_count = {}
   tables.each do |t|
   row_count[t.name] = 0
 - (min_id..max_id).in_chunks(t.chunks, 10) do |min, max|
 - result = query_return_first_value(t.sql_count_rows(min, max))
 - lock.synchronize {row_count[t.name] += result}
 + if min_id && max_id && t.chunks > 1
 + (min_id..max_id).in_chunks(t.chunks, Jetpants.max_concurrency) do |min, max|
 + result = query_return_first_value(t.sql_count_rows(min, max))
 + lock.synchronize {row_count[t.name] += result}
 + end
 + else
 + row_count[t.name] = query_return_first_value(t.sql_count_rows(false, false))
   end
   output "#{row_count[t.name]} rows counted", t
   end
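Not part of the diff: a brief usage sketch of the updated method, under the assumption that counting is done on a shard's standby slave (per the comment above); the shard chosen below is arbitrary and the output is hypothetical:

  # Count rows per sharded table on a standby slave, chunked across the shard's ID range.
  shard = Jetpants.topology.shards.first            # avoid the last shard, whose max ID is infinity
  slave = shard.standby_slaves.first
  counts = slave.row_counts(slave.tables, shard.min_id, shard.max_id)
  # => {"posts" => 123456, ...}
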
@@ -246,6 +251,10 @@ module Jetpants
   id = query_return_first_value(finder_sql)
   break unless id
   rows_deleted += query(deleter_sql, id)
 +
 + # Slow down on multi-col sharding key tables, due to queries being far more expensive
 + sleep(0.0001) if table.sharding_keys.size > 1
 +
   iter += 1
   output("#{dir_english} deletion progress: through #{col} #{id}, deleted #{rows_deleted} rows so far", table) if iter % 50000 == 0
   end
@@ -22,7 +22,7 @@ module Jetpants
   logfile = option_hash[:log_file]
   pos = option_hash[:log_pos]
   if !(logfile && pos)
 - raise "Cannot use coordinates of a new master that is receiving updates" if new_master.master && ! new_master.repl_paused
 + raise "Cannot use coordinates of a new master that is receiving updates" if new_master.master && ! new_master.repl_paused?
   logfile, pos = new_master.binlog_coordinates
   end
 
@@ -126,10 +126,12 @@ module Jetpants
   repl_user ||= replication_credentials[:user]
   repl_pass ||= replication_credentials[:pass]
   disable_monitoring
 + targets.each {|t| t.disable_monitoring}
   pause_replication if master && ! @repl_paused
   file, pos = binlog_coordinates
   clone_to!(targets)
 - targets.each do |t|
 + targets.each do |t|
 + t.enable_monitoring
   t.change_master_to( self,
   log_file: file,
   log_pos: pos,
@@ -148,10 +150,12 @@ module Jetpants
   def enslave_siblings!(targets)
   raise "Can only call enslave_siblings! on a slave instance" unless master
   disable_monitoring
 + targets.each {|t| t.disable_monitoring}
   pause_replication unless @repl_paused
   file, pos = repl_binlog_coordinates
   clone_to!(targets)
   targets.each do |t|
 + t.enable_monitoring
   t.change_master_to( master,
   log_file: file,
   log_pos: pos,
@@ -0,0 +1,40 @@
+ require 'table'
+
+ module Jetpants
+
+   #--
+   # Table probing methods
+   #++
+
+   class DB
+     # Query the database for information about the table schema, including
+     # primary key, create statement, and columns
+     def detect_table_schema(table_name)
+       table_sql = "SHOW CREATE TABLE `#{table_name}`"
+       create_statement = query_return_first(table_sql).values.last
+       pk_sql = "SHOW INDEX IN #{table_name} WHERE Key_name = 'PRIMARY'"
+       pk_fields = query_return_array(pk_sql)
+       pk_fields.sort_by!{|pk| pk[:Seq_in_index]}
+
+       params = {
+         'primary_key' => pk_fields.map{|pk| pk[:Column_name] },
+         'create_table' => create_statement,
+         'indexes' => connection.indexes(table_name),
+         'pool' => pool,
+         'columns' => connection.schema(table_name).map{|schema| schema[0]}
+       }
+
+       Table.new(table_name, params)
+     end
+
+     # Delegates the check for whether a table exists (by name) up to the pool
+     def has_table?(table)
+       pool.has_table? table
+     end
+
+     # List of tables (as defined by the pool)
+     def tables
+       pool(true).tables
+     end
+   end
+ end
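Not part of the diff: a short usage sketch of these new helpers, assuming `db` is any Jetpants::DB instance (the pool and table names below are hypothetical):

  db = Jetpants.topology.pool('users').master    # 'users' is a hypothetical pool name
  db.has_table? 'posts'                          # existence check, delegated to the pool
  table = db.detect_table_schema('posts')        # probes SHOW CREATE TABLE / SHOW INDEX, returns a Jetpants::Table
  db.tables                                      # table list, as defined by the pool
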
@@ -11,7 +11,7 @@ module Jetpants
   output "Attempting to shutdown MySQL"
   disconnect if @db
   output service(:stop, 'mysql')
 - running = ssh_cmd "netstat -ln | grep #{@port} | wc -l"
 + running = ssh_cmd "netstat -ln | grep ':#{@port}' | wc -l"
   raise "[#{@ip}] Failed to shut down MySQL: Something is still listening on port #{@port}" unless running.chomp == '0'
   @options = []
   @running = false
@@ -26,7 +26,7 @@ module Jetpants
   if @master
   @repl_paused = options.include?('--skip-slave-start')
   end
 - running = ssh_cmd "netstat -ln | grep #{@port} | wc -l"
 + running = ssh_cmd "netstat -ln | grep ':#{@port}' | wc -l"
   raise "[#{@ip}] Failed to start MySQL: Something is already listening on port #{@port}" unless running.chomp == '0'
   if options.size == 0
   output "Attempting to start MySQL, no option overrides supplied"