schema_transformer 0.1.0 → 0.2.0

data/README.markdown CHANGED
@@ -11,7 +11,7 @@ Second, you run 2 commands on production.
 
  The first command will create a 'temporary' table with the altered schema and incrementally copy the data over until it is close to synced. You can run this command as many times as you want - it won't hurt. This first command is slow, as it takes a while to copy the data over, especially if you have really large tables that are several GBs in size.
 
- The second command will do a switheroo with with 'temporarily' new table and the current table. It will then remove the obsoleted table with the old schema structure. Because it is doing a rename (which can screw up replication on a heavily traffic site), this second command should be ran with maintenance page up. This second command is fast because it doe a final incremental sync and quickly switches the new table into place.
+ The second command will do a switcheroo with the 'temporary' new table and the current table. It will then remove the obsolete table with the old schema structure. Because it is doing a rename (which can screw up replication on a heavily trafficked site), this second command should be run with a maintenance page up. This second command is fast because it does a final incremental sync and quickly switches the new table into place.
 
  Install
  -------
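
Under the hood, the switcheroo described above boils down to MySQL's atomic RENAME TABLE followed by dropping the obsolete table. A minimal sketch, assuming MySQL, ActiveRecord, and hypothetical table names (this is not the gem's actual implementation):

    require "active_record"

    # Assumes ActiveRecord::Base.establish_connection has already run.
    conn = ActiveRecord::Base.connection

    # Swap the synced 'temporary' table into place. RENAME TABLE renames
    # both tables in one atomic statement, so readers never see a missing table.
    conn.execute(<<-SQL)
      RENAME TABLE articles TO articles_old,
                   articles_new TO articles
    SQL

    # Drop the obsolete table with the old schema.
    conn.execute("DROP TABLE articles_old")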
@@ -37,7 +37,6 @@ Examples 2:
  Examples 3:
  ADD COLUMN smart tinyint(1) DEFAULT '0', DROP COLUMN full_name
  > ADD COLUMN special tinyint(1) DEFAULT '0'
- ss
  *** Thanks ***
  Schema transform definitions have been generated and saved to:
  config/schema_transformations/tags.json
@@ -67,7 +66,7 @@ FAQ
  -------
 
  Q: What table alterations are supported?
- A: I've only tested with adding columns and removing columns.
+ A: I've only tested adding and removing columns and indexes.
 
  Q: Can I add and drop multiple columns and indexes at the same time?
  A: Yes.
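
For instance, an illustrative spec combining all three in one transform, in the same style as the README examples above (the column and index names here are made up):

    ADD COLUMN active tinyint(1) DEFAULT '0', DROP COLUMN legacy_flag, ADD INDEX idx_active (active)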
data/lib/schema_transformer/help.rb CHANGED
@@ -4,7 +4,6 @@ module SchemaTransformer
  case action
  when :generate
  out =<<-HELP
- ss
  *** Thanks ***
  Schema transform definitions have been generated and saved to:
  config/schema_transformations/#{self.table}.json
data/lib/schema_transformer/version.rb CHANGED
@@ -1,3 +1,3 @@
  module SchemaTransformer
-   VERSION = "0.1.0"
+   VERSION = "0.2.0"
  end
metadata CHANGED
@@ -1,13 +1,13 @@
  --- !ruby/object:Gem::Specification
  name: schema_transformer
  version: !ruby/object:Gem::Version
-   hash: 27
+   hash: 23
  prerelease: false
  segments:
  - 0
- - 1
+ - 2
  - 0
- version: 0.1.0
+ version: 0.2.0
  platform: ruby
  authors:
  - Tung Nguyen
@@ -15,7 +15,7 @@ autorequire:
  bindir: bin
  cert_chain: []
 
- date: 2010-10-22 00:00:00 -07:00
+ date: 2010-10-23 00:00:00 -07:00
  default_executable:
  dependencies: []
 
@@ -35,10 +35,6 @@ files:
  - lib/schema_transformer/help.rb
  - lib/schema_transformer/version.rb
  - lib/schema_transformer.rb
- - notes/copier.rb
- - notes/copier_scratchpad.rb
- - notes/pager.rb
- - notes/schema_transformer_notes.txt
  - Rakefile
  - README.markdown
  - test/fake_app/config/database.yml
data/notes/copier.rb DELETED
@@ -1,14 +0,0 @@
- #!/usr/bin/env ruby
-
- res = conn.execute("SELECT max(`article_revisions_new`.id) AS max_id FROM `article_revisions_new`")
- start = res.fetch_row[0].to_i # nil case is okay: [nil][0].to_i => nil
- Article::Revisions.find_in_batches(:start => start, :batch_size => 10_000) do |batch|
-   lower = batch.first.id
-   upper = batch.last.id
-   execute(%{
-     INSERT INTO article_revisions_new (
-       SELECT id, title, body, article_id, number, note, editor_id, created_at, blurb, teaser, source, slide_id
-       FROM article_revisions WHERE id <= #{lower} AND id < #{upper}
-     );
-   })
- end
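
One note on this deleted scratchpad: the WHERE clause above, `id <= #{lower} AND id < #{upper}`, reduces to `id <= lower` for every batch, re-copying everything below the batch instead of the batch itself. The presumably intended loop, as a hedged correction keeping the scratchpad's hypothetical names:

    res = conn.execute("SELECT max(id) AS max_id FROM `article_revisions_new`")
    start = res.fetch_row[0].to_i
    Article::Revisions.find_in_batches(:start => start, :batch_size => 10_000) do |batch|
      lower = batch.first.id
      upper = batch.last.id
      # Copy exactly the rows in the current batch, inclusive on both ends.
      conn.execute(%{
        INSERT INTO article_revisions_new
        SELECT id, title, body, article_id, number, note, editor_id,
               created_at, blurb, teaser, source, slide_id
        FROM article_revisions WHERE id >= #{lower} AND id <= #{upper}
      })
    end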
data/notes/copier_scratchpad.rb DELETED
@@ -1,45 +0,0 @@
- #!/usr/bin/env ruby
-
- ArticleRevision.find_in_batches
-
- Activity
-
- id, title, body, article_id, number, note, editor_id, created_at, blurb, teaser, source, slide_id, NULL test_id
-
- def find_in_batches(options = {})
-   raise "You can't specify an order, it's forced to be #{batch_order}" if options[:order]
-   raise "You can't specify a limit, it's forced to be the batch_size" if options[:limit]
-
-   start = options.delete(:start).to_i
-   batch_size = options.delete(:batch_size) || 1000
-
-   with_scope(:find => options.merge(:order => batch_order, :limit => batch_size)) do
-     records = find(:all, :conditions => [ "#{table_name}.#{primary_key} >= ?", start ])
-
-     while records.any?
-       yield records
-
-       break if records.size < batch_size
-       records = find(:all, :conditions => [ "#{table_name}.#{primary_key} > ?", records.last.id ])
-     end
-   end
- end
-
- res = conn.execute("SELECT max(`article_revisions_new`.id) AS max_id FROM `article_revisions_new`")
- start = res.fetch_row[0].to_i # nil case is okay: [nil][0].to_i => nil
- Article::Revisions.find_in_batches(:start => start, :batch_size => 10_000) do |batch|
-   lower = batch.first.id
-   upper = batch.last.id
-   execute(%{
-     INSERT INTO article_revisions_new (
-       SELECT id, title, body, article_id, number, note, editor_id, created_at, blurb, teaser, source, slide_id
-       FROM article_revisions WHERE id <= #{lower} AND id < #{upper}
-     );
-   })
- end
-
-
- pager = Pager.new(:per_page => 10_000, :lower => 300, :upper => 30_000)
- pager.each do |page|
-   puts page.start_index
- end
data/notes/pager.rb DELETED
@@ -1,101 +0,0 @@
- # Most of this is ripped off from WillPaginate::Collection
- #
- # Required options:
- # * <tt>per_page</tt> - number of items per page
- # Optional options:
- # * <tt>page</tt> - starting page, defaults to 1
- # * <tt>total</tt> - total number of items, defaults to 0
- #
- # Usage:
- #
- #   pager = Pager.new(:per_page => 5, :total => 23)
- #   pager.start_index => 0
- #   pager.end_index => 4
- #
- #   pager = Pager.new(:page => 2, :per_page => 5, :total => 23)
- #   pager.start_index => 5
- #   pager.end_index => 9
- #
- #   # interator will always loop starting from page 1, even if you have initialize page another value.
- #   pager.each do |page|
- #     page.start_index
- #     page.end_index
- #   end
- class Pager
-   include Enumerable
-
-   def each
-     old = @current_page # want to remember the old current page
-     @current_page = 1
-     @total_pages.times do
-       yield(self)
-       @current_page += 1
-     end
-     @current_page = old
-   end
-
-   attr_reader :current_page, :per_page, :total_pages
-   attr_accessor :total_entries
-
-   def initialize(options)
-     @current_page = options[:page] ? options[:page].to_i : 1
-     @per_page = options[:per_page].to_i
-     @total_entries = options[:total].to_i
-     @total_pages = (@total_entries / @per_page.to_f).ceil
-   end
-
-   # The total number of pages.
-   def page_count
-     @total_pages
-   end
-
-   # Current offset of the paginated collection. If we're on the first page,
-   # it is always 0. If we're on the 2nd page and there are 30 entries per page,
-   # the offset is 30. This property is useful if you want to render ordinals
-   # besides your records: simply start with offset + 1.
-   #
-   def offset
-     (current_page - 1) * per_page
-   end
-
-   # current_page - 1 or nil if there is no previous page
-   def previous_page
-     current_page > 1 ? (current_page - 1) : nil
-   end
-
-   def previous_page!
-     @current_page = previous_page if previous_page
-   end
-
-   # current_page + 1 or nil if there is no next page
-   def next_page
-     current_page < page_count ? (current_page + 1) : nil
-   end
-
-   def next_page!
-     @current_page = next_page if next_page
-   end
-
-   def start_index
-     offset
-   end
-
-   def end_index
-     [start_index + (per_page - 1), @total_entries].min
-   end
-
-   # true if current_page is the final page
-   def last_page?
-     next_page.nil?
-   end
-
-   # true if current_page is the final page
-   def first_page?
-     previous_page.nil?
-   end
-
-   # true if current_page is the final page
-   def first_page!
-     @current_page = 1
-   end
- end
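
One nit worth flagging in this deleted class: `end_index` clamps to `@total_entries` rather than `@total_entries - 1`, so the last page reports an end index one past the final zero-based item. Note also that the class takes `:page`/`:per_page`/`:total`, not the `:lower`/`:upper` options used in the scratchpad call earlier. A quick usage sketch showing the behavior (numbers are made up):

    pager = Pager.new(:per_page => 5, :total => 23)
    pager.each do |page|
      puts "#{page.start_index}..#{page.end_index}"
    end
    # Prints 0..4, 5..9, 10..14, 15..19, 20..23 -- that last range
    # should be 20..22, since indexes 0-22 cover the 23 items.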
data/notes/schema_transformer_notes.txt DELETED
@@ -1,44 +0,0 @@
- #!/usr/bin/env ruby
-
- # 1. create new table with schema
- # 2. batch copy data
- # 3. maintainenance page
- # 4. batch copy final data
- # 5. rename tables
- # 6. remove maintenance page
-
- # 1. create new table with schema
- CREATE TABLE article_revisions_new LIKE article_revisions;
- ALTER TABLE article_revisions_new
-   ADD INDEX idx_slide_id (slide_id);
-
-
- # 2. batch copy data
-
- max = SELECT max(`article_revisions`.id) AS max_id FROM `article_revisions`;
-
- Article::Revision
-
- Article.find_in_batches(batch_size => 100 ) { |articles| articles.each { |a| ... } }
-
- Article::Revision.maximum
- Pager.new()
-
- insert into article_revisions_new (select * from )
-
- insert into article_revisions_new (
-   select id, title, body, article_id, number, note, editor_id, created_at, blurb, teaser, source, slide_id
-   from article_revisions where id <= 0 LIMIT 10000
- );
-
-
-
- Article::Revision Load (170.7ms) SELECT * FROM `article_revisions` WHERE (article_revisions.id >= 0) ORDER BY article_revisions.id ASC LIMIT 1000
- Article::Revision Load (78.8ms) SELECT * FROM `article_revisions` WHERE (article_revisions.id > 1430719) ORDER BY article_revisions.id ASC LIMIT 1000
- Article::Revision Load (78.1ms) SELECT * FROM `article_revisions` WHERE (article_revisions.id > 1431725) ORDER BY article_revisions.id ASC LIMIT 1000
-
-
- # 3. maintainenance page
- # 4. batch copy final data
- # 5. rename tables
- # 6. remove maintenance page
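
Taken together, the six steps outlined in these deleted notes map onto a short Ruby outline. A hedged sketch (table names and the `conn` handle are hypothetical; maintenance-page toggling is app-specific and left as comments):

    conn = ActiveRecord::Base.connection

    # 1. Create the new table with the altered schema.
    conn.execute("CREATE TABLE article_revisions_new LIKE article_revisions")
    conn.execute("ALTER TABLE article_revisions_new ADD INDEX idx_slide_id (slide_id)")

    # 2. Batch-copy data; safe to re-run (see the corrected copier loop
    #    earlier). `copy_new_rows` is a hypothetical helper wrapping it.
    copy_new_rows

    # 3. Put the maintenance page up (app-specific).
    # 4. Final incremental copy while writes are stopped.
    copy_new_rows

    # 5. Atomically swap tables, then drop the obsolete one.
    conn.execute("RENAME TABLE article_revisions TO article_revisions_old, article_revisions_new TO article_revisions")
    conn.execute("DROP TABLE article_revisions_old")

    # 6. Remove the maintenance page (app-specific).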