sequel 3.11.0 → 3.12.0
- data/CHANGELOG +70 -0
- data/Rakefile +1 -1
- data/doc/active_record.rdoc +896 -0
- data/doc/advanced_associations.rdoc +46 -31
- data/doc/association_basics.rdoc +14 -9
- data/doc/dataset_basics.rdoc +3 -3
- data/doc/migration.rdoc +1011 -0
- data/doc/model_hooks.rdoc +198 -0
- data/doc/querying.rdoc +811 -86
- data/doc/release_notes/3.12.0.txt +304 -0
- data/doc/sharding.rdoc +17 -0
- data/doc/sql.rdoc +537 -0
- data/doc/validations.rdoc +501 -0
- data/lib/sequel/adapters/jdbc.rb +19 -27
- data/lib/sequel/adapters/jdbc/postgresql.rb +0 -7
- data/lib/sequel/adapters/mysql.rb +5 -4
- data/lib/sequel/adapters/odbc.rb +3 -2
- data/lib/sequel/adapters/shared/mssql.rb +7 -6
- data/lib/sequel/adapters/shared/mysql.rb +2 -7
- data/lib/sequel/adapters/shared/postgres.rb +2 -8
- data/lib/sequel/adapters/shared/sqlite.rb +2 -5
- data/lib/sequel/adapters/sqlite.rb +4 -4
- data/lib/sequel/core.rb +0 -1
- data/lib/sequel/database.rb +2 -1060
- data/lib/sequel/database/connecting.rb +227 -0
- data/lib/sequel/database/dataset.rb +58 -0
- data/lib/sequel/database/dataset_defaults.rb +127 -0
- data/lib/sequel/database/logging.rb +62 -0
- data/lib/sequel/database/misc.rb +246 -0
- data/lib/sequel/database/query.rb +390 -0
- data/lib/sequel/database/schema_generator.rb +7 -3
- data/lib/sequel/database/schema_methods.rb +351 -7
- data/lib/sequel/dataset/actions.rb +9 -2
- data/lib/sequel/dataset/misc.rb +6 -2
- data/lib/sequel/dataset/mutation.rb +3 -11
- data/lib/sequel/dataset/query.rb +49 -6
- data/lib/sequel/exceptions.rb +3 -0
- data/lib/sequel/extensions/migration.rb +395 -113
- data/lib/sequel/extensions/schema_dumper.rb +21 -13
- data/lib/sequel/model.rb +27 -25
- data/lib/sequel/model/associations.rb +72 -34
- data/lib/sequel/model/base.rb +74 -18
- data/lib/sequel/model/errors.rb +8 -1
- data/lib/sequel/plugins/active_model.rb +8 -0
- data/lib/sequel/plugins/association_pks.rb +87 -0
- data/lib/sequel/plugins/association_proxies.rb +8 -0
- data/lib/sequel/plugins/boolean_readers.rb +12 -6
- data/lib/sequel/plugins/caching.rb +14 -7
- data/lib/sequel/plugins/class_table_inheritance.rb +15 -9
- data/lib/sequel/plugins/composition.rb +2 -1
- data/lib/sequel/plugins/force_encoding.rb +10 -7
- data/lib/sequel/plugins/hook_class_methods.rb +12 -11
- data/lib/sequel/plugins/identity_map.rb +9 -0
- data/lib/sequel/plugins/instance_hooks.rb +23 -13
- data/lib/sequel/plugins/lazy_attributes.rb +4 -1
- data/lib/sequel/plugins/many_through_many.rb +18 -4
- data/lib/sequel/plugins/nested_attributes.rb +1 -0
- data/lib/sequel/plugins/optimistic_locking.rb +1 -1
- data/lib/sequel/plugins/rcte_tree.rb +9 -8
- data/lib/sequel/plugins/schema.rb +8 -0
- data/lib/sequel/plugins/serialization.rb +1 -3
- data/lib/sequel/plugins/sharding.rb +135 -0
- data/lib/sequel/plugins/single_table_inheritance.rb +117 -25
- data/lib/sequel/plugins/skip_create_refresh.rb +35 -0
- data/lib/sequel/plugins/string_stripper.rb +26 -0
- data/lib/sequel/plugins/tactical_eager_loading.rb +8 -0
- data/lib/sequel/plugins/timestamps.rb +15 -2
- data/lib/sequel/plugins/touch.rb +13 -0
- data/lib/sequel/plugins/update_primary_key.rb +48 -0
- data/lib/sequel/plugins/validation_class_methods.rb +8 -0
- data/lib/sequel/plugins/validation_helpers.rb +1 -1
- data/lib/sequel/sql.rb +17 -20
- data/lib/sequel/version.rb +1 -1
- data/spec/adapters/postgres_spec.rb +5 -5
- data/spec/core/core_sql_spec.rb +17 -1
- data/spec/core/database_spec.rb +17 -5
- data/spec/core/dataset_spec.rb +31 -8
- data/spec/core/schema_generator_spec.rb +8 -1
- data/spec/core/schema_spec.rb +13 -0
- data/spec/extensions/association_pks_spec.rb +85 -0
- data/spec/extensions/hook_class_methods_spec.rb +9 -9
- data/spec/extensions/migration_spec.rb +339 -219
- data/spec/extensions/schema_dumper_spec.rb +28 -17
- data/spec/extensions/sharding_spec.rb +272 -0
- data/spec/extensions/single_table_inheritance_spec.rb +92 -4
- data/spec/extensions/skip_create_refresh_spec.rb +17 -0
- data/spec/extensions/string_stripper_spec.rb +23 -0
- data/spec/extensions/update_primary_key_spec.rb +65 -0
- data/spec/extensions/validation_class_methods_spec.rb +5 -5
- data/spec/files/bad_down_migration/001_create_alt_basic.rb +4 -0
- data/spec/files/bad_down_migration/002_create_alt_advanced.rb +4 -0
- data/spec/files/bad_timestamped_migrations/1273253849_create_sessions.rb +9 -0
- data/spec/files/bad_timestamped_migrations/1273253851_create_nodes.rb +9 -0
- data/spec/files/bad_timestamped_migrations/1273253853_3_create_users.rb +3 -0
- data/spec/files/bad_up_migration/001_create_alt_basic.rb +4 -0
- data/spec/files/bad_up_migration/002_create_alt_advanced.rb +3 -0
- data/spec/files/convert_to_timestamp_migrations/001_create_sessions.rb +9 -0
- data/spec/files/convert_to_timestamp_migrations/002_create_nodes.rb +9 -0
- data/spec/files/convert_to_timestamp_migrations/003_3_create_users.rb +4 -0
- data/spec/files/convert_to_timestamp_migrations/1273253850_create_artists.rb +9 -0
- data/spec/files/convert_to_timestamp_migrations/1273253852_create_albums.rb +9 -0
- data/spec/files/duplicate_integer_migrations/001_create_alt_advanced.rb +4 -0
- data/spec/files/duplicate_integer_migrations/001_create_alt_basic.rb +4 -0
- data/spec/files/duplicate_timestamped_migrations/1273253849_create_sessions.rb +9 -0
- data/spec/files/duplicate_timestamped_migrations/1273253853_create_nodes.rb +9 -0
- data/spec/files/duplicate_timestamped_migrations/1273253853_create_users.rb +4 -0
- data/spec/files/integer_migrations/001_create_sessions.rb +9 -0
- data/spec/files/integer_migrations/002_create_nodes.rb +9 -0
- data/spec/files/integer_migrations/003_3_create_users.rb +4 -0
- data/spec/files/interleaved_timestamped_migrations/1273253849_create_sessions.rb +9 -0
- data/spec/files/interleaved_timestamped_migrations/1273253850_create_artists.rb +9 -0
- data/spec/files/interleaved_timestamped_migrations/1273253851_create_nodes.rb +9 -0
- data/spec/files/interleaved_timestamped_migrations/1273253852_create_albums.rb +9 -0
- data/spec/files/interleaved_timestamped_migrations/1273253853_3_create_users.rb +4 -0
- data/spec/files/missing_integer_migrations/001_create_alt_basic.rb +4 -0
- data/spec/files/missing_integer_migrations/003_create_alt_advanced.rb +4 -0
- data/spec/files/missing_timestamped_migrations/1273253849_create_sessions.rb +9 -0
- data/spec/files/missing_timestamped_migrations/1273253853_3_create_users.rb +4 -0
- data/spec/files/timestamped_migrations/1273253849_create_sessions.rb +9 -0
- data/spec/files/timestamped_migrations/1273253851_create_nodes.rb +9 -0
- data/spec/files/timestamped_migrations/1273253853_3_create_users.rb +4 -0
- data/spec/files/uppercase_timestamped_migrations/1273253849_CREATE_SESSIONS.RB +9 -0
- data/spec/files/uppercase_timestamped_migrations/1273253851_CREATE_NODES.RB +9 -0
- data/spec/files/uppercase_timestamped_migrations/1273253853_3_CREATE_USERS.RB +4 -0
- data/spec/integration/eager_loader_test.rb +20 -20
- data/spec/integration/migrator_test.rb +187 -0
- data/spec/integration/plugin_test.rb +150 -0
- data/spec/integration/schema_test.rb +13 -2
- data/spec/model/associations_spec.rb +41 -14
- data/spec/model/base_spec.rb +69 -0
- data/spec/model/eager_loading_spec.rb +7 -3
- data/spec/model/record_spec.rb +79 -4
- data/spec/model/validations_spec.rb +21 -9
- metadata +66 -5
- data/doc/schema.rdoc +0 -36
- data/lib/sequel/database/schema_sql.rb +0 -320
@@ -77,9 +77,21 @@ option. Though it can often be verbose (compared to other things in Sequel),
 it allows you complete control over how to eagerly load associations for a
 group of objects.
 
-:eager_loader should be a proc that takes 3 arguments
-
-
+:eager_loader should be a proc that takes 1 or 3 arguments. If the proc
+takes one argument, it will be given a hash with the following keys:
+
+:key_hash :: A key_hash, described below
+:rows :: An array of model objects
+:associations :: A hash of dependent associations to eagerly load
+:self :: The dataset that is doing the eager loading
+
+If the proc takes three arguments, it gets passed the :key_hash, :rows,
+and :associations values. The only way to get the :self value is to
+accept one argument. The 3 argument procs are allowed for backwards
+compatibility, and it is recommended to use the 1 argument proc format
+for new code.
+
+Since you are given all of the records, you can do things like filter on
 associations that are specified by multiple keys, or do multiple
 queries depending on the content of the records (which would be
 necessary for polymorphic associations). Inside the :eager_loader
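The 1-versus-3-argument loader forms described in this hunk can be sketched in plain Ruby. The `call_eager_loader` helper below is hypothetical (it is not Sequel's actual implementation); it only illustrates how arity-based dispatch between the two proc formats can work:

```ruby
# Hypothetical sketch of arity-based dispatch for :eager_loader procs.
# A 1-argument proc receives the whole hash; a legacy 3-argument proc
# receives the :key_hash, :rows, and :associations values separately.
def call_eager_loader(loader, eo)
  if loader.arity == 1
    loader.call(eo)  # new style: single hash argument
  else
    loader.call(eo[:key_hash], eo[:rows], eo[:associations])  # legacy style
  end
end

new_style = proc { |eo| eo[:rows].length }
old_style = proc { |key_hash, rows, associations| rows.length }

eo = {:key_hash => {}, :rows => [:a, :b], :associations => {}, :self => nil}
call_eager_loader(new_style, eo)  # => 2
call_eager_loader(old_style, eo)  # => 2
```

Note that only the 1-argument form can see <tt>:self</tt>, which is exactly the limitation the documentation change describes.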
@@ -277,9 +289,9 @@ Sequel::Model:
       klass = attachable_type.constantize
       klass.filter(klass.primary_key=>attachable_id)
     end), \
-    :eager_loader=>(proc do |
+    :eager_loader=>(proc do |eo|
       id_map = {}
-
+      eo[:rows].each do |asset|
         asset.associations[:attachable] = nil
         ((id_map[asset.attachable_type] ||= {})[asset.attachable_id] ||= []) << asset
       end
@@ -370,9 +382,9 @@ design, but sometimes you have to play with the cards you are dealt).
   class Client < Sequel::Model
     one_to_many :invoices, :reciprocal=>:client, \
       :dataset=>proc{Invoice.filter(:client_name=>name)}, \
-      :eager_loader=>(proc do |
+      :eager_loader=>(proc do |eo|
         id_map = {}
-
+        eo[:rows].each do |client|
           id_map[client.name] = client
           client.associations[:invoices] = []
         end
@@ -400,9 +412,9 @@ design, but sometimes you have to play with the cards you are dealt).
   class Invoice < Sequel::Model
     many_to_one :client, :key=>:client_name, \
       :dataset=>proc{Client.filter(:name=>client_name)}, \
-      :eager_loader=>(proc do |
-        id_map = key_hash[:client_name]
-
+      :eager_loader=>(proc do |eo|
+        id_map = eo[:key_hash][:client_name]
+        eo[:rows].each{|inv| inv.associations[:client] = nil}
         Client.filter(:name=>id_map.keys).all do |client|
           id_map[client.name].each{|inv| inv.associations[:client] = client}
         end
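The id_map pattern these custom loaders keep using can be shown without Sequel at all: index the parent rows by join key, make one batched "query" (simulated here with a plain array), and attach each result to its parents in a single pass. The `Invoice`/`Client` structs and `attach_clients` helper below are illustrative only, not Sequel API:

```ruby
# Plain-Ruby sketch of the eager-loading id_map pattern used above.
# One batched lookup over all client names replaces a query per invoice.
Invoice = Struct.new(:client_name, :associations)
Client  = Struct.new(:name)

def attach_clients(invoices, clients)
  # Build the id_map: client_name => invoices that reference that name
  id_map = Hash.new { |h, k| h[k] = [] }
  invoices.each do |inv|
    inv.associations = {:client => nil}
    id_map[inv.client_name] << inv
  end
  # One pass over the "query results" assigns every association
  clients.each do |c|
    id_map[c.name].each { |inv| inv.associations[:client] = c }
  end
  invoices
end

invoices = [Invoice.new('acme'), Invoice.new('acme'), Invoice.new('initech')]
clients  = [Client.new('acme'), Client.new('initech')]
attach_clients(invoices, clients)
invoices[0].associations[:client].name  # => "acme"
```

Setting every association to nil (or []) up front, as the real loaders do, guarantees that parents with no match are cached as "loaded but empty" rather than triggering a lazy query later.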
@@ -439,9 +451,9 @@ Here's the old way to do it via custom associations:
       :dataset=>(proc do
         FavoriteTrack.filter(:disc_number=>disc_number, :number=>number, :album_id=>album_id)
       end), \
-      :eager_loader=>(proc do |
+      :eager_loader=>(proc do |eo|
         id_map = {}
-
+        eo[:rows].each do |t|
           t.associations[:favorite_track] = nil
           id_map[[t[:album_id], t[:disc_number], t[:number]]] = t
         end
@@ -458,9 +470,9 @@ Here's the old way to do it via custom associations:
       :dataset=>(proc do
         Track.filter(:disc_number=>disc_number, :number=>number, :album_id=>album_id)
       end), \
-      :eager_loader=>(proc do |
+      :eager_loader=>(proc do |eo|
         id_map = {}
-
+        eo[:rows].each{|ft| id_map[[ft[:album_id], ft[:disc_number], ft[:number]]] = ft}
         Track.filter([:album_id, :disc_number, :number]=>id_map.keys).all do |t|
           id_map[[t[:album_id], t[:disc_number], t[:number]]].associations[:track] = t
         end
@@ -490,10 +502,10 @@ without knowing the depth of the tree?
 
   class Node < Sequel::Model
     many_to_one :ancestors, :class=>self,
-      :eager_loader=>(proc do |
+      :eager_loader=>(proc do |eo|
         # Handle cases where the root node has the same parent_id as primary_key
         # and also when it is NULL
-        non_root_nodes =
+        non_root_nodes = eo[:rows].reject do |n|
           if [nil, n.pk].include?(n.parent_id)
             # Make sure root nodes have their parent association set to nil
             n.associations[:parent] = nil
@@ -514,9 +526,9 @@ without knowing the depth of the tree?
         end
       end
     end)
-    many_to_one :descendants, :eager_loader=>(proc do |
+    many_to_one :descendants, :eager_loader=>(proc do |eo|
       id_map = {}
-
+      eo[:rows].each do |n|
         # Initialize an empty array of child associations for each parent node
         n.associations[:children] = []
         # Populate identity map of nodes
@@ -549,9 +561,9 @@ supports recursive common table expressions):
           Node.join(:t, :id=>:parent_id).
           select(:nodes.*))
       end),
-      :eager_loader=>(proc do |
-        id_map = key_hash[:id]
-
+      :eager_loader=>(proc do |eo|
+        id_map = eo[:key_hash][:id]
+        eo[:rows].each{|n| n.associations[:descendants] = []}
         Node.from(:t).
           with_recursive(:t, Node.filter(:parent_id=>id_map.keys).
             select(:parent_id___root, :id, :parent_id),
@@ -565,8 +577,11 @@ supports recursive common table expressions):
       end)
   end
 
-
-
+Sequel ships with an +rcte_tree+ plugin that allows simple creation
+of ancestors and descendants relationships that use recursive common
+table expressions:
+
+  Node.plugin :rcte_tree
 
 === Joining multiple keys to a single key, through a third table
 
@@ -584,10 +599,10 @@ name, with no duplicates?
   class Artist < Sequel::Model
     one_to_many :songs, :order=>:songs__name, \
       :dataset=>proc{Song.select(:songs.*).join(Lyric, :id=>:lyric_id, id=>[:composer_id, :arranger_id, :vocalist_id, :lyricist_id])}, \
-      :eager_loader=>(proc do |
-        h = key_hash[:id]
+      :eager_loader=>(proc do |eo|
+        h = eo[:key_hash][:id]
         ids = h.keys
-
+        eo[:rows].each{|r| r.associations[:songs] = []}
         Song.select(:songs.*, :lyrics__composer_id, :lyrics__arranger_id, :lyrics__vocalist_id, :lyrics__lyricist_id)\
           .join(Lyric, :id=>:lyric_id){{:composer_id=>ids, :arranger_id=>ids, :vocalist_id=>ids, :lyricist_id=>ids}.sql_or}\
           .order(:songs__name).all do |song|
@@ -596,7 +611,7 @@ name, with no duplicates?
           recs.each{|r| r.associations[:songs] << song} if recs
         end
       end
-
+      eo[:rows].each{|r| r.associations[:songs].uniq!}
     end)
   end
 
@@ -614,13 +629,13 @@ associated tickets.
     one_to_many :tickets
     many_to_one :ticket_hours, :read_only=>true, :key=>:id,
       :dataset=>proc{Ticket.filter(:project_id=>id).select{sum(hours).as(hours)}},
-      :eager_loader=>(proc do |
-
-      Ticket.filter(:project_id=>
+      :eager_loader=>(proc do |eo|
+        eo[:rows].each{|p| p.associations[:ticket_hours] = nil}
+        Ticket.filter(:project_id=>eo[:key_hash][:id].keys).
         group(:project_id).
         select{[project_id, sum(hours).as(hours)]}.
         all do |t|
-          p =
+          p = eo[:key_hash][:id][t.values.delete(:project_id)].first
           p.associations[:ticket_hours] = t
         end
       end)
data/doc/association_basics.rdoc
CHANGED
@@ -943,6 +943,18 @@ Defaults to primary key of the associated table.
 
 Can use an array of symbols for a composite key association.
 
+==== :join_table_block [+many_to_many+]
+
+A proc that can be used to modify the dataset used in the add/remove/remove_all
+methods. It's separate from the association block, as that is called on a
+join of the join table and the associated table, whereas this option just
+applies to the join table. It can be used to make sure additional columns are
+used when inserting, or that filters are used when deleting.
+
+  Artist.many_to_many :lead_guitar_albums, :join_table_block=>proc do |ds|
+    ds.filter(:instrument_id=>5).set_overrides(:instrument_id=>5)
+  end
+
 === Callback Options
 
 All callbacks can be specified as a Symbol, Proc, or array of both/either
@@ -1093,16 +1105,9 @@ to eagerly load:
 
 ==== :eager_loader
 
-A custom loader to use when eagerly load associated objects via eager.
-specified, should be a proc that takes three arguments: a key hash (used
-solely to enhance performance), an array of current model instances, and a
-hash of dependent associations to eagerly load. The proc is responsible for
-querying the database to retrieve all associated records for any of the model
-instances (the second argument), and modifying the associations cache for each
-record to correctly set the associated records for that record.
-
+A custom loader to use when eagerly load associated objects via eager.
 For many details and examples of custom eager loaders, please see the
-{Advanced Associations
+{Advanced Associations guide}[link:files/doc/advanced_associations_rdoc.html].
 
 ==== :eager_loader_key
 
data/doc/dataset_basics.rdoc
CHANGED
@@ -76,12 +76,12 @@ Most dataset methods fall into this category, which can be further broken down b
 
 SELECT:: select, select_all, select_append, select_more
 FROM:: from, from_self
-JOIN:: join,
+JOIN:: join, left_join, right_join, full_join, natural_join, natural_left_join, natural_right_join, natural_full_join, cross_join, inner_join, left_outer_join, right_outer_join, full_outer_join, join_table
 WHERE:: where, filter, exclude, and, or, grep, invert, unfiltered
 GROUP:: group, group_by, group_and_count, ungrouped
 HAVING:: having, filter, exclude, and, or, grep, invert, unfiltered
-ORDER:: order, order_by, order_more, reverse, reverse_order, unordered
-LIMIT:: limit
+ORDER:: order, order_by, order_append, order_prepend, order_more, reverse, reverse_order, unordered
+LIMIT:: limit, unlimited
 compounds:: union, intersect, except
 locking:: for_update, lock_style
 common table expressions:: with, with_recursive
data/doc/migration.rdoc
ADDED
@@ -0,0 +1,1011 @@
= Migrations

This guide is based on http://guides.rubyonrails.org/migrations.html

== Overview

Migrations make it easy to alter your database's schema in a systematic manner.
They make it easier to coordinate with other developers and make sure that
all developers are using the same database schema.

Migrations are optional; you don't have to use them. You can always just
create the necessary database structure manually using Sequel's schema
modification methods or another database tool. However, if you are dealing
with other developers, you'll have to send them all of the changes you are
making. Even if you aren't dealing with other developers, you generally have
to make the schema changes in 3 places (development, testing, and then
production), and it's probably easier to use the migrations system to apply
the schema changes than it is to keep track of the changes manually and
execute them manually at the appropriate time.

Sequel tracks which migrations you have already run, so to apply migrations
you generally just run Sequel's migrator with <tt>bin/sequel -m</tt>:

  sequel -m path/to/migrations postgres://host/database

Migrations in Sequel use a very simple DSL via the <tt>Sequel.migration</tt>
method, and inside the DSL, use the <tt>Sequel::Database</tt> schema
modification methods such as +create_table+ and +alter_table+.

== A Basic Migration

Here is a fairly basic Sequel migration:

  Sequel.migration do
    up do
      create_table(:artists) do
        primary_key :id
        String :name, :null=>false
      end
    end

    down do
      drop_table(:artists)
    end
  end

This migration has an +up+ block which adds an artists table with an integer primary key named id,
and a varchar or text column (depending on the database) named name that doesn't accept NULL values.
Migrations should include both +up+ and +down+ blocks, with the +down+ block reversing
the change made by +up+. However, if you never need to be able to migrate down
(i.e. you are one of the people that doesn't make mistakes), you can leave out
the +down+ block. In this case, the +down+ block just reverses the changes made by +up+,
dropping the table.

In normal usage, when Sequel's migrator runs, it runs the +up+ blocks for all
migrations that have not yet been applied. However, you can use the <tt>-M</tt>
switch to specify the version to which to migrate, and if it is lower than the
current version, Sequel will run the +down+ block on the appropriate migrations.

You are not limited to creating tables inside a migration; you can alter existing tables
as well as modify data. Let's say your artist database originally only included artists
from Sacramento, CA, USA, but now you want to branch out and include artists in any city:

  Sequel.migration do
    up do
      add_column :artists, :location, String
      self[:artists].update(:location=>'Sacramento')
    end

    down do
      drop_column :artists, :location
    end
  end

This migration adds a +location+ column to the +artists+ table, and sets the +location+ column
to <tt>'Sacramento'</tt> for all existing artists. It doesn't use a default on the column,
because future artists should not be assumed to come from Sacramento. In the +down+ block, it
just drops the +location+ column from the +artists+ table, reversing the actions of the +up+
block.

Note that when updating the +artists+ table in the update, a plain dataset is used, <tt>self[:artists]</tt>.
This looks a little weird, but you need to be aware that inside an +up+ or +down+ block in a migration,
self always refers to the <tt>Sequel::Database</tt> object that the migration is being applied to.
Since <tt>Database#[]</tt> creates datasets, using <tt>self[:artists]</tt> inside the +up+ block creates
a dataset on the database representing all columns in the +artists+ table, and updates it to set the
+location+ column to <tt>'Sacramento'</tt>.

It is possible to use model classes inside migrations, as long as they are loaded into the ruby interpreter,
but it's a bad habit, as changes to your model classes can then break old migrations, and this breakage is
often not caught until much later, such as when a new developer joins the team and wants to run all migrations
to create their development database.

== The +migration+ extension

The migration code is not technically part of the core of Sequel. It's not loaded by default, as it
is only useful in specific cases. It is one of the built-in extensions, which receive the same
level of support as Sequel's core.

If you want to play with Sequel's migration tools without using the <tt>bin/sequel</tt> tool, you
need to load the migration extension manually:

  Sequel.extension :migration

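As an aside, the shape of this DSL (an outer block containing +up+ and +down+ blocks) can be mimicked in a few lines of plain Ruby. The toy collector below is purely illustrative and is not how Sequel's migration extension is actually implemented:

```ruby
# Toy sketch of a Sequel.migration-style DSL: instance_eval the outer
# block against a collector object that records the up and down procs
# for later execution by a migrator.
class ToyMigration
  attr_reader :up_block, :down_block
  def up(&block); @up_block = block; end
  def down(&block); @down_block = block; end
end

def toy_migration(&block)
  m = ToyMigration.new
  m.instance_eval(&block)
  m
end

m = toy_migration do
  up   { :created_artists }
  down { :dropped_artists }
end

m.up_block.call    # => :created_artists
m.down_block.call  # => :dropped_artists
```

The `instance_eval` is what makes bare `up`/`down` calls inside the block resolve against the collector, the same trick that lets Sequel's real DSL make +self+ inside the blocks refer to the +Database+ object.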
== Schema methods

Migrations themselves do not contain any schema modification methods, but they make it easy to call
any of the <tt>Sequel::Database</tt> modification methods, of which there are many. The main
ones are +create_table+ and +alter_table+, but Sequel also comes with numerous other schema
modification methods, most of which are shortcuts for +alter_table+ (all of these methods are
described in more detail later):

* add_column
* add_index
* create_view
* drop_column
* drop_index
* drop_table
* drop_view
* rename_table
* rename_column
* set_column_default
* set_column_type

These methods handle the vast majority of cross database schema modification SQL. If you
need to drop down to SQL to execute some database specific code, you can use the +run+
method:

  Sequel.migration do
    up{run 'CREATE TRIGGER ...'}
    down{run 'DROP TRIGGER ...'}
  end

In this case, we are using { and } instead of do and end to define the blocks. Just as
before, the +run+ methods inside the blocks are called on the +Database+ object,
which just executes the code on the underlying database.

== Errors when running migrations

Sequel attempts to run migrations inside of a transaction. Some databases do not support
schema modifications made in transactions, and if the migration raises an error, it will
not roll back the previous schema changes made by the migration. In that case, you will
need to update the database by hand.

It's recommended to always run migrations on a test database and ensure they work
before running them on any production database.

== Migration files

While you can create migration objects yourself and apply them manually, most of the
benefit of using migrations comes from using Sequel's +Migrator+, which is what the
<tt>bin/sequel -m</tt> switch does. Sequel's +Migrator+ expects that each migration
will be in a separate file in a specific directory. The <tt>-m</tt> switch requires an
argument specifying the path to the directory containing the migration files.
For example:

  sequel -m db/migrations postgres://localhost/sequel_test

will look in the <tt>db/migrations</tt> folder relative to the current directory,
and run unapplied migrations on the PostgreSQL database sequel_test running on localhost.

== Two separate migrators

Sequel actually ships with two separate migrators. One is the +IntegerMigrator+, the other is
the +TimestampMigrator+. They both have pluses and minuses:

=== +IntegerMigrator+

* Simpler, uses migration versions starting with 1
* Doesn't allow duplicate migrations
* Doesn't allow missing migrations
* Just stores the version of the last migration run
* Good for a single developer or small teams with close communication
* Lower risk of undetected conflicting migrations
* Requires manual merging of simultaneous migrations

=== +TimestampMigrator+

* More complex, uses migration versions where the version should represent a timestamp
* Allows duplicate migrations (since you could have multiple in a given second)
* Allows missing migrations (since you obviously don't have one every second)
* Stores the file names of all applied migrations
* Good for large teams without close communication
* Higher risk of undetected conflicting migrations
* Does not require manual merging of simultaneous migrations

=== Filenames

In order for migration files to work with Sequel's +Migrator+, they must be named as follows:

  version_name.rb

where +version+ is an integer and +name+ is a string which should be a very brief
description of what the migration does. Each migration file should contain one and only one
call to <tt>Sequel.migration</tt>.

=== +IntegerMigrator+ Filenames

These are valid migration names for the +IntegerMigrator+:

  1_create_artists.rb
  2_add_artist_location.rb

The only problem with this naming format is that if you have more than 9 migrations, the 10th
one will look a bit odd:

  1_create_artists.rb
  2_add_artist_location.rb
  ...
  9_do_something.rb
  10_do_something_else.rb

For this reason, it's often best to start with 001 instead of 1, as that means you don't need
to worry about that issue until the 1000th migration:

  001_create_artists.rb
  002_add_artist_location.rb
  ...
  009_do_something.rb
  010_do_something_else.rb

It should be fairly obvious, but migrations start at 1, not 0. The migration version number 0
is important though, as it is used to mean that all migrations should be unapplied (i.e. all
+down+ blocks run). In Sequel, you can do that with:

  sequel -m db/migrations -M 0 postgres://localhost/sequel_test

=== +TimestampMigrator+ Filenames

With the +TimestampMigrator+, the version integer should represent a timestamp, though this isn't strictly
required.

For example, for <tt>5/10/2010 12:00:00pm</tt>, you could use any of the following formats:

  # Date
  20100510_create_artists.rb

  # Date and Time
  20100510120000_create_artists.rb

  # Unix Epoch Time Integer
  1273518000_create_artists.rb

The important thing is that all migration files should be in the same format; otherwise, when you
update, it'll be difficult to make sure migrations are applied in the correct order, as well as
difficult to unapply some of the affected migrations correctly.

The +TimestampMigrator+ will be used if any filename in the migrations directory has a version
greater than 20000101. Otherwise, the +IntegerMigrator+ will be used.

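The filename convention and the 20000101 cutoff described above can be sketched in plain Ruby. The helper names here (`parse_migration_filename`, `choose_migrator`) are hypothetical, not Sequel's internal API:

```ruby
# Sketch: split "version_name.rb" into its integer version and name,
# then pick a migrator the way the documented cutoff works: if any
# version is greater than 20000101, assume timestamp versions.
def parse_migration_filename(filename)
  m = /\A(\d+)_(.+)\.rb\z/.match(File.basename(filename))
  raise ArgumentError, "not a migration filename: #{filename}" unless m
  [m[1].to_i, m[2]]
end

def choose_migrator(filenames)
  versions = filenames.map { |f| parse_migration_filename(f).first }
  versions.any? { |v| v > 20000101 } ? :timestamp_migrator : :integer_migrator
end

choose_migrator(%w[001_create_artists.rb 002_add_artist_location.rb])
# => :integer_migrator
choose_migrator(%w[1273518000_create_artists.rb])
# => :timestamp_migrator
```

Note how the cutoff works for all three timestamp formats shown above: a date (20100510), a date-and-time (20100510120000), and a Unix epoch integer (1273518000) all exceed 20000101, while small sequential versions never do.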
=== How to choose
|
253
|
+
|
254
|
+
Basically, unless you need the features provided by the +TimestampMigrator+, stick with the
|
255
|
+
+IntegerMigrator+, as it is simpler and makes it easier to detect possible errors.
|
256
|
+
|
257
|
+
For a single developer, the +TimestampMigrator+ has no real benefits, so I would always recommend
|
258
|
+
the +IntegerMigrator+. When dealing with multiple developers, it depends on the size of the
|
259
|
+
development team, the team's communication level, and the level of overlap between developers.
|
260
|
+
|
261
|
+
Let's say Alice works on a new feature that requires a migration at the same time Bob works
on a separate feature that requires an unrelated migration. If both developers are committing
to their own private repositories, when it comes time to merge, the +TimestampMigrator+ will not
require any manual changes. That's because Alice will have a migration such as
<tt>20100512_do_this.rb</tt> and Bob will have one such as <tt>20100512_do_that.rb</tt>.

If the +IntegerMigrator+ were used, Alice would have <tt>34_do_this.rb</tt> and Bob would have
<tt>34_do_that.rb</tt>, and the +IntegerMigrator+ would raise an exception due to the
duplicate migration version. The only way to fix it would be to renumber one of the two
migrations, and have the affected developer manually modify their database.

So for unrelated migrations, the +TimestampMigrator+ works fine. However, let's say that the
migrations are related, in such a way that if Bob's is run first, Alice's will fail. In this
case, the +TimestampMigrator+ would not raise an error when Bob merges Alice's changes, since
Bob ran his migration first. However, it would raise an error when Alice runs Bob's migration,
and could leave the database in an inconsistent state if the database doesn't support transactional
schema changes.

With the +TimestampMigrator+, you are trading reliability for convenience. That's possibly a valid
trade, especially if simultaneous related schema changes by separate developers are unlikely, but
you should give it some thought before using it.

== Modifying existing migrations

Just don't do it.

In general, you should not modify any migration that has been run on the database and committed to
the source control repository, unless the migration contains an error that causes data loss. As long
as it is possible to undo the migration without losing data, you should just add another migration
that undoes the actions of the previous bad migration, and maybe does the correct action afterward.

The main problem with modifying existing migrations is that you will have to manually modify any
databases that ran the migration before it was modified. If you are a single developer, that may be
an option, but certainly if you have multiple developers, it's a lot more work.

== Creating a migration

Sequel doesn't come with generators that create migrations for you. However, creating a migration
is as simple as creating a file with the appropriate filename in your migrations directory that
contains a <tt>Sequel.migration</tt> call. The minimal do-nothing migration is:

  Sequel.migration{}

However, the migrations you write should contain an +up+ block that does something, and a +down+ block that
reverses the changes made by the +up+ block:

  Sequel.migration do
    up{...}
    down{...}
  end

== Schema modification methods

Inside your migration's +down+ and +up+ blocks is where you will call the +Database+ schema modification methods.
Here's a brief description of the most common schema modification methods:

=== +create_table+

+create_table+ is the most common schema modification method, and it's used for adding new tables
to the schema. You provide it with the name of the table as a symbol, as well as a block:

  create_table(:artists) do
    primary_key :id
    String :name
  end

Note that if you want a primary key for the table, you need to specify it; Sequel does not create one
by default.

==== Column types

Most method calls inside the create_table block will create columns, since +method_missing+ calls
+column+. Columns are generally created by specifying the column type as the method
name, followed by the column name symbol to use, and after that any options that should be used.
If the method is a ruby class name that Sequel recognizes, Sequel will transform it into the appropriate
type for the given database. So while you specify +String+, Sequel will actually use +varchar+ or
+text+ depending on the underlying database. Here's a list of all the ruby classes that Sequel will
convert to database types:

  create_table(:columns_types) do         # common database type used
    Integer :a0                           # integer
    String :a1                            # varchar(255)
    String :a2, :size=>50                 # varchar(50)
    String :a3, :fixed=>true              # char(255)
    String :a4, :fixed=>true, :size=>50   # char(50)
    String :a5, :text=>true               # text
    File :b                               # blob
    Fixnum :c                             # integer
    Bignum :d                             # bigint
    Float :e                              # double precision
    BigDecimal :f                         # numeric
    BigDecimal :f2, :size=>10             # numeric(10)
    BigDecimal :f3, :size=>[10, 2]        # numeric(10, 2)
    Date :g                               # date
    DateTime :h                           # timestamp
    Time :i                               # timestamp
    Time :i2, :only_time=>true            # time
    Numeric :j                            # numeric
    TrueClass :k                          # boolean
    FalseClass :l                         # boolean
  end

Note that in addition to the ruby class name, Sequel also pays attention to the column options when
determining which database type to use.

Also note that this conversion is only done if you use a supported ruby class name. In all other
cases, Sequel uses the type specified verbatim:

  create_table(:columns_types) do   # database type used
    string :a1                      # string
    datetime :a2                    # datetime
    blob :a3                        # blob
    inet :a4                        # inet
  end

In addition to specifying the types as methods, you can use the +column+ method and specify the types
as the second argument, either as ruby classes, symbols, or strings:

  create_table(:columns_types) do   # database type used
    column :a1, :string             # string
    column :a2, String              # varchar(255)
    column :a3, 'string'            # string
    column :a4, :datetime           # datetime
    column :a5, DateTime            # timestamp
    column :a6, 'timestamp(6)'      # timestamp(6)
  end

==== Column options

When using the type name as a method, the third argument is an options hash, and when using the +column+
method, the fourth argument is the options hash. The following options are supported:

:default :: The default value for the column.
:index :: Create an index on this column.
:null :: Mark the column as allowing NULL values (if true),
         or not allowing NULL values (if false). If unspecified, will default
         to whatever the database default is.
:size :: The size of the column, generally used with string
         columns to specify the maximum number of characters the column will hold.
         An array of two integers can be provided to set the size and the
         precision, respectively, of decimal columns.
:unique :: Mark the column as unique; generally has the same effect as
           creating a unique index on the column.
:unsigned :: Make the column type unsigned, only useful for integer
             columns.

==== Other methods

In addition to the +column+ method and other methods that create columns, there are other methods that can be used:

==== +primary_key+

You've seen this one used already. It's used to create an autoincrementing integer primary key column.

  create_table(:a0){primary_key :id}

If you want to create a primary key column that doesn't use an autoincrementing integer, you should
not use this method. Instead, you should use the :primary_key option to the +column+ method or type
method:

  create_table(:a1){Integer :id, :primary_key=>true}  # Non autoincrementing integer primary key
  create_table(:a2){String :name, :primary_key=>true} # varchar(255) primary key

If you want to create a composite primary key, you should call the +primary_key+ method with an
array of column symbols:

  create_table(:items) do
    Integer :group_id
    Integer :position
    primary_key [:group_id, :position]
  end

If provided with an array, +primary_key+ does not create a column; it just sets up the primary key constraint.

==== +foreign_key+

+foreign_key+ is used to create a foreign key column that references a column in another table (or the same table).
It takes the column name as the first argument, the table it references as the second argument, and an options hash
as its third argument. A simple example is:

  create_table(:albums) do
    primary_key :id
    foreign_key :artist_id, :artists
    String :name
  end

+foreign_key+ accepts some specific options:

:deferrable :: Makes the foreign key constraint checks deferrable, so they aren't checked
               until the end of the transaction.
:key :: For foreign key columns, the column in the associated table
        that this column references. Unnecessary if this column
        references the primary key of the associated table, at least
        on most databases.
:on_delete :: Specify the behavior of this foreign key column when the row with the primary key
              it references is deleted. Can be :restrict, :cascade, :set_null, or :set_default.
:on_update :: Specify the behavior of this foreign key column when the value of the primary key
              it references is modified. Can be
              :restrict, :cascade, :set_null, or :set_default.

Like +primary_key+, if you provide +foreign_key+ with an array of symbols, it will not create a
column, but will create a foreign key constraint:

  create_table(:artists) do
    String :name
    String :location
    primary_key [:name, :location]
  end
  create_table(:albums) do
    String :artist_name
    String :artist_location
    String :name
    foreign_key [:artist_name, :artist_location], :artists
  end

==== +index+

+index+ creates indexes on the table. For single columns, calling index is the same as using the
<tt>:index</tt> option when creating the column:

  create_table(:a){Integer :id, :index=>true}
  # Same as:
  create_table(:a) do
    Integer :id
    index :id
  end

Similar to the +primary_key+ and +foreign_key+ methods, calling +index+ with an array of symbols
will create a multiple column index:

  create_table(:albums) do
    primary_key :id
    foreign_key :artist_id, :artists
    Integer :position
    index [:artist_id, :position]
  end

The +index+ method also accepts some options:

:name :: The name of the index (generated based on the table and column names if not provided).
:type :: The type of index to use (only supported by some databases).
:unique :: Make the index unique, so duplicate values are not allowed.
:where :: Create a partial index (only supported by some databases).

==== +unique+

The +unique+ method creates a unique constraint on the table. A unique constraint generally
operates identically to a unique index, so the following three +create_table+ blocks are
pretty much identical:

  create_table(:a){Integer :a, :unique=>true}

  create_table(:a) do
    Integer :a
    index :a, :unique=>true
  end

  create_table(:a) do
    Integer :a
    unique :a
  end

Just like +index+, +unique+ can set up a multiple column unique constraint, where the
combination of the columns must be unique:

  create_table(:a) do
    Integer :a
    Integer :b
    unique [:a, :b]
  end

==== +full_text_index+ and +spatial_index+

Both of these create specialized index types supported by some databases. They
both take the same options as +index+.

==== +constraint+

+constraint+ creates a named table constraint:

  create_table(:artists) do
    primary_key :id
    String :name
    constraint(:name_min_length){char_length(name) > 2}
  end

Instead of using a block, you can use arguments that will be handled similarly
to <tt>Dataset#filter</tt>:

  create_table(:artists) do
    primary_key :id
    String :name
    constraint(:name_length_range, :char_length.sql_function(:name)=>3..50)
  end

==== +check+

+check+ operates just like +constraint+, except that it doesn't take a name
and it creates an unnamed constraint:

  create_table(:artists) do
    primary_key :id
    String :name
    check{char_length(name) > 2}
  end

=== +alter_table+

+alter_table+ is used to alter existing tables, changing their columns, indexes,
or constraints. It is used just like +create_table+, accepting a block which
is instance_evaled, and providing its own methods:

==== +add_column+

One of the most common methods, +add_column+ is used to add a column to the table.
Its API is similar to that of +create_table+'s +column+ method, where the first
argument is the column name, the second is the type, and the third is an options
hash:

  alter_table(:albums) do
    add_column :copies_sold, Integer, :default=>0
  end

When adding a column, it's a good idea to provide a default value, unless you
want the value for all rows to be set to NULL.

==== +drop_column+

As you may expect, +drop_column+ takes a column name and drops the column. It's
often used in the +down+ block of a migration to drop a column added in an +up+ block:

  alter_table(:albums) do
    drop_column :copies_sold
  end

==== +rename_column+

+rename_column+ is used to rename a column. It takes the old column name as the first
argument, and the new column name as the second argument:

  alter_table(:albums) do
    rename_column :copies_sold, :total_sales
  end

==== +add_primary_key+

If you forgot to include a primary key on the table, and want to add one later, you
can use +add_primary_key+. A common use of this is to make many_to_many association
join tables into real models:

  alter_table(:albums_artists) do
    add_primary_key :id
  end

Just like +create_table+'s +primary_key+ method, if you provide an array of symbols,
Sequel will not add a column, but will add a composite primary key constraint:

  alter_table(:albums_artists) do
    add_primary_key [:album_id, :artist_id]
  end

If you just want to take an existing single column and make it a primary key, call
+add_primary_key+ with an array containing a single symbol:

  alter_table(:artists) do
    add_primary_key [:id]
  end

==== +add_foreign_key+

+add_foreign_key+ can be used to add a new foreign key column or constraint to a table.
Like +add_primary_key+, if you provide it with a symbol as the first argument, it
creates a new column:

  alter_table(:albums) do
    add_foreign_key :artist_id, :artists
  end

If you want to add a new foreign key constraint to an existing column, you provide an
array with a single element:

  alter_table(:albums) do
    add_foreign_key [:artist_id], :artists
  end

To set up a multiple column foreign key constraint, use an array with multiple column
symbols:

  alter_table(:albums) do
    add_foreign_key [:artist_name, :artist_location], :artists
  end

==== +add_index+

+add_index+ works just like +create_table+'s +index+ method, creating a new index on
the table:

  alter_table(:albums) do
    add_index :artist_id
  end

It accepts the same options as +create_table+'s +index+ method, and you can set up
a multiple column index using an array:

  alter_table(:albums_artists) do
    add_index [:album_id, :artist_id], :unique=>true
  end

==== +drop_index+

As you may expect, +drop_index+ drops an existing index:

  alter_table(:albums) do
    drop_index :artist_id
  end

Just like +drop_column+, it is often used in the +down+ block of a migration.

==== +add_full_text_index+, +add_spatial_index+

Corresponding to +create_table+'s +full_text_index+ and +spatial_index+ methods,
these two methods create new indexes on the table.

==== +add_constraint+

This adds a named constraint to the table, similar to +create_table+'s +constraint+
method:

  alter_table(:albums) do
    add_constraint(:name_min_length){char_length(name) > 2}
  end

There is no method to add an unnamed constraint, but you can pass nil as the first
argument of +add_constraint+ to do so. However, it's not recommended to do that,
as it is difficult to drop such a constraint.

==== +add_unique_constraint+

This adds a unique constraint to the table, similar to +create_table+'s +unique+
method. This usually has the same effect as adding a unique index.

  alter_table(:albums) do
    add_unique_constraint [:artist_id, :name]
  end

==== +drop_constraint+

This method drops an existing named constraint:

  alter_table(:albums) do
    drop_constraint(:name_min_length)
  end

There is no database independent method to drop an unnamed constraint. Generally, the
database will give it a name automatically, and you will have to figure out what it is.
For that reason, you should not add unnamed constraints that you might ever need to remove.

==== +set_column_default+

This modifies the default value of a column:

  alter_table(:albums) do
    set_column_default :copies_sold, 0
  end

==== +set_column_type+

This modifies a column's type. Most databases will attempt to convert existing values in
the column to the new type:

  alter_table(:albums) do
    set_column_type :copies_sold, Bignum
  end

You can specify the type as a string or symbol, in which case it is used verbatim, or as a supported
ruby class, in which case it gets converted to an appropriate database type.

==== +set_column_allow_null+

This changes the NULL or NOT NULL setting of a column:

  alter_table(:albums) do
    set_column_allow_null :artist_id, true     # NULL
    set_column_allow_null :copies_sold, false  # NOT NULL
  end

=== Other +Database+ schema modification methods

<tt>Sequel::Database</tt> has many schema modification instance methods,
most of which are shortcuts to the same methods in +alter_table+. The
following +Database+ instance methods just call +alter_table+ with a
block that calls the method with the same name inside the +alter_table+
block with all arguments after the first argument (which is used as
the table name):

* +add_column+
* +drop_column+
* +rename_column+
* +add_index+
* +drop_index+
* +set_column_default+
* +set_column_type+

For example, the following two method calls do the same thing:

  alter_table(:artists){add_column :copies_sold, Integer}
  add_column :artists, :copies_sold, Integer

There are some other schema modification methods that have no +alter_table+
counterpart:

==== +drop_table+

+drop_table+ takes multiple arguments and treats each argument as a
table name to drop:

  drop_table(:albums_artists, :albums, :artists)

Note that when dropping tables, you may need to drop them in a specific order
if you are using foreign keys and the database is enforcing referential
integrity. In general, you need to drop the tables containing the foreign
keys before the tables containing the primary keys they reference.

==== +rename_table+

You can rename an existing table using +rename_table+. Like +rename_column+,
the first argument is the current name, and the second is the new name:

  rename_table(:artist, :artists)

==== <tt>create_table!</tt>

<tt>create_table!</tt> with the bang drops the table unconditionally (swallowing
any errors) before attempting to create it, so:

  create_table!(:artists) do
    primary_key :id
  end

is the same as:

  drop_table(:artists) rescue nil
  create_table(:artists) do
    primary_key :id
  end

It should not be used inside migrations, since if the table does not exist, it may
mess up the migration.

==== <tt>create_table?</tt>

<tt>create_table?</tt> with a question mark only creates the table if it does
not already exist, so:

  create_table?(:artists) do
    primary_key :id
  end

is the same as:

  create_table(:artists) do
    primary_key :id
  end unless table_exists?(:artists)

Like <tt>create_table!</tt>, it should not be used inside migrations.

==== +create_view+ and +create_or_replace_view+

These can be used to create views. The difference between them is that
+create_or_replace_view+ will unconditionally replace an existing view of
the same name, while +create_view+ will probably raise an error. Both methods
take the name as the first argument, and either a string or a dataset as the
second argument:

  create_view(:gold_albums, DB[:albums].filter{copies_sold > 500000})
  create_or_replace_view(:gold_albums, "SELECT * FROM albums WHERE copies_sold > 500000")

==== +drop_view+

+drop_view+ drops existing views. Just like +drop_table+, it can accept multiple
arguments:

  drop_view(:gold_albums, :platinum_albums)

== What to put in your migration's +down+ block

It's usually easy to determine what you should put in your migration's +up+ block,
as it's whatever change you want to make to the database. The +down+ block is
less obvious. In general, it should reverse the changes made by the +up+ block, which means
it should execute the opposite of what the +up+ block does, in the reverse order from which
the +up+ block does it. Here's an example where you are switching from having a single
artist per album to multiple artists per album:

  Sequel.migration do
    up do
      # Create albums_artists table
      create_table(:albums_artists) do
        foreign_key :album_id, :albums
        foreign_key :artist_id, :artists
        index [:album_id, :artist_id], :unique=>true
      end

      # Insert one row in the albums_artists table
      # for each row in the albums table where there
      # is an associated artist
      DB[:albums_artists].insert([:album_id, :artist_id],
       DB[:albums].select(:id, :artist_id).exclude(:artist_id=>nil))

      # Drop the now unnecessary column from the albums table
      drop_column :albums, :artist_id
    end
    down do
      # Add the foreign key column back to the albums table
      alter_table(:albums){add_foreign_key :artist_id, :artists}

      # If possible, associate each album with one of the artists
      # it was associated with. This loses information, but
      # there's no way around that.
      DB[:albums_artists].
       group(:album_id).
       select{[album_id, max(artist_id).as(artist_id)]}.
       having{artist_id > 0}.
       all do |r|
        DB[:albums].
         filter(:id=>r[:album_id]).
         update(:artist_id=>r[:artist_id])
      end

      # Drop the albums_artists table
      drop_table(:albums_artists)
    end
  end

Note that the order in which things are done in the +down+ block is the
reverse of the order in which they are done in the +up+ block. Also note that it
isn't always possible to reverse exactly what was done in the +up+ block.
You should try to do so as much as possible, but if you can't, you may
want to have your +down+ block raise a <tt>Sequel::Error</tt> exception
saying why the migration cannot be reverted.

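The reversal principle above can be sketched abstractly: invert each +up+ action, then apply the inverses in reverse order. A toy illustration in plain ruby (the INVERSE map and the operation tuples are hypothetical, not Sequel APIs):

```ruby
# Each up action has an inverse, and the down block applies the
# inverses in reverse order.
INVERSE = {:create_table=>:drop_table, :add_column=>:drop_column}

up_ops   = [[:create_table, :albums_artists], [:add_column, :position]]
down_ops = up_ops.reverse.map { |op, arg| [INVERSE[op], arg] }
# => [[:drop_column, :position], [:drop_table, :albums_artists]]
```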
== Running migrations

You can run migrations using the +sequel+ command line program that
comes with Sequel. If you use the <tt>-m</tt> switch, +sequel+ will
run the migrator instead of giving you an IRB session. The <tt>-m</tt>
switch requires an argument that should be a path to a directory of migration
files:

  sequel -m relative/path/to/migrations postgres://host/database
  sequel -m /absolute/path/to/migrations postgres://host/database

If you do not provide a <tt>-M</tt> switch, +sequel+ will migrate to the latest
version in the directory. If you provide a <tt>-M</tt> switch, it should specify
an integer version to which to migrate.

  # Migrate all the way down
  sequel -m db/migrations -M 0 postgres://host/database

  # Migrate to version 10 (IntegerMigrator style migrations)
  sequel -m db/migrations -M 10 postgres://host/database

  # Migrate to version 20100510 (TimestampMigrator migrations using YYYYMMDD)
  sequel -m db/migrations -M 20100510 postgres://host/database

Whether migrations use the +up+ or +down+ block depends on the version
to which you are migrating. If you don't provide a <tt>-M</tt> switch, all
unapplied migrations will be migrated up. If you provide a <tt>-M</tt>, it will
depend on which migrations have been applied. Applied migrations greater
than that version will be migrated down, while unapplied migrations less than
or equal to that version will be migrated up.

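That direction rule can be expressed as a pure function of a migration's version, the target version, and the set of already-applied versions. A minimal sketch (the +direction+ helper is hypothetical; the migrator handles this internally):

```ruby
# :down for applied migrations above the target,
# :up for unapplied migrations at or below the target,
# :skip otherwise.
def direction(version, target, applied)
  if applied.include?(version)
    version > target ? :down : :skip
  else
    version <= target ? :up : :skip
  end
end

direction(12, 10, [11, 12]) # => :down  (applied, above target)
direction(9, 10, [])        # => :up    (unapplied, at or below target)
direction(11, 10, [])       # => :skip  (unapplied, above target)
```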
== Verbose migrations

By default, <tt>sequel -m</tt> operates as a well behaved command line utility
should, printing out nothing if there is no error. If you want to see the SQL
being executed during a migration, as well as the amount of time that each
migration takes, you can use the <tt>-E</tt> option to +sequel+ to set up a
+Database+ logger that logs to +STDOUT+. You can also log that same output to
a file using the <tt>-l</tt> option with a log file name.

== Using models in your migrations

Just don't do it.

It can be tempting to use models in your migrations, especially since it's easy
to load them at the same time using the <tt>-L</tt> option to +sequel+. However,
this ties your migrations to your models, and makes it so that changes in your
models can break old migrations.

With Sequel, it should be easy to use plain datasets to accomplish pretty much
anything you would want to accomplish in a migration. Even if you have to
copy some code from a model method into a migration itself, it's better than
having your migration use models and call model methods.

== Dumping the current schema as a migration

Sequel comes with a +schema_dumper+ extension that dumps the current schema of
the database as a migration to +STDOUT+ (which you can redirect to a file using
>). This is exposed in the +sequel+ command line tool with the <tt>-d</tt> and
<tt>-D</tt> switches. <tt>-d</tt> dumps the schema in database independent
format, while <tt>-D</tt> dumps the schema using a non-portable format, useful
if you are using nonportable columns such as +inet+ in your database.

Let's say you have an existing database and want to create a migration that
would recreate the database's schema:

  sequel -d postgres://host/database > db/migrations/001_start.rb

or using a nonportable format:

  sequel -D postgres://host/database > db/migrations/001_start.rb

The main difference between the two is that <tt>-d</tt> will use the type methods
with the database independent ruby class types, while <tt>-D</tt> will use
the +column+ method with string types.

Note that Sequel cannot dump constraints other than primary key constraints,
so it dumps foreign key columns as plain integers. If you are using any real
database features such as foreign keys, constraints, or triggers, you should
use your database's dump and restore programs instead of Sequel's schema
dumper.

You can take the migration created by the schema dumper to another computer
with an empty database, and attempt to recreate the schema using:

  sequel -m db/migrations postgres://host/database

== Old-style migration classes

Before the <tt>Sequel.migration</tt> DSL was introduced, Sequel used classes
for migrations:

  Class.new(Sequel::Migration) do
    def up
    end
    def down
    end
  end

or:

  class DoSomething < Sequel::Migration
    def up
    end
    def down
    end
  end

This usage is discouraged in new code, but will continue to be supported indefinitely.
It is not recommended to convert old-style migration classes to the <tt>Sequel.migration</tt>
DSL, but it is recommended to use the <tt>Sequel.migration</tt> DSL for all new migrations.