sequel 3.46.0 → 3.47.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG +96 -0
- data/Rakefile +7 -1
- data/bin/sequel +6 -4
- data/doc/active_record.rdoc +1 -1
- data/doc/advanced_associations.rdoc +14 -35
- data/doc/association_basics.rdoc +66 -4
- data/doc/migration.rdoc +4 -0
- data/doc/opening_databases.rdoc +6 -0
- data/doc/postgresql.rdoc +302 -0
- data/doc/release_notes/3.47.0.txt +270 -0
- data/doc/security.rdoc +6 -0
- data/lib/sequel/adapters/ibmdb.rb +9 -9
- data/lib/sequel/adapters/jdbc.rb +22 -7
- data/lib/sequel/adapters/jdbc/postgresql.rb +7 -2
- data/lib/sequel/adapters/mock.rb +2 -0
- data/lib/sequel/adapters/postgres.rb +44 -13
- data/lib/sequel/adapters/shared/mssql.rb +1 -1
- data/lib/sequel/adapters/shared/mysql.rb +2 -2
- data/lib/sequel/adapters/shared/postgres.rb +94 -55
- data/lib/sequel/adapters/shared/sqlite.rb +3 -1
- data/lib/sequel/adapters/sqlite.rb +2 -2
- data/lib/sequel/adapters/utils/pg_types.rb +1 -14
- data/lib/sequel/adapters/utils/split_alter_table.rb +3 -3
- data/lib/sequel/connection_pool/threaded.rb +1 -1
- data/lib/sequel/core.rb +1 -1
- data/lib/sequel/database/connecting.rb +2 -2
- data/lib/sequel/database/features.rb +5 -0
- data/lib/sequel/database/misc.rb +47 -5
- data/lib/sequel/database/query.rb +2 -2
- data/lib/sequel/dataset/actions.rb +4 -2
- data/lib/sequel/dataset/misc.rb +1 -1
- data/lib/sequel/dataset/prepared_statements.rb +1 -1
- data/lib/sequel/dataset/query.rb +8 -6
- data/lib/sequel/dataset/sql.rb +8 -6
- data/lib/sequel/extensions/constraint_validations.rb +5 -2
- data/lib/sequel/extensions/migration.rb +10 -8
- data/lib/sequel/extensions/pagination.rb +3 -0
- data/lib/sequel/extensions/pg_array.rb +85 -25
- data/lib/sequel/extensions/pg_hstore.rb +8 -1
- data/lib/sequel/extensions/pg_hstore_ops.rb +4 -1
- data/lib/sequel/extensions/pg_inet.rb +16 -13
- data/lib/sequel/extensions/pg_interval.rb +6 -2
- data/lib/sequel/extensions/pg_json.rb +18 -11
- data/lib/sequel/extensions/pg_range.rb +17 -2
- data/lib/sequel/extensions/pg_range_ops.rb +7 -5
- data/lib/sequel/extensions/pg_row.rb +29 -12
- data/lib/sequel/extensions/pretty_table.rb +3 -0
- data/lib/sequel/extensions/query.rb +3 -0
- data/lib/sequel/extensions/schema_caching.rb +2 -0
- data/lib/sequel/extensions/schema_dumper.rb +3 -1
- data/lib/sequel/extensions/select_remove.rb +3 -0
- data/lib/sequel/model.rb +8 -2
- data/lib/sequel/model/associations.rb +39 -27
- data/lib/sequel/model/base.rb +99 -38
- data/lib/sequel/model/plugins.rb +25 -0
- data/lib/sequel/plugins/association_autoreloading.rb +27 -22
- data/lib/sequel/plugins/association_dependencies.rb +1 -7
- data/lib/sequel/plugins/auto_validations.rb +110 -0
- data/lib/sequel/plugins/boolean_readers.rb +1 -6
- data/lib/sequel/plugins/caching.rb +6 -13
- data/lib/sequel/plugins/class_table_inheritance.rb +1 -0
- data/lib/sequel/plugins/composition.rb +14 -7
- data/lib/sequel/plugins/constraint_validations.rb +2 -13
- data/lib/sequel/plugins/defaults_setter.rb +1 -6
- data/lib/sequel/plugins/dirty.rb +8 -0
- data/lib/sequel/plugins/error_splitter.rb +54 -0
- data/lib/sequel/plugins/force_encoding.rb +1 -5
- data/lib/sequel/plugins/hook_class_methods.rb +1 -6
- data/lib/sequel/plugins/input_transformer.rb +79 -0
- data/lib/sequel/plugins/instance_filters.rb +7 -1
- data/lib/sequel/plugins/instance_hooks.rb +7 -1
- data/lib/sequel/plugins/json_serializer.rb +5 -10
- data/lib/sequel/plugins/lazy_attributes.rb +20 -7
- data/lib/sequel/plugins/list.rb +1 -6
- data/lib/sequel/plugins/many_through_many.rb +1 -2
- data/lib/sequel/plugins/many_to_one_pk_lookup.rb +23 -39
- data/lib/sequel/plugins/optimistic_locking.rb +1 -5
- data/lib/sequel/plugins/pg_row.rb +4 -2
- data/lib/sequel/plugins/pg_typecast_on_load.rb +3 -7
- data/lib/sequel/plugins/prepared_statements.rb +1 -5
- data/lib/sequel/plugins/prepared_statements_safe.rb +2 -11
- data/lib/sequel/plugins/rcte_tree.rb +2 -2
- data/lib/sequel/plugins/serialization.rb +11 -13
- data/lib/sequel/plugins/serialization_modification_detection.rb +13 -1
- data/lib/sequel/plugins/single_table_inheritance.rb +4 -4
- data/lib/sequel/plugins/static_cache.rb +67 -19
- data/lib/sequel/plugins/string_stripper.rb +7 -27
- data/lib/sequel/plugins/subclasses.rb +3 -5
- data/lib/sequel/plugins/tactical_eager_loading.rb +2 -2
- data/lib/sequel/plugins/timestamps.rb +2 -7
- data/lib/sequel/plugins/touch.rb +5 -8
- data/lib/sequel/plugins/tree.rb +1 -6
- data/lib/sequel/plugins/typecast_on_load.rb +1 -5
- data/lib/sequel/plugins/update_primary_key.rb +26 -14
- data/lib/sequel/plugins/validation_class_methods.rb +31 -16
- data/lib/sequel/plugins/validation_helpers.rb +50 -26
- data/lib/sequel/plugins/xml_serializer.rb +3 -6
- data/lib/sequel/sql.rb +1 -1
- data/lib/sequel/version.rb +1 -1
- data/spec/adapters/postgres_spec.rb +131 -15
- data/spec/adapters/sqlite_spec.rb +1 -1
- data/spec/core/connection_pool_spec.rb +16 -17
- data/spec/core/database_spec.rb +111 -40
- data/spec/core/dataset_spec.rb +65 -74
- data/spec/core/expression_filters_spec.rb +6 -5
- data/spec/core/object_graph_spec.rb +0 -1
- data/spec/core/schema_spec.rb +23 -23
- data/spec/core/spec_helper.rb +5 -1
- data/spec/extensions/association_dependencies_spec.rb +1 -1
- data/spec/extensions/association_proxies_spec.rb +1 -1
- data/spec/extensions/auto_validations_spec.rb +90 -0
- data/spec/extensions/caching_spec.rb +6 -0
- data/spec/extensions/class_table_inheritance_spec.rb +8 -1
- data/spec/extensions/composition_spec.rb +12 -5
- data/spec/extensions/constraint_validations_spec.rb +4 -4
- data/spec/extensions/core_refinements_spec.rb +29 -79
- data/spec/extensions/dirty_spec.rb +14 -0
- data/spec/extensions/error_splitter_spec.rb +18 -0
- data/spec/extensions/identity_map_spec.rb +0 -1
- data/spec/extensions/input_transformer_spec.rb +54 -0
- data/spec/extensions/instance_filters_spec.rb +6 -0
- data/spec/extensions/instance_hooks_spec.rb +12 -1
- data/spec/extensions/json_serializer_spec.rb +0 -1
- data/spec/extensions/lazy_attributes_spec.rb +64 -55
- data/spec/extensions/looser_typecasting_spec.rb +1 -1
- data/spec/extensions/many_through_many_spec.rb +3 -4
- data/spec/extensions/many_to_one_pk_lookup_spec.rb +53 -15
- data/spec/extensions/migration_spec.rb +16 -0
- data/spec/extensions/null_dataset_spec.rb +1 -1
- data/spec/extensions/pg_array_spec.rb +48 -1
- data/spec/extensions/pg_hstore_ops_spec.rb +10 -2
- data/spec/extensions/pg_hstore_spec.rb +5 -0
- data/spec/extensions/pg_inet_spec.rb +5 -0
- data/spec/extensions/pg_interval_spec.rb +7 -3
- data/spec/extensions/pg_json_spec.rb +6 -1
- data/spec/extensions/pg_range_ops_spec.rb +4 -1
- data/spec/extensions/pg_range_spec.rb +5 -0
- data/spec/extensions/pg_row_plugin_spec.rb +13 -0
- data/spec/extensions/pg_row_spec.rb +28 -19
- data/spec/extensions/pg_typecast_on_load_spec.rb +6 -1
- data/spec/extensions/prepared_statements_associations_spec.rb +1 -1
- data/spec/extensions/query_literals_spec.rb +1 -1
- data/spec/extensions/rcte_tree_spec.rb +2 -2
- data/spec/extensions/schema_spec.rb +2 -2
- data/spec/extensions/serialization_modification_detection_spec.rb +8 -0
- data/spec/extensions/serialization_spec.rb +15 -1
- data/spec/extensions/sharding_spec.rb +1 -1
- data/spec/extensions/single_table_inheritance_spec.rb +1 -1
- data/spec/extensions/static_cache_spec.rb +59 -9
- data/spec/extensions/tactical_eager_loading_spec.rb +19 -4
- data/spec/extensions/update_primary_key_spec.rb +17 -1
- data/spec/extensions/validation_class_methods_spec.rb +25 -0
- data/spec/extensions/validation_helpers_spec.rb +59 -3
- data/spec/integration/associations_test.rb +5 -5
- data/spec/integration/eager_loader_test.rb +32 -63
- data/spec/integration/model_test.rb +2 -2
- data/spec/integration/plugin_test.rb +88 -56
- data/spec/integration/prepared_statement_test.rb +1 -1
- data/spec/integration/schema_test.rb +1 -1
- data/spec/integration/timezone_test.rb +0 -1
- data/spec/integration/transaction_test.rb +0 -1
- data/spec/model/association_reflection_spec.rb +1 -1
- data/spec/model/associations_spec.rb +106 -84
- data/spec/model/base_spec.rb +4 -4
- data/spec/model/eager_loading_spec.rb +8 -8
- data/spec/model/model_spec.rb +27 -9
- data/spec/model/plugins_spec.rb +71 -0
- data/spec/model/record_spec.rb +99 -13
- metadata +12 -2
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 145949fd1421f21d5a6f4c980bb3c68ad9d79cbc
+  data.tar.gz: 26ae6dfbcb667675caa5b2de7d89a63472fc1c62
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 05bf4a0d1b5ec9b9870a7544e56971ddfbba7c244f909a8fb1acdd035463ba4b05a587f81a492c8f38adb53bc2c8e6b9d5faac7194de563fa29e0ecf2482ea93
+  data.tar.gz: 456c3859c61bba059d5ff24dcb0dc3ea79984c919ec1f0e38ef0f7f63a2b94e86d2fd6ab15ed0e7283860509ba081beb2a0f5322bf650f18277ff67b9d720c7a
data/CHANGELOG
CHANGED

@@ -1,3 +1,99 @@
+=== 3.47.0 (2013-05-01)
+
+* Don't fail for missing conversion proc in pg_typecast_on_load plugin (jeremyevans)
+
+* Rename PGRangeOp #starts_before and #ends_after to #ends_before and #starts_after (soupmatt) (#655)
+
+* Add Database#supports_schema_parsing? for checking for schema parsing support (jeremyevans)
+
+* Handle hstore[] types on PostgreSQL if using pg_array and pg_hstore extensions (jeremyevans)
+
+* Don't reset conversion procs when loading pg_* extensions (jeremyevans)
+
+* Handle domain types when parsing the schema on PostgreSQL (jeremyevans)
+
+* Handle domain types in composite types in the pg_row extension (jeremyevans)
+
+* Add Database.extension, for loading an extension into all future databases (jeremyevans)
+
+* Support a :search_path Database option for setting PostgreSQL search_path (jeremyevans)
+
+* Support a :convert_infinite_timestamps Database option in the postgres adapter (jeremyevans)
+
+* Support a :use_iso_date_format Database option in the postgres adapter, for per-Database specific behavior (jeremyevans)
+
+* Add Model.default_set_fields_options, for having a model-wide default setting (jeremyevans)
+
+* Make Model.map, .to_hash, and .to_hash_groups work without a query when using the static_cache plugin (jeremyevans)
+
+* Support :hash_dup and Proc Model inherited instance variable types (jeremyevans)
+
+* Handle aliased tables in the pg_row plugin (jeremyevans)
+
+* Add input_transformer plugin, for automatically transform input to model column setters (jeremyevans)
+
+* Add auto_validations plugin, for automatically adding not null, type, and unique validations (jeremyevans)
+
+* Add validates_not_null to validation_helpers (jeremyevans)
+
+* Add :setter, :adder, :remover, and :clearer association options for overriding the default modification behavior (jeremyevans)
+
+* Add Database#register_array_type to the pg_array extension, for registering database-specific array types (jeremyevans)
+
+* Speed up fetching model instances when using update_primary_key plugin (jeremyevans)
+
+* In the update_primary_key plugin, if the primary key column changes, clear related associations (jeremyevans)
+
+* Add :allow_missing_migration_files option to migrators, for not raising if migration files are missing (bporterfield) (#652)
+
+* Fix race condition related to prepared_sql for newly prepared statements (jeremyevans) (#651)
+
+* Support :keep_reference=>false Database option for not adding reference to Sequel::DATABASES (jeremyevans)
+
+* Make Postgres::HStoreOp#- explicitly cast a string argument to text, to avoid PostgreSQL assuming it is an hstore (jeremyevans)
+
+* Add validates_schema_types validation for validating column values are instances of an appropriate class (jeremyevans)
+
+* Allow validates_type validation to accept an array of allowable classes (jeremyevans)
+
+* Add Database#schema_type_class for getting the ruby class or classes related to the type symbol (jeremyevans)
+
+* Add error_splitter plugin, for splitting multi-column errors into separate errors per column (jeremyevans)
+
+* Skip validates_unique validation if underlying columns are not valid (jeremyevans)
+
+* Allow Model#modified! to take an optional column argument and mark that column as being modified (jeremyevans)
+
+* Allow Model#modified? to take an optional column argument and check if that column has been modified (jeremyevans)
+
+* Make Model.count not issue a database query if using the static_cache plugin (jeremyevans)
+
+* Handle more corner cases in the many_to_one_pk_lookup plugin (jeremyevans)
+
+* Handle database connection during initialization in jdbc adapter (jeremyevans) (#646)
+
+* Add Database.after_initialize, which takes a block and calls the block with each newly created Database instance (ged) (#641)
+
+* Add a guide detailing PostgreSQL-specific support (jeremyevans)
+
+* Make model plugins deal with frozen instances (jeremyevans)
+
+* Allow freezing of model instances for models without primary keys (jeremyevans)
+
+* Reflect constraint_validations extension :allow_nil=>true setting in the database constraints (jeremyevans)
+
+* Add Plugins.after_set_dataset for easily running code after set_dataset (jeremyevans)
+
+* Add Plugins.inherited_instance_variables for easily setting class instance variables when subclassing (jeremyevans)
+
+* Add Plugins.def_dataset_methods for easily defining class methods that call dataset methods (jeremyevans)
+
+* Make lazy_attributes plugin no longer depend on identity_map plugin (jeremyevans)
+
+* Make Dataset#get with an array of values handle case where no row is returned (jeremyevans)
+
+* Make caching plugin handle memcached API for deletes if ignore_exceptions option is used (rintaun) (#639)
+
 === 3.46.0 (2013-04-02)
 
 * Add Dataset#cross_apply and Dataset#outer_apply on Microsoft SQL Server (jeremyevans)
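The Database.after_initialize entry above describes a hook that receives each newly created Database instance. As an illustrative pure-Ruby sketch only (class and internals here are stand-ins, not Sequel's actual implementation), the pattern can look like this:

```ruby
# Illustrative sketch of a class-level after_initialize hook: registered
# blocks are called with every instance created after registration.
class ExampleDatabase
  @hooks = []

  class << self
    # Register a block to be called with each future instance.
    def after_initialize(&block)
      @hooks << block
    end

    # Build the instance normally, then run all registered hooks on it.
    def new(*args)
      instance = super
      @hooks.each { |hook| hook.call(instance) }
      instance
    end
  end

  attr_reader :opts

  def initialize(opts = {})
    @opts = opts
  end
end

seen = []
ExampleDatabase.after_initialize { |db| seen << db.opts[:name] }
ExampleDatabase.new(:name => 'primary')
ExampleDatabase.new(:name => 'replica')
seen # => ["primary", "replica"]
```

Hooks registered before an instance exists are the useful case here, which is why registration stores blocks at the class level rather than on any one instance.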
data/Rakefile
CHANGED

@@ -129,6 +129,13 @@ begin
   spec = lambda do |name, files, d|
     lib_dir = File.join(File.dirname(File.expand_path(__FILE__)), 'lib')
     ENV['RUBYLIB'] ? (ENV['RUBYLIB'] += ":#{lib_dir}") : (ENV['RUBYLIB'] = lib_dir)
+
+    desc "#{d} with -w, some warnings filtered"
+    task "#{name}_w" do
+      ENV['RUBYOPT'] ? (ENV['RUBYOPT'] += " -w") : (ENV['RUBYOPT'] = '-w')
+      sh "#{FileUtils::RUBY} -S rake #{name} 2>&1 | egrep -v \"(spec/.*: warning: (possibly )?useless use of == in void context|: warning: instance variable @.* not initialized|: warning: method redefined; discarding old|: warning: previous definition of)|rspec\""
+    end
+
     desc d
     spec_class.new(name) do |t|
       t.send spec_files_meth, files

@@ -201,7 +208,6 @@ desc "Report code statistics (KLOCs, etc) from the application"
 task :stats do
   STATS_DIRECTORIES = [%w(Code lib/), %w(Spec spec)].map{|name, dir| [ name, "./#{dir}" ] }.select { |name, dir| File.directory?(dir)}
   require "./extra/stats"
-  verbose = true
   CodeStatistics.new(*STATS_DIRECTORIES).to_s
 end
data/bin/sequel
CHANGED

@@ -17,7 +17,7 @@ load_dirs = []
 exclusive_options = []
 loggers = []
 
-opts = OptionParser.new do |opts|
+options = OptionParser.new do |opts|
   opts.banner = "Sequel: The Database Toolkit for Ruby"
   opts.define_head "Usage: sequel [options] <uri|path> [file]"
   opts.separator ""

@@ -107,6 +107,7 @@ opts = OptionParser.new do |opts|
     exit
   end
 end
+opts = options
 opts.parse!
 
 db = ARGV.shift

@@ -146,18 +147,19 @@
   exit
 end
 if dump_migration
-
+  DB.extension :schema_dumper
   puts DB.dump_schema_migration(:same_db=>dump_migration==:same_db)
   exit
 end
 if dump_schema
-
+  DB.extension :schema_caching
   DB.tables.each{|t| DB.schema(Sequel::SQL::Identifier.new(t))}
   DB.dump_schema_cache(dump_schema)
   exit
 end
 if copy_databases
-  Sequel.extension :migration
+  Sequel.extension :migration
+  DB.extension :schema_dumper
 
   db2 = ARGV.shift
   error_proc["Error: Must specify database connection string or path to yaml file as second argument for database you want to copy to"] if db2.nil? || db2.empty?
data/doc/active_record.rdoc
CHANGED

@@ -441,7 +441,7 @@ You can think of <tt>Sequel::Model::Errors</tt> as a subset of <tt>ActiveRecord:
 Unlike ActiveRecord, Sequel's behavior depends on how you configure it. In Sequel, you can set flags at the global, class, and instance level that change the behavior of Sequel. Here's a brief description of the flags:
 
 +raise_on_save_failure+ :: Whether to raise an error instead of returning nil on a failure to save/create/save_changes/etc due to a validation failure or a before_* hook returning false. By default, an error is raised, when this is set to false, nil is returned instead.
-+raise_on_typecast_failure+ :: Whether to raise an error when unable to typecast data for a column (default: true). This should be set to false if you want to use validations to display nice error messages to the user (e.g. most web applications). You can use the +
++raise_on_typecast_failure+ :: Whether to raise an error when unable to typecast data for a column (default: true). This should be set to false if you want to use validations to display nice error messages to the user (e.g. most web applications). You can use the +validates_schema_types+ validation in connection with this option to check for typecast failures.
 +require_modification+ :: Whether to raise an error if an UPDATE or DELETE query related to a model instance does not modify exactly 1 row. If set to false, Sequel will not check the number of rows modified (default: true if the database supports it).
 +strict_param_setting+ :: Whether new/set/update and their variants should raise an error if an invalid key is used. A key is invalid if no setter method exists for that key or the access to the setter method is restricted (e.g. due to it being a primary key field). If set to false, silently skip any key where the setter method doesn't exist or access to it is restricted.
 +typecast_empty_string_to_nil+ :: Whether to typecast the empty string ('') to nil for columns that are not string or blob. In most cases the empty string would be the way to specify a NULL SQL value in string form (nil.to_s == ''), and an empty string would not usually be typecast correctly for other types, so the default is true.

@@ -418,11 +418,15 @@ ActiveRecord:
 Sequel::Model:
 
   class Asset < Sequel::Model
-    many_to_one :attachable, :reciprocal=>:assets,
+    many_to_one :attachable, :reciprocal=>:assets,
+      :setter=>(proc do |attachable|
+        self[:attachable_id] = (attachable.pk if attachable)
+        self[:attachable_type] = (attachable.class.name if attachable)
+      end),
       :dataset=>(proc do
         klass = attachable_type.constantize
         klass.where(klass.primary_key=>attachable_id)
-      end),
+      end),
       :eager_loader=>(proc do |eo|
         id_map = {}
         eo[:rows].each do |asset|

@@ -438,45 +442,20 @@ Sequel::Model:
       end
     end
   end)
-
-  private
-
-  def _attachable=(attachable)
-    self[:attachable_id] = (attachable.pk if attachable)
-    self[:attachable_type] = (attachable.class.name if attachable)
-  end
 end
 
 class Post < Sequel::Model
-  one_to_many :assets, :key=>:attachable_id, :reciprocal=>:attachable, :conditions=>{:attachable_type=>'Post'}
-
-
-
-  def _add_asset(asset)
-    asset.update(:attachable_id=>pk, :attachable_type=>'Post')
-  end
-  def _remove_asset(asset)
-    asset.update(:attachable_id=>nil, :attachable_type=>nil)
-  end
-  def _remove_all_assets
-    assets_dataset.update(:attachable_id=>nil, :attachable_type=>nil)
-  end
+  one_to_many :assets, :key=>:attachable_id, :reciprocal=>:attachable, :conditions=>{:attachable_type=>'Post'},
+    :adder=>proc{|asset| asset.update(:attachable_id=>pk, :attachable_type=>'Post')},
+    :remover=>proc{|asset| asset.update(:attachable_id=>nil, :attachable_type=>nil)},
+    :clearer=>proc{assets_dataset.update(:attachable_id=>nil, :attachable_type=>nil)}
 end
 
 class Note < Sequel::Model
-  one_to_many :assets, :key=>:attachable_id, :reciprocal=>:attachable, :conditions=>{:attachable_type=>'Note'}
-
-
-
-  def _add_asset(asset)
-    asset.update(:attachable_id=>pk, :attachable_type=>'Note')
-  end
-  def _remove_asset(asset)
-    asset.update(:attachable_id=>nil, :attachable_type=>nil)
-  end
-  def _remove_all_assets
-    assets_dataset.update(:attachable_id=>nil, :attachable_type=>nil)
-  end
+  one_to_many :assets, :key=>:attachable_id, :reciprocal=>:attachable, :conditions=>{:attachable_type=>'Note'},
+    :adder=>proc{|asset| asset.update(:attachable_id=>pk, :attachable_type=>'Note')},
+    :remover=>proc{|asset| asset.update(:attachable_id=>nil, :attachable_type=>nil)},
+    :clearer=>proc{assets_dataset.update(:attachable_id=>nil, :attachable_type=>nil)}
 end
 
 @asset.attachable = @post
data/doc/association_basics.rdoc
CHANGED

@@ -747,7 +747,13 @@ method that is preceeded by an underscore that does the actual
 modification. The public method without the underscore handles caching
 and callbacks, and shouldn't be overridden by the user.
 
-=== _<i>association</i>=
+In addition to overriding the private method in your class, you can also
+use association options to change which method Sequel defines. The
+only difference between the two is that if you use an association option
+to change the method Sequel defines, you cannot call super to get the
+default behavior.
+
+=== _<i>association</i>= (:setter option)
 
 Let's say you want to set a specific field whenever associating an object
 using the association setter method. For example, let's say you have

@@ -775,7 +781,7 @@ The above example is contrived, as you would generally use a before_save model
 hook to handle such a modification. However, if you only modify the album's
 artist using the artist= method, this approach may perform better.
 
-=== \_add_<i>association</i>
+=== \_add_<i>association</i> (:adder option)
 
 Continuing with the same example, here's how would you handle the same case if
 you also wanted to handle the Artist#add_album method:

@@ -790,7 +796,7 @@ you also wanted to handle the Artist#add_album method:
   end
 end
 
-=== \_remove_<i>association</i>
+=== \_remove_<i>association</i> (:remover option)
 
 Continuing with the same example, here's how would you handle the same case if
 you also wanted to handle the Artist#remove_album method:

@@ -805,7 +811,7 @@ you also wanted to handle the Artist#remove_album method:
   end
 end
 
-=== \_remove_all_<i>association</i>
+=== \_remove_all_<i>association</i> (:clearer option)
 
 Continuing with the same example, here's how would you handle the same case if
 you also wanted to handle the Artist#remove_all_albums method:

@@ -1507,6 +1513,62 @@ Like the :left_primary_key option, but :left_primary_key references the method n
 Like the :right_primary_key option, but :right_primary_key references the column
 name, while :right_primary_key_method references the method name.
 
+=== Private Method Overriding Options
+
+These options override the private methods that Sequel defines to do
+the actual work of associating and deassociating objects.
+
+==== :setter [*_to_one associations]
+
+This overrides the default behavior when you call an association setter
+method. Let's say you want to set a specific field whenever associating an object
+using the association setter method.
+
+  class Album < Sequel::Model
+    many_to_one :artist, :setter=>(proc do |artist|
+      if artist
+        self.artist_id = artist.id
+        self.file_under = "#{artist.name}-#{name}"
+      else
+        self.artist_id = nil
+        self.file_under = name
+      end
+    end)
+  end
+
+==== :adder [*_to_many associations]
+
+Continuing with the same example, here's how would you handle the same case if
+you also wanted to handle the Artist#add_album method:
+
+  class Artist < Sequel::Model
+    one_to_many :albums, :adder=>(proc do |album|
+      album.update(:artist_id => id, :file_under=>"#{name}-#{album.name}")
+    end)
+  end
+
+==== :remover [*_to_many associations]
+
+Continuing with the same example, here's how would you handle the same case if
+you also wanted to handle the Artist#remove_album method:
+
+  class Artist < Sequel::Model
+    one_to_many :albums, :remover=>(proc do |album|
+      album.update(:artist_id => nil, :file_under=>album.name)
+    end)
+  end
+
+==== :clearer [*_to_many associations]
+
+Continuing with the same example, here's how would you handle the same case if
+you also wanted to handle the Artist#remove_all_albums method:
+
+  class Artist < Sequel::Model
+    one_to_many :albums, :clearer=>(proc do
+      albums_dataset.update(:artist_id => nil, :file_under=>:name)
+    end)
+  end
+
 === Advanced Options
 
 ==== :reciprocal
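The :setter/:adder/:remover/:clearer options documented above pass procs that run in the context of the model instance. As a hypothetical pure-Ruby sketch of that mechanism (the classes and storage here are illustrative only, not Sequel's internals), stored procs can be run with instance_exec so that `self` inside them is the instance:

```ruby
# Illustrative: procs stored at the class level, executed in instance
# context by the public add_/remove_ methods via instance_exec.
ExampleAlbum = Struct.new(:name, :artist_id)

class ExampleArtist
  ADDER   = proc { |album| albums << album; album.artist_id = id }
  REMOVER = proc { |album| albums.delete(album); album.artist_id = nil }

  attr_reader :id, :albums

  def initialize(id)
    @id = id
    @albums = []
  end

  # The public methods delegate the actual modification to the stored
  # procs; instance_exec makes `self` the artist inside the proc body.
  def add_album(album)
    instance_exec(album, &ADDER)
    album
  end

  def remove_album(album)
    instance_exec(album, &REMOVER)
    album
  end
end

artist = ExampleArtist.new(1)
album = ExampleAlbum.new('X')
artist.add_album(album)
album.artist_id # => 1
artist.remove_album(album)
artist.albums # => []
```

Because the proc replaces the private method wholesale, there is no `super` to fall back on, matching the caveat in the text above.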
data/doc/migration.rdoc
CHANGED

@@ -356,6 +356,10 @@ With the +TimestampMigrator+, you are trading reliability for convenience. That
 trade, especially if simultaneous related schema changes by separate developers are unlikely, but
 you should give it some thought before using it.
 
+== Ignoring missing migrations
+
+In some cases, you may want to allow a migration in the database that does not exist in the filesystem (for example, deploying to an older version of the code without running a down migration, when the deploy auto-migrates). If required, you can pass <tt>:allow_missing_migration_files => true</tt> as an option. This will stop errors from being raised if there are migrations in the database that do not exist in the filesystem.
+
 == Modifying existing migrations
 
 Just don't do it.
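The check behind :allow_missing_migration_files can be sketched in plain Ruby (a hedged illustration, not the migrator's actual code; the method name and error message are hypothetical): migrations recorded as applied in the database but absent from the filesystem normally raise, and the option downgrades that to a no-op.

```ruby
# Illustrative: compare applied migrations (from the DB's migration table)
# against the migration files present on disk.
def check_missing_migrations(applied_in_db, files_on_disk, opts = {})
  missing = applied_in_db - files_on_disk
  if !missing.empty? && !opts[:allow_missing_migration_files]
    raise "applied migration files not in file system: #{missing.join(', ')}"
  end
  missing
end

applied = %w[1365636000_create_artists.rb 1365637000_create_albums.rb]
on_disk = %w[1365636000_create_artists.rb]

# With the option set, the discrepancy is tolerated and simply reported:
check_missing_migrations(applied, on_disk, :allow_missing_migration_files => true)
# => ["1365637000_create_albums.rb"]
```

Without the option, the same call raises, which is the safe default when the filesystem is expected to be the source of truth.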
data/doc/opening_databases.rdoc
CHANGED

@@ -355,11 +355,17 @@ use pg are encouraged to install sequel_pg.
 The following additional options are supported:
 
 :charset :: Same as :encoding, :encoding takes precedence
+:convert_infinite_timestamps :: Whether infinite timestamps/dates should be converted on retrieval. By default, no
+                                conversion is done, so an error is raised if you attempt to retrieve an infinite
+                                timestamp/date. You can set this to :nil to convert to nil, :string to leave
+                                as a string, or :float to convert to an infinite float.
 :encoding :: Set the client_encoding to the given string
 :connect_timeout :: Set the number of seconds to wait for a connection (default 20, only respected
                     if using the pg library).
 :sslmode :: Set to 'disable', 'allow', 'prefer', 'require' to choose how to treat SSL (only
             respected if using the pg library)
+:use_iso_date_format :: This can be set to false to not force the ISO date format. Sequel forces
+                        it by default to allow for an optimization.
 
 === sqlite
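The three :convert_infinite_timestamps modes documented above (:nil, :string, :float) map PostgreSQL's 'infinity'/'-infinity' values onto Ruby values. A minimal sketch of that mapping (the helper name is hypothetical, not the adapter's internal API):

```ruby
# Illustrative conversion for retrieved timestamp/date strings; ordinary
# values pass through untouched.
def convert_infinite_timestamp(value, mode)
  return value unless %w[infinity -infinity].include?(value)
  case mode
  when :nil    then nil
  when :string then value
  when :float  then value == 'infinity' ? Float::INFINITY : -Float::INFINITY
  else raise ArgumentError, "unsupported mode: #{mode.inspect}"
  end
end

convert_infinite_timestamp('infinity', :nil)     # => nil
convert_infinite_timestamp('infinity', :string)  # => "infinity"
convert_infinite_timestamp('-infinity', :float)  # => -Float::INFINITY
convert_infinite_timestamp('2013-05-01', :float) # => "2013-05-01"
```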
data/doc/postgresql.rdoc
ADDED
|
@@ -0,0 +1,302 @@
|
|
|
1
|
+
= PostgreSQL-specific Support in Sequel

Sequel's core database and dataset functions are designed to support the features
shared by most common SQL database implementations. However, Sequel's database
adapters extend the core support to include support for database-specific features.

By far the most extensive database-specific support in Sequel is for PostgreSQL. This
support is roughly broken into the following areas:

* Database Types
* DDL Support
* DML Support
* sequel_pg

Note that while this guide is extensive, it is not exhaustive. There are additional
rarely used PostgreSQL features that Sequel supports which are not mentioned here.

== Adapter/Driver Specific Support

Some of this support depends on the specific adapter or underlying driver in use.
<tt>postgres only</tt> will denote support specific to the postgres adapter (i.e.
not available when connecting to PostgreSQL via the jdbc, do, or swift adapters).
<tt>postgres/pg only</tt> will denote support specific to the postgres adapter when
pg is used as the underlying driver (i.e. not available when using the postgres-pr or
postgres drivers).

== PostgreSQL-specific Database Type Support

Sequel's default support on PostgreSQL only includes common database types. However,
Sequel ships with support for many PostgreSQL-specific types via extensions. In general,
you load these extensions via <tt>Database#extension</tt>. For example, to load support
for arrays, you would do:

  DB.extension :pg_array

The following PostgreSQL-specific type extensions are available:

pg_array :: arrays (single and multidimensional, for any scalar type), as a ruby Array-like object
pg_hstore :: hstore, as a ruby Hash-like object
pg_inet :: inet/cidr, as ruby IPAddr objects
pg_interval :: interval, as ActiveSupport::Duration objects
pg_json :: json, as either ruby Array-like or Hash-like objects
pg_range :: ranges (for any scalar type), as a ruby Range-like object
pg_row :: row-valued/composite types, as a ruby Hash-like or Sequel::Model object

In general, these extensions just add support for Database objects to return retrieved
column values as the appropriate type (<tt>postgres only</tt>), and support for literalizing
the objects correctly for use in an SQL string, or using them as bound variable values (<tt>postgres/pg only</tt>).

There are also type-specific extensions that make it easy to use database functions
and operators related to the type. These extensions are:

pg_array_ops :: array-related functions and operators
pg_hstore_ops :: hstore-related functions and operators
pg_range_ops :: range-related functions and operators
pg_row_ops :: row-valued/composite type syntax support

== PostgreSQL-specific DDL Support

=== Exclusion Constraints

In +create_table+ blocks, you can use the +exclude+ method to set up exclusion constraints:

  DB.create_table(:table) do
    daterange :during
    exclude([[:during, '&&']], :name=>:table_during_excl)
  end
  # CREATE TABLE "table" ("during" daterange,
  #  CONSTRAINT "table_during_excl" EXCLUDE USING gist ("during" WITH &&))

You can also add exclusion constraints in +alter_table+ blocks using +add_exclusion_constraint+:

  DB.alter_table(:table) do
    add_exclusion_constraint([[:during, '&&']], :name=>:table_during_excl)
  end
  # ALTER TABLE "table" ADD CONSTRAINT "table_during_excl" EXCLUDE USING gist ("during" WITH &&)

=== Adding Foreign Key Constraints Without Initial Validation

You can add a <tt>:not_valid=>true</tt> option when adding constraints to existing tables so
that it doesn't check if all current rows are valid:

  DB.alter_table(:table) do
    # Assumes t_id column already exists
    add_foreign_key([:t_id], :table, :not_valid=>true, :name=>:table_fk)
  end
  # ALTER TABLE "table" ADD CONSTRAINT "table_fk" FOREIGN KEY ("t_id") REFERENCES "table" NOT VALID

Such constraints will be enforced for newly inserted and updated rows, but not for existing rows. After
all existing rows have been fixed, you can validate the constraint:

  DB.alter_table(:table) do
    validate_constraint(:table_fk)
  end
  # ALTER TABLE "table" VALIDATE CONSTRAINT "table_fk"

=== Creating Indexes Concurrently

You can create indexes concurrently using the <tt>:concurrently=>true</tt> option:

  DB.add_index(:table, :t_id, :concurrently=>true)
  # CREATE INDEX CONCURRENTLY "table_t_id_index" ON "table" ("t_id")

Similarly, you can drop indexes concurrently as well:

  DB.drop_index(:table, :t_id, :concurrently=>true)
  # DROP INDEX CONCURRENTLY "table_t_id_index"

=== Specific Conversions When Altering Column Types

When altering a column type, PostgreSQL allows the user to specify how to do the
conversion via a USING clause, and Sequel supports this using the <tt>:using</tt> option:

  DB.alter_table(:table) do
    # Assume unix_time column is stored as an integer, and you want to change it to timestamp
    set_column_type :unix_time, Time, :using=>(Sequel.cast('epoch', Time) + Sequel.cast('1 second', :interval) * :unix_time)
  end
  # ALTER TABLE "table" ALTER COLUMN "unix_time" TYPE timestamp
  # USING (CAST('epoch' AS timestamp) + (CAST('1 second' AS interval) * "unix_time"))

=== Creating Unlogged Tables

PostgreSQL allows users to create unlogged tables, which are faster but not crash safe. Sequel
allows you to create an unlogged table by specifying the <tt>:unlogged=>true</tt> option to +create_table+:

  DB.create_table(:table, :unlogged=>true){Integer :i}
  # CREATE UNLOGGED TABLE "table" ("i" integer)

=== Creating/Dropping Schemas, Languages, Functions, and Triggers

Sequel has built-in support for creating and dropping PostgreSQL schemas, procedural languages, functions, and triggers:

  DB.create_schema(:s)
  # CREATE SCHEMA "s"
  DB.drop_schema(:s)
  # DROP SCHEMA "s"

  DB.create_language(:plperl)
  # CREATE LANGUAGE plperl
  DB.drop_language(:plperl)
  # DROP LANGUAGE plperl

  DB.create_function(:set_updated_at, <<-SQL, :language=>:plpgsql, :returns=>:trigger)
    BEGIN
      NEW.updated_at := CURRENT_TIMESTAMP;
      RETURN NEW;
    END;
  SQL
  # CREATE FUNCTION set_updated_at() RETURNS trigger LANGUAGE plpgsql AS '
  #   BEGIN
  #     NEW.updated_at := CURRENT_TIMESTAMP;
  #     RETURN NEW;
  #   END;'
  DB.drop_function(:set_updated_at)
  # DROP FUNCTION set_updated_at()

  DB.create_trigger(:table, :trg_updated_at, :set_updated_at, :events=>[:insert, :update], :each_row=>true)
  # CREATE TRIGGER trg_updated_at BEFORE INSERT OR UPDATE ON "table" FOR EACH ROW EXECUTE PROCEDURE set_updated_at()
  DB.drop_trigger(:table, :trg_updated_at)
  # DROP TRIGGER trg_updated_at ON "table"

== PostgreSQL-specific DML Support

=== Returning Rows From Insert, Update, and Delete Statements

Sequel supports the ability to return rows from insert, update, and delete statements, via
<tt>Dataset#returning</tt>:

  DB[:table].returning.insert
  # INSERT INTO "table" DEFAULT VALUES RETURNING *

  DB[:table].returning(:id).delete
  # DELETE FROM "table" RETURNING "id"

  DB[:table].returning(:id, Sequel.*(:id, :id).as(:idsq)).update(:id=>2)
  # UPDATE "table" SET "id" = 2 RETURNING "id", ("id" * "id") AS "idsq"

When returning is used, instead of returning the number of rows affected (for update/delete)
or the serial primary key value (for insert), it will return an array of hashes with the
returned results.

=== Distinct On Specific Columns

Sequel allows passing columns to <tt>Dataset#distinct</tt>, which will make the dataset return
rows that are distinct on just those columns:

  DB[:table].distinct(:id).all
  # SELECT DISTINCT ON ("id") * FROM "table"

=== Using a Cursor to Process Large Datasets <tt>postgres only</tt>

The postgres adapter offers a <tt>Dataset#use_cursor</tt> method to process large result sets
without keeping all rows in memory:

  DB[:table].use_cursor.each{|row| }
  # BEGIN;
  # DECLARE sequel_cursor NO SCROLL CURSOR WITHOUT HOLD FOR SELECT * FROM "table";
  # FETCH FORWARD 1000 FROM sequel_cursor
  # FETCH FORWARD 1000 FROM sequel_cursor
  # ...
  # FETCH FORWARD 1000 FROM sequel_cursor
  # CLOSE sequel_cursor
  # COMMIT

=== Truncate Modifiers

Sequel supports PostgreSQL-specific truncate options:

  DB[:table].truncate(:cascade=>true, :only=>true, :restart=>true)
  # TRUNCATE TABLE ONLY "table" RESTART IDENTITY CASCADE

=== COPY Support <tt>postgres/pg and jdbc/postgres only</tt>

PostgreSQL's COPY feature is pretty much the fastest way to get data in or out of the database.
Sequel supports getting data out of the database via <tt>Database#copy_table</tt>, either for
a specific table or for an arbitrary dataset:

  DB.copy_table(:table, :format=>:csv)
  # COPY "table" TO STDOUT (FORMAT csv)
  DB.copy_table(DB[:table], :format=>:csv)
  # COPY (SELECT * FROM "table") TO STDOUT (FORMAT csv)

It supports putting data into the database via <tt>Database#copy_into</tt>:

  DB.copy_into(:table, :format=>:csv, :columns=>[:column1, :column2], :data=>"1,2\n2,3\n")
  # COPY "table"("column1", "column2") FROM STDIN (FORMAT csv)

=== Anonymous Function Execution

You can execute anonymous functions using a database procedural language via <tt>Database#do</tt> (the
plpgsql language is the default):

  DB.do <<-SQL
    DECLARE r record;
    BEGIN
      FOR r IN SELECT table_schema, table_name FROM information_schema.tables
          WHERE table_type = 'VIEW' AND table_schema = 'public'
      LOOP
        EXECUTE 'GRANT ALL ON ' || quote_ident(r.table_schema) || '.' || quote_ident(r.table_name) || ' TO webuser';
      END LOOP;
    END;
  SQL

=== Listening On and Notifying Channels

You can use <tt>Database#notify</tt> to send notifications to channels:

  DB.notify(:channel)
  # NOTIFY "channel"

<tt>postgres/pg only</tt> You can listen on channels via <tt>Database#listen</tt>. Note that
this blocks until the listening thread is notified:

  DB.listen(:channel)
  # LISTEN "channel"
  # after notification received:
  # UNLISTEN *

Note that +listen+ by default only listens for a single notification. If you want to loop and process
notifications:

  DB.listen(:channel, :loop=>true){|channel| p channel}

=== Locking Tables

Sequel makes it easy to lock tables, though it is generally better to let the database
handle locking:

  DB[:table].lock('EXCLUSIVE') do
    DB[:table].insert(:id=>DB[:table].max(:id)+1)
  end
  # BEGIN;
  # LOCK TABLE "table" IN EXCLUSIVE MODE;
  # SELECT max("id") FROM "table" LIMIT 1;
  # INSERT INTO "table" ("id") VALUES (2) RETURNING NULL;
  # COMMIT;

== sequel_pg (<tt>postgres/pg only</tt>)

When the postgres adapter is used with the pg driver, Sequel automatically checks for sequel_pg, and
loads it if it is available. sequel_pg is a C extension that optimizes the fetching of rows, generally
resulting in a 2-6x speedup. It is highly recommended to install sequel_pg if you are using the
postgres adapter with pg.

sequel_pg has additional optimizations when using the Dataset +map+, +to_hash+,
+to_hash_groups+, +select_hash+, +select_hash_groups+, +select_map+, and +select_order_map+ methods,
which avoid creating intermediate hashes and can add further speedups.

In addition to optimization, sequel_pg also adds streaming support if used on PostgreSQL 9.2. Streaming
support is similar to using a cursor, but it is faster and more transparent.

You can enable the streaming support:

  DB.extension(:pg_streaming)

Then you can stream individual datasets:

  DB[:table].stream.each{|row| }

Or stream all datasets by default:

  DB.stream_all_queries = true