sequel 5.65.0 → 5.67.0

This diff shows the changes between publicly released versions of the package, as they appear in the public registry.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: 24bfe7c123337539140b0385dd9a828fe0fec4d6ad4e39e5fd8f150258ae7a03
4
- data.tar.gz: d820be46e60ba7b08b1129f0a6b67a24a550a27d5b682a84730172bc256f4118
3
+ metadata.gz: ff5af191af88ca381359503cfd9601ec8f18a9d782b68a3de4a8c9dac080eddc
4
+ data.tar.gz: 786d20f7eac9325748c259130951643856e34051e64b02aa17493f6d0283eb52
5
5
  SHA512:
6
- metadata.gz: bd5c7dacdf48f8ada7c34c88210ac804faa1b47741f946ca16cd6f645c1f86055eb85ac6b601e9cef2ab4aa8a5cba3dc444e3005d6ed61d324c1158f2e03a70e
7
- data.tar.gz: d17eb580a3f50f4b0c33561aed320c2ea3089044b905f4dc845ea8682afe1cdbf2798c6cd202f1233b1d168d06c9686a94728d41e0f47264245a33da5a99b251
6
+ metadata.gz: db251488ea0739af5ecf6db332a1cff729cffbf31dfabfc7235f3c27f807366bc9feddf147797e4dd5980348fda6a640cc5ceef7e6255cddcd317acb7172be90
7
+ data.tar.gz: 45b0e24847ccb365f1d7d06d35c85f05718802375877ab3d0315deb952f72171e26b5676861c25cf1e840f831d216f99eaddcbb717fc752d75077f19ae13980c
data/CHANGELOG CHANGED
@@ -1,3 +1,27 @@
1
+ === 5.67.0 (2023-04-01)
2
+
3
+ * Fix dumping of string column sizes in the schema dumper on MSSQL (jeremyevans) (#2013)
4
+
5
+ * Improve dumping of tables in non-default schemas in the schema_dumper extension (jeremyevans) (#2006)
6
+
7
+ * Make Database#{tables,views} support :qualify option on Microsoft SQL Server (jeremyevans)
8
+
9
+ * Avoid use of singleton classes for dataset instances on Ruby 2.4+ (jeremyevans) (#2007)
10
+
11
+ * Deprecate registering dataset extensions using an object other than a module (jeremyevans)
12
+
13
+ * Add set_literalizer extension, for treating set usage in datasets similar to array usage (jeremyevans) (#1997)
14
+
15
+ === 5.66.0 (2023-03-01)
16
+
17
+ * Recognize SQLite error related to strict tables as a constraint violation when using the amalgalite adapter (jeremyevans)
18
+
19
+ * Make Dataset#count work correctly for datasets using Dataset#values (jeremyevans) (#1992)
20
+
21
+ * Make Dataset#count with no argument/block handle dataset with custom SQL using ORDER BY on MSSQL (jeremyevans)
22
+
23
+ * Make Dataset#empty? correctly handle datasets with custom SQL or using Dataset#values where the first value is NULL (andy-k, jeremyevans) (#1990)
24
+
1
25
  === 5.65.0 (2023-02-01)
2
26
 
3
27
  * Allow pg_auto_parameterize extension to use placeholder loaders (jeremyevans)
@@ -0,0 +1,24 @@
1
+ = Improvements
2
+
3
+ * Dataset#empty? now correctly handles datasets using custom SQL or
4
+ Dataset#values where the first value in the first row is NULL.
5
+
6
+ * Dataset#count without an argument or block now works correctly on
7
+ Microsoft SQL Server when using custom SQL that uses ORDER BY.
8
+
9
+ * Dataset#count now works correctly for datasets using Dataset#values.
10
+
11
+ * Sequel now recognizes an additional SQLite constraint violation
12
+ error that occurs with recent versions of amalgalite.
13
+
14
+ * Dataset#values will now raise an exception when called with an empty
15
+ array. Previously, an exception would not be raised until the query
16
+ was sent to the database.
17
+
18
+ = Backwards Compatibility
19
+
20
+ * The changes to make Dataset#empty? and #count work with custom SQL
21
+ on Microsoft SQL Server now result in running the custom SQL, which
22
+ could result in worse performance than in previous versions. You can
23
+ wrap such datasets with Dataset#from_self manually to restore the
24
+ previous behavior.
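The early-return iteration behind the new Dataset#empty? handling can be sketched in plain Ruby, with no Sequel dependency; `empty_result?` is a hypothetical name standing in for the adapter logic:

```ruby
# Minimal sketch of the empty? strategy for custom-SQL datasets:
# iterate the result set and bail out on the first row, instead of
# wrapping the SQL in a subquery (which would break with ORDER BY).
def empty_result?(rows)
  rows.each { |_row| return false }
  true
end

# A row whose first value is NULL is still a row, matching the #1990 fix.
empty_result?([])        # no rows at all
empty_result?([[nil]])   # one row containing NULL
```

Checking row presence rather than the first value is exactly why a NULL in the first row no longer makes the dataset look empty.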
@@ -0,0 +1,32 @@
1
+ = New Features
2
+
3
+ * A set_literalizer extension has been added, for treating Set
4
+ instances in datasets similar to Array instances:
5
+
6
+ DB.extension :set_literalizer
7
+ DB[:table].where(column: Set.new([1, 2, 3]))
8
+ # SELECT FROM table WHERE (column IN (1, 2, 3))
9
+
10
+ = Improvements
11
+
12
+ * Sequel now avoids the use of singleton classes for datasets on Ruby
13
+ 2.4+, instead creating a regular subclass whenever a dataset would
14
+ be extended via #extension or #with_extend. This significantly
15
+ improves performance, up to 20-40% for common dataset usage,
16
+ because it avoids creating new singleton classes for every dataset
17
+ clone, and it allows for cached method lookup.
18
+
19
+ * Database#tables and #views now support a :qualify option on Microsoft
20
+ SQL Server to return qualified identifiers.
21
+
22
+ * The schema_dumper extension can now dump tables in non-default schemas
23
+ when using Microsoft SQL Server.
24
+
25
+ * The schema_dumper extension now correctly dumps string column sizes
26
+ when using Microsoft SQL Server.
27
+
28
+ = Backwards Compatibility
29
+
30
+ * Calling Sequel::Dataset.register_extension where the second argument
31
+ is not a module now issues a deprecation warning. Support for this
32
+ will be removed in Sequel 6.
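The singleton-class avoidance described above can be illustrated with plain Ruby objects; `Dataset` and `Shouty` here are toy stand-ins, not Sequel classes:

```ruby
module Shouty
  def shout
    "HI"
  end
end

class Dataset; end

# Old approach: extend each instance, which creates a fresh singleton
# class per object and defeats method-lookup caches.
a = Dataset.new
a.extend(Shouty)

# New approach: build one subclass including the module, then create
# instances from it, so all such datasets share a single class.
sub = Class.new(Dataset) { include Shouty }
b = sub.new
```

Both objects respond to `shout`, but only the first pays the singleton-class cost on every clone.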
@@ -404,10 +404,15 @@ module Sequel
404
404
  # Backbone of the tables and views support.
405
405
  def information_schema_tables(type, opts)
406
406
  m = output_identifier_meth
407
- metadata_dataset.from(Sequel[:information_schema][:tables].as(:t)).
407
+ schema = opts[:schema]||'dbo'
408
+ tables = metadata_dataset.from(Sequel[:information_schema][:tables].as(:t)).
408
409
  select(:table_name).
409
- where(:table_type=>type, :table_schema=>(opts[:schema]||'dbo').to_s).
410
+ where(:table_type=>type, :table_schema=>schema.to_s).
410
411
  map{|x| m.call(x[:table_name])}
412
+
413
+ tables.map!{|t| Sequel.qualify(m.call(schema).to_s, m.call(t).to_s)} if opts[:qualify]
414
+
415
+ tables
411
416
  end
412
417
 
413
418
  # Always quote identifiers in the metadata_dataset, so schema parsing works.
@@ -591,6 +596,18 @@ module Sequel
591
596
  end
592
597
  end
593
598
 
599
+ # For a dataset with custom SQL, since it may include ORDER BY, you
600
+ # cannot wrap it in a subquery. Load entire query in this case to get
601
+ # the number of rows. In general, you should avoid calling this method
602
+ # on datasets with custom SQL.
603
+ def count(*a, &block)
604
+ if (@opts[:sql] && a.empty? && !block)
605
+ naked.to_a.length
606
+ else
607
+ super
608
+ end
609
+ end
610
+
594
611
  # Uses CROSS APPLY to join the given table into the current dataset.
595
612
  def cross_apply(table)
596
613
  join_table(:cross_apply, table)
@@ -601,6 +618,19 @@ module Sequel
601
618
  clone(:disable_insert_output=>true)
602
619
  end
603
620
 
621
+ # For a dataset with custom SQL, since it may include ORDER BY, you
622
+ # cannot wrap it in a subquery. Run query, and if it returns any
623
+ # records, return true. In general, you should avoid calling this method
624
+ # on datasets with custom SQL.
625
+ def empty?
626
+ if @opts[:sql]
627
+ naked.each{return false}
628
+ true
629
+ else
630
+ super
631
+ end
632
+ end
633
+
604
634
  # MSSQL treats [] as a metacharacter in LIKE expressions.
605
635
  def escape_like(string)
606
636
  string.gsub(/[\\%_\[\]]/){|m| "\\#{m}"}
@@ -385,7 +385,12 @@ module Sequel
385
385
  # Use a custom expression with EXISTS to determine whether a dataset
386
386
  # is empty.
387
387
  def empty?
388
- db[:dual].where(@opts[:offset] ? exists : unordered.exists).get(1) == nil
388
+ if @opts[:sql]
389
+ naked.each{return false}
390
+ true
391
+ else
392
+ db[:dual].where(@opts[:offset] ? exists : unordered.exists).get(1) == nil
393
+ end
389
394
  end
390
395
 
391
396
  # Oracle requires SQL standard datetimes
@@ -805,6 +805,7 @@ module Sequel
805
805
  # DB.values([[1, 2], [3, 4]]).order(:column2).limit(1, 1)
806
806
  # # VALUES ((1, 2), (3, 4)) ORDER BY column2 LIMIT 1 OFFSET 1
807
807
  def values(v)
808
+ raise Error, "Cannot provide an empty array for values" if v.empty?
808
809
  @default_dataset.clone(:values=>v)
809
810
  end
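The new guard can be exercised on its own; this standalone sketch mirrors the added check (Sequel's real `values` returns a dataset clone, so the return value here is purely illustrative):

```ruby
# Raise eagerly on an empty VALUES list instead of letting the
# database reject the generated SQL at query time.
def values(v)
  raise ArgumentError, "Cannot provide an empty array for values" if v.empty?
  v
end
```

Failing fast at call time makes the error surface next to the offending code rather than at the point the query is run.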
810
811
 
@@ -1706,6 +1707,12 @@ module Sequel
1706
1707
  clone(:disable_insert_returning=>true)
1707
1708
  end
1708
1709
 
1710
+ # Always return false when using VALUES
1711
+ def empty?
1712
+ return false if @opts[:values]
1713
+ super
1714
+ end
1715
+
1709
1716
  # Return the results of an EXPLAIN query as a string
1710
1717
  def explain(opts=OPTS)
1711
1718
  with_sql((opts[:analyze] ? 'EXPLAIN ANALYZE ' : 'EXPLAIN ') + select_sql).map(:'QUERY PLAN').join("\r\n")
@@ -2125,6 +2132,11 @@ module Sequel
2125
2132
  "TRUNCATE TABLE#{' ONLY' if to[:only]} #{table}#{' RESTART IDENTITY' if to[:restart]}#{' CASCADE' if to[:cascade]}"
2126
2133
  end
2127
2134
 
2135
+ # Use from_self for aggregate dataset using VALUES.
2136
+ def aggreate_dataset_use_from_self?
2137
+ super || @opts[:values]
2138
+ end
2139
+
2128
2140
  # Allow truncation of multiple source tables.
2129
2141
  def check_truncation_allowed!
2130
2142
  raise(InvalidOperation, "Grouped datasets cannot be truncated") if opts[:group]
@@ -169,6 +169,7 @@ module Sequel
169
169
  # DB.values([[1, 2], [3, 4]])
170
170
  # # VALUES ((1, 2), (3, 4))
171
171
  def values(v)
172
+ raise Error, "Cannot provide an empty array for values" if v.empty?
172
173
  @default_dataset.clone(:values=>v)
173
174
  end
174
175
 
@@ -356,6 +357,7 @@ module Sequel
356
357
  DATABASE_ERROR_REGEXPS = {
357
358
  /(is|are) not unique\z|PRIMARY KEY must be unique\z|UNIQUE constraint failed: .+\z/ => UniqueConstraintViolation,
358
359
  /foreign key constraint failed\z/i => ForeignKeyConstraintViolation,
360
+ /\ASQLITE ERROR 3091/ => CheckConstraintViolation,
359
361
  /\A(SQLITE ERROR 275 \(CONSTRAINT_CHECK\) : )?CHECK constraint failed/ => CheckConstraintViolation,
360
362
  /\A(SQLITE ERROR 19 \(CONSTRAINT\) : )?constraint failed\z/ => ConstraintViolation,
361
363
  /\Acannot store [A-Z]+ value in [A-Z]+ column / => ConstraintViolation,
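The added pattern can be checked directly; the sample message below is an assumed amalgalite-style error string, not taken from the adapter:

```ruby
# The new pattern maps SQLite extended error 3091 (raised for strict
# tables) to a constraint violation class. It is anchored, so the code
# must appear at the start of the message.
pattern = /\ASQLITE ERROR 3091/

sample = "SQLITE ERROR 3091 (CONSTRAINT_DATATYPE) : cannot store TEXT value in INTEGER column"
pattern.match?(sample)
```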
@@ -655,6 +657,12 @@ module Sequel
655
657
  @opts[:where] ? super : where(1=>1).delete(&block)
656
658
  end
657
659
 
660
+ # Always return false when using VALUES
661
+ def empty?
662
+ return false if @opts[:values]
663
+ super
664
+ end
665
+
658
666
  # Return an array of strings specifying a query explanation for a SELECT of the
659
667
  # current dataset. Currently, the options are ignored, but it accepts options
660
668
  # to be compatible with other adapters.
@@ -868,6 +876,11 @@ module Sequel
868
876
  end
869
877
  end
870
878
 
879
+ # Use from_self for aggregate dataset using VALUES.
880
+ def aggreate_dataset_use_from_self?
881
+ super || @opts[:values]
882
+ end
883
+
871
884
  # SQLite uses string literals instead of identifiers in AS clauses.
872
885
  def as_sql_append(sql, aliaz, column_aliases=nil)
873
886
  raise Error, "sqlite does not support derived column lists" if column_aliases
@@ -30,19 +30,29 @@ module Sequel
30
30
  @dataset_class.new(self)
31
31
  end
32
32
 
33
- # Fetches records for an arbitrary SQL statement. If a block is given,
34
- # it is used to iterate over the records:
33
+ # Returns a dataset instance for the given SQL string:
35
34
  #
36
- # DB.fetch('SELECT * FROM items'){|r| p r}
35
+ # ds = DB.fetch('SELECT * FROM items')
36
+ #
37
+ # You can then call methods on the dataset to retrieve results:
37
38
  #
38
- # The +fetch+ method returns a dataset instance:
39
+ # ds.all
40
+ # # SELECT * FROM items
41
+ # # => [{:column=>value, ...}, ...]
39
42
  #
40
- # DB.fetch('SELECT * FROM items').all
43
+ # If a block is given, it is passed to #each on the resulting dataset to
44
+ # iterate over the records returned by the query:
45
+ #
46
+ # DB.fetch('SELECT * FROM items'){|r| p r}
47
+ # # {:column=>value, ...}
48
+ # # ...
41
49
  #
42
50
  # +fetch+ can also perform parameterized queries for protection against SQL
43
51
  # injection:
44
52
  #
45
- # DB.fetch('SELECT * FROM items WHERE name = ?', my_name).all
53
+ # ds = DB.fetch('SELECT * FROM items WHERE name = ?', "my name")
54
+ # ds.all
55
+ # # SELECT * FROM items WHERE name = 'my name'
46
56
  #
47
57
  # See caveats listed in Dataset#with_sql regarding datasets using custom
48
58
  # SQL and the methods that can be called on them.
@@ -307,7 +307,7 @@ module Sequel
307
307
  # Examples:
308
308
  # primary_key(:id)
309
309
  # primary_key(:id, type: :Bignum, keep_order: true)
310
- # primary_key([:street_number, :house_number], name: :some constraint_name)
310
+ # primary_key([:street_number, :house_number], name: :some_constraint_name)
311
311
  def primary_key(name, *args)
312
312
  return composite_primary_key(name, *args) if name.is_a?(Array)
313
313
  column = @db.serial_primary_key_options.merge({:name => name})
@@ -174,7 +174,7 @@ module Sequel
174
174
  # # => false
175
175
  def empty?
176
176
  cached_dataset(:_empty_ds) do
177
- single_value_ds.unordered.select(EMPTY_SELECT)
177
+ (@opts[:sql] ? from_self : self).single_value_ds.unordered.select(EMPTY_SELECT)
178
178
  end.single_value!.nil?
179
179
  end
180
180
 
@@ -0,0 +1,42 @@
1
+ # frozen-string-literal: true
2
+
3
+ module Sequel
4
+ class Dataset
5
+ # This module implements methods to support deprecated use of extensions registered
6
+ # not using a module. In such cases, for backwards compatibility, Sequel has to use
7
+ # a singleton class for the dataset.
8
+ module DeprecatedSingletonClassMethods
9
+ # Load the extension into a clone of the receiver.
10
+ def extension(*a)
11
+ c = _clone(:freeze=>false)
12
+ c.send(:_extension!, a)
13
+ c.freeze
14
+ end
15
+
16
+ # Extend the clone of the receiver with the given modules, instead of the default
17
+ # approach of creating a subclass of the receiver's class and including the modules
18
+ # into that.
19
+ def with_extend(*mods, &block)
20
+ c = _clone(:freeze=>false)
21
+ c.extend(*mods) unless mods.empty?
22
+ c.extend(DatasetModule.new(&block)) if block
23
+ c.freeze
24
+ end
25
+
26
+ private
27
+
28
+ # Load the extensions into the receiver.
29
+ def _extension!(exts)
30
+ Sequel.extension(*exts)
31
+ exts.each do |ext|
32
+ if pr = Sequel.synchronize{EXTENSIONS[ext]}
33
+ pr.call(self)
34
+ else
35
+ raise(Error, "Extension #{ext} does not have specific support handling individual datasets (try: Sequel.extension #{ext.inspect})")
36
+ end
37
+ end
38
+ self
39
+ end
40
+ end
41
+ end
42
+ end
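The clone-unfreeze-extend-refreeze sequence used in this compatibility module can be shown with plain objects; `Box` and `Loud` are toy names:

```ruby
class Box
  attr_reader :v

  def initialize(v)
    @v = v
    freeze
  end
end

module Loud
  def loud
    "#{v}!"
  end
end

b = Box.new("hi")
# Clone without freezing, extend via the singleton class, then refreeze,
# so the extended copy is immutable like the original.
c = b.clone(freeze: false)
c.extend(Loud)
c.freeze
```

The original object is untouched; only the clone gains the extra behavior, at the cost of a new singleton class.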
@@ -12,6 +12,10 @@ module Sequel
12
12
  # in the extension).
13
13
  EXTENSIONS = {}
14
14
 
15
+ # Hash of extension name symbols to modules to load to implement the extension.
16
+ EXTENSION_MODULES = {}
17
+ private_constant :EXTENSION_MODULES
18
+
15
19
  EMPTY_ARRAY = [].freeze
16
20
 
17
21
  # The dataset options that require the removal of cached columns if changed.
@@ -45,12 +49,8 @@ module Sequel
45
49
  METHS
46
50
 
47
51
  # Register an extension callback for Dataset objects. ext should be the
48
- # extension name symbol, and mod should either be a Module that the
49
- # dataset is extended with, or a callable object called with the database
50
- # object. If mod is not provided, a block can be provided and is treated
51
- # as the mod object.
52
- #
53
- # If mod is a module, this also registers a Database extension that will
52
+ # extension name symbol, and mod should be a Module that will be
53
+ # included in the dataset's class. This also registers a Database extension that will
54
54
  # extend all of the database's datasets.
55
55
  def self.register_extension(ext, mod=nil, &block)
56
56
  if mod
@@ -58,10 +58,16 @@ module Sequel
58
58
  if mod.is_a?(Module)
59
59
  block = proc{|ds| ds.extend(mod)}
60
60
  Sequel::Database.register_extension(ext){|db| db.extend_datasets(mod)}
61
+ Sequel.synchronize{EXTENSION_MODULES[ext] = mod}
61
62
  else
62
63
  block = mod
63
64
  end
64
65
  end
66
+
67
+ unless mod.is_a?(Module)
68
+ Sequel::Deprecation.deprecate("Providing a block or non-module to Sequel::Dataset.register_extension is deprecated and support for it will be removed in Sequel 6.")
69
+ end
70
+
65
71
  Sequel.synchronize{EXTENSIONS[ext] = block}
66
72
  end
67
73
 
@@ -195,11 +201,15 @@ module Sequel
195
201
  if TRUE_FREEZE
196
202
  # Return a clone of the dataset loaded with the given dataset extensions.
197
203
  # If no related extension file exists or the extension does not have
198
- # specific support for Dataset objects, an Error will be raised.
199
- def extension(*a)
200
- c = _clone(:freeze=>false)
201
- c.send(:_extension!, a)
202
- c.freeze
204
+ # specific support for Dataset objects, an error will be raised.
205
+ def extension(*exts)
206
+ Sequel.extension(*exts)
207
+ mods = exts.map{|ext| Sequel.synchronize{EXTENSION_MODULES[ext]}}
208
+ if mods.all?
209
+ with_extend(*mods)
210
+ else
211
+ with_extend(DeprecatedSingletonClassMethods).extension(*exts)
212
+ end
203
213
  end
204
214
  else
205
215
  # :nocov:
@@ -1199,16 +1209,27 @@ module Sequel
1199
1209
  end
1200
1210
 
1201
1211
  if TRUE_FREEZE
1202
- # Return a clone of the dataset extended with the given modules.
1212
+ # Create a subclass of the receiver's class, and include the given modules
1213
+ # into it. If a block is provided, a DatasetModule is created using the block and
1214
+ # is included into the subclass. Create an instance of the subclass using the
1215
+ # same db and opts, so that the returned dataset operates similarly to a clone
1216
+ # extended with the given modules. This approach is used to avoid singleton
1217
+ # classes, which significantly improves performance.
1218
+ #
1203
1219
  # Note that like Object#extend, when multiple modules are provided
1204
- # as arguments the cloned dataset is extended with the modules in reverse
1205
- # order. If a block is provided, a DatasetModule is created using the block and
1206
- # the clone is extended with that module after any modules given as arguments.
1220
+ # as arguments the subclass includes the modules in reverse order.
1207
1221
  def with_extend(*mods, &block)
1208
- c = _clone(:freeze=>false)
1209
- c.extend(*mods) unless mods.empty?
1210
- c.extend(DatasetModule.new(&block)) if block
1211
- c.freeze
1222
+ c = Class.new(self.class)
1223
+ c.include(*mods) unless mods.empty?
1224
+ c.include(DatasetModule.new(&block)) if block
1225
+ o = c.freeze.allocate
1226
+ o.instance_variable_set(:@db, @db)
1227
+ o.instance_variable_set(:@opts, @opts)
1228
+ o.instance_variable_set(:@cache, {})
1229
+ if cols = cache_get(:_columns)
1230
+ o.send(:columns=, cols)
1231
+ end
1232
+ o.freeze
1212
1233
  end
1213
1234
  else
1214
1235
  # :nocov:
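The allocate-and-copy step in the new `with_extend` can be reproduced with plain Ruby objects; `Row` and `Tagged` are illustrative names:

```ruby
class Row
  attr_reader :data

  def initialize(data)
    @data = data
  end
end

module Tagged
  def tag
    "tag:#{data}"
  end
end

# Build the subclass once, then materialize an instance without calling
# initialize, copying state over directly, as the diff does with @db,
# @opts, and @cache.
sub = Class.new(Row) { include Tagged }.freeze
o = sub.allocate
o.instance_variable_set(:@data, 42)
o.freeze
```

Freezing the class blocks later method additions but still allows allocating instances from it.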
@@ -1315,18 +1336,22 @@ module Sequel
1315
1336
 
1316
1337
  private
1317
1338
 
1318
- # Load the extensions into the receiver, without checking if the receiver is frozen.
1319
- def _extension!(exts)
1320
- Sequel.extension(*exts)
1321
- exts.each do |ext|
1322
- if pr = Sequel.synchronize{EXTENSIONS[ext]}
1323
- pr.call(self)
1324
- else
1325
- raise(Error, "Extension #{ext} does not have specific support handling individual datasets (try: Sequel.extension #{ext.inspect})")
1339
+ # :nocov:
1340
+ unless TRUE_FREEZE
1341
+ # Load the extensions into the receiver, without checking if the receiver is frozen.
1342
+ def _extension!(exts)
1343
+ Sequel.extension(*exts)
1344
+ exts.each do |ext|
1345
+ if pr = Sequel.synchronize{EXTENSIONS[ext]}
1346
+ pr.call(self)
1347
+ else
1348
+ raise(Error, "Extension #{ext} does not have specific support handling individual datasets (try: Sequel.extension #{ext.inspect})")
1349
+ end
1326
1350
  end
1351
+ self
1327
1352
  end
1328
- self
1329
1353
  end
1354
+ # :nocov:
1330
1355
 
1331
1356
  # If invert is true, invert the condition.
1332
1357
  def _invert_filter(cond, invert)
@@ -977,7 +977,12 @@ module Sequel
977
977
  # order if not. Also removes the row_proc, which isn't needed
978
978
  # for aggregate calculations.
979
979
  def aggregate_dataset
980
- (options_overlap(COUNT_FROM_SELF_OPTS) ? from_self : unordered).naked
980
+ (aggreate_dataset_use_from_self? ? from_self : unordered).naked
981
+ end
982
+
983
+ # Whether to use from_self for an aggregate dataset.
984
+ def aggreate_dataset_use_from_self?
985
+ options_overlap(COUNT_FROM_SELF_OPTS)
981
986
  end
982
987
 
983
988
  # Append aliasing expression to SQL string.
@@ -53,4 +53,8 @@ module Sequel
53
53
  require_relative "dataset/sql"
54
54
  require_relative "dataset/placeholder_literalizer"
55
55
  require_relative "dataset/dataset_module"
56
+
57
+ # :nocov:
58
+ require_relative "dataset/deprecated_singleton_class_methods" if Dataset::TRUE_FREEZE
59
+ # :nocov:
56
60
  end
@@ -28,7 +28,7 @@
28
28
  # connections on every checkout without setting up coarse
29
29
  # connection checkouts will hurt performance, in some cases
30
30
  # significantly. Note that setting up coarse connection
31
- # checkouts reduces the concurrency level acheivable. For
31
+ # checkouts reduces the concurrency level achievable. For
32
32
  # example, in a web application, using Database#synchronize
33
33
  # in a rack middleware will limit the number of concurrent
34
34
  # web requests to the number to connections in the database
@@ -88,11 +88,11 @@ module Sequel
88
88
  # Note that the migration this produces does not have a down
89
89
  # block, so you cannot reverse it.
90
90
  def dump_foreign_key_migration(options=OPTS)
91
- ts = tables(options)
91
+ ts = _dump_tables(options)
92
92
  <<END_MIG
93
93
  Sequel.migration do
94
94
  change do
95
- #{ts.sort.map{|t| dump_table_foreign_keys(t)}.reject{|x| x == ''}.join("\n\n").gsub(/^/, ' ')}
95
+ #{ts.map{|t| dump_table_foreign_keys(t)}.reject{|x| x == ''}.join("\n\n").gsub(/^/, ' ')}
96
96
  end
97
97
  end
98
98
  END_MIG
@@ -106,11 +106,11 @@ END_MIG
106
106
  # set to :namespace, prepend the table name to the index name if the
107
107
  # database does not use a global index namespace.
108
108
  def dump_indexes_migration(options=OPTS)
109
- ts = tables(options)
109
+ ts = _dump_tables(options)
110
110
  <<END_MIG
111
111
  Sequel.migration do
112
112
  change do
113
- #{ts.sort.map{|t| dump_table_indexes(t, :add_index, options)}.reject{|x| x == ''}.join("\n\n").gsub(/^/, ' ')}
113
+ #{ts.map{|t| dump_table_indexes(t, :add_index, options)}.reject{|x| x == ''}.join("\n\n").gsub(/^/, ' ')}
114
114
  end
115
115
  end
116
116
  END_MIG
@@ -138,7 +138,7 @@ END_MIG
138
138
  options[:foreign_keys] = false
139
139
  end
140
140
 
141
- ts = sort_dumped_tables(tables(options), options)
141
+ ts = sort_dumped_tables(_dump_tables(options), options)
142
142
  skipped_fks = if sfk = options[:skipped_foreign_keys]
143
143
  # Handle skipped foreign keys by adding them at the end via
144
144
  # alter_table/add_foreign_key. Note that skipped foreign keys
@@ -166,6 +166,21 @@ END_MIG
166
166
 
167
167
  private
168
168
 
169
+ # Handle schema option to dump tables in a different schema. Such
170
+ # tables must be schema qualified for this to work correctly.
171
+ def _dump_tables(opts)
172
+ if opts[:schema]
173
+ _literal_table_sort(tables(opts.merge(:qualify=>true)))
174
+ else
175
+ tables(opts).sort
176
+ end
177
+ end
178
+
179
+ # Sort the given tables by their literalized values.
180
+ def _literal_table_sort(tables)
181
+ tables.sort_by{|s| literal(s)}
182
+ end
183
+
169
184
  # If a database default exists and can't be converted, and we are dumping with :same_db,
170
185
  # return a string with the inspect method modified so a literal string is created if the code is evaled.
171
186
  def column_schema_to_ruby_default_fallback(default, options)
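Sorting by the literalized form matters because qualified identifiers are objects, not strings, and are not mutually Comparable; this sketch uses a toy `literal` in place of Sequel's:

```ruby
Qualified = Struct.new(:schema, :table)

# Hypothetical literalizer: renders a qualified name the way the
# database would see it, so sorting groups tables by schema first.
def literal(t)
  %("#{t.schema}"."#{t.table}")
end

tables = [Qualified.new("s2", "a"), Qualified.new("s1", "b")]
sorted = tables.sort_by { |t| literal(t) }
```

Plain `tables.sort` would raise here, since the structs have no `<=>`; sorting by the rendered string sidesteps that.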
@@ -204,12 +219,20 @@ END_MIG
204
219
  if database_type == :mysql && h[:type] =~ /\Atimestamp/
205
220
  h[:null] = true
206
221
  end
222
+ if database_type == :mssql && schema[:max_length]
223
+ h[:size] = schema[:max_length]
224
+ end
207
225
  h
208
226
  else
209
227
  column_schema_to_ruby_type(schema)
210
228
  end
211
229
  type = col_opts.delete(:type)
212
- col_opts.delete(:size) if col_opts[:size].nil?
230
+ if col_opts.key?(:size) && col_opts[:size].nil?
231
+ col_opts.delete(:size)
232
+ if max_length = schema[:max_length]
233
+ col_opts[:size] = max_length
234
+ end
235
+ end
213
236
  if schema[:generated]
214
237
  if options[:same_db] && database_type == :postgres
215
238
  col_opts[:generated_always_as] = column_schema_to_ruby_default_fallback(schema[:default], options)
@@ -352,7 +375,7 @@ END_MIG
352
375
  options[:skipped_foreign_keys] = skipped_foreign_keys
353
376
  tables
354
377
  else
355
- tables.sort
378
+ tables
356
379
  end
357
380
  end
358
381
 
@@ -377,14 +400,14 @@ END_MIG
377
400
  # outstanding foreign keys and skipping those foreign keys.
378
401
  # The skipped foreign keys will be added at the end of the
379
402
  # migration.
380
- skip_table, skip_fks = table_fks.sort_by{|table, fks| [fks.length, table]}.first
403
+ skip_table, skip_fks = table_fks.sort_by{|table, fks| [fks.length, literal(table)]}.first
381
404
  skip_fks_hash = skipped_foreign_keys[skip_table] = {}
382
405
  skip_fks.each{|fk| skip_fks_hash[fk[:columns]] = fk}
383
406
  this_loop << skip_table
384
407
  end
385
408
 
386
409
  # Add sorted tables from this loop to the final list
387
- sorted_tables.concat(this_loop.sort)
410
+ sorted_tables.concat(_literal_table_sort(this_loop))
388
411
 
389
412
  # Remove tables that were handled this loop
390
413
  this_loop.each{|t| table_fks.delete(t)}
@@ -0,0 +1,58 @@
1
+ # frozen-string-literal: true
2
+ #
3
+ # The set_literalizer extension allows for using Set instances in many of the
4
+ # same places that you would use Array instances:
5
+ #
6
+ # DB[:table].where(column: Set.new([1, 2, 3]))
7
+ # # SELECT FROM table WHERE (column IN (1, 2, 3))
8
+ #
9
+ # To load the extension into all datasets created from a given Database:
10
+ #
11
+ # DB.extension :set_literalizer
12
+ #
13
+ # Related module: Sequel::Dataset::SetLiteralizer
14
+
15
+ require 'set'
16
+
17
+ module Sequel
18
+ class Dataset
19
+ module SetLiteralizer
20
+ # Try to generate the same SQL for Set instances used in datasets
21
+ # that would be used for equivalent Array instances.
22
+ def complex_expression_sql_append(sql, op, args)
23
+ # Array instances are treated specially by
24
+ # Sequel::SQL::BooleanExpression.from_value_pairs. That cannot
25
+ # be modified by a dataset extension, so this tries to convert
26
+ # the complex expression values generated by default to what would
27
+ # be the complex expression values used for the equivalent array.
28
+ case op
29
+ when :'=', :'!='
30
+ if (set = args[1]).is_a?(Set)
31
+ op = op == :'=' ? :IN : :'NOT IN'
32
+ col = args[0]
33
+ array = set.to_a
34
+ if Sequel.condition_specifier?(array) && col.is_a?(Array)
35
+ array = Sequel.value_list(array)
36
+ end
37
+ args = [col, array]
38
+ end
39
+ end
40
+
41
+ super
42
+ end
43
+
44
+ private
45
+
46
+ # Literalize Set instances by converting the set to array.
47
+ def literal_other_append(sql, v)
48
+ if Set === v
49
+ literal_append(sql, v.to_a)
50
+ else
51
+ super
52
+ end
53
+ end
54
+ end
55
+
56
+ register_extension(:set_literalizer, SetLiteralizer)
57
+ end
58
+ end
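The op/args rewrite performed in `complex_expression_sql_append` can be sketched standalone; `rewrite` is a hypothetical helper, not part of the extension:

```ruby
require 'set'

# Turn an equality test against a Set into the IN/NOT IN form that
# would be generated for the equivalent Array; pass everything else
# through unchanged.
def rewrite(op, args)
  if (op == :'=' || op == :'!=') && args[1].is_a?(Set)
    [op == :'=' ? :IN : :"NOT IN", [args[0], args[1].to_a]]
  else
    [op, args]
  end
end
```

Converting at the operator level is what lets a Set literalize identically to the Array it wraps.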
@@ -3,7 +3,8 @@
3
3
  module Sequel
4
4
  module Plugins
5
5
  # The prepared_statements plugin modifies the model to use prepared statements for
6
- # instance level inserts and updates.
6
+ # instance level inserts and updates. This plugin exists for backwards compatibility
7
+ # and is not recommended for general use.
7
8
  #
8
9
  # Note that this plugin is unsafe in some circumstances, as it can allow up to
9
10
  # 2^N prepared statements to be created for each type of insert and update query, where
@@ -5,7 +5,8 @@ module Sequel
5
5
  # The prepared_statements_safe plugin modifies the model to reduce the number of
6
6
  # prepared statements that can be created, by setting as many columns as possible
7
7
  # before creating, and by changing +save_changes+ to save all columns instead of
8
- # just the changed ones.
8
+ # just the changed ones. This plugin exists for backwards compatibility
9
+ # and is not recommended for general use.
9
10
  #
10
11
  # This plugin depends on the +prepared_statements+ plugin.
11
12
  #
@@ -6,7 +6,7 @@ module Sequel
6
6
 
7
7
  # The minor version of Sequel. Bumped for every non-patch level
8
8
  # release, generally around once a month.
9
- MINOR = 65
9
+ MINOR = 67
10
10
 
11
11
  # The tiny version of Sequel. Usually 0, only bumped for bugfix
12
12
  # releases that fix regressions from previous versions.
metadata CHANGED
@@ -1,14 +1,14 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: sequel
3
3
  version: !ruby/object:Gem::Version
4
- version: 5.65.0
4
+ version: 5.67.0
5
5
  platform: ruby
6
6
  authors:
7
7
  - Jeremy Evans
8
8
  autorequire:
9
9
  bindir: bin
10
10
  cert_chain: []
11
- date: 2023-02-01 00:00:00.000000000 Z
11
+ date: 2023-04-01 00:00:00.000000000 Z
12
12
  dependencies:
13
13
  - !ruby/object:Gem::Dependency
14
14
  name: minitest
@@ -197,6 +197,8 @@ extra_rdoc_files:
197
197
  - doc/release_notes/5.63.0.txt
198
198
  - doc/release_notes/5.64.0.txt
199
199
  - doc/release_notes/5.65.0.txt
200
+ - doc/release_notes/5.66.0.txt
201
+ - doc/release_notes/5.67.0.txt
200
202
  - doc/release_notes/5.7.0.txt
201
203
  - doc/release_notes/5.8.0.txt
202
204
  - doc/release_notes/5.9.0.txt
@@ -290,6 +292,8 @@ files:
290
292
  - doc/release_notes/5.63.0.txt
291
293
  - doc/release_notes/5.64.0.txt
292
294
  - doc/release_notes/5.65.0.txt
295
+ - doc/release_notes/5.66.0.txt
296
+ - doc/release_notes/5.67.0.txt
293
297
  - doc/release_notes/5.7.0.txt
294
298
  - doc/release_notes/5.8.0.txt
295
299
  - doc/release_notes/5.9.0.txt
@@ -374,6 +378,7 @@ files:
374
378
  - lib/sequel/dataset.rb
375
379
  - lib/sequel/dataset/actions.rb
376
380
  - lib/sequel/dataset/dataset_module.rb
381
+ - lib/sequel/dataset/deprecated_singleton_class_methods.rb
377
382
  - lib/sequel/dataset/features.rb
378
383
  - lib/sequel/dataset/graph.rb
379
384
  - lib/sequel/dataset/misc.rb
@@ -459,6 +464,7 @@ files:
459
464
  - lib/sequel/extensions/sequel_4_dataset_methods.rb
460
465
  - lib/sequel/extensions/server_block.rb
461
466
  - lib/sequel/extensions/server_logging.rb
467
+ - lib/sequel/extensions/set_literalizer.rb
462
468
  - lib/sequel/extensions/split_array_nil.rb
463
469
  - lib/sequel/extensions/sql_comments.rb
464
470
  - lib/sequel/extensions/sql_expr.rb
@@ -613,7 +619,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
613
619
  - !ruby/object:Gem::Version
614
620
  version: '0'
615
621
  requirements: []
616
- rubygems_version: 3.4.1
622
+ rubygems_version: 3.4.10
617
623
  signing_key:
618
624
  specification_version: 4
619
625
  summary: The Database Toolkit for Ruby