sequel 5.49.0 → 5.50.0
- checksums.yaml +4 -4
- data/CHANGELOG +18 -0
- data/doc/migration.rdoc +1 -1
- data/doc/release_notes/5.50.0.txt +78 -0
- data/lib/sequel/adapters/jdbc/derby.rb +3 -0
- data/lib/sequel/adapters/shared/access.rb +2 -0
- data/lib/sequel/adapters/shared/db2.rb +2 -0
- data/lib/sequel/adapters/shared/sqlanywhere.rb +3 -0
- data/lib/sequel/adapters/utils/columns_limit_1.rb +22 -0
- data/lib/sequel/database/misc.rb +6 -0
- data/lib/sequel/dataset/actions.rb +1 -1
- data/lib/sequel/extensions/migration.rb +4 -1
- data/lib/sequel/extensions/pg_multirange.rb +372 -0
- data/lib/sequel/extensions/pg_range.rb +4 -12
- data/lib/sequel/extensions/pg_range_ops.rb +36 -8
- data/lib/sequel/extensions/sql_log_normalizer.rb +108 -0
- data/lib/sequel/model/associations.rb +1 -1
- data/lib/sequel/plugins/composition.rb +1 -0
- data/lib/sequel/plugins/lazy_attributes.rb +2 -0
- data/lib/sequel/plugins/serialization.rb +1 -0
- data/lib/sequel/plugins/serialization_modification_detection.rb +1 -0
- data/lib/sequel/version.rb +1 -1
- metadata +7 -2
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: '088fd20b772371a3a765e7f8c24103c940fdc4f9a33828fee9e3e2f04dc38768'
+  data.tar.gz: 46daff6f4e927d98035fdf6f967a5c8020aefd4278308e788fa5f640877bda24
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 84b5d7b1dc1689c55da14e745de590f62a9052df9171043d9232b7817751a3c045d1021429fb05f2663a6754958a2c3935426b85e5f1ff4f8895fc57aa6049f3
+  data.tar.gz: 458d85e4f05fd97f4c74a0af4a324b6ee47207d8afb00aee4fb8c1b6b1854175e50162a3b5ad0bac4e35498620cd4b4d3531ad7a71e7e65c3604f0a1677a53d5
data/CHANGELOG
CHANGED

@@ -1,3 +1,21 @@
+=== 5.50.0 (2021-11-01)
+
+* Make Migrator :allow_missing_migration_files also allow down migrations where the current database version is greater than the last migration file version (francisconeves97) (#1789)
+
+* Fix Model#freeze in composition, serialization, and serialization_modification_detection plugins to return self (jeremyevans) (#1788)
+
+* Fix typecasting of lazy columns when using lazy_attributes plugin in model where dataset selects from subquery (jeremyevans)
+
+* Add :before_preconnect Database option, for configuring extensions loaded via :preconnect_extensions (MarcPer, jeremyevans) (#1786)
+
+* Change Dataset#columns! to use a LIMIT 0 query instead of a LIMIT 1 query (jeremyevans)
+
+* Add sql_log_normalizer extension for normalizing logged SQL, helpful for analytics and sensitive data (jeremyevans)
+
+* Add support for range_merge, multirange, and unnest, and PGMultiRange#op to pg_range_ops extension (jeremyevans)
+
+* Add pg_multirange extension with support for PostgreSQL 14+ multirange types (jeremyevans)
+
 === 5.49.0 (2021-10-01)
 
 * Switch block_given? usage to defined?(yield) (jeremyevans)
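The sql_log_normalizer entry above replaces string and numeric literals in logged SQL with question marks. A minimal plain-Ruby sketch of that idea (normalize_sql is a hypothetical helper written for illustration, not Sequel's implementation, and it ignores the quoting corner cases the real extension has to handle):

```ruby
# Replace quoted string literals and bare numeric literals with "?".
# This mirrors the concept only; the real extension also accounts for
# database-specific quoting rules.
def normalize_sql(sql)
  sql
    .gsub(/'(?:[^']|'')*'/, '?')    # single-quoted strings ('' escapes a quote)
    .gsub(/\b\d+(?:\.\d+)?\b/, '?') # integer and decimal literals
end

puts normalize_sql(%q{SELECT * FROM "table" WHERE (("a" = 1) AND ("b" = 'something')) LIMIT 1})
```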
data/doc/migration.rdoc
CHANGED

@@ -352,7 +352,7 @@ you should give it some thought before using it.
 
 == Ignoring missing migrations
 
-In some cases, you may want to allow a migration in the database that does not exist in the filesystem (deploying to an older version of code without running a down migration when deploy auto-migrates, for example). If required, you can pass <tt>allow_missing_migration_files: true</tt> as an option. This will stop errors from being raised if there are migrations in the database that do not exist in the filesystem.
+In some cases, you may want to allow a migration in the database that does not exist in the filesystem (deploying to an older version of code without running a down migration when deploy auto-migrates, for example). If required, you can pass <tt>allow_missing_migration_files: true</tt> as an option. This will stop errors from being raised if there are migrations in the database that do not exist in the filesystem. Note that the migrations themselves can still raise an error when using this option, if the database schema isn't in the state the migrations expect it to be in. In general, the <tt>allow_missing_migration_files: true</tt> option is very risky to use, and should only be used if it is absolutely necessary.
 
 == Modifying existing migrations
 
data/doc/release_notes/5.50.0.txt
ADDED

@@ -0,0 +1,78 @@
+= New Features
+
+* A pg_multirange extension has been added with support for PostgreSQL
+  14+ multirange types. Multirange types are similar to an array of
+  ranges, where a value is in the multirange if it is in any of the
+  ranges contained in the multirange. Multiranges are useful when you
+  need to check against multiple ranges that do not overlap.
+
+  You can create multiranges using Sequel.pg_multirange, passing
+  an array of ranges and a multirange type:
+
+    DB.extension :pg_multirange
+    multirange = Sequel.pg_multirange(array_of_date_ranges, :datemultirange)
+
+  Sequel.pg_multirange returns a PGMultiRange, which operates as a
+  delegate to an array of PGRange objects. Behavior of the object
+  is similar to an array, except that cover? is supported, which will
+  test if any of the included ranges covers the argument:
+
+    multirange.cover?(Date.today)
+
+  Like the pg_range extension, this also supports registering custom
+  multirange types, and using multirange types as bound variables.
+
+  The pg_range_ops extension now supports both ranges and multiranges,
+  with a few new methods added to Postgres::RangeOp for converting
+  between them:
+
+  * range_merge
+  * multirange
+  * unnest
+
+* An sql_log_normalizer extension has been added for normalizing
+  logged SQL, replacing numbers and strings inside the SQL string
+  with question marks. This is useful for analytics and sensitive
+  data.
+
+    DB[:table].first(a: 1, b: 'something')
+    # Without sql_log_normalizer extension, logged SQL is:
+    # SELECT * FROM "table" WHERE (("a" = 1) AND ("b" = 'something')) LIMIT 1
+
+    DB.extension :sql_log_normalizer
+    DB[:table].first(a: 1, b: 'something')
+    # With sql_log_normalizer extension, logged SQL is:
+    # SELECT * FROM "table" WHERE (("a" = ?) AND ("b" = ?)) LIMIT ?
+
+  This extension scans the logged SQL for numbers and strings,
+  attempting to support the database's rules for string quoting. This
+  means it should work with SQL that Sequel didn't itself create.
+  However, there are corner cases that the extension doesn't handle,
+  such as the use of apostrophes inside quoted identifiers, and
+  potentially other cases of database specific SQL where the normal
+  string quoting rules are changed, such as the use of escape strings
+  on PostgreSQL (E'escape string').
+
+* A :before_preconnect Database option has been added. This is useful
+  for configuring extensions added via :preconnect_extensions before
+  the connection takes place.
+
+= Other Improvements
+
+* Dataset#columns! now uses a LIMIT 0 query instead of a LIMIT 1 query
+  by default. This can improve performance in cases where the row
+  returned would be large. Some databases do not support a LIMIT 0
+  query, and some adapters that ship with Sequel have been updated to
+  continue using LIMIT 1. Custom adapters should be updated to use
+  LIMIT 1 if the database does not support LIMIT 0.
+
+* The lazy_attributes plugin no longer modifies the database schema.
+  Previously, it could modify the database schema indirectly,
+  resulting in the loss of typecasting for models that were not
+  based on a single table or view, such as usage with the
+  class_table_inheritance plugin.
+
+* Model#freeze in the composition, serialization, and
+  serialization_modification_detection plugins now returns self. In
+  addition to being more correct, this fixes usage of these plugins
+  with the static_cache plugin.
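The cover? behavior described in the release notes can be sketched in plain Ruby over an array of Range objects (MultiRange here is an illustrative stand-in, not Sequel's PGMultiRange, which also handles literalization and database types):

```ruby
require 'date'

# A value is covered by the multirange if any member range covers it.
MultiRange = Struct.new(:ranges) do
  def cover?(value)
    ranges.any? { |r| r.cover?(value) }
  end
end

mr = MultiRange.new([Date.new(2021, 1, 1)..Date.new(2021, 1, 31),
                     Date.new(2021, 6, 1)..Date.new(2021, 6, 30)])
mr.cover?(Date.new(2021, 6, 15)) # => true
mr.cover?(Date.new(2021, 3, 1))  # => false
```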
data/lib/sequel/adapters/jdbc/derby.rb
CHANGED

@@ -2,6 +2,7 @@
 
 Sequel::JDBC.load_driver('org.apache.derby.jdbc.EmbeddedDriver', :Derby)
 require_relative 'transactions'
+require_relative '../utils/columns_limit_1'
 
 module Sequel
   module JDBC
@@ -182,6 +183,8 @@ module Sequel
     end
 
     class Dataset < JDBC::Dataset
+      include ::Sequel::Dataset::ColumnsLimit1
+
       # Derby doesn't support an expression between CASE and WHEN,
       # so remove conditions.
       def case_expression_sql_append(sql, ce)
data/lib/sequel/adapters/shared/access.rb
CHANGED

@@ -2,6 +2,7 @@
 
 require_relative '../utils/emulate_offset_with_reverse_and_count'
 require_relative '../utils/unmodified_identifiers'
+require_relative '../utils/columns_limit_1'
 
 module Sequel
   module Access
@@ -83,6 +84,7 @@ module Sequel
     end)
     include EmulateOffsetWithReverseAndCount
     include UnmodifiedIdentifiers::DatasetMethods
+    include ::Sequel::Dataset::ColumnsLimit1
 
     EXTRACT_MAP = {:year=>"'yyyy'", :month=>"'m'", :day=>"'d'", :hour=>"'h'", :minute=>"'n'", :second=>"'s'"}.freeze
     EXTRACT_MAP.each_value(&:freeze)
data/lib/sequel/adapters/shared/db2.rb
CHANGED

@@ -1,6 +1,7 @@
 # frozen-string-literal: true
 
 require_relative '../utils/emulate_offset_with_row_number'
+require_relative '../utils/columns_limit_1'
 
 module Sequel
   module DB2
@@ -273,6 +274,7 @@ module Sequel
 
   module DatasetMethods
     include EmulateOffsetWithRowNumber
+    include ::Sequel::Dataset::ColumnsLimit1
 
     BITWISE_METHOD_MAP = {:& =>:BITAND, :| => :BITOR, :^ => :BITXOR, :'B~'=>:BITNOT}.freeze
 
data/lib/sequel/adapters/shared/sqlanywhere.rb
CHANGED

@@ -1,5 +1,7 @@
 # frozen-string-literal: true
 
+require_relative '../utils/columns_limit_1'
+
 module Sequel
   module SqlAnywhere
     Sequel::Database.set_shared_adapter_scheme(:sqlanywhere, self)
@@ -234,6 +236,7 @@ module Sequel
   module DatasetMethods
     Dataset.def_sql_method(self, :insert, %w'insert into columns values')
     Dataset.def_sql_method(self, :select, %w'with select distinct limit columns into from join where group having window compounds order lock')
+    include ::Sequel::Dataset::ColumnsLimit1
 
     # Whether to convert smallint to boolean arguments for this dataset.
     # Defaults to the IBMDB module setting.
data/lib/sequel/adapters/utils/columns_limit_1.rb
ADDED

@@ -0,0 +1,22 @@
+# frozen-string-literal: true
+
+module Sequel
+  class Dataset
+    module ColumnsLimit1
+      COLUMNS_CLONE_OPTIONS = {:distinct => nil, :limit => 1, :offset=>nil, :where=>nil, :having=>nil, :order=>nil, :row_proc=>nil, :graph=>nil, :eager_graph=>nil}.freeze
+
+      # Use a limit of 1 instead of a limit of 0 when
+      # getting the columns.
+      def columns!
+        ds = clone(COLUMNS_CLONE_OPTIONS)
+        ds.each{break}
+
+        if cols = ds.cache[:_columns]
+          self.columns = cols
+        else
+          []
+        end
+      end
+    end
+  end
+end
data/lib/sequel/database/misc.rb
CHANGED

@@ -95,6 +95,8 @@ module Sequel
   # options hash.
   #
   # Accepts the following options:
+  # :before_preconnect :: Callable that runs after extensions from :preconnect_extensions are loaded,
+  #                       but before any connections are created.
   # :cache_schema :: Whether schema should be cached for this Database instance
   # :default_string_column_size :: The default size of string columns, 255 by default.
   # :extensions :: Extensions to load into this Database instance. Can be a symbol, array of symbols,
@@ -160,6 +162,10 @@ module Sequel
 
     initialize_load_extensions(:preconnect_extensions)
 
+    if before_preconnect = @opts[:before_preconnect]
+      before_preconnect.call(self)
+    end
+
     if typecast_value_boolean(@opts[:preconnect]) && @pool.respond_to?(:preconnect, true)
       concurrent = typecast_value_string(@opts[:preconnect]) == "concurrently"
       @pool.send(:preconnect, concurrent)
data/lib/sequel/dataset/actions.rb
CHANGED

@@ -19,7 +19,7 @@ module Sequel
   METHS
 
   # The clone options to use when retrieving columns for a dataset.
-  COLUMNS_CLONE_OPTIONS = {:distinct => nil, :limit =>
+  COLUMNS_CLONE_OPTIONS = {:distinct => nil, :limit => 0, :offset=>nil, :where=>nil, :having=>nil, :order=>nil, :row_proc=>nil, :graph=>nil, :eager_graph=>nil}.freeze
 
   # Inserts the given argument into the database. Returns self so it
   # can be used safely when chaining:
data/lib/sequel/extensions/migration.rb
CHANGED

@@ -388,6 +388,9 @@ module Sequel
 
   # Migrates the supplied database using the migration files in the specified directory. Options:
   # :allow_missing_migration_files :: Don't raise an error if there are missing migration files.
+  #                                   It is very risky to use this option, since it can result in
+  #                                   the database schema version number not matching the expected
+  #                                   database schema.
   # :column :: The column in the :table argument storing the migration version (default: :version).
   # :current :: The current version of the database. If not given, it is retrieved from the database
   #             using the :table and :column options.
@@ -542,7 +545,7 @@ module Sequel
 
     @direction = current < target ? :up : :down
 
-    if @direction == :down && @current >= @files.length
+    if @direction == :down && @current >= @files.length && !@allow_missing_migration_files
       raise Migrator::Error, "Missing migration version(s) needed to migrate down to target version (current: #{current}, target: #{target})"
     end
 
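The guard change above only raises for a down migration with missing files when :allow_missing_migration_files is not set. That logic can be sketched as a standalone predicate (missing_down_error? is an illustrative helper, not the actual Migrator code):

```ruby
# True when migrating down should fail because the current database
# version exceeds the number of available migration files, and the
# caller has not opted into allowing missing migration files.
def missing_down_error?(direction:, current:, file_count:, allow_missing: false)
  direction == :down && current >= file_count && !allow_missing
end

missing_down_error?(direction: :down, current: 5, file_count: 3)                      # => true
missing_down_error?(direction: :down, current: 5, file_count: 3, allow_missing: true) # => false
```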
data/lib/sequel/extensions/pg_multirange.rb
ADDED

@@ -0,0 +1,372 @@
+# frozen-string-literal: true
+#
+# The pg_multirange extension adds support for the PostgreSQL 14+ multirange
+# types to Sequel. PostgreSQL multirange types are similar to an array of
+# ranges, where a match against the multirange is a match against any of the
+# ranges in the multirange.
+#
+# When PostgreSQL multirange values are retrieved, they are parsed and returned
+# as instances of Sequel::Postgres::PGMultiRange. PGMultiRange mostly acts
+# like an array of Sequel::Postgres::PGRange (see the pg_range extension).
+#
+# In addition to the parser, this extension comes with literalizers
+# for PGMultiRanges, so they can be used in queries and as bound variables.
+#
+# To turn an existing array of Ranges into a PGMultiRange, use Sequel.pg_multirange.
+# You must provide the type of multirange when creating the multirange:
+#
+#   Sequel.pg_multirange(array_of_date_ranges, :datemultirange)
+#
+# To use this extension, load it into the Database instance:
+#
+#   DB.extension :pg_multirange
+#
+# See the {schema modification guide}[rdoc-ref:doc/schema_modification.rdoc]
+# for details on using multirange type columns in CREATE/ALTER TABLE statements.
+#
+# This extension makes it easy to add support for other multirange types. In
+# general, you just need to make sure that the subtype is handled and has the
+# appropriate converter installed. For user defined
+# types, you can do this via:
+#
+#   DB.add_conversion_proc(subtype_oid){|string| }
+#
+# Then you can call
+# Sequel::Postgres::PGMultiRange::DatabaseMethods#register_multirange_type
+# to automatically set up a handler for the range type. So if you
+# want to support the timemultirange type (assuming the time type is already
+# supported):
+#
+#   DB.register_multirange_type('timerange')
+#
+# This extension integrates with the pg_array extension. If you plan
+# to use arrays of multirange types, load the pg_array extension before the
+# pg_multirange extension:
+#
+#   DB.extension :pg_array, :pg_multirange
+#
+# The pg_multirange extension will automatically load the pg_range extension.
+#
+# Related module: Sequel::Postgres::PGMultiRange
+
+require 'delegate'
+require 'strscan'
+
+module Sequel
+  module Postgres
+    class PGMultiRange < DelegateClass(Array)
+      include Sequel::SQL::AliasMethods
+
+      # Converts strings into PGMultiRange instances.
+      class Parser < StringScanner
+        def initialize(source, converter)
+          super(source)
+          @converter = converter
+        end
+
+        # Parse the multirange type input string into a PGMultiRange value.
+        def parse
+          raise Sequel::Error, "invalid multirange, doesn't start with {" unless get_byte == '{'
+          ranges = []
+
+          unless scan(/\}/)
+            while true
+              raise Sequel::Error, "unfinished multirange" unless range_string = scan_until(/[\]\)]/)
+              ranges << @converter.call(range_string)
+
+              case sep = get_byte
+              when '}'
+                break
+              when ','
+                # nothing
+              else
+                raise Sequel::Error, "invalid multirange separator: #{sep.inspect}"
+              end
+            end
+          end
+
+          raise Sequel::Error, "invalid multirange, remaining data after }" unless eos?
+          ranges
+        end
+      end
+
+      # Callable object that takes the input string and parses it using Parser.
+      class Creator
+        # The database type to set on the PGMultiRange instances returned.
+        attr_reader :type
+
+        def initialize(type, converter=nil)
+          @type = type
+          @converter = converter
+        end
+
+        # Parse the string using Parser with the appropriate
+        # converter, and return a PGMultiRange with the appropriate database
+        # type.
+        def call(string)
+          PGMultiRange.new(Parser.new(string, @converter).parse, @type)
+        end
+      end
+
+      module DatabaseMethods
+        # Add the default multirange conversion procs to the database
+        def self.extended(db)
+          db.instance_exec do
+            raise Error, "multiranges not supported on this database" unless server_version >= 140000
+
+            extension :pg_range
+            @pg_multirange_schema_types ||= {}
+
+            register_multirange_type('int4multirange', :range_oid=>3904, :oid=>4451)
+            register_multirange_type('nummultirange', :range_oid=>3906, :oid=>4532)
+            register_multirange_type('tsmultirange', :range_oid=>3908, :oid=>4533)
+            register_multirange_type('tstzmultirange', :range_oid=>3910, :oid=>4534)
+            register_multirange_type('datemultirange', :range_oid=>3912, :oid=>4535)
+            register_multirange_type('int8multirange', :range_oid=>3926, :oid=>4536)
+
+            if respond_to?(:register_array_type)
+              register_array_type('int4multirange', :oid=>6150, :scalar_oid=>4451, :scalar_typecast=>:int4multirange)
+              register_array_type('nummultirange', :oid=>6151, :scalar_oid=>4532, :scalar_typecast=>:nummultirange)
+              register_array_type('tsmultirange', :oid=>6152, :scalar_oid=>4533, :scalar_typecast=>:tsmultirange)
+              register_array_type('tstzmultirange', :oid=>6153, :scalar_oid=>4534, :scalar_typecast=>:tstzmultirange)
+              register_array_type('datemultirange', :oid=>6155, :scalar_oid=>4535, :scalar_typecast=>:datemultirange)
+              register_array_type('int8multirange', :oid=>6157, :scalar_oid=>4536, :scalar_typecast=>:int8multirange)
+            end
+
+            [:int4multirange, :nummultirange, :tsmultirange, :tstzmultirange, :datemultirange, :int8multirange].each do |v|
+              @schema_type_classes[v] = PGMultiRange
+            end
+
+            procs = conversion_procs
+            add_conversion_proc(4533, PGMultiRange::Creator.new("tsmultirange", procs[3908]))
+            add_conversion_proc(4534, PGMultiRange::Creator.new("tstzmultirange", procs[3910]))
+
+            if respond_to?(:register_array_type) && defined?(PGArray::Creator)
+              add_conversion_proc(6152, PGArray::Creator.new("tsmultirange", procs[4533]))
+              add_conversion_proc(6153, PGArray::Creator.new("tstzmultirange", procs[4534]))
+            end
+          end
+        end
+
+        # Handle PGMultiRange values in bound variables
+        def bound_variable_arg(arg, conn)
+          case arg
+          when PGMultiRange
+            arg.unquoted_literal(schema_utility_dataset)
+          else
+            super
+          end
+        end
+
+        # Freeze the pg multirange schema types to prevent adding new ones.
+        def freeze
+          @pg_multirange_schema_types.freeze
+          super
+        end
+
+        # Register a database specific multirange type. This can be used to support
+        # different multirange types per Database. Options:
+        #
+        # :converter :: A callable object (e.g. Proc), that is called with the PostgreSQL range string,
+        #               and should return a PGRange instance.
+        # :oid :: The PostgreSQL OID for the multirange type. This is used by Sequel to set up automatic type
+        #         conversion on retrieval from the database.
+        # :range_oid :: Should be the PostgreSQL OID for the multirange subtype (the range type). If given,
+        #               automatically sets the :converter option by looking for scalar conversion
+        #               proc.
+        #
+        # If a block is given, it is treated as the :converter option.
+        def register_multirange_type(db_type, opts=OPTS, &block)
+          oid = opts[:oid]
+          soid = opts[:range_oid]
+
+          if has_converter = opts.has_key?(:converter)
+            raise Error, "can't provide both a block and :converter option to register_multirange_type" if block
+            converter = opts[:converter]
+          else
+            has_converter = true if block
+            converter = block
+          end
+
+          unless (soid || has_converter) && oid
+            range_oid, subtype_oid = from(:pg_range).join(:pg_type, :oid=>:rngmultitypid).where(:typname=>db_type.to_s).get([:rngmultitypid, :rngtypid])
+            soid ||= subtype_oid unless has_converter
+            oid ||= range_oid
+          end
+
+          db_type = db_type.to_s.dup.freeze
+
+          if soid
+            raise Error, "can't provide both a converter and :range_oid option to register" if has_converter
+            raise Error, "no conversion proc for :range_oid=>#{soid.inspect} in conversion_procs" unless converter = conversion_procs[soid]
+          end
+
+          raise Error, "cannot add a multirange type without a convertor (use :converter or :range_oid option or pass block)" unless converter
+          creator = Creator.new(db_type, converter)
+          add_conversion_proc(oid, creator)
+
+          @pg_multirange_schema_types[db_type] = db_type.to_sym
+
+          singleton_class.class_eval do
+            meth = :"typecast_value_#{db_type}"
+            scalar_typecast_method = :"typecast_value_#{opts.fetch(:scalar_typecast, db_type.sub('multirange', 'range'))}"
+            define_method(meth){|v| typecast_value_pg_multirange(v, creator, scalar_typecast_method)}
+            private meth
+          end
+
+          @schema_type_classes[db_type] = PGMultiRange
+          nil
+        end
+
+        private
+
+        # Handle arrays of multirange types in bound variables.
+        def bound_variable_array(a)
+          case a
+          when PGMultiRange
+            "\"#{bound_variable_arg(a, nil)}\""
+          else
+            super
+          end
+        end
+
+        # Recognize the registered database multirange types.
+        def schema_column_type(db_type)
+          @pg_multirange_schema_types[db_type] || super
+        end
+
+        # Set the :ruby_default value if the default value is recognized as a multirange.
+        def schema_post_process(_)
+          super.each do |a|
+            h = a[1]
+            db_type = h[:db_type]
+            if @pg_multirange_schema_types[db_type] && h[:default] =~ /\A#{db_type}\(.*\)\z/
+              h[:ruby_default] = get(Sequel.lit(h[:default]))
+            end
+          end
+        end
+
+        # Given a value to typecast and the type of PGMultiRange subclass:
+        # * If given a PGMultiRange with a matching type, use it directly.
+        # * If given a PGMultiRange with a different type, return a PGMultiRange
+        #   with the creator's type.
+        # * If given an Array, create a new PGMultiRange instance for it, typecasting
+        #   each instance using the scalar_typecast_method.
+        def typecast_value_pg_multirange(value, creator, scalar_typecast_method=nil)
+          case value
+          when PGMultiRange
+            return value if value.db_type == creator.type
+          when Array
+            # nothing
+          else
+            raise Sequel::InvalidValue, "invalid value for multirange type: #{value.inspect}"
+          end
+
+          if scalar_typecast_method && respond_to?(scalar_typecast_method, true)
+            value = value.map{|v| send(scalar_typecast_method, v)}
+          end
+          PGMultiRange.new(value, creator.type)
+        end
+      end
+
+      # The type of this multirange (e.g. 'int4multirange').
+      attr_accessor :db_type
+
+      # Set the array of ranges to delegate to, and the database type.
+      def initialize(ranges, db_type)
+        super(ranges)
+        @db_type = db_type.to_s
+      end
+
+      # Append the multirange SQL to the given sql string.
+      def sql_literal_append(ds, sql)
+        sql << db_type << '('
+        joiner = nil
+        conversion_meth = nil
+        each do |range|
+          if joiner
+            sql << joiner
+          else
+            joiner = ', '
+          end
+
+          unless range.is_a?(PGRange)
+            conversion_meth ||= :"typecast_value_#{db_type.sub('multi', '')}"
+            range = ds.db.send(conversion_meth, range)
+          end
+
+          ds.literal_append(sql, range)
+        end
+        sql << ')'
+      end
+
+      # Return whether the value is inside any of the ranges in the multirange.
+      def cover?(value)
+        any?{|range| range.cover?(value)}
+      end
+      alias === cover?
+
+      # Don't consider multiranges with different database types equal.
+      def eql?(other)
+        if PGMultiRange === other
+          return false unless other.db_type == db_type
+          other = other.__getobj__
+        end
+        __getobj__.eql?(other)
+      end
+
+      # Don't consider multiranges with different database types equal.
+      def ==(other)
+        return false if PGMultiRange === other && other.db_type != db_type
+        super
+      end
+
+      # Return a string containing the unescaped version of the multirange.
+      # Separated out for use by the bound argument code.
+      def unquoted_literal(ds)
+        val = String.new
+        val << "{"
+
+        joiner = nil
+        conversion_meth = nil
+        each do |range|
+          if joiner
+            val << joiner
+          else
+            joiner = ', '
+          end
+
+          unless range.is_a?(PGRange)
+            conversion_meth ||= :"typecast_value_#{db_type.sub('multi', '')}"
+            range = ds.db.send(conversion_meth, range)
+          end
+
+          val << range.unquoted_literal(ds)
+        end
+
+        val << "}"
+      end
+    end
+  end
+
+  module SQL::Builders
+    # Convert the object to a Postgres::PGMultiRange.
+    def pg_multirange(v, db_type)
+      case v
+      when Postgres::PGMultiRange
+        if v.db_type == db_type
+          v
+        else
+          Postgres::PGMultiRange.new(v, db_type)
+        end
+      when Array
+        Postgres::PGMultiRange.new(v, db_type)
+      else
+        # May not be defined unless the pg_range_ops extension is used
+        pg_range_op(v)
+      end
+    end
+  end
+
+  Database.register_extension(:pg_multirange, Postgres::PGMultiRange::DatabaseMethods)
+end
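The Parser in the pg_multirange diff is built on Ruby's stdlib StringScanner. A standalone sketch of the same scanning approach, with a trivial converter that keeps the raw range strings (parse_multirange is illustrative, not Sequel's API):

```ruby
require 'strscan'

# Scan a PostgreSQL multirange string like "{[1,2],[5,6)}" and hand each
# range chunk to a converter, mirroring the Parser logic above.
def parse_multirange(string, converter = ->(s) { s })
  ss = StringScanner.new(string)
  raise "invalid multirange, doesn't start with {" unless ss.get_byte == '{'
  ranges = []

  unless ss.scan(/\}/)            # "{}" is an empty multirange
    loop do
      chunk = ss.scan_until(/[\]\)]/) or raise "unfinished multirange"
      ranges << converter.call(chunk)
      case sep = ss.get_byte
      when '}' then break
      when ',' then next
      else raise "invalid multirange separator: #{sep.inspect}"
      end
    end
  end

  raise "remaining data after }" unless ss.eos?
  ranges
end

parse_multirange("{[1,2],[5,6)}") # => ["[1,2]", "[5,6)"]
```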
data/lib/sequel/extensions/pg_range.rb
CHANGED

@@ -4,12 +4,9 @@
 # types to Sequel. PostgreSQL range types are similar to ruby's
 # Range class, representing an array of values. However, they
 # are more flexible than ruby's ranges, allowing exclusive beginnings
-# and endings (ruby's range only allows exclusive endings)
-# unbounded beginnings and endings (which ruby's range does not
-# support).
+# and endings (ruby's range only allows exclusive endings).
 #
-#
-# that when range type values are retrieved, they are parsed and returned
+# When PostgreSQL range values are retrieved, they are parsed and returned
 # as instances of Sequel::Postgres::PGRange. PGRange mostly acts
 # like a Range, but it's not a Range as not all PostgreSQL range
 # type values would be valid ruby ranges. If the range type value
@@ -19,8 +16,7 @@
 # exception will be raised.
 #
 # In addition to the parser, this extension comes with literalizers
-# for
-# callbacks, so they work on all adapters.
+# for PGRange and Range, so they can be used in queries and as bound variables.
 #
 # To turn an existing Range into a PGRange, use Sequel.pg_range:
 #
@@ -249,11 +245,7 @@ module Sequel
 
   # Recognize the registered database range types.
   def schema_column_type(db_type)
-
-      type
-    else
-      super
-    end
+    @pg_range_schema_types[db_type] || super
   end
 
   # Set the :ruby_default value if the default value is recognized as a range.
data/lib/sequel/extensions/pg_range_ops.rb
CHANGED
@@ -1,7 +1,7 @@
# frozen-string-literal: true
#
# The pg_range_ops extension adds support to Sequel's DSL to make
-# it easier to call PostgreSQL range functions and operators.
+# it easier to call PostgreSQL range and multirange functions and operators.
#
# To load the extension:
#
@@ -11,10 +11,11 @@
#
#   r = Sequel.pg_range_op(:range)
#
-# If you have also loaded the pg_range
-# Sequel.pg_range as well:
+# If you have also loaded the pg_range or pg_multirange extensions, you can use
+# Sequel.pg_range or Sequel.pg_multirange as well:
#
#   r = Sequel.pg_range(:range)
+#   r = Sequel.pg_multirange(:range)
#
# Also, on most Sequel expression objects, you can call the pg_range
# method:
@@ -46,13 +47,25 @@
#   r.upper_inc # upper_inc(range)
#   r.lower_inf # lower_inf(range)
#   r.upper_inf # upper_inf(range)
+#
+# All of the above methods work for both ranges and multiranges, as long
+# as PostgreSQL supports the operation. The following methods are also
+# supported:
+#
+#   r.range_merge # range_merge(range)
+#   r.unnest      # unnest(range)
+#   r.multirange  # multirange(range)
+#
+# +range_merge+ and +unnest+ expect the receiver to represent a multirange
+# value, while +multirange+ expects the receiver to represent a range value.
#
-# See the PostgreSQL range function and operator documentation for more
+# See the PostgreSQL range and multirange function and operator documentation for more
# details on what these functions and operators do.
#
-# If you are also using the pg_range extension, you should
-# loading this extension. Doing so will allow you to use
-#
+# If you are also using the pg_range or pg_multirange extension, you should
+# load them before loading this extension. Doing so will allow you to use
+# PGRange#op and PGMultiRange#op to get a RangeOp, allowing you to perform
+# range operations on range literals.
#
# Related module: Sequel::Postgres::RangeOp

@@ -77,9 +90,12 @@ module Sequel
      :overlaps => ["(".freeze, " && ".freeze, ")".freeze].freeze,
    }.freeze

-    %w'lower upper isempty lower_inc upper_inc lower_inf upper_inf'.each do |f|
+    %w'lower upper isempty lower_inc upper_inc lower_inf upper_inf unnest'.each do |f|
      class_eval("def #{f}; function(:#{f}) end", __FILE__, __LINE__)
    end
+    %w'range_merge multirange'.each do |f|
+      class_eval("def #{f}; RangeOp.new(function(:#{f})) end", __FILE__, __LINE__)
+    end
    OPERATORS.each_key do |f|
      class_eval("def #{f}(v); operator(:#{f}, v) end", __FILE__, __LINE__)
    end
@@ -127,6 +143,18 @@ module Sequel
        end
      end
    end
+
+    # :nocov:
+    if defined?(PGMultiRange)
+    # :nocov:
+      class PGMultiRange
+        # Wrap the PGMultiRange instance in a RangeOp, allowing you to easily use
+        # the PostgreSQL range functions and operators with literal multiranges.
+        def op
+          RangeOp.new(self)
+        end
+      end
+    end
  end

  module SQL::Builders
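The class_eval loops above generate one wrapper method per SQL function, with range_merge and multirange re-wrapped in a new op object so further range operations can be chained on the result. A minimal standalone sketch of that metaprogramming pattern (MiniRangeOp is hypothetical and builds plain SQL strings rather than Sequel expression objects):

```ruby
class MiniRangeOp
  def initialize(expr)
    @expr = expr
  end

  # Render a SQL function call around the wrapped expression.
  def function(name)
    "#{name}(#{@expr})"
  end

  # Plain function wrappers: return the SQL string directly.
  %w'lower upper isempty unnest'.each do |f|
    class_eval("def #{f}; function(:#{f}) end", __FILE__, __LINE__)
  end

  # These return range-valued results, so re-wrap them to allow chaining.
  %w'range_merge multirange'.each do |f|
    class_eval("def #{f}; MiniRangeOp.new(function(:#{f})) end", __FILE__, __LINE__)
  end
end

r = MiniRangeOp.new("range")
p r.lower             # => "lower(range)"
p r.multirange.unnest # => "unnest(multirange(range))"
```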
|
@@ -0,0 +1,108 @@
|
|
1
|
+
# frozen-string-literal: true
|
2
|
+
#
|
3
|
+
# The sql_log_normalizer extension normalizes the SQL that is logged,
|
4
|
+
# removing the literal strings and numbers in the SQL, and removing the
|
5
|
+
# logging of any bound variables:
|
6
|
+
#
|
7
|
+
# ds = DB[:table].first(a: 1, b: 'something')
|
8
|
+
# # Without sql_log_normalizer extension
|
9
|
+
# # SELECT * FROM "table" WHERE (("a" = 1) AND ("b" = 'something')) LIMIT 1
|
10
|
+
#
|
11
|
+
# # With sql_log_normalizer_extension
|
12
|
+
# # SELECT * FROM "table" WHERE (("a" = ?) AND ("b" = ?)) LIMIT ?
|
13
|
+
#
|
14
|
+
# The normalization is done by scanning the SQL string being executed
|
15
|
+
# for literal strings and numbers, and replacing them with question
|
16
|
+
# marks. While this should work for all or almost all production queries,
|
17
|
+
# there are pathlogical queries that will not be handled correctly, such as
|
18
|
+
# the use of apostrophes in identifiers:
|
19
|
+
#
|
20
|
+
# DB[:"asf'bar"].where(a: 1, b: 'something').first
|
21
|
+
# # Logged as:
|
22
|
+
# # SELECT * FROM "asf?something')) LIMIT ?
|
23
|
+
#
|
24
|
+
# The expected use case for this extension is when you want to normalize
|
25
|
+
# logs to group similar queries, or when you want to protect sensitive
|
26
|
+
# data from being stored in the logs.
|
27
|
+
#
|
28
|
+
# Related module: Sequel::SQLLogNormalizer
|
29
|
+
|
30
|
+
#
|
31
|
+
module Sequel
|
32
|
+
module SQLLogNormalizer
|
33
|
+
def self.extended(db)
|
34
|
+
type = case db.literal("'")
|
35
|
+
when "''''"
|
36
|
+
:standard
|
37
|
+
when "'\\''"
|
38
|
+
:backslash
|
39
|
+
when "N''''"
|
40
|
+
:n_standard
|
41
|
+
else
|
42
|
+
raise Error, "SQL log normalization is not supported on this database (' literalized as #{db.literal("'").inspect})"
|
43
|
+
end
|
44
|
+
db.instance_variable_set(:@sql_string_escape_type, type)
|
45
|
+
end
|
46
|
+
|
47
|
+
# Normalize the SQL before calling super.
|
48
|
+
def log_connection_yield(sql, conn, args=nil)
|
49
|
+
unless skip_logging?
|
50
|
+
sql = normalize_logged_sql(sql)
|
51
|
+
args = nil
|
52
|
+
end
|
53
|
+
super
|
54
|
+
end
|
55
|
+
|
56
|
+
# Replace literal strings and numbers in SQL with question mark placeholders.
|
57
|
+
def normalize_logged_sql(sql)
|
58
|
+
sql = sql.dup
|
59
|
+
sql.force_encoding('BINARY')
|
60
|
+
start_index = 0
|
61
|
+
check_n = @sql_string_escape_type == :n_standard
|
62
|
+
outside_string = true
|
63
|
+
|
64
|
+
if @sql_string_escape_type == :backslash
|
65
|
+
search_char = /[\\']/
|
66
|
+
escape_char_offset = 0
|
67
|
+
escape_char_value = 92 # backslash
|
68
|
+
else
|
69
|
+
search_char = "'"
|
70
|
+
escape_char_offset = 1
|
71
|
+
escape_char_value = 39 # apostrophe
|
72
|
+
end
|
73
|
+
|
74
|
+
# The approach used here goes against Sequel's philosophy of never attempting
|
75
|
+
# to parse SQL. However, parsing the SQL is basically the only way to implement
|
76
|
+
# this support with Sequel's design, and it's better to be pragmatic and accept
|
77
|
+
# this than not be able to support this.
|
78
|
+
|
79
|
+
# Replace literal strings
|
80
|
+
while outside_string && (index = start_index = sql.index("'", start_index))
|
81
|
+
if check_n && index != 0 && sql.getbyte(index-1) == 78 # N' start
|
82
|
+
start_index -= 1
|
83
|
+
end
|
84
|
+
index += 1
|
85
|
+
outside_string = false
|
86
|
+
|
87
|
+
while (index = sql.index(search_char, index)) && (sql.getbyte(index + escape_char_offset) == escape_char_value)
|
88
|
+
# skip escaped characters inside string literal
|
89
|
+
index += 2
|
90
|
+
end
|
91
|
+
|
92
|
+
if index
|
93
|
+
# Found end of string
|
94
|
+
sql[start_index..index] = '?'
|
95
|
+
start_index += 1
|
96
|
+
outside_string = true
|
97
|
+
end
|
98
|
+
end
|
99
|
+
|
100
|
+
# Replace integer and decimal floating point numbers
|
101
|
+
sql.gsub!(/\b-?\d+(?:\.\d+)?\b/, '?')
|
102
|
+
|
103
|
+
sql
|
104
|
+
end
|
105
|
+
end
|
106
|
+
|
107
|
+
Database.register_extension(:sql_log_normalizer, SQLLogNormalizer)
|
108
|
+
end
|
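The scanning pass above can be illustrated with a standalone sketch restricted to the :standard escape style (apostrophes doubled inside string literals); the hypothetical normalize_sql below mirrors the extension's string loop and number regexp without needing a Database object:

```ruby
# Replace string and numeric literals in a SQL statement with '?'.
# Handles only the :standard escape style ('' inside a literal).
def normalize_sql(sql)
  sql = sql.dup
  start_index = 0
  outside_string = true

  while outside_string && (index = start_index = sql.index("'", start_index))
    index += 1
    outside_string = false

    # Skip doubled apostrophes (escaped quotes) inside the literal.
    while (index = sql.index("'", index)) && (sql.getbyte(index + 1) == 39)
      index += 2
    end

    if index
      # Found the closing quote; replace the whole literal with '?'.
      sql[start_index..index] = '?'
      start_index += 1
      outside_string = true
    end
  end

  # Replace integer and decimal floating point numbers.
  sql.gsub!(/\b-?\d+(?:\.\d+)?\b/, '?')
  sql
end

puts normalize_sql(%q{SELECT * FROM "t" WHERE (("a" = 1) AND ("b" = 'it''s')) LIMIT 1})
# prints: SELECT * FROM "t" WHERE (("a" = ?) AND ("b" = ?)) LIMIT ?
```

Note how the inner loop steps over the doubled apostrophe in 'it''s' instead of ending the literal there, which is exactly what the escape_char_offset/escape_char_value pair generalizes for the backslash and N'' styles.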
data/lib/sequel/model/associations.rb
CHANGED
@@ -3128,7 +3128,7 @@ module Sequel

    # The secondary eager loading method. Loads all associations in a single query. This
    # method should only be used if you need to filter or order based on columns in associated tables,
-    # or if you have done comparative benchmarking
+    # or if you have done comparative benchmarking and determined it is faster.
    #
    # This method uses <tt>Dataset#graph</tt> to create appropriate aliases for columns in all the
    # tables. Then it uses the graph's metadata to build the associations from the single hash, and
data/lib/sequel/plugins/lazy_attributes.rb
CHANGED
@@ -52,7 +52,9 @@ module Sequel
        unless select = dataset.opts[:select]
          select = dataset.columns.map{|c| Sequel.qualify(dataset.first_source, c)}
        end
+        db_schema = @db_schema
        set_dataset(dataset.select(*select.reject{|c| attrs.include?(dataset.send(:_hash_key_symbol, c))}))
+        @db_schema = db_schema
        attrs.each{|a| define_lazy_attribute_getter(a)}
      end
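The two added lines snapshot @db_schema into a local before calling set_dataset and reassign it afterwards, so the cached schema survives a call that would otherwise clear it. A minimal sketch of that save/restore pattern with hypothetical stand-in classes:

```ruby
class MiniModel
  attr_reader :db_schema

  def initialize
    @db_schema = {id: {type: :integer}} # cached schema metadata
  end

  # Stand-in for set_dataset: clearing cached schema is the side
  # effect the snapshot works around.
  def set_dataset(ds)
    @db_schema = nil
    @dataset = ds
  end

  def select_without_lazy_columns(ds)
    db_schema = @db_schema   # snapshot before the resetting call
    set_dataset(ds)
    @db_schema = db_schema   # restore afterwards
  end
end

m = MiniModel.new
m.select_without_lazy_columns(:dataset_stub)
p m.db_schema # => {:id=>{:type=>:integer}}
```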
data/lib/sequel/version.rb
CHANGED
@@ -6,7 +6,7 @@ module Sequel

  # The minor version of Sequel. Bumped for every non-patch level
  # release, generally around once a month.
-  MINOR =
+  MINOR = 50

  # The tiny version of Sequel. Usually 0, only bumped for bugfix
  # releases that fix regressions from previous versions.
metadata
CHANGED
@@ -1,14 +1,14 @@
--- !ruby/object:Gem::Specification
name: sequel
version: !ruby/object:Gem::Version
-  version: 5.
+  version: 5.50.0
platform: ruby
authors:
- Jeremy Evans
autorequire:
bindir: bin
cert_chain: []
-date: 2021-
+date: 2021-11-01 00:00:00.000000000 Z
dependencies:
- !ruby/object:Gem::Dependency
  name: minitest
@@ -194,6 +194,7 @@ extra_rdoc_files:
- doc/release_notes/5.48.0.txt
- doc/release_notes/5.49.0.txt
- doc/release_notes/5.5.0.txt
+- doc/release_notes/5.50.0.txt
- doc/release_notes/5.6.0.txt
- doc/release_notes/5.7.0.txt
- doc/release_notes/5.8.0.txt
@@ -271,6 +272,7 @@ files:
- doc/release_notes/5.48.0.txt
- doc/release_notes/5.49.0.txt
- doc/release_notes/5.5.0.txt
+- doc/release_notes/5.50.0.txt
- doc/release_notes/5.6.0.txt
- doc/release_notes/5.7.0.txt
- doc/release_notes/5.8.0.txt
@@ -325,6 +327,7 @@ files:
- lib/sequel/adapters/sqlanywhere.rb
- lib/sequel/adapters/sqlite.rb
- lib/sequel/adapters/tinytds.rb
+- lib/sequel/adapters/utils/columns_limit_1.rb
- lib/sequel/adapters/utils/emulate_offset_with_reverse_and_count.rb
- lib/sequel/adapters/utils/emulate_offset_with_row_number.rb
- lib/sequel/adapters/utils/mysql_mysql2.rb
@@ -417,6 +420,7 @@ files:
- lib/sequel/extensions/pg_json.rb
- lib/sequel/extensions/pg_json_ops.rb
- lib/sequel/extensions/pg_loose_count.rb
+- lib/sequel/extensions/pg_multirange.rb
- lib/sequel/extensions/pg_range.rb
- lib/sequel/extensions/pg_range_ops.rb
- lib/sequel/extensions/pg_row.rb
@@ -437,6 +441,7 @@ files:
- lib/sequel/extensions/split_array_nil.rb
- lib/sequel/extensions/sql_comments.rb
- lib/sequel/extensions/sql_expr.rb
+- lib/sequel/extensions/sql_log_normalizer.rb
- lib/sequel/extensions/string_agg.rb
- lib/sequel/extensions/string_date_time.rb
- lib/sequel/extensions/symbol_aref.rb