sequel 5.69.0 → 5.71.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: dc06712b20f476b85d0a08a4d94d96a2c74e0a632053baeecefbecd4cd60d476
4
- data.tar.gz: dd5fdc130ad5a4fb19579c45e426d2a2da6431b14af18dd87fd8785b45fb0684
3
+ metadata.gz: 17fbd14a63974634d39c289194210ea773d5e2017ad24ac3f6a5c89cc6eb481d
4
+ data.tar.gz: 390a9c664bf0a7710bb341ef8699e53b0ed2ea4d4fd2c221f7e140393d52047e
5
5
  SHA512:
6
- metadata.gz: e0c1138064b489cbcdc7740f047095279e1f6e9a71e2ef1502359fbc440e8a2caed3009745d0aadbd51c7d2c13dbc6b971150f370bbe10a0f2e566e0cf219722
7
- data.tar.gz: 364735d4186439b97d2d7e5a83c3ddd13fdcf5af433b97d1892898b3f04115191a4dc77fd5d711a7ad715e8a707c5389aac6bfb539840464cff53e6d3f04ff6f
6
+ metadata.gz: 0af7a0afdad27b270d69eb8248e734408c0c132d209c111f03957a14d1fc2ef44fc4a6da66eaa4986dadf1565f428a140dc9b807bb0215117b015b69057445a1
7
+ data.tar.gz: a2aade8559d69fe43306b6834069e976f57809170e1a9102501e4031d899dd046119de95f485712024e68aa5b077cf63cccc74b7a77b244e821a2c42b87a09a3
data/CHANGELOG CHANGED
@@ -1,3 +1,29 @@
1
+ === 5.71.0 (2023-08-01)
2
+
3
+ * Support ILIKE ANY on PostgreSQL by not forcing the use of ESCAPE for ILIKE (gilesbowkett) (#2066)
4
+
5
+ * Add pg_xmin_optimistic_locking plugin for optimistic locking for all models without database changes (jeremyevans)
6
+
7
+ * Recognize the xid PostgreSQL type as an integer type in the jdbc/postgresql adapter (jeremyevans)
8
+
9
+ * Make set_column_allow_null method reversible in migrations (enescakir) (#2060)
10
+
11
+ === 5.70.0 (2023-07-01)
12
+
13
+ * Make static_cache plugin better handle cases where forbid_lazy_load plugin is already loaded (jeremyevans)
14
+
15
+ * Fix ShardedThreadedConnectionPool#remove_servers to disconnect all connections if removing multiple servers (jeremyevans)
16
+
17
+ * Support SEQUEL_DEFAULT_CONNECTION_POOL environment variable for choosing connection pool when :pool_class Database option is not set (jeremyevans)
18
+
19
+ * Add sharded_timed_queue connection pool (jeremyevans)
20
+
21
+ * Make connection_{validator,expiration} and async_thread_pool extensions work with timed_queue connection pool (jeremyevans)
22
+
23
+ * Make connection_{validator,expiration} extensions raise error when used with single threaded pools (HoneyryderChuck, jeremyevans) (#2049)
24
+
25
+ * Workaround possible resource starvation in threaded connection pool (ioquatix) (#2048)
26
+
1
27
  === 5.69.0 (2023-06-01)
2
28
 
3
29
  * Avoid unsupported flag warning when using the mysql adapter with ruby-mysql 3+ (jeremyevans)
data/doc/migration.rdoc CHANGED
@@ -86,6 +86,7 @@ the following methods:
86
86
  * +add_full_text_index+
87
87
  * +add_spatial_index+
88
88
  * +rename_column+
89
+ * +set_column_allow_null+
89
90
 
90
91
  If you use any other methods, you should create your own +down+ block.
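For illustration, a minimal sketch of a migration relying on the newly reversible method (table and column names are hypothetical); no +down+ block is needed, because Sequel can now reverse +set_column_allow_null+ automatically:

    Sequel.migration do
      change do
        alter_table(:posts) do
          # Reversed as set_column_allow_null :body, true on rollback
          set_column_allow_null :body, false
        end
      end
    end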
91
92
 
data/doc/release_notes/5.70.0.txt ADDED
@@ -0,0 +1,35 @@
1
+ = New Features
2
+
3
+ * A sharded_timed_queue connection pool has been added. This offers
4
+ most of the same features as the sharded_threaded connection pool,
5
+ but uses the new Queue#pop :timeout features added in Ruby 3.2 to
6
+ allow for a simpler and possibly faster and more robust
7
+ implementation.
8
+
9
+ * If a :pool_class option is not specified when creating a Database,
10
+ Sequel will now look at the SEQUEL_DEFAULT_CONNECTION_POOL
11
+ environment variable to determine the connection pool class to use.
12
+ This allows you to set SEQUEL_DEFAULT_CONNECTION_POOL=timed_queue
13
+ on Ruby 3.2 to test with the timed_queue connection pool without
14
+ making any code changes. If the :servers Database option is given,
15
+ Sequel will automatically use the sharded version of the connection
16
+ pool specified by SEQUEL_DEFAULT_CONNECTION_POOL.
17
+
18
+ = Other Improvements
19
+
20
+ * The connection_validator, connection_expiration, and
21
+ async_thread_pool extensions now work with the timed_queue and
22
+ sharded_timed_queue connection pools.
23
+
24
+ * The sharded_threaded connection pool now disconnects connections
25
+ for all specified servers instead of just the last specified server
26
+ when using remove_servers.
27
+
28
+ * The static_cache plugin now recognizes when the forbid_lazy_load
29
+ plugin is already loaded, and does not return instances that
30
+ forbid lazy load for methods that return a single object, such as
31
+ Model.{[],cache_get_pk,first}.
32
+
33
+ * Sequel now displays an informative error message if attempting to
34
+ load the connection_validator or connection_expiration extensions
35
+ when using the single threaded connection pool.
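As an editorial sketch of the features above (connection URL and shard name are hypothetical), the new pool can be selected explicitly on Ruby 3.2+, or without code changes by setting SEQUEL_DEFAULT_CONNECTION_POOL=timed_queue in the environment:

    DB = Sequel.connect('postgres:///myapp',
      pool_class: :sharded_timed_queue,  # Ruby 3.2+ only
      max_connections: 10,
      servers: {read_only: {host: 'replica.example.com'}})

    DB[:items].server(:read_only).all    # issue the query against the :read_only shard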
data/doc/release_notes/5.71.0.txt ADDED
@@ -0,0 +1,21 @@
1
+ = New Features
2
+
3
+ * A pg_xmin_optimistic_locking plugin has been added. This plugin
4
+ uses PostgreSQL's xmin system column to implement optimistic
5
+ locking. The xmin system column is automatically updated whenever
6
+ the database row is updated. You can load this plugin into a
7
+ base model and have all models that subclass from it use optimistic
8
+ locking, without needing any user-defined lock columns.
9
+
10
+ = Other Improvements
11
+
12
+ * set_column_allow_null is now a reversible migration method inside
13
+ alter_table blocks.
14
+
15
+ * The use of ILIKE no longer forces the ESCAPE clause on PostgreSQL,
16
+ which allows the use of ILIKE ANY and other constructions. There
17
+ is no need to use the ESCAPE clause with ILIKE, because the value
18
+ Sequel uses is PostgreSQL's default.
19
+
20
+ * The xid PostgreSQL type is now recognized as an integer type in the
21
+ jdbc/postgresql adapter.
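A brief sketch of the new plugin in use (model and column names are hypothetical), mirroring the behavior described above without adding any lock column to the table:

    class Account < Sequel::Model
      plugin :pg_xmin_optimistic_locking
    end

    a1 = Account[1]
    a2 = Account[1]
    a1.update(balance: 100)  # succeeds and refreshes the row's xmin
    a2.update(balance: 200)  # raises Sequel::NoExistingObject (stale xmin)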
data/lib/sequel/adapters/jdbc/postgresql.rb CHANGED
@@ -199,6 +199,7 @@ module Sequel
199
199
  v.strftime("'%H:%M:%S#{sprintf(".%03d", (v.usec/1000.0).round)}'")
200
200
  end
201
201
 
202
+ INTEGER_TYPE = Java::JavaSQL::Types::INTEGER
202
203
  STRING_TYPE = Java::JavaSQL::Types::VARCHAR
203
204
  ARRAY_TYPE = Java::JavaSQL::Types::ARRAY
204
205
  PG_SPECIFIC_TYPES = [Java::JavaSQL::Types::ARRAY, Java::JavaSQL::Types::OTHER, Java::JavaSQL::Types::STRUCT, Java::JavaSQL::Types::TIME_WITH_TIMEZONE, Java::JavaSQL::Types::TIME].freeze
@@ -219,6 +220,8 @@ module Sequel
219
220
  oid = meta.getField(i).getOID
220
221
  if pr = db.oid_convertor_proc(oid)
221
222
  pr
223
+ elsif oid == 28 # XID (Transaction ID)
224
+ map[INTEGER_TYPE]
222
225
  elsif oid == 2950 # UUID
223
226
  map[STRING_TYPE]
224
227
  elsif meta.getPGType(i) == 'hstore'
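As a small illustration of the xid change under JRuby's jdbc/postgresql adapter (the catalog query is just an example of a column with the xid type):

    # pg_class.relfrozenxid has the xid type; it is now returned as a Ruby Integer
    DB[:pg_class].where(relname: 'accounts').get(:relfrozenxid).class  # => Integer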
data/lib/sequel/adapters/shared/postgres.rb CHANGED
@@ -1745,8 +1745,6 @@ module Sequel
1745
1745
  literal_append(sql, args[0])
1746
1746
  sql << ' ' << op.to_s << ' '
1747
1747
  literal_append(sql, args[1])
1748
- sql << " ESCAPE "
1749
- literal_append(sql, "\\")
1750
1748
  sql << ')'
1751
1749
  else
1752
1750
  super
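With the forced ESCAPE clause removed, constructions such as ILIKE ANY can now be expressed directly; a minimal sketch (table and patterns are hypothetical):

    # Generates: ("name" ILIKE ANY(ARRAY['foo%', 'bar%']))
    # Previously the appended ESCAPE clause made this combination unusable.
    DB[:accounts].where(Sequel.ilike(:name, Sequel.lit("ANY(ARRAY['foo%', 'bar%'])"))).all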
data/lib/sequel/connection_pool/sharded_threaded.rb CHANGED
@@ -2,7 +2,7 @@
2
2
 
3
3
  require_relative 'threaded'
4
4
 
5
- # The slowest and most advanced connection, dealing with both multi-threaded
5
+ # The slowest and most advanced connection pool, dealing with both multi-threaded
6
6
  # access and configurations with multiple shards/servers.
7
7
  #
8
8
  # In addition, this pool subclass also handles scheduling in-use connections
@@ -112,7 +112,7 @@ class Sequel::ShardedThreadedConnectionPool < Sequel::ThreadedConnectionPool
112
112
  # available, creates a new connection. Passes the connection to the supplied
113
113
  # block:
114
114
  #
115
- # pool.hold {|conn| conn.execute('DROP TABLE posts')}
115
+ # pool.hold(:server1) {|conn| conn.execute('DROP TABLE posts')}
116
116
  #
117
117
  # Pool#hold is re-entrant, meaning it can be called recursively in
118
118
  # the same thread without blocking.
@@ -145,12 +145,13 @@ class Sequel::ShardedThreadedConnectionPool < Sequel::ThreadedConnectionPool
145
145
  # except that after it is used, future requests for the server will use the
146
146
  # :default server instead.
147
147
  def remove_servers(servers)
148
- conns = nil
148
+ conns = []
149
+ raise(Sequel::Error, "cannot remove default server") if servers.include?(:default)
150
+
149
151
  sync do
150
- raise(Sequel::Error, "cannot remove default server") if servers.include?(:default)
151
152
  servers.each do |server|
152
153
  if @servers.include?(server)
153
- conns = disconnect_server_connections(server)
154
+ conns.concat(disconnect_server_connections(server))
154
155
  @waiters.delete(server)
155
156
  @available_connections.delete(server)
156
157
  @allocated.delete(server)
@@ -159,9 +160,9 @@ class Sequel::ShardedThreadedConnectionPool < Sequel::ThreadedConnectionPool
159
160
  end
160
161
  end
161
162
 
162
- if conns
163
- disconnect_connections(conns)
164
- end
163
+ nil
164
+ ensure
165
+ disconnect_connections(conns)
165
166
  end
166
167
 
167
168
  # Return an array of symbols for servers in the connection pool.
@@ -186,7 +187,7 @@ class Sequel::ShardedThreadedConnectionPool < Sequel::ThreadedConnectionPool
186
187
  # is available. The calling code should NOT already have the mutex when
187
188
  # calling this.
188
189
  #
189
- # This should return a connection is one is available within the timeout,
190
+ # This should return a connection if one is available within the timeout,
190
191
  # or nil if a connection could not be acquired within the timeout.
191
192
  def acquire(thread, server)
192
193
  if conn = assign_connection(thread, server)
@@ -325,7 +326,7 @@ class Sequel::ShardedThreadedConnectionPool < Sequel::ThreadedConnectionPool
325
326
  # Create the maximum number of connections immediately. The calling code should
326
327
  # NOT have the mutex before calling this.
327
328
  def preconnect(concurrent = false)
328
- conn_servers = @servers.keys.map!{|s| Array.new(max_size - _size(s), s)}.flatten!
329
+ conn_servers = sync{@servers.keys}.map!{|s| Array.new(max_size - _size(s), s)}.flatten!
329
330
 
330
331
  if concurrent
331
332
  conn_servers.map!{|s| Thread.new{[s, make_new(s)]}}.map!(&:value)
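For context, a sketch of the behavior the remove_servers fix addresses (shard names are hypothetical): removing several servers in one call now disconnects the connections of every listed server, not only the last one.

    DB = Sequel.connect('postgres:///myapp', servers: {shard1: {}, shard2: {}})

    DB.remove_servers(:shard1, :shard2)  # connections for both shards are disconnected;
                                         # future requests for them use :default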
data/lib/sequel/connection_pool/sharded_timed_queue.rb ADDED
@@ -0,0 +1,374 @@
1
+ # frozen-string-literal: true
2
+
3
+ # :nocov:
4
+ raise LoadError, "Sequel::ShardedTimedQueueConnectionPool is only available on Ruby 3.2+" unless RUBY_VERSION >= '3.2'
5
+ # :nocov:
6
+
7
+ # A connection pool allowing multi-threaded access to a sharded pool of connections,
8
+ # using a timed queue (only available in Ruby 3.2+).
9
+ class Sequel::ShardedTimedQueueConnectionPool < Sequel::ConnectionPool
10
+ # The maximum number of connections this pool will create per shard.
11
+ attr_reader :max_size
12
+
13
+ # The following additional options are respected:
14
+ # :max_connections :: The maximum number of connections the connection pool
15
+ # will open (default 4)
16
+ # :pool_timeout :: The amount of seconds to wait to acquire a connection
17
+ # before raising a PoolTimeout (default 5)
18
+ # :servers :: A hash of servers to use. Keys should be symbols. If not
19
+ # present, will use a single :default server.
20
+ # :servers_hash :: The base hash to use for the servers. By default,
21
+ # Sequel uses Hash.new(:default). You can use a hash with a default proc
22
+ # that raises an error if you want to catch all cases where a nonexistent
23
+ # server is used.
24
+ def initialize(db, opts = OPTS)
25
+ super
26
+
27
+ @max_size = Integer(opts[:max_connections] || 4)
28
+ raise(Sequel::Error, ':max_connections must be positive') if @max_size < 1
29
+ @mutex = Mutex.new
30
+ @timeout = Float(opts[:pool_timeout] || 5)
31
+
32
+ @allocated = {}
33
+ @sizes = {}
34
+ @queues = {}
35
+ @servers = opts.fetch(:servers_hash, Hash.new(:default))
36
+
37
+ add_servers([:default])
38
+ add_servers(opts[:servers].keys) if opts[:servers]
39
+ end
40
+
41
+ # Adds new servers to the connection pool. Allows for dynamic expansion of the potential replicas/shards
42
+ # at runtime. +servers+ argument should be an array of symbols.
43
+ def add_servers(servers)
44
+ sync do
45
+ servers.each do |server|
46
+ next if @servers.has_key?(server)
47
+
48
+ @servers[server] = server
49
+ @sizes[server] = 0
50
+ @queues[server] = Queue.new
51
+ (@allocated[server] = {}).compare_by_identity
52
+ end
53
+ end
54
+ nil
55
+ end
56
+
57
+ # Yield all of the available connections, and the one currently allocated to
58
+ # this thread (if one is allocated). This will not yield connections currently
59
+ # allocated to other threads, as it is not safe to operate on them.
60
+ def all_connections
61
+ thread = Sequel.current
62
+ sync{@queues.to_a}.each do |server, queue|
63
+ if conn = owned_connection(thread, server)
64
+ yield conn
65
+ end
66
+
67
+ # Use a hash to record all connections already seen. As soon as we
68
+ # come across a connection we've already seen, we stop the loop.
69
+ conns = {}
70
+ conns.compare_by_identity
71
+ while true
72
+ conn = nil
73
+ begin
74
+ break unless (conn = queue.pop(timeout: 0)) && !conns[conn]
75
+ conns[conn] = true
76
+ yield conn
77
+ ensure
78
+ queue.push(conn) if conn
79
+ end
80
+ end
81
+ end
82
+
83
+ nil
84
+ end
85
+
86
+ # Removes all connections currently in the pool's queue. This method has the effect of
87
+ # disconnecting from the database, assuming that no connections are currently
88
+ # being used.
89
+ #
90
+ # Once a connection is requested using #hold, the connection pool
91
+ # creates new connections to the database.
92
+ #
93
+ # If the :server option is provided, it should be a symbol or array of symbols,
94
+ # and then the method will only disconnect connections from those specified shards.
95
+ def disconnect(opts=OPTS)
96
+ (opts[:server] ? Array(opts[:server]) : sync{@servers.keys}).each do |server|
97
+ raise Sequel::Error, "invalid server" unless queue = sync{@queues[server]}
98
+ while conn = queue.pop(timeout: 0)
99
+ disconnect_pool_connection(conn, server)
100
+ end
101
+ fill_queue(server)
102
+ end
103
+ nil
104
+ end
105
+
106
+ # Chooses the first available connection for the given server, or if none are
107
+ # available, creates a new connection. Passes the connection to the supplied
108
+ # block:
109
+ #
110
+ # pool.hold(:server1) {|conn| conn.execute('DROP TABLE posts')}
111
+ #
112
+ # Pool#hold is re-entrant, meaning it can be called recursively in
113
+ # the same thread without blocking.
114
+ #
115
+ # If no connection is immediately available and the pool is already using the maximum
116
+ # number of connections, Pool#hold will block until a connection
117
+ # is available or the timeout expires. If the timeout expires before a
118
+ # connection can be acquired, a Sequel::PoolTimeout is raised.
119
+ def hold(server=:default)
120
+ server = pick_server(server)
121
+ t = Sequel.current
122
+ if conn = owned_connection(t, server)
123
+ return yield(conn)
124
+ end
125
+
126
+ begin
127
+ conn = acquire(t, server)
128
+ yield conn
129
+ rescue Sequel::DatabaseDisconnectError, *@error_classes => e
130
+ if disconnect_error?(e)
131
+ oconn = conn
132
+ conn = nil
133
+ disconnect_pool_connection(oconn, server) if oconn
134
+ sync{@allocated[server].delete(t)}
135
+ fill_queue(server)
136
+ end
137
+ raise
138
+ ensure
139
+ release(t, conn, server) if conn
140
+ end
141
+ end
142
+
143
+ # The total number of connections in the pool. Using a non-existent server will return nil.
144
+ def size(server=:default)
145
+ sync{@sizes[server]}
146
+ end
147
+
148
+ # Remove servers from the connection pool. Similar to disconnecting from all given servers,
149
+ # except that after it is used, future requests for the servers will use the
150
+ # :default server instead.
151
+ #
152
+ # Note that an error will be raised if there are any connections currently checked
153
+ # out for the given servers.
154
+ def remove_servers(servers)
155
+ conns = []
156
+ raise(Sequel::Error, "cannot remove default server") if servers.include?(:default)
157
+
158
+ sync do
159
+ servers.each do |server|
160
+ next unless @servers.has_key?(server)
161
+
162
+ queue = @queues[server]
163
+
164
+ while conn = queue.pop(timeout: 0)
165
+ @sizes[server] -= 1
166
+ conns << conn
167
+ end
168
+
169
+ unless @sizes[server] == 0
170
+ raise Sequel::Error, "cannot remove server #{server} as it has allocated connections"
171
+ end
172
+
173
+ @servers.delete(server)
174
+ @sizes.delete(server)
175
+ @queues.delete(server)
176
+ @allocated.delete(server)
177
+ end
178
+ end
179
+
180
+ nil
181
+ ensure
182
+ disconnect_connections(conns)
183
+ end
184
+
185
+ # Return an array of symbols for servers in the connection pool.
186
+ def servers
187
+ sync{@servers.keys}
188
+ end
189
+
190
+ def pool_type
191
+ :sharded_timed_queue
192
+ end
193
+
194
+ private
195
+
196
+ # Create a new connection, after the pool's current size has already
197
+ # been updated to account for the new connection. If there is an exception
198
+ # when creating the connection, decrement the current size.
199
+ #
200
+ # This should only be called after can_make_new?. If there is an exception
201
+ # between when can_make_new? is called and when preallocated_make_new
202
+ # is called, it has the effect of reducing the maximum size of the
203
+ # connection pool by 1, since the current size of the pool will show a
204
+ # higher number than the number of connections allocated or
205
+ # in the queue.
206
+ #
207
+ # Calling code should not have the mutex when calling this.
208
+ def preallocated_make_new(server)
209
+ make_new(server)
210
+ rescue Exception
211
+ sync{@sizes[server] -= 1}
212
+ raise
213
+ end
214
+
215
+ # Disconnect all available connections immediately, and schedule currently allocated connections for disconnection
216
+ # as soon as they are returned to the pool. The calling code should NOT
217
+ # have the mutex before calling this.
218
+ def disconnect_connections(conns)
219
+ conns.each{|conn| disconnect_connection(conn)}
220
+ end
221
+
222
+ # Decrement the current size of the pool for the server when disconnecting connections.
223
+ #
224
+ # Calling code should not have the mutex when calling this.
225
+ def disconnect_pool_connection(conn, server)
226
+ sync{@sizes[server] -= 1}
227
+ disconnect_connection(conn)
228
+ end
229
+
230
+ # If there are any threads waiting on the queue, try to create
231
+ # new connections in a separate thread if the pool is not yet at the
232
+ # maximum size.
233
+ #
234
+ # The reason for this method is to handle cases where acquire
235
+ # could not retrieve a connection immediately, and the pool
236
+ # was already at the maximum size. In that case, the acquire will
237
+ # wait on the queue until the timeout. This method is called
238
+ # after disconnecting to potentially add new connections to the
239
+ # pool, so the threads that are currently waiting for connections
240
+ # do not timeout after the pool is no longer full.
241
+ def fill_queue(server)
242
+ queue = sync{@queues[server]}
243
+ if queue.num_waiting > 0
244
+ Thread.new do
245
+ while queue.num_waiting > 0 && (conn = try_make_new(server))
246
+ queue.push(conn)
247
+ end
248
+ end
249
+ end
250
+ end
251
+
252
+ # Whether the given size is less than the maximum size of the pool.
253
+ # In that case, the pool's current size is incremented. If this
254
+ # method returns true, space in the pool for the connection is
255
+ # preallocated, and preallocated_make_new should be called to
256
+ # create the connection.
257
+ #
258
+ # Calling code should have the mutex when calling this.
259
+ def can_make_new?(server, current_size)
260
+ if @max_size > current_size
261
+ @sizes[server] += 1
262
+ end
263
+ end
264
+
265
+ # Try to make a new connection if there is space in the pool.
266
+ # If the pool is already full, look for dead threads/fibers and
267
+ # disconnect the related connections.
268
+ #
269
+ # Calling code should not have the mutex when calling this.
270
+ def try_make_new(server)
271
+ return preallocated_make_new(server) if sync{can_make_new?(server, @sizes[server])}
272
+
273
+ to_disconnect = nil
274
+ do_make_new = false
275
+
276
+ sync do
277
+ current_size = @sizes[server]
278
+ alloc = @allocated[server]
279
+ alloc.keys.each do |t|
280
+ unless t.alive?
281
+ (to_disconnect ||= []) << alloc.delete(t)
282
+ current_size -= 1
283
+ end
284
+ end
285
+
286
+ do_make_new = true if can_make_new?(server, current_size)
287
+ end
288
+
289
+ begin
290
+ preallocated_make_new(server) if do_make_new
291
+ ensure
292
+ if to_disconnect
293
+ to_disconnect.each{|conn| disconnect_pool_connection(conn, server)}
294
+ fill_queue(server)
295
+ end
296
+ end
297
+ end
298
+
299
+ # Assigns a connection to the supplied thread, if one
300
+ # is available.
301
+ #
302
+ # This should return a connection if one is available within the timeout,
303
+ # or raise PoolTimeout if a connection could not be acquired within the timeout.
304
+ #
305
+ # Calling code should not have the mutex when calling this.
306
+ def acquire(thread, server)
307
+ queue = sync{@queues[server]}
308
+ if conn = queue.pop(timeout: 0) || try_make_new(server) || queue.pop(timeout: @timeout)
309
+ sync{@allocated[server][thread] = conn}
310
+ else
311
+ name = db.opts[:name]
312
+ raise ::Sequel::PoolTimeout, "timeout: #{@timeout}, server: #{server}#{", database name: #{name}" if name}"
313
+ end
314
+ end
315
+
316
+ # Returns the connection owned by the supplied thread for the given server,
317
+ # if any. The calling code should NOT already have the mutex before calling this.
318
+ def owned_connection(thread, server)
319
+ sync{@allocated[server][thread]}
320
+ end
321
+
322
+ # If the server given is in the hash, return it, otherwise, return the default server.
323
+ def pick_server(server)
324
+ sync{@servers[server]}
325
+ end
326
+
327
+ # Create the maximum number of connections immediately. This should not be called
328
+ # with a true argument unless no code is currently operating on the database.
329
+ #
330
+ # Calling code should not have the mutex when calling this.
331
+ def preconnect(concurrent = false)
332
+ conn_servers = sync{@servers.keys}.map!{|s| Array.new(@max_size - @sizes[s], s)}.flatten!
333
+
334
+ if concurrent
335
+ conn_servers.map! do |server|
336
+ queue = sync{@queues[server]}
337
+ Thread.new do
338
+ if conn = try_make_new(server)
339
+ queue.push(conn)
340
+ end
341
+ end
342
+ end.each(&:value)
343
+ else
344
+ conn_servers.each do |server|
345
+ if conn = try_make_new(server)
346
+ sync{@queues[server]}.push(conn)
347
+ end
348
+ end
349
+ end
350
+
351
+ nil
352
+ end
353
+
354
+ # Releases the connection assigned to the supplied thread back to the pool.
355
+ #
356
+ # Calling code should not have the mutex when calling this.
357
+ def release(thread, _, server)
358
+ checkin_connection(sync{@allocated[server].delete(thread)}, server)
359
+ nil
360
+ end
361
+
362
+ # Adds a connection to the queue of available connections, returns the connection.
363
+ def checkin_connection(conn, server)
364
+ sync{@queues[server]}.push(conn)
365
+ conn
366
+ end
367
+
368
+ # Yield to the block while inside the mutex.
369
+ #
370
+ # Calling code should not have the mutex when calling this.
371
+ def sync
372
+ @mutex.synchronize{yield}
373
+ end
374
+ end
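A short sketch of the public methods defined above (shard name hypothetical):

    DB = Sequel.connect('postgres:///myapp',
      pool_class: :sharded_timed_queue,
      servers: {read_only: {}})

    DB.pool.pool_type         # => :sharded_timed_queue
    DB.pool.servers           # => [:default, :read_only]
    DB.pool.max_size          # => 4 (per shard, from :max_connections)
    DB.pool.size(:read_only)  # => number of connections currently open for that shard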
data/lib/sequel/connection_pool/threaded.rb CHANGED
@@ -274,6 +274,12 @@ class Sequel::ThreadedConnectionPool < Sequel::ConnectionPool
274
274
  end
275
275
 
276
276
  @waiter.signal
277
+
278
+ # Ensure that after signalling the condition, some other thread is given the
279
+ # opportunity to acquire the mutex.
280
+ # See <https://github.com/socketry/async/issues/99> for more context.
281
+ sleep(0)
282
+
277
283
  nil
278
284
  end
279
285
 
data/lib/sequel/connection_pool/timed_queue.rb CHANGED
@@ -81,7 +81,7 @@ class Sequel::TimedQueueConnectionPool < Sequel::ConnectionPool
81
81
  # connection can be acquired, a Sequel::PoolTimeout is raised.
82
82
  def hold(server=nil)
83
83
  t = Sequel.current
84
- if conn = sync{@allocated[t]}
84
+ if conn = owned_connection(t)
85
85
  return yield(conn)
86
86
  end
87
87
 
@@ -223,8 +223,14 @@ class Sequel::TimedQueueConnectionPool < Sequel::ConnectionPool
223
223
  end
224
224
  end
225
225
 
226
+ # Returns the connection owned by the supplied thread,
227
+ # if any. The calling code should NOT already have the mutex before calling this.
228
+ def owned_connection(thread)
229
+ sync{@allocated[thread]}
230
+ end
231
+
226
232
  # Create the maximum number of connections immediately. This should not be called
227
- # with a true argument unles no code is currently operating on the database.
233
+ # with a true argument unless no code is currently operating on the database.
228
234
  #
229
235
  # Calling code should not have the mutex when calling this.
230
236
  def preconnect(concurrent = false)
@@ -245,7 +251,14 @@ class Sequel::TimedQueueConnectionPool < Sequel::ConnectionPool
245
251
  #
246
252
  # Calling code should not have the mutex when calling this.
247
253
  def release(thread)
248
- @queue.push(sync{@allocated.delete(thread)})
254
+ checkin_connection(sync{@allocated.delete(thread)})
255
+ nil
256
+ end
257
+
258
+ # Adds a connection to the queue of available connections, returns the connection.
259
+ def checkin_connection(conn)
260
+ @queue.push(conn)
261
+ conn
249
262
  end
250
263
 
251
264
  # Yield to the block while inside the mutex.
data/lib/sequel/connection_pool.rb CHANGED
@@ -32,6 +32,7 @@ class Sequel::ConnectionPool
32
32
  :sharded_threaded => :ShardedThreadedConnectionPool,
33
33
  :sharded_single => :ShardedSingleConnectionPool,
34
34
  :timed_queue => :TimedQueueConnectionPool,
35
+ :sharded_timed_queue => :ShardedTimedQueueConnectionPool,
35
36
  }
36
37
  POOL_CLASS_MAP.to_a.each{|k, v| POOL_CLASS_MAP[k.to_s] = v}
37
38
  POOL_CLASS_MAP.freeze
@@ -42,7 +43,8 @@ class Sequel::ConnectionPool
42
43
  # Return a pool subclass instance based on the given options. If a <tt>:pool_class</tt>
43
44
  # option is provided is provided, use that pool class, otherwise
44
45
  # use a new instance of an appropriate pool subclass based on the
45
- # <tt>:single_threaded</tt> and <tt>:servers</tt> options.
46
+ # +SEQUEL_DEFAULT_CONNECTION_POOL+ environment variable if set, or
47
+ # the <tt>:single_threaded</tt> and <tt>:servers</tt> options, otherwise.
46
48
  def get_pool(db, opts = OPTS)
47
49
  connection_pool_class(opts).new(db, opts)
48
50
  end
@@ -62,9 +64,14 @@ class Sequel::ConnectionPool
62
64
  end
63
65
 
64
66
  pc
67
+ elsif pc = ENV['SEQUEL_DEFAULT_CONNECTION_POOL']
68
+ pc = "sharded_#{pc}" if opts[:servers] && !pc.start_with?('sharded_')
69
+ connection_pool_class(:pool_class=>pc)
65
70
  else
66
71
  pc = if opts[:single_threaded]
67
72
  opts[:servers] ? :sharded_single : :single
73
+ #elsif RUBY_VERSION >= '3.2' # SEQUEL6 or maybe earlier
74
+ # opts[:servers] ? :sharded_timed_queue : :timed_queue
68
75
  else
69
76
  opts[:servers] ? :sharded_threaded : :threaded
70
77
  end
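A sketch of how the environment variable interacts with the :servers option, following the logic above (database URL hypothetical):

    # With SEQUEL_DEFAULT_CONNECTION_POOL=timed_queue set in the environment and
    # no :pool_class option, a sharded configuration automatically receives the
    # sharded_ prefixed variant of the requested pool:
    DB = Sequel.connect('postgres:///myapp', servers: {replica: {}})
    DB.pool.pool_type  # => :sharded_timed_queue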
data/lib/sequel/extensions/async_thread_pool.rb CHANGED
@@ -338,8 +338,9 @@ module Sequel
338
338
  module DatabaseMethods
339
339
  def self.extended(db)
340
340
  db.instance_exec do
341
- unless pool.pool_type == :threaded || pool.pool_type == :sharded_threaded
342
- raise Error, "can only load async_thread_pool extension if using threaded or sharded_threaded connection pool"
341
+ case pool.pool_type
342
+ when :single, :sharded_single
343
+ raise Error, "cannot load async_thread_pool extension if using single or sharded_single connection pool"
343
344
  end
344
345
 
345
346
  num_async_threads = opts[:num_async_threads] ? typecast_value_integer(opts[:num_async_threads]) : (Integer(opts[:max_connections] || 4))
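A minimal sketch of the extension together with the timed_queue pool, now supported (table names are hypothetical):

    DB = Sequel.connect('postgres:///myapp', pool_class: :timed_queue)  # Ruby 3.2+
    DB.extension :async_thread_pool

    users = DB[:users].async.all  # executed on a worker thread
    posts = DB[:posts].async.all  # runs concurrently with the query above
    users.length                  # touching a result waits for it if necessary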
data/lib/sequel/extensions/connection_expiration.rb CHANGED
@@ -15,16 +15,16 @@
15
15
  #
16
16
  # DB.pool.connection_expiration_timeout = 3600 # 1 hour
17
17
  #
18
- # Note that this extension only affects the default threaded
19
- # and the sharded threaded connection pool. The single
20
- # threaded and sharded single threaded connection pools are
21
- # not affected. As the only reason to use the single threaded
18
+ # Note that this extension does not work with the single
19
+ # threaded and sharded single threaded connection pools.
20
+ # As the only reason to use the single threaded
22
21
  # pools is for speed, and this extension makes the connection
23
22
  # pool slower, there's not much point in modifying this
24
23
  # extension to work with the single threaded pools. The
25
- # threaded pools work fine even in single threaded code, so if
26
- # you are currently using a single threaded pool and want to
27
- # use this extension, switch to using a threaded pool.
24
+ # non-single threaded pools work fine even in single threaded
25
+ # code, so if you are currently using a single threaded pool
26
+ # and want to use this extension, switch to using another
27
+ # pool.
28
28
  #
29
29
  # Related module: Sequel::ConnectionExpiration
30
30
 
@@ -45,6 +45,11 @@ module Sequel
45
45
 
46
46
  # Initialize the data structures used by this extension.
47
47
  def self.extended(pool)
48
+ case pool.pool_type
49
+ when :single, :sharded_single
50
+ raise Error, "cannot load connection_expiration extension if using single or sharded_single connection pool"
51
+ end
52
+
48
53
  pool.instance_exec do
49
54
  sync do
50
55
  @connection_expiration_timestamps ||= {}
@@ -79,8 +84,9 @@ module Sequel
79
84
  (cet = sync{@connection_expiration_timestamps[conn]}) &&
80
85
  Sequel.elapsed_seconds_since(cet[0]) > cet[1]
81
86
 
82
- if pool_type == :sharded_threaded
83
- sync{allocated(a.last).delete(Sequel.current)}
87
+ case pool_type
88
+ when :sharded_threaded, :sharded_timed_queue
89
+ sync{@allocated[a.last].delete(Sequel.current)}
84
90
  else
85
91
  sync{@allocated.delete(Sequel.current)}
86
92
  end
data/lib/sequel/extensions/connection_validator.rb CHANGED
@@ -34,16 +34,16 @@
34
34
  # web requests to the number to connections in the database
35
35
  # connection pool.
36
36
  #
37
- # Note that this extension only affects the default threaded
38
- # and the sharded threaded connection pool. The single
39
- # threaded and sharded single threaded connection pools are
40
- # not affected. As the only reason to use the single threaded
37
+ # Note that this extension does not work with the single
38
+ # threaded and sharded single threaded connection pools.
39
+ # As the only reason to use the single threaded
41
40
  # pools is for speed, and this extension makes the connection
42
41
  # pool slower, there's not much point in modifying this
43
42
  # extension to work with the single threaded pools. The
44
- # threaded pools work fine even in single threaded code, so if
45
- # you are currently using a single threaded pool and want to
46
- # use this extension, switch to using a threaded pool.
43
+ # non-single threaded pools work fine even in single threaded
44
+ # code, so if you are currently using a single threaded pool
45
+ # and want to use this extension, switch to using another
46
+ # pool.
47
47
  #
48
48
  # Related module: Sequel::ConnectionValidator
49
49
 
@@ -61,6 +61,11 @@ module Sequel
61
61
 
62
62
  # Initialize the data structures used by this extension.
63
63
  def self.extended(pool)
64
+ case pool.pool_type
65
+ when :single, :sharded_single
66
+ raise Error, "cannot load connection_validator extension if using single or sharded_single connection pool"
67
+ end
68
+
64
69
  pool.instance_exec do
65
70
  sync do
66
71
  @connection_timestamps ||= {}
@@ -103,8 +108,9 @@ module Sequel
103
108
  Sequel.elapsed_seconds_since(timer) > @connection_validation_timeout &&
104
109
  !db.valid_connection?(conn)
105
110
 
106
- if pool_type == :sharded_threaded
107
- sync{allocated(a.last).delete(Sequel.current)}
111
+ case pool_type
112
+ when :sharded_threaded, :sharded_timed_queue
113
+ sync{@allocated[a.last].delete(Sequel.current)}
108
114
  else
109
115
  sync{@allocated.delete(Sequel.current)}
110
116
  end
@@ -120,4 +126,3 @@ module Sequel
120
126
 
121
127
  Database.register_extension(:connection_validator){|db| db.pool.extend(ConnectionValidator)}
122
128
  end
123
-
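A short sketch of the validator with the timed_queue pool, now supported (the timeout value is only an example); connection_expiration is configured the same way:

    DB = Sequel.connect('postgres:///myapp', pool_class: :timed_queue)  # Ruby 3.2+
    DB.extension :connection_validator
    DB.pool.connection_validation_timeout = 300  # validate connections idle over 5 minutes

    # Loading either extension while using a single threaded pool now raises
    # Sequel::Error instead of being silently ignored.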
data/lib/sequel/extensions/migration.rb CHANGED
@@ -270,6 +270,10 @@ module Sequel
270
270
  def rename_column(name, new_name)
271
271
  @actions << [:rename_column, new_name, name]
272
272
  end
273
+
274
+ def set_column_allow_null(name, allow_null=true)
275
+ @actions << [:set_column_allow_null, name, !allow_null]
276
+ end
273
277
  end
274
278
 
275
279
  # The preferred method for writing Sequel migrations, using a DSL:
data/lib/sequel/extensions/server_block.rb CHANGED
@@ -69,7 +69,8 @@ module Sequel
69
69
  # Also defines the with_server method on the receiver for easy use.
70
70
  def self.extended(db)
71
71
  pool = db.pool
72
- if defined?(ShardedThreadedConnectionPool) && pool.is_a?(ShardedThreadedConnectionPool)
72
+ case pool.pool_type
73
+ when :sharded_threaded, :sharded_timed_queue
73
74
  pool.extend(ThreadedServerBlock)
74
75
  pool.instance_variable_set(:@default_servers, {})
75
76
  else
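A sketch of the server_block extension with the new pool, per the dispatch above (shard name hypothetical):

    DB = Sequel.connect('postgres:///myapp',
      pool_class: :sharded_timed_queue,
      servers: {read_only: {}})
    DB.extension :server_block

    DB.with_server(:read_only) do
      DB[:items].count  # issued against the :read_only shard
    end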
data/lib/sequel/plugins/mssql_optimistic_locking.rb CHANGED
@@ -26,57 +26,27 @@ module Sequel
26
26
  module MssqlOptimisticLocking
27
27
  # Load the instance_filters plugin into the model.
28
28
  def self.apply(model, opts=OPTS)
29
- model.plugin :instance_filters
29
+ model.plugin(:optimistic_locking_base)
30
30
  end
31
31
 
32
- # Set the lock_column to the :lock_column option (default: :timestamp)
32
+ # Set the lock column
33
33
  def self.configure(model, opts=OPTS)
34
- model.lock_column = opts[:lock_column] || :timestamp
34
+ model.lock_column = opts[:lock_column] || model.lock_column || :timestamp
35
35
  end
36
-
37
- module ClassMethods
38
- # The timestamp/rowversion column containing the version for the current row.
39
- attr_accessor :lock_column
40
-
41
- Plugins.inherited_instance_variables(self, :@lock_column=>nil)
42
- end
43
-
36
+
44
37
  module InstanceMethods
45
- # Add the lock column instance filter to the object before destroying it.
46
- def before_destroy
47
- lock_column_instance_filter
48
- super
49
- end
50
-
51
- # Add the lock column instance filter to the object before updating it.
52
- def before_update
53
- lock_column_instance_filter
54
- super
55
- end
56
-
57
38
  private
58
39
 
59
- # Add the lock column instance filter to the object.
60
- def lock_column_instance_filter
61
- lc = model.lock_column
62
- instance_filter(lc=>Sequel.blob(get_column_value(lc)))
63
- end
64
-
65
- # Clear the instance filters when refreshing, so that attempting to
66
- # refresh after a failed save removes the previous lock column filter
67
- # (the new one will be added before updating).
68
- def _refresh(ds)
69
- clear_instance_filters
70
- super
40
+ # Make the instance filter value a blob.
41
+ def lock_column_instance_filter_value
42
+ Sequel.blob(super)
71
43
  end
72
44
 
73
45
  # Remove the lock column from the columns to update.
74
46
  # SQL Server automatically updates the lock column value, and does not like
75
47
  # it to be assigned.
76
48
  def _save_update_all_columns_hash
77
- v = @values.dup
78
- cc = changed_columns
79
- Array(primary_key).each{|x| v.delete(x) unless cc.include?(x)}
49
+ v = super
80
50
  v.delete(model.lock_column)
81
51
  v
82
52
  end
data/lib/sequel/plugins/optimistic_locking.rb CHANGED
@@ -12,64 +12,31 @@ module Sequel
12
12
  # p1 = Person[1]
13
13
  # p2 = Person[1]
14
14
  # p1.update(name: 'Jim') # works
15
- # p2.update(name: 'Bob') # raises Sequel::Plugins::OptimisticLocking::Error
15
+ # p2.update(name: 'Bob') # raises Sequel::NoExistingObject
16
16
  #
17
17
  # In order for this plugin to work, you need to make sure that the database
18
- # table has a +lock_version+ column (or other column you name via the lock_column
19
- # class level accessor) that defaults to 0.
18
+ # table has a +lock_version+ column that defaults to 0. To change the column
19
+ # used, provide a +:lock_column+ option when loading the plugin:
20
+ #
21
+ # plugin :optimistic_locking, lock_column: :version
20
22
  #
21
23
  # This plugin relies on the instance_filters plugin.
22
24
  module OptimisticLocking
23
25
  # Exception class raised when trying to update or destroy a stale object.
24
26
  Error = Sequel::NoExistingObject
25
27
 
26
- # Load the instance_filters plugin into the model.
27
28
  def self.apply(model, opts=OPTS)
28
- model.plugin :instance_filters
29
+ model.plugin(:optimistic_locking_base)
29
30
  end
30
31
 
31
- # Set the lock_column to the :lock_column option, or :lock_version if
32
- # that option is not given.
32
+ # Set the lock column
33
33
  def self.configure(model, opts=OPTS)
34
- model.lock_column = opts[:lock_column] || :lock_version
34
+ model.lock_column = opts[:lock_column] || model.lock_column || :lock_version
35
35
  end
36
-
37
- module ClassMethods
38
- # The column holding the version of the lock
39
- attr_accessor :lock_column
40
-
41
- Plugins.inherited_instance_variables(self, :@lock_column=>nil)
42
- end
43
-
36
+
44
37
  module InstanceMethods
45
- # Add the lock column instance filter to the object before destroying it.
46
- def before_destroy
47
- lock_column_instance_filter
48
- super
49
- end
50
-
51
- # Add the lock column instance filter to the object before updating it.
52
- def before_update
53
- lock_column_instance_filter
54
- super
55
- end
56
-
57
38
  private
58
39
 
59
- # Add the lock column instance filter to the object.
60
- def lock_column_instance_filter
61
- lc = model.lock_column
62
- instance_filter(lc=>get_column_value(lc))
63
- end
64
-
65
- # Clear the instance filters when refreshing, so that attempting to
66
- # refresh after a failed save removes the previous lock column filter
67
- # (the new one will be added before updating).
68
- def _refresh(ds)
69
- clear_instance_filters
70
- super
71
- end
72
-
73
40
  # Only update the row if it has the same lock version, and increment the
74
41
  # lock version.
75
42
  def _update_columns(columns)
data/lib/sequel/plugins/optimistic_locking_base.rb ADDED
@@ -0,0 +1,55 @@
1
+ # frozen-string-literal: true
2
+
3
+ module Sequel
4
+ module Plugins
5
+ # Base for other optimistic locking plugins
6
+ module OptimisticLockingBase
7
+ # Load the instance_filters plugin into the model.
8
+ def self.apply(model)
9
+ model.plugin :instance_filters
10
+ end
11
+
12
+ module ClassMethods
13
+ # The column holding the version of the lock
14
+ attr_accessor :lock_column
15
+
16
+ Plugins.inherited_instance_variables(self, :@lock_column=>nil)
17
+ end
18
+
19
+ module InstanceMethods
20
+ # Add the lock column instance filter to the object before destroying it.
21
+ def before_destroy
22
+ lock_column_instance_filter
23
+ super
24
+ end
25
+
26
+ # Add the lock column instance filter to the object before updating it.
27
+ def before_update
28
+ lock_column_instance_filter
29
+ super
30
+ end
31
+
32
+ private
33
+
34
+ # Add the lock column instance filter to the object.
35
+ def lock_column_instance_filter
36
+ instance_filter(model.lock_column=>lock_column_instance_filter_value)
37
+ end
38
+
39
+ # Use the current value of the lock column
40
+ def lock_column_instance_filter_value
41
+ public_send(model.lock_column)
42
+ end
43
+
44
+ # Clear the instance filters when refreshing, so that attempting to
45
+ # refresh after a failed save removes the previous lock column filter
46
+ # (the new one will be added before updating).
47
+ def _refresh(ds)
48
+ clear_instance_filters
49
+ super
50
+ end
51
+ end
52
+ end
53
+ end
54
+ end
55
+
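The base module above is not normally loaded directly; the concrete optimistic locking plugins build on it and share its lock_column accessor. A small sketch (model and column names hypothetical):

    class Document < Sequel::Model
      plugin :optimistic_locking, lock_column: :version
    end

    Document.lock_column  # => :version (accessor provided by the shared base plugin)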
data/lib/sequel/plugins/pg_xmin_optimistic_locking.rb ADDED
@@ -0,0 +1,109 @@
1
+ # frozen-string-literal: true
2
+
3
+ module Sequel
4
+ module Plugins
5
+ # This plugin implements optimistic locking mechanism on PostgreSQL based
6
+ # on the xmin of the row. The xmin system column is automatically set to
7
+ # the current transaction id whenever the row is inserted or updated:
8
+ #
9
+ # class Person < Sequel::Model
10
+ # plugin :pg_xmin_optimistic_locking
11
+ # end
12
+ # p1 = Person[1]
13
+ # p2 = Person[1]
14
+ # p1.update(name: 'Jim') # works
15
+ # p2.update(name: 'Bob') # raises Sequel::NoExistingObject
16
+ #
17
+ # The advantage of pg_xmin_optimistic_locking plugin compared to the
18
+ # regular optimistic_locking plugin is that it does not require any
19
+ # additional columns setup on the model. This allows it to be loaded
20
+ # in the base model and have all subclasses automatically use
21
+ # optimistic locking. The disadvantage is that testing can be
22
+ # more difficult if you are modifying the underlying row between
23
+ # when a model is retrieved and when it is saved.
24
+ #
25
+ # This plugin may not work with the class_table_inheritance plugin.
26
+ #
27
+ # This plugin relies on the instance_filters plugin.
28
+ module PgXminOptimisticLocking
29
+ WILDCARD = LiteralString.new('*').freeze
30
+
31
+ # Define the xmin column accessor
32
+ def self.apply(model)
33
+ model.instance_exec do
34
+ plugin(:optimistic_locking_base)
35
+ @lock_column = :xmin
36
+ def_column_accessor(:xmin)
37
+ end
38
+ end
39
+
40
+ # Update the dataset to append the xmin column if it is usable
41
+ # and there is a dataset for the model.
42
+ def self.configure(model)
43
+ model.instance_exec do
44
+ set_dataset(@dataset) if @dataset
45
+ end
46
+ end
47
+
48
+ module ClassMethods
49
+ private
50
+
51
+ # Ensure the dataset selects the xmin column if doing so
52
+ def convert_input_dataset(ds)
53
+ append_xmin_column_if_usable(super)
54
+ end
55
+
56
+ # If the xmin column is not already selected, and selecting it does not
57
+ # raise an error, append it to the selections.
58
+ def append_xmin_column_if_usable(ds)
59
+ select = ds.opts[:select]
60
+
61
+ unless select && select.include?(:xmin)
62
+ xmin_ds = ds.select_append(:xmin)
63
+ begin
64
+ columns = xmin_ds.columns!
65
+ rescue Sequel::DatabaseConnectionError, Sequel::DatabaseDisconnectError
66
+ raise
67
+ rescue Sequel::DatabaseError
68
+ # ignore, could be view, subquery, table returning function, etc.
69
+ else
70
+ ds = xmin_ds if columns.include?(:xmin)
71
+ end
72
+ end
73
+
74
+ ds
75
+ end
76
+ end
77
+
78
+ module InstanceMethods
79
+ private
80
+
81
+ # Only set the lock column instance filter if there is an xmin value.
82
+ def lock_column_instance_filter
83
+ super if @values[:xmin]
84
+ end
85
+
86
+ # Include xmin value when inserting initial row
87
+ def _insert_dataset
88
+ super.returning(WILDCARD, :xmin)
89
+ end
90
+
91
+ # Remove the xmin from the columns to update.
92
+ # PostgreSQL automatically updates the xmin value, and it cannot be assigned.
93
+ def _save_update_all_columns_hash
94
+ v = super
95
+ v.delete(:xmin)
96
+ v
97
+ end
98
+
99
+ # Add a RETURNING clause to fetch the updated xmin when updating the row.
100
+ def _update_without_checking(columns)
101
+ ds = _update_dataset
102
+ rows = ds.clone(ds.send(:default_server_opts, :sql=>ds.returning(:xmin).update_sql(columns))).all
103
+ values[:xmin] = rows.first[:xmin] unless rows.empty?
104
+ rows.length
105
+ end
106
+ end
107
+ end
108
+ end
109
+ end
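A brief sketch of the xmin handling implemented above (model name hypothetical): the column is appended to the selection automatically and kept current after saves.

    account = Account[1]         # SELECT *, xmin FROM accounts WHERE (id = 1)
    account.xmin                 # transaction id that last wrote the row
    account.update(balance: 10)  # UPDATE ... RETURNING xmin refreshes the stored value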
data/lib/sequel/plugins/static_cache.rb CHANGED
@@ -64,6 +64,9 @@ module Sequel
64
64
  def self.configure(model, opts=OPTS)
65
65
  model.instance_exec do
66
66
  @static_cache_frozen = opts.fetch(:frozen, true)
67
+ if @static_cache_frozen && defined?(::Sequel::Plugins::ForbidLazyLoad::ClassMethods) && is_a?(::Sequel::Plugins::ForbidLazyLoad::ClassMethods)
68
+ extend ForbidLazyLoadClassMethods
69
+ end
67
70
  load_cache
68
71
  end
69
72
  end
@@ -246,6 +249,41 @@ module Sequel
246
249
  end
247
250
  end
248
251
 
252
+ module ForbidLazyLoadClassMethods
253
+ # Do not forbid lazy loading for single object retrieval.
254
+ def cache_get_pk(pk)
255
+ primary_key_lookup(pk)
256
+ end
257
+
258
+ # Use static cache to return first arguments.
259
+ def first(*args)
260
+ if !defined?(yield) && args.empty?
261
+ if o = @all.first
262
+ _static_cache_frozen_copy(o)
263
+ end
264
+ else
265
+ super
266
+ end
267
+ end
268
+
269
+ private
270
+
271
+ # Return a frozen copy of the object that does not have lazy loading
272
+ # forbidden.
273
+ def _static_cache_frozen_copy(o)
274
+ o = call(Hash[o.values])
275
+ o.errors.freeze
276
+ o.freeze
277
+ end
278
+
279
+ # Do not forbid lazy loading for single object retrieval.
280
+ def primary_key_lookup(pk)
281
+ if o = cache[pk]
282
+ _static_cache_frozen_copy(o)
283
+ end
284
+ end
285
+ end
286
+
249
287
  module InstanceMethods
250
288
  # Disallowing destroying the object unless the frozen: false option was used.
251
289
  def before_destroy
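A sketch of the interaction handled above (model and association are hypothetical); forbid_lazy_load must be loaded before static_cache for the new behavior to apply:

    class Status < Sequel::Model
      many_to_one :category
      plugin :forbid_lazy_load
      plugin :static_cache
    end

    # Single-object lookups now return copies that do not forbid lazy loading,
    # so this association access no longer raises ForbidLazyLoad::Error:
    Status.first.category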
data/lib/sequel/version.rb CHANGED
@@ -6,7 +6,7 @@ module Sequel
6
6
 
7
7
  # The minor version of Sequel. Bumped for every non-patch level
8
8
  # release, generally around once a month.
9
- MINOR = 69
9
+ MINOR = 71
10
10
 
11
11
  # The tiny version of Sequel. Usually 0, only bumped for bugfix
12
12
  # releases that fix regressions from previous versions.
metadata CHANGED
@@ -1,14 +1,14 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: sequel
3
3
  version: !ruby/object:Gem::Version
4
- version: 5.69.0
4
+ version: 5.71.0
5
5
  platform: ruby
6
6
  authors:
7
7
  - Jeremy Evans
8
8
  autorequire:
9
9
  bindir: bin
10
10
  cert_chain: []
11
- date: 2023-06-01 00:00:00.000000000 Z
11
+ date: 2023-08-01 00:00:00.000000000 Z
12
12
  dependencies:
13
13
  - !ruby/object:Gem::Dependency
14
14
  name: minitest
@@ -202,6 +202,8 @@ extra_rdoc_files:
202
202
  - doc/release_notes/5.68.0.txt
203
203
  - doc/release_notes/5.69.0.txt
204
204
  - doc/release_notes/5.7.0.txt
205
+ - doc/release_notes/5.70.0.txt
206
+ - doc/release_notes/5.71.0.txt
205
207
  - doc/release_notes/5.8.0.txt
206
208
  - doc/release_notes/5.9.0.txt
207
209
  files:
@@ -299,6 +301,8 @@ files:
299
301
  - doc/release_notes/5.68.0.txt
300
302
  - doc/release_notes/5.69.0.txt
301
303
  - doc/release_notes/5.7.0.txt
304
+ - doc/release_notes/5.70.0.txt
305
+ - doc/release_notes/5.71.0.txt
302
306
  - doc/release_notes/5.8.0.txt
303
307
  - doc/release_notes/5.9.0.txt
304
308
  - doc/schema_modification.rdoc
@@ -365,6 +369,7 @@ files:
365
369
  - lib/sequel/connection_pool.rb
366
370
  - lib/sequel/connection_pool/sharded_single.rb
367
371
  - lib/sequel/connection_pool/sharded_threaded.rb
372
+ - lib/sequel/connection_pool/sharded_timed_queue.rb
368
373
  - lib/sequel/connection_pool/single.rb
369
374
  - lib/sequel/connection_pool/threaded.rb
370
375
  - lib/sequel/connection_pool/timed_queue.rb
@@ -549,9 +554,11 @@ files:
549
554
  - lib/sequel/plugins/mssql_optimistic_locking.rb
550
555
  - lib/sequel/plugins/nested_attributes.rb
551
556
  - lib/sequel/plugins/optimistic_locking.rb
557
+ - lib/sequel/plugins/optimistic_locking_base.rb
552
558
  - lib/sequel/plugins/pg_array_associations.rb
553
559
  - lib/sequel/plugins/pg_auto_constraint_validations.rb
554
560
  - lib/sequel/plugins/pg_row.rb
561
+ - lib/sequel/plugins/pg_xmin_optimistic_locking.rb
555
562
  - lib/sequel/plugins/prepared_statements.rb
556
563
  - lib/sequel/plugins/prepared_statements_safe.rb
557
564
  - lib/sequel/plugins/primary_key_lookup_check_values.rb