pgtk 0.31.7 → 0.31.9

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 9620e5581d667750f82a36568c6b64fcb27c2c588b93fc433869f1a430159128
-  data.tar.gz: 281908df6408d31c7ae69ff9ea4a4122881b7ae54c7906e87d10abf885d625d4
+  metadata.gz: 52f06b5f116d87eb7229752520f8cfff19e5c97d53c94a5acac935ad5f5d74e7
+  data.tar.gz: 83f18d647dbcd7ab1e503ac14da13d3abf77353f70acc2937090b237bcf28af7
 SHA512:
-  metadata.gz: f633a5925cbf6f8ad61247c898a7d58a59d8a64dcf7e23597f625ddfad816e856ae00b6b4960839abec9d763b931a4c519bce49c3bd2724fef38854e7c316c7e
-  data.tar.gz: a70e63d0772e40632333d382d4c8598a6a1ab60d72b3ce7bc8675ed12aced75acdc6117fbfccfc6fd9094e927136f7b9f983ba99491b673a392fd7fdb645597a
+  metadata.gz: 4f70d7c3755cee075cd586b4ed3830cc2c932bc4a0213e36fe18104e9b4c4c6ac74fcb00e985aeda37766c62b399ffd829a537f59582be7cb2be31a7eacfaac6
+  data.tar.gz: 9cf6517ef4f27aaf6403fd3dbc522b11c69853a7ccc704f009126f29d7b1aedbcf852522f3d90ff242b3bf482f51634f33c9c2373a88c880e8281b20a731403e
data/Gemfile.lock CHANGED
@@ -42,7 +42,7 @@ GEM
     loog (0.8.0)
       ellipsized
       logger (~> 1.0)
-    minitest (6.0.5)
+    minitest (6.0.6)
       drb (~> 2.0)
       prism (~> 1.5)
     minitest-mock (5.27.0)
data/README.md CHANGED
@@ -202,12 +202,20 @@ You can exclude specific queries from timeout enforcement using regex patterns:
 impatient = Pgtk::Impatient.new(pool, 2, /^SELECT/, /^VACUUM/)
 ```
 
+The timeout is enforced on the server side: each query is wrapped in a tiny
+transaction that issues `SET LOCAL statement_timeout`, and PostgreSQL itself
+terminates the query at the deadline. This guarantees the server-side
+connection slot is freed even when the client cannot deliver a cancellation
+request — for example, behind a transaction-pool PgBouncer that does not
+forward client disconnects to in-flight server queries.
+
 Key features:
 
 1. Configurable timeout in seconds for each query
-2. Raises `Pgtk::Impatient::TooSlow` exception when timeout is exceeded
-3. Can exclude queries matching specific patterns from timeout checks
-4. Also sets PostgreSQL's `statement_timeout` for transactions
+1. Raises `Pgtk::Impatient::TooSlow` exception when timeout is exceeded
+1. Can exclude queries matching specific patterns from timeout checks
+1. Sets PostgreSQL's `statement_timeout` per query and per transaction,
+so timeouts are enforced server-side and orphan backends do not pile up
 
 ## Query Caching with `Pgtk::Stash`
 
@@ -253,9 +261,9 @@ Note that the caching implementation is basic and only suitable
 for simple queries:
 
 1. Queries must reference tables (using `FROM` or `JOIN`)
-2. Cache is invalidated by table, not by specific rows
-3. Write operations (`INSERT`, `UPDATE`, `DELETE`) bypass
-the cache and invalidate all cached queries for affected tables
+1. Cache is invalidated by table, not by specific rows
+1. Write operations (`INSERT`, `UPDATE`, `DELETE`) bypass
+the cache and invalidate all cached queries for affected tables
 
 ## Automatic Retries with `Pgtk::Retry`
 
@@ -284,9 +292,11 @@ retry_pool.exec('INSERT INTO logs (message) VALUES ($1)', ['User logged in'])
 Key features:
 
 1. Only `SELECT` queries are retried (to prevent duplicate data modifications)
-2. Retries happen immediately without delay
-3. The original error is raised after all retry attempts are exhausted
-4. Works seamlessly with other decorators like `Pgtk::Spy` and `Pgtk::Impatient`
+1. Retries happen immediately, except on `PG::ConnectionBad`,
+where an exponential backoff (50ms, 200ms, 1s) is applied
+between attempts to avoid amplifying upstream login storms
+1. The original error is raised after all retry attempts are exhausted
+1. Works seamlessly with other decorators like `Pgtk::Spy` and `Pgtk::Impatient`
 
 ## Some Examples
 
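The README hunk above says queries matching the configured regex patterns are excluded from timeout enforcement. A minimal standalone sketch of that matching rule, using illustrative names (`exempt?` and `PATTERNS` are not part of the gem):

```ruby
# Returns true when the SQL statement matches any exclusion pattern,
# meaning the timeout decorator should pass it through unchanged.
def exempt?(sql, patterns)
  patterns.any? { |re| re.match?(sql) }
end

# Same patterns as the README example: Pgtk::Impatient.new(pool, 2, /^SELECT/, /^VACUUM/)
PATTERNS = [/^SELECT/, /^VACUUM/].freeze

puts exempt?('SELECT * FROM books', PATTERNS)            # true
puts exempt?('UPDATE books SET title = $1', PATTERNS)    # false
```

Note the anchored `^` means only statements that start with the keyword are exempt; a `SELECT` buried in a CTE under a leading `WITH` would still be subject to the timeout.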
data/lib/pgtk/impatient.rb CHANGED
@@ -4,18 +4,21 @@
 # SPDX-License-Identifier: MIT
 
 require 'ellipsized'
-require 'securerandom'
+require 'pg'
 require 'tago'
-require 'timeout'
 require_relative '../pgtk'
 
 # Impatient is a decorator for Pool that enforces timeouts on all database operations.
 # It ensures that SQL queries don't run indefinitely, which helps prevent application
 # hangs and resource exhaustion when database operations are slow or stalled.
 #
-# This class implements the same interface as Pool but wraps each database operation
-# in a timeout block. If a query exceeds the specified timeout, it raises a Timeout::Error
-# exception, allowing the application to handle slow queries gracefully.
+# This class implements the same interface as Pool but enforces the timeout on the
+# server side, by wrapping each query in a tiny transaction that issues
+# +SET LOCAL statement_timeout+. PostgreSQL itself terminates the query at the
+# deadline, which guarantees that the server-side connection slot is freed even
+# when the client cannot deliver a cancellation request (for example, behind a
+# transaction-pool PgBouncer that does not forward client disconnects to in-flight
+# server queries). On timeout, +TooSlow+ is raised.
 #
 # Basic usage:
 #
@@ -29,7 +32,7 @@ require_relative '../pgtk'
 #  # Execute queries with automatic timeout enforcement
 #  begin
 #    impatient.exec('SELECT * FROM large_table WHERE complex_condition')
-#  rescue Timeout::Error
+#  rescue Pgtk::Impatient::TooSlow
 #    puts "Query timed out after 2 seconds"
 #  end
 #
@@ -39,7 +42,7 @@ require_relative '../pgtk'
 #    t.exec('UPDATE large_table SET processed = true')
 #    t.exec('DELETE FROM queue WHERE processed = true')
 #  end
-#  rescue Timeout::Error
+#  rescue PG::QueryCanceled
 #    puts "Transaction timed out"
 #  end
 #
@@ -91,23 +94,30 @@ class Pgtk::Impatient
     ].join("\n")
   end
 
-  # Execute a SQL query with a timeout.
+  # Execute a SQL query with a server-side timeout.
+  #
+  # The query is wrapped in a tiny transaction that issues
+  # +SET LOCAL statement_timeout+, so PostgreSQL itself terminates the query
+  # at the deadline. This guarantees the server-side connection slot is freed
+  # even when the client cannot deliver a cancellation request (for example,
+  # behind a transaction-pool PgBouncer). When the deadline fires, the
+  # underlying +PG::QueryCanceled+ is translated to +TooSlow+.
   #
   # @param [String, Array] query The SQL query with params inside (possibly)
   # @param [Array] args List of arguments
   # @return [Array] Result rows
-  # @raise [Timeout::Error] If the query takes too long
+  # @raise [TooSlow] If the query takes too long
   def exec(query, *args)
     sql = query.is_a?(Array) ? query.join(' ') : query
     return @pool.exec(sql, *args) if @off.any? { |re| re.match?(sql) }
     start = Time.now
-    token = SecureRandom.uuid
+    ms = [Integer(@timeout * 1000), 1].max
    begin
-      Timeout.timeout(@timeout, Timeout::Error, token) do
-        @pool.exec(sql, *args)
+      @pool.transaction do |t|
+        t.exec("SET LOCAL statement_timeout = #{ms}")
+        t.exec(sql, *args)
       end
-    rescue Timeout::Error => e
-      raise(e) unless e.message == token
+    rescue PG::QueryCanceled
       raise(TooSlow, [
         'SQL query',
         ("with #{args.count} argument#{'s' if args.count > 1}" unless args.empty?),
@@ -125,14 +135,14 @@ class Pgtk::Impatient
   # terminates the session, which frees locks and releases the connection
   # slot back to the pool.
   #
-  # @yield [Pgtk::Impatient] Yields an impatient transaction
+  # @yield [Object] Yields a transaction object that responds to +exec+
   # @return [Object] Result of the block
   def transaction
     @pool.transaction do |t|
-      ms = Integer((@timeout * 1000).to_s, 10)
+      ms = [Integer(@timeout * 1000), 1].max
       t.exec("SET LOCAL statement_timeout = #{ms}")
       t.exec("SET LOCAL idle_in_transaction_session_timeout = #{ms}")
-      yield(Pgtk::Impatient.new(t, @timeout))
+      yield(t)
     end
  end
 end
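The `exec` hunk above clamps the timeout to a positive integer number of milliseconds and issues `SET LOCAL statement_timeout` inside the transaction that runs the query. A standalone sketch of that technique against a stub transaction (`StubTx` and `run_with_deadline` are illustrative names, not the gem's API):

```ruby
# A fake transaction object that records every statement it is asked to run,
# standing in for the real pool transaction.
class StubTx
  attr_reader :sent

  def initialize
    @sent = []
  end

  def exec(sql)
    @sent << sql
  end
end

# Wrap a query with a server-side deadline, mirroring the diff:
# ms = [Integer(@timeout * 1000), 1].max
def run_with_deadline(tx, sql, timeout_sec)
  # Clamp to at least 1ms: statement_timeout = 0 means "no timeout" in PostgreSQL,
  # so a sub-millisecond timeout must not round down to "disabled".
  ms = [Integer(timeout_sec * 1000), 1].max
  tx.exec("SET LOCAL statement_timeout = #{ms}")
  tx.exec(sql)
  ms
end

tx = StubTx.new
run_with_deadline(tx, 'SELECT pg_sleep(10)', 2)
puts tx.sent.first # "SET LOCAL statement_timeout = 2000"
```

The clamp is the point of the change from `Integer((@timeout * 1000).to_s, 10)`: `Integer` truncates a float, so a timeout of 0.0005 seconds becomes 0ms, which PostgreSQL interprets as "no limit" unless raised to 1.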
data/lib/pgtk/liquicheck_task.rb CHANGED
@@ -7,7 +7,7 @@ require 'nokogiri'
 require 'rake/tasklib'
 require_relative '../pgtk'
 
-# Liquicheck rake task for check Liquibase XML files.
+# Liquicheck rake task to check Liquibase XML files.
 # Author:: Yegor Bugayenko (yegor256@gmail.com)
 # Copyright:: Copyright (c) 2019-2026 Yegor Bugayenko
 # License:: MIT
@@ -63,7 +63,7 @@ class Pgtk::LiquicheckTask < Rake::TaskLib
       context = node.attr('context')&.to_s
       on(errors, file) do
         demand(id, 'ID is empty')
-        confirm(id, /[-a-z]+/, "ID #{id.inspect} has not suffix in #{context} context") if context
+        confirm(id, /[-a-z]+/, "ID #{id.inspect} has no suffix in #{context} context") if context
       end
       on(errors, file) do
         demand(author, 'author is empty')
data/lib/pgtk/pool.rb CHANGED
@@ -114,7 +114,7 @@ class Pgtk::Pool
   #    puts 'Title: ' + row['title']
   #  end
   #
-  # All values in the retrieved hash are strings. No matter what types of
+  # All values in the retrieved hash are strings. No matter what types
   # of data you have in the database, you get strings here. It's your job
   # to convert them to the type you need.
   #
@@ -322,6 +322,7 @@ class Pgtk::Pool
         conn = renew(conn, reason)
       rescue StandardError => e
         @log.warn("Failed to renew dead connection (#{reason}): #{e.message}")
+        raise(e)
       end
     end
     begin
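The pool.rb hunk above adds `raise(e)` after the warning, so a failed renewal of a dead connection is logged and then propagated instead of being silently swallowed. A standalone sketch of this log-then-reraise pattern (`renew_or_raise` is an illustrative name, not the gem's API):

```ruby
require 'logger'

# Attempt an operation; on failure, record a warning for the operator
# and re-raise so the caller does not proceed with a dead connection.
def renew_or_raise(log)
  yield
rescue StandardError => e
  log.warn("Failed to renew dead connection: #{e.message}")
  raise(e) # propagate: swallowing the error would hand back a dead connection
end

log = Logger.new($stderr)
begin
  renew_or_raise(log) { raise IOError, 'socket closed' }
rescue IOError => e
  puts "caller saw: #{e.message}" # prints "caller saw: socket closed"
end
```

Without the re-raise, the rescue clause turns a renewal failure into silence and the surrounding code continues with a connection that is known to be dead; with it, the failure surfaces where it can be retried or reported.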
data/lib/pgtk/retry.rb CHANGED
@@ -54,6 +54,8 @@ class Pgtk::Retry
   # so its message and stack trace are preserved for debugging.
   class Exhausted < StandardError; end
 
+  BACKOFFS = [0.05, 0.2, 1.0].freeze
+
   # Constructor.
   #
   # @param [Pgtk::Pool] pool The pool to decorate
@@ -86,13 +88,15 @@ class Pgtk::Retry
   end
 
   # Execute a SQL query with automatic retry on transient failures.
-  # SELECT queries are retried on any error, since reads are idempotent.
-  # Non-SELECT queries are retried only on PG::ConnectionBad, since by
-  # definition the query never reached the server, so retrying cannot
-  # duplicate a write. Other errors on writes propagate immediately,
-  # because a failure may occur after the server received the query but
-  # before the acknowledgement reached the client, and retrying a
-  # non-idempotent write could duplicate it.
+  # Only SELECT queries are retried, since reads are idempotent.
+  # Non-SELECT queries propagate the original error immediately, even
+  # on PG::ConnectionBad, because that error can be raised after the
+  # server already received the query but before the acknowledgement
+  # reached the client, and retrying a non-idempotent write could
+  # duplicate it. When the underlying error is PG::ConnectionBad, an
+  # exponential backoff (see BACKOFFS) is applied between attempts, so
+  # that a SELECT failing against an upstream pool that is in its
+  # login-failure cache window does not amplify the storm.
   #
   # @param [String] sql The SQL query with params inside (possibly)
   # @return [Array] Result rows
@@ -101,14 +105,11 @@ class Pgtk::Retry
     attempt = 0
     begin
       @pool.exec(sql, *)
-    rescue PG::ConnectionBad => e
-      attempt += 1
-      raise(Exhausted, "Retry gave up after #{@attempts} attempts: #{e.message}") if attempt >= @attempts
-      retry
     rescue StandardError, Pgtk::Impatient::TooSlow => e
       raise(e) unless query.strip.upcase.start_with?('SELECT')
       attempt += 1
       raise(Exhausted, "Retry gave up after #{@attempts} attempts: #{e.message}") if attempt >= @attempts
+      sleep(BACKOFFS[attempt - 1] || BACKOFFS.last) if e.is_a?(PG::ConnectionBad)
       retry
     end
   end
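The retry.rb hunk above does two cheap checks before each retry: is the statement a `SELECT` (only reads are safe to replay), and which delay from the `BACKOFFS` schedule applies, reusing the last entry once the schedule is exhausted. A standalone sketch of those two checks (helper names are illustrative; the real class additionally applies the sleep only when the error is `PG::ConnectionBad`):

```ruby
# Same schedule as the diff: 50ms, 200ms, 1s, then 1s thereafter.
BACKOFFS = [0.05, 0.2, 1.0].freeze

# Delay before retry number `attempt` (1-based), mirroring the
# `BACKOFFS[attempt - 1] || BACKOFFS.last` expression in the hunk.
def backoff_for(attempt)
  BACKOFFS[attempt - 1] || BACKOFFS.last
end

# Only SELECT statements are retried; replaying a write could duplicate it.
def retriable?(sql)
  sql.strip.upcase.start_with?('SELECT')
end

puts backoff_for(1)                        # 0.05
puts backoff_for(5)                        # 1.0
puts retriable?('  select 1')              # true
puts retriable?('INSERT INTO t VALUES (1)') # false
```

Capping the delay at the last schedule entry means a long attempt budget degrades into a steady 1s poll rather than an unbounded exponential wait.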
data/lib/pgtk/version.rb CHANGED
@@ -10,5 +10,5 @@ require_relative '../pgtk'
 # Copyright:: Copyright (c) 2019-2026 Yegor Bugayenko
 # License:: MIT
 module Pgtk
-  VERSION = '0.31.7' unless defined?(VERSION)
+  VERSION = '0.31.9' unless defined?(VERSION)
 end
data/resources/pom.xml CHANGED
@@ -10,7 +10,7 @@
   <version>0.0.0</version>
   <packaging>pom</packaging>
   <properties>
-    <postgresql.version>42.7.10</postgresql.version>
+    <postgresql.version>42.7.11</postgresql.version>
     <liquibase.version>5.0.2</liquibase.version>
   </properties>
   <dependencies>
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: pgtk
 version: !ruby/object:Gem::Version
-  version: 0.31.7
+  version: 0.31.9
 platform: ruby
 authors:
 - Yegor Bugayenko