sidekiq-unique-jobs 7.1.25 → 7.1.27

Potentially problematic release: this version of sidekiq-unique-jobs has been flagged as possibly problematic.

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 4a5c0d9d30f11e61d3cc7599343951d242c5ccba2c93ee2a46138eab1037db81
-  data.tar.gz: a0721afd3b1a4e98655f9d1aba34e609d9ddd71bf6fd02d2b6f35bfbf4f9bece
+  metadata.gz: f6c5acd562db0da327940682a24bf4c28793a61ffb9904c3f480352253676b4e
+  data.tar.gz: 1673f51ceb70a1e6f37e60a411a93fe6ed9a05091a6e044b38283f2dbc93bbaf
 SHA512:
-  metadata.gz: 7751e266ae6df52925ba2b0b3fbc72e7fff5e2e979ae00dcc8fb7af433aac22627901a650cdd53f64479af440e68a814ad3d67f446509e4aef4351e6eed71da0
-  data.tar.gz: 210d306f47c106bab3ae6e109c24d728baf5a81cc6caa3ee309a7c96f547c5cd8503e99cb7f814671c1feccfdaa48ea992581c9f978bb2bfd662c51946e13ffd
+  metadata.gz: 90dc8a981cd53213457bec790de3460823317666693ba56d5bc8d4f890a8a376e125ba5034f438109f1e883a847c09341634aa0bc8d509ed0d21534f53f49b4b
+  data.tar.gz: b4d4e314415706035afb5b3b078be70acb762134aa75e89be87d8e76bed0cdafec1e27ec9d303cae596b41ad5e504dccc01dd989256cd4272e29ebbb47e8f4cf
data/CHANGELOG.md CHANGED
@@ -1,5 +1,44 @@
 # Changelog
 
+## [Unreleased](https://github.com/mhenrixon/sidekiq-unique-jobs/tree/HEAD)
+
+[Full Changelog](https://github.com/mhenrixon/sidekiq-unique-jobs/compare/v7.1.26...HEAD)
+
+**Closed issues:**
+
+- Memory bloat / dangling keys / reaper not cleaning orphans [\#637](https://github.com/mhenrixon/sidekiq-unique-jobs/issues/637)
+
+## [v7.1.26](https://github.com/mhenrixon/sidekiq-unique-jobs/tree/v7.1.26) (2022-07-28)
+
+[Full Changelog](https://github.com/mhenrixon/sidekiq-unique-jobs/compare/v7.1.25...v7.1.26)
+
+**Implemented enhancements:**
+
+- Fix\(until\_expired\): Fix test and implementation [\#725](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/725) ([mhenrixon](https://github.com/mhenrixon))
+
+**Fixed bugs:**
+
+- Fix\(until\_and\_while\_executing\): Improve timeouts slightly [\#728](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/728) ([mhenrixon](https://github.com/mhenrixon))
+- Fix\(unlock\): Delete primed keys on last entry [\#726](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/726) ([mhenrixon](https://github.com/mhenrixon))
+
+**Merged pull requests:**
+
+- Ensure batch delete removes expiring locks [\#724](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/724) ([francesmcmullin](https://github.com/francesmcmullin))
+- Chore: Update dependencies [\#722](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/722) ([mhenrixon](https://github.com/mhenrixon))
+- Move until\_expired digests to separate zset [\#721](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/721) ([francesmcmullin](https://github.com/francesmcmullin))
+- Avoid skipping ranges when looping through queues [\#720](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/720) ([francesmcmullin](https://github.com/francesmcmullin))
+- Bump actions/checkout from 2 to 3 [\#718](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/718) ([dependabot[bot]](https://github.com/apps/dependabot))
+- Add Dependabot for GitHub Actions [\#717](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/717) ([petergoldstein](https://github.com/petergoldstein))
+- Fix Sidekiq::Worker.clear\_all override not being applied [\#714](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/714) ([dsander](https://github.com/dsander))
+
+## [v7.1.25](https://github.com/mhenrixon/sidekiq-unique-jobs/tree/v7.1.25) (2022-06-13)
+
+[Full Changelog](https://github.com/mhenrixon/sidekiq-unique-jobs/compare/v7.1.24...v7.1.25)
+
+**Fixed bugs:**
+
+- Fix: Include the correct middleware [\#716](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/716) ([mhenrixon](https://github.com/mhenrixon))
+
 ## [v7.1.24](https://github.com/mhenrixon/sidekiq-unique-jobs/tree/v7.1.24) (2022-06-09)
 
 [Full Changelog](https://github.com/mhenrixon/sidekiq-unique-jobs/compare/v7.1.23...v7.1.24)
data/README.md CHANGED
@@ -31,7 +31,7 @@ Want to show me some ❤️ for the hard work I do on this gem? You can use the
   - [raise](#raise)
   - [reject](#reject)
   - [replace](#replace)
-  - [Reschedule](#reschedule)
+  - [reschedule](#reschedule)
   - [Custom Strategies](#custom-strategies)
 - [3 Cleanup Dead Locks](#3-cleanup-dead-locks)
 - [Debugging](#debugging)
@@ -610,6 +610,7 @@ This has been probably the most confusing part of this gem. People get really co
 ```ruby
 SidekiqUniqueJobs.configure do |config|
   config.enabled = !Rails.env.test?
+  config.logger_enabled = !Rails.env.test?
 end
 ```
 
@@ -736,6 +737,7 @@ Configure SidekiqUniqueJobs in an initializer or the sidekiq initializer on appl
 ```ruby
 SidekiqUniqueJobs.configure do |config|
   config.logger = Sidekiq.logger # default, change at your own discretion
+  config.logger_enabled = true # default, disable for test environments
   config.debug_lua = false # Turn on when debugging
   config.lock_info = false # Turn on when debugging
   config.lock_ttl = 600 # Expire locks after 10 minutes
@@ -91,6 +91,7 @@ module SidekiqUniqueJobs
       chunk.each do |digest|
         del_digest(pipeline, digest)
         pipeline.zrem(SidekiqUniqueJobs::DIGESTS, digest)
+        pipeline.zrem(SidekiqUniqueJobs::EXPIRING_DIGESTS, digest)
         @count += 1
       end
     end
@@ -20,9 +20,11 @@ module SidekiqUniqueJobs
     option :count, aliases: :c, type: :numeric, default: 1000, desc: "The max number of digests to return"
     # :nodoc:
     def list(pattern = "*")
-      entries = digests.entries(pattern: pattern, count: options[:count])
-      say "Found #{entries.size} digests matching '#{pattern}':"
-      print_in_columns(entries.sort) if entries.any?
+      max_count = options[:count]
+      say "Searching for regular digests"
+      list_entries(digests.entries(pattern: pattern, count: max_count), pattern)
+      say "Searching for expiring digests"
+      list_entries(expiring_digests.entries(pattern: pattern, count: max_count), pattern)
     end
 
     desc "del PATTERN", "deletes unique digests from redis by pattern"
@@ -32,11 +34,9 @@ module SidekiqUniqueJobs
     def del(pattern)
       max_count = options[:count]
       if options[:dry_run]
-        result = digests.entries(pattern: pattern, count: max_count)
-        say "Would delete #{result.size} digests matching '#{pattern}'"
+        count_entries_for_del(max_count, pattern)
       else
-        deleted_count = digests.delete_by_pattern(pattern, count: max_count)
-        say "Deleted #{deleted_count} digests matching '#{pattern}'"
+        del_entries(max_count, pattern)
       end
     end
 
@@ -51,12 +51,17 @@ module SidekiqUniqueJobs
       console_class.start
     end
 
-    no_commands do
+    no_commands do # rubocop:disable Metrics/BlockLength
      # :nodoc:
      def digests
        @digests ||= SidekiqUniqueJobs::Digests.new
      end
 
+      # :nodoc:
+      def expiring_digests
+        @expiring_digests ||= SidekiqUniqueJobs::ExpiringDigests.new
+      end
+
      # :nodoc:
      def console_class
        require "pry"
@@ -65,6 +70,26 @@ module SidekiqUniqueJobs
        require "irb"
        IRB
      end
+
+      # :nodoc:
+      def list_entries(entries, pattern)
+        say "Found #{entries.size} digests matching '#{pattern}':"
+        print_in_columns(entries.sort) if entries.any?
+      end
+
+      # :nodoc:
+      def count_entries_for_del(max_count, pattern)
+        count = digests.entries(pattern: pattern, count: max_count).size +
+                expiring_digests.entries(pattern: pattern, count: max_count).size
+        say "Would delete #{count} digests matching '#{pattern}'"
+      end
+
+      # :nodoc:
+      def del_entries(max_count, pattern)
+        deleted_count = digests.delete_by_pattern(pattern, count: max_count).to_i +
+                        expiring_digests.delete_by_pattern(pattern, count: max_count).to_i
+        say "Deleted #{deleted_count} digests matching '#{pattern}'"
+      end
    end
  end
 end
@@ -8,6 +8,7 @@ module SidekiqUniqueJobs
     :enabled,
     :lock_prefix,
     :logger,
+    :logger_enabled,
     :locks,
     :strategies,
     :debug_lua,
@@ -91,6 +92,9 @@ module SidekiqUniqueJobs
     # @return [nil]
     LOCK_TTL = nil
     #
+    # @return [true,false] by default false (don't disable logger)
+    LOGGER_ENABLED = true
+    #
     # @return [true] by default the gem is enabled
     ENABLED = true
     #
@@ -180,6 +184,7 @@ module SidekiqUniqueJobs
       ENABLED,
       PREFIX,
       Sidekiq.logger,
+      LOGGER_ENABLED,
       LOCKS,
       STRATEGIES,
       DEBUG_LUA,
@@ -14,6 +14,7 @@ module SidekiqUniqueJobs
   CREATED_AT = "created_at"
   DEAD_VERSION = "uniquejobs:dead"
   DIGESTS = "uniquejobs:digests"
+  EXPIRING_DIGESTS = "uniquejobs:expiring_digests"
   ERRORS = "errors"
   JID = "jid"
   LIMIT = "limit"
@@ -14,8 +14,8 @@ module SidekiqUniqueJobs
     # @return [String] the default pattern to use for matching
     SCAN_PATTERN = "*"
 
-    def initialize
-      super(DIGESTS)
+    def initialize(digests_key = DIGESTS)
+      super(digests_key)
     end
 
     #
@@ -0,0 +1,14 @@
+# frozen_string_literal: true
+
+module SidekiqUniqueJobs
+  #
+  # Class ExpiringDigests provides access to the expiring digests used by until_expired locks
+  #
+  # @author Mikael Henriksson <mikael@mhenrixon.com>
+  #
+  class ExpiringDigests < Digests
+    def initialize
+      super(EXPIRING_DIGESTS)
+    end
+  end
+end
@@ -33,6 +33,10 @@ module SidekiqUniqueJobs
     # @!attribute [r] digests
     #   @return [String] the zset with locked digests
     attr_reader :digests
+    #
+    # @!attribute [r] expiring_digests
+    #   @return [String] the zset with locked expiring_digests
+    attr_reader :expiring_digests
 
     #
     # Initialize a new Key
@@ -40,13 +44,14 @@
     # @param [String] digest the digest to use as key
     #
     def initialize(digest)
-      @digest = digest
-      @queued = suffixed_key("QUEUED")
-      @primed = suffixed_key("PRIMED")
-      @locked = suffixed_key("LOCKED")
-      @info = suffixed_key("INFO")
-      @changelog = CHANGELOGS
-      @digests = DIGESTS
+      @digest           = digest
+      @queued           = suffixed_key("QUEUED")
+      @primed           = suffixed_key("PRIMED")
+      @locked           = suffixed_key("LOCKED")
+      @info             = suffixed_key("INFO")
+      @changelog        = CHANGELOGS
+      @digests          = DIGESTS
+      @expiring_digests = EXPIRING_DIGESTS
     end
 
     #
@@ -81,7 +86,7 @@
     # @return [Array] an ordered array with all keys
     #
     def to_a
-      [digest, queued, primed, locked, info, changelog, digests]
+      [digest, queued, primed, locked, info, changelog, digests, expiring_digests]
     end
 
     private
@@ -66,7 +66,7 @@ module SidekiqUniqueJobs
       pipeline.set(key.digest, job_id)
       pipeline.hset(key.locked, job_id, now_f)
       info.set(lock_info, pipeline)
-      pipeline.zadd(key.digests, now_f, key.digest)
+      add_digest_to_set(pipeline, lock_info)
       pipeline.zadd(key.changelog, now_f, changelog_json(job_id, "queue.lua", "Queued"))
       pipeline.zadd(key.changelog, now_f, changelog_json(job_id, "lock.lua", "Locked"))
     end
@@ -321,5 +321,22 @@ module SidekiqUniqueJobs
         time: now_f,
       )
     end
+
+    #
+    # Add the digest to the correct sorted set
+    #
+    # @param [Object] pipeline a redis pipeline object for issue commands
+    # @param [Hash] lock_info the lock info relevant to the digest
+    #
+    # @return [nil]
+    #
+    def add_digest_to_set(pipeline, lock_info)
+      digest_string = key.digest
+      if lock_info["lock"] == :until_expired
+        pipeline.zadd(key.expiring_digests, now_f + lock_info["ttl"], digest_string)
+      else
+        pipeline.zadd(key.digests, now_f, digest_string)
+      end
+    end
   end
 end
@@ -32,6 +32,7 @@ module SidekiqUniqueJobs
     #
     # @return [Float] used to take into consideration the inaccuracy of redis timestamps
     CLOCK_DRIFT_FACTOR = 0.01
+    NETWORK_FACTOR = 0.04
 
     #
     # @!attribute [r] key
@@ -184,6 +185,7 @@ module SidekiqUniqueJobs
     #
     # @param [Sidekiq::RedisConnection, ConnectionPool] conn the redis connection
     # @param [Method] primed_method reference to the method to use for getting a primed token
+    # @param [nil, Integer, Float] time to wait before timeout
     #
     # @yieldparam [string] job_id the sidekiq JID
     # @yieldreturn [void] whatever the calling block returns
@@ -239,9 +241,18 @@
     # @return [Object] whatever the block returns when lock was acquired
     #
     def primed_async(conn, wait = nil, &block)
+      timeout = (wait || config.timeout).to_i
+      timeout = 1 if timeout.zero?
+
+      brpoplpush_timeout = timeout
+      concurrent_timeout = add_drift(timeout)
+
+      reflect(:debug, :timeouts, item,
+              timeouts: { brpoplpush_timeout: brpoplpush_timeout, concurrent_timeout: concurrent_timeout })
+
       primed_jid = Concurrent::Promises
-                   .future(conn) { |red_con| pop_queued(red_con, wait) }
-                   .value(add_drift(wait || config.timeout))
+                   .future(conn) { |red_con| pop_queued(red_con, timeout) }
+                   .value
 
       handle_primed(primed_jid, &block)
     end
@@ -273,7 +284,7 @@
     #
     # @return [String] a previously enqueued token (now taken off the queue)
     #
-    def pop_queued(conn, wait = nil)
+    def pop_queued(conn, wait = 1)
       wait ||= config.timeout if config.wait_for_lock?
 
       if wait.nil?
@@ -287,10 +298,15 @@
     # @api private
     #
     def brpoplpush(conn, wait)
+      # passing timeout 0 to brpoplpush causes it to block indefinitely
       raise InvalidArgument, "wait must be an integer" unless wait.is_a?(Integer)
+      return conn.brpoplpush(key.queued, key.primed, wait) if conn.class.to_s == "Redis::Namespace"
 
-      # passing timeout 0 to brpoplpush causes it to block indefinitely
-      conn.brpoplpush(key.queued, key.primed, timeout: wait)
+      if VersionCheck.satisfied?(redis_version, ">= 6.2.0") && conn.respond_to?(:blmove)
+        conn.blmove(key.queued, key.primed, "RIGHT", "LEFT", timeout: wait)
+      else
+        conn.brpoplpush(key.queued, key.primed, timeout: wait)
+      end
     end
 
     #
@@ -359,5 +375,9 @@
         TIME => now_f,
       )
     end
+
+    def redis_version
+      @redis_version ||= SidekiqUniqueJobs.config.redis_version
+    end
   end
 end
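The `brpoplpush` change above prefers `BLMOVE` on Redis servers 6.2.0 and newer (where `BRPOPLPUSH` is deprecated) and falls back otherwise. A minimal standalone sketch of that version gate, using `Gem::Version` for the comparison (the method name `blocking_pop_command` is hypothetical, not part of the gem):

```ruby
require "rubygems" # provides Gem::Version

# Pick the blocking list-move command based on the server version.
# BLMOVE superseded the deprecated BRPOPLPUSH in Redis 6.2.0.
def blocking_pop_command(redis_version)
  if Gem::Version.new(redis_version) >= Gem::Version.new("6.2.0")
    :blmove
  else
    :brpoplpush
  end
end
```

So `"6.2.5"` picks `:blmove` while `"6.0.9"` stays on `:brpoplpush`; the gem performs the same check through its own `VersionCheck.satisfied?` helper, plus a `respond_to?(:blmove)` guard for older client libraries.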
@@ -30,6 +30,8 @@ module SidekiqUniqueJobs
   # @yield [String, Exception] the message or exception to use for log message
   #
   def log_debug(message_or_exception = nil, item = nil, &block)
+    return unless logging?
+
     message = build_message(message_or_exception, item)
     logger.debug(message, &block)
     nil
@@ -45,6 +47,8 @@
   # @yield [String, Exception] the message or exception to use for log message
   #
   def log_info(message_or_exception = nil, item = nil, &block)
+    return unless logging?
+
     message = build_message(message_or_exception, item)
     logger.info(message, &block)
     nil
@@ -60,6 +64,8 @@
   # @yield [String, Exception] the message or exception to use for log message
   #
   def log_warn(message_or_exception = nil, item = nil, &block)
+    return unless logging?
+
     message = build_message(message_or_exception, item)
     logger.warn(message, &block)
     nil
@@ -75,6 +81,8 @@
   # @yield [String, Exception] the message or exception to use for log message
   #
   def log_error(message_or_exception = nil, item = nil, &block)
+    return unless logging?
+
     message = build_message(message_or_exception, item)
     logger.error(message, &block)
     nil
@@ -90,6 +98,8 @@
   # @yield [String, Exception] the message or exception to use for log message
   #
   def log_fatal(message_or_exception = nil, item = nil, &block)
+    return unless logging?
+
     message = build_message(message_or_exception, item)
     logger.fatal(message, &block)
 
@@ -218,5 +228,9 @@
 
     yield
   end
+
+  def logging?
+    SidekiqUniqueJobs.logging?
+  end
 end
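Each `log_*` helper above now short-circuits with `return unless logging?`, so flipping `config.logger_enabled` off silences the gem without touching `Sidekiq.logger`. A self-contained sketch of the same guard pattern (the class and method names here are hypothetical, for illustration only):

```ruby
require "logger"
require "stringio"

# A logger wrapper whose helpers bail out early when logging is disabled,
# mirroring the `return unless logging?` guard added in the diff above.
class GuardedLogger
  attr_accessor :enabled

  def initialize(io)
    @logger  = Logger.new(io)
    @enabled = true
  end

  def log_info(message)
    return unless @enabled # the guard: skip building and emitting the message

    @logger.info(message)
    nil
  end
end
```

With `enabled = false` nothing is written at all, which is why the README diff suggests `config.logger_enabled = !Rails.env.test?` for quiet test runs.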
@@ -1,11 +1,12 @@
 -------- BEGIN keys ---------
-local digest = KEYS[1]
-local queued = KEYS[2]
-local primed = KEYS[3]
-local locked = KEYS[4]
-local info = KEYS[5]
-local changelog = KEYS[6]
-local digests = KEYS[7]
+local digest           = KEYS[1]
+local queued           = KEYS[2]
+local primed           = KEYS[3]
+local locked           = KEYS[4]
+local info             = KEYS[5]
+local changelog        = KEYS[6]
+local digests          = KEYS[7]
+local expiring_digests = KEYS[8]
 -------- END keys ---------
 
 
@@ -57,8 +58,13 @@ if limit_exceeded then
   return nil
 end
 
-log_debug("ZADD", digests, current_time, digest)
-redis.call("ZADD", digests, current_time, digest)
+if lock_type == "until_expired" and pttl and pttl > 0 then
+  log_debug("ZADD", expiring_digests, current_time + pttl, digest)
+  redis.call("ZADD", expiring_digests, current_time + pttl, digest)
+else
+  log_debug("ZADD", digests, current_time, digest)
+  redis.call("ZADD", digests, current_time, digest)
+end
 
 log_debug("HSET", locked, job_id, current_time)
 redis.call("HSET", locked, job_id, current_time)
@@ -0,0 +1,92 @@
+-------- BEGIN keys ---------
+local digest           = KEYS[1]
+local queued           = KEYS[2]
+local primed           = KEYS[3]
+local locked           = KEYS[4]
+local info             = KEYS[5]
+local changelog        = KEYS[6]
+local digests          = KEYS[7]
+local expiring_digests = KEYS[8]
+-------- END keys ---------
+
+
+-------- BEGIN lock arguments ---------
+local job_id    = ARGV[1]
+local pttl      = tonumber(ARGV[2])
+local lock_type = ARGV[3]
+local limit     = tonumber(ARGV[4])
+-------- END lock arguments -----------
+
+
+--------  BEGIN injected arguments --------
+local current_time = tonumber(ARGV[5])
+local debug_lua    = ARGV[6] == "true"
+local max_history  = tonumber(ARGV[7])
+local script_name  = tostring(ARGV[8]) .. ".lua"
+local redisversion = ARGV[9]
+---------  END injected arguments ---------
+
+
+--------  BEGIN local functions --------
+<%= include_partial "shared/_common.lua" %>
+----------  END local functions ----------
+
+
+---------  BEGIN lock.lua ---------
+log_debug("BEGIN lock digest:", digest, "job_id:", job_id)
+
+if redis.call("HEXISTS", locked, job_id) == 1 then
+  log_debug(locked, "already locked with job_id:", job_id)
+  log("Duplicate")
+
+  log_debug("LREM", queued, -1, job_id)
+  redis.call("LREM", queued, -1, job_id)
+
+  log_debug("LREM", primed, 1, job_id)
+  redis.call("LREM", primed, 1, job_id)
+
+  return job_id
+end
+
+local locked_count   = redis.call("HLEN", locked)
+local within_limit   = limit > locked_count
+local limit_exceeded = not within_limit
+
+if limit_exceeded then
+  log_debug("Limit exceeded:", digest, "(", locked_count, "of", limit, ")")
+  log("Limited")
+  return nil
+end
+
+log_debug("ZADD", expiring_digests, current_time + pttl, digest)
+redis.call("ZADD", expiring_digests, current_time + pttl, digest)
+
+log_debug("HSET", locked, job_id, current_time)
+redis.call("HSET", locked, job_id, current_time)
+
+log_debug("LREM", queued, -1, job_id)
+redis.call("LREM", queued, -1, job_id)
+
+log_debug("LREM", primed, 1, job_id)
+redis.call("LREM", primed, 1, job_id)
+
+-- The Sidekiq client sets pttl
+log_debug("PEXPIRE", digest, pttl)
+redis.call("PEXPIRE", digest, pttl)
+
+log_debug("PEXPIRE", locked, pttl)
+redis.call("PEXPIRE", locked, pttl)
+
+log_debug("PEXPIRE", info, pttl)
+redis.call("PEXPIRE", info, pttl)
+
+log_debug("PEXPIRE", queued, 1000)
+redis.call("PEXPIRE", queued, 1000)
+
+log_debug("PEXPIRE", primed, 1000)
+redis.call("PEXPIRE", primed, 1000)
+
+log("Locked")
+log_debug("END lock digest:", digest, "job_id:", job_id)
+return job_id
+----------  END lock.lua ----------
@@ -1,9 +1,10 @@
 redis.replicate_commands()
 
 -------- BEGIN keys ---------
-local digests_set = KEYS[1]
-local schedule_set = KEYS[2]
-local retry_set = KEYS[3]
+local digests_set          = KEYS[1]
+local expiring_digests_set = KEYS[2]
+local schedule_set         = KEYS[3]
+local retry_set            = KEYS[4]
 -------- END keys ---------
 
 -------- BEGIN argv ---------
@@ -90,5 +91,32 @@ repeat
   index = index + per
 until index >= total or del_count >= reaper_count
 
+if del_count < reaper_count then
+  index = 0
+  total = redis.call("ZCOUNT", expiring_digests_set, 0, current_time)
+  repeat
+    local digests = redis.call("ZRANGEBYSCORE", expiring_digests_set, 0, current_time, "LIMIT", index, index + per -1)
+
+    for _, digest in pairs(digests) do
+      local queued     = digest .. ":QUEUED"
+      local primed     = digest .. ":PRIMED"
+      local locked     = digest .. ":LOCKED"
+      local info       = digest .. ":INFO"
+      local run_digest = digest .. ":RUN"
+      local run_queued = digest .. ":RUN:QUEUED"
+      local run_primed = digest .. ":RUN:PRIMED"
+      local run_locked = digest .. ":RUN:LOCKED"
+      local run_info   = digest .. ":RUN:INFO"
+
+      redis.call(del_cmd, digest, queued, primed, locked, info, run_digest, run_queued, run_primed, run_locked, run_info)
+
+      redis.call("ZREM", expiring_digests_set, digest)
+      del_count = del_count + 1
+    end
+
+    index = index + per
+  until index >= total or del_count >= reaper_count
+end
+
 log_debug("END")
 return del_count
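The Lua loop above pages through due digests with `ZRANGEBYSCORE ... LIMIT index, per` until the due set is exhausted or the reaper's deletion budget is spent. The same control flow can be sketched in plain Ruby against an in-memory score map (the method name `reap_expired` is hypothetical; in the gem the work happens server-side in Lua):

```ruby
# Delete entries whose score is <= current_time, in pages of `per`,
# stopping once `reaper_count` deletions have been made.
def reap_expired(scores, current_time, per: 2, reaper_count: 10)
  due       = scores.select { |_digest, score| score <= current_time }.keys
  del_count = 0
  index     = 0

  loop do
    page = due[index, per] || []
    page.each do |digest|
      scores.delete(digest) # stands in for DEL/ZREM on the digest's keys
      del_count += 1
    end
    index += per
    break if index >= due.size || del_count >= reaper_count
  end

  del_count
end
```

Capping on `reaper_count` is what keeps a single reaper run bounded even when many locks have expired at once.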
@@ -85,6 +85,11 @@ if locked_count and locked_count < 1 then
   redis.call(del_cmd, locked)
 end
 
+if redis.call("LLEN", primed) == 0 then
+  log_debug(del_cmd, primed)
+  redis.call(del_cmd, primed)
+end
+
 if limit and limit <= 1 and locked_count and locked_count <= 1 then
   log_debug("ZREM", digests, digest)
   redis.call("ZREM", digests, digest)
@@ -20,7 +20,7 @@ module SidekiqUniqueJobs
     call_script(
       :reap_orphans,
       conn,
-      keys: [DIGESTS, SCHEDULE, RETRY, PROCESSES],
+      keys: [DIGESTS, EXPIRING_DIGESTS, SCHEDULE, RETRY, PROCESSES],
       argv: [reaper_count, (Time.now - reaper_timeout).to_f],
     )
   end
@@ -34,9 +34,15 @@ module SidekiqUniqueJobs
 
       #
      # @!attribute [r] start_time
-      #   @return [Integer] The timestamp this execution started represented as integer
+      #   @return [Integer] The timestamp this execution started represented as Time (used for locks)
      attr_reader :start_time
 
+      #
+      # @!attribute [r] start_time
+      #   @return [Integer] The clock stamp this execution started represented as integer
+      #     (used for redis compatibility as it is more accurate than time)
+      attr_reader :start_source
+
      #
      # @!attribute [r] timeout_ms
      #   @return [Integer] The allowed ms before timeout
@@ -49,11 +55,12 @@ module SidekiqUniqueJobs
      #
      def initialize(conn)
        super(conn)
-        @digests = SidekiqUniqueJobs::Digests.new
-        @scheduled = Redis::SortedSet.new(SCHEDULE)
-        @retried = Redis::SortedSet.new(RETRY)
-        @start_time = time_source.call
-        @timeout_ms = SidekiqUniqueJobs.config.reaper_timeout * 1000
+        @digests      = SidekiqUniqueJobs::Digests.new
+        @scheduled    = Redis::SortedSet.new(SCHEDULE)
+        @retried      = Redis::SortedSet.new(RETRY)
+        @start_time   = Time.now
+        @start_source = time_source.call
+        @timeout_ms   = SidekiqUniqueJobs.config.reaper_timeout * 1000
      end
 
      #
@@ -65,9 +72,20 @@ module SidekiqUniqueJobs
      def call
        return if queues_very_full?
 
+        BatchDelete.call(expired_digests, conn)
        BatchDelete.call(orphans, conn)
      end
 
+      def expired_digests
+        max_score = (start_time - reaper_timeout).to_f
+
+        if VersionCheck.satisfied?(redis_version, ">= 6.2.0") && VersionCheck.satisfied?(::Redis::VERSION, ">= 4.6.0")
+          conn.zrange(EXPIRING_DIGESTS, 0, max_score, byscore: true)
+        else
+          conn.zrangebyscore(EXPIRING_DIGESTS, 0, max_score)
+        end
+      end
+
      #
      # Find orphaned digests
      #
@@ -104,7 +122,7 @@
      end
 
      def elapsed_ms
-        time_source.call - start_time
+        time_source.call - start_source
      end
 
      #
@@ -229,6 +247,7 @@
 
        loop do
          range_start = (page * page_size) - deleted_size
+
          range_end = range_start + page_size - 1
          entries = conn.lrange(queue_key, range_start, range_end)
          page += 1
@@ -238,6 +257,9 @@
          entries.each(&block)
 
          deleted_size = initial_size - conn.llen(queue_key)
+
+          # The queue is growing, not shrinking, just keep looping
+          deleted_size = 0 if deleted_size.negative?
        end
      end
 
@@ -71,6 +71,16 @@ module SidekiqUniqueJobs # rubocop:disable Metrics/ModuleLength
     config.logger = other
   end
 
+  #
+  # Check if logging is enabled
+  #
+  #
+  # @return [true, false]
+  #
+  def logging?
+    config.logger_enabled
+  end
+
   #
   # Temporarily use another configuration and reset to the old config after yielding
   #
@@ -90,15 +90,6 @@ module Sidekiq
       super(options)
     end
 
-    #
-    # Clears all jobs for this worker and removes all locks
-    #
-    def clear_all
-      super
-
-      SidekiqUniqueJobs::Digests.new.delete_by_pattern("*", count: 10_000)
-    end
-
     #
     # Prepends deletion of locks to clear
     #
@@ -124,5 +115,21 @@
   module ClassMethods
     prepend Overrides::ClassMethods
   end
+
+  #
+  # Prepends singleton methods to Sidekiq::Worker
+  #
+  module SignletonOverrides
+    #
+    # Clears all jobs for this worker and removes all locks
+    #
+    def clear_all
+      super
+
+      SidekiqUniqueJobs::Digests.new.delete_by_pattern("*", count: 10_000)
+    end
+  end
+
+  singleton_class.prepend SignletonOverrides
 end
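The `SignletonOverrides` module above (the spelling is the gem's own) is prepended to the singleton class so that `clear_all` still reaches Sidekiq's original implementation via `super` before wiping this gem's locks; this is the fix for the override not being applied in #714. A minimal illustration of that `singleton_class.prepend` pattern, using hypothetical module names:

```ruby
# Base owns a class-level method we want to extend without redefining it.
module Base
  def self.clear_all
    "jobs cleared"
  end
end

# Prepending onto the singleton class places this module ahead of the
# original definition in the lookup chain, so `super` still reaches it.
module ClearAllOverride
  def clear_all
    "#{super} + locks deleted"
  end
end

Base.singleton_class.prepend ClearAllOverride
```

`Base.clear_all` now returns `"jobs cleared + locks deleted"`; unlike reopening the class and redefining the method, prepending keeps the original reachable through `super`.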
@@ -3,5 +3,5 @@
 module SidekiqUniqueJobs
   #
   # @return [String] the current SidekiqUniqueJobs version
-  VERSION = "7.1.25"
+  VERSION = "7.1.27"
 end
@@ -51,6 +51,16 @@ module SidekiqUniqueJobs
       @digests ||= SidekiqUniqueJobs::Digests.new
     end
 
+    #
+    # The collection of digests
+    #
+    #
+    # @return [SidekiqUniqueJobs::ExpiringDigests] the sorted set with expiring digests
+    #
+    def expiring_digests
+      @expiring_digests ||= SidekiqUniqueJobs::ExpiringDigests.new
+    end
+
     #
     # The collection of changelog entries
     #
@@ -8,7 +8,7 @@ module SidekiqUniqueJobs
   #
   # @author Mikael Henriksson <mikael@mhenrixon.com>
   module Web
-    def self.registered(app) # rubocop:disable Metrics/MethodLength, Metrics/AbcSize
+    def self.registered(app) # rubocop:disable Metrics/MethodLength, Metrics/AbcSize, Metrics/CyclomaticComplexity, Metrics/PerceivedComplexity
       app.helpers do
         include Web::Helpers
       end
@@ -49,8 +49,25 @@ module SidekiqUniqueJobs
         erb(unique_template(:locks))
       end
 
+      app.get "/expiring_locks" do
+        @filter         = params[:filter] || "*"
+        @filter         = "*" if @filter == ""
+        @count          = (params[:count] || 100).to_i
+        @current_cursor = params[:cursor]
+        @prev_cursor    = params[:prev_cursor]
+
+        @total_size, @next_cursor, @locks = expiring_digests.page(
+          cursor: @current_cursor,
+          pattern: @filter,
+          page_size: @count,
+        )
+
+        erb(unique_template(:locks))
+      end
+
       app.get "/locks/delete_all" do
         digests.delete_by_pattern("*", count: digests.count)
+        expiring_digests.delete_by_pattern("*", count: digests.count)
         redirect_to :locks
       end
 
@@ -63,6 +80,7 @@ module SidekiqUniqueJobs
 
       app.get "/locks/:digest/delete" do
         digests.delete_by_digest(params[:digest])
+        expiring_digests.delete_by_digest(params[:digest])
         redirect_to :locks
       end
 
@@ -82,8 +100,9 @@ begin
   require "sidekiq/web" unless defined?(Sidekiq::Web)
 
   Sidekiq::Web.register(SidekiqUniqueJobs::Web)
-  Sidekiq::Web.tabs["Locks"] = "locks"
-  Sidekiq::Web.tabs["Changelogs"] = "changelogs"
+  Sidekiq::Web.tabs["Locks"]          = "locks"
+  Sidekiq::Web.tabs["Expiring Locks"] = "expiring_locks"
+  Sidekiq::Web.tabs["Changelogs"]     = "changelogs"
   Sidekiq::Web.settings.locales << File.join(File.dirname(__FILE__), "locales")
 rescue NameError, LoadError => ex
   SidekiqUniqueJobs.logger.error(ex)
@@ -72,6 +72,7 @@ require "sidekiq_unique_jobs/sidekiq_unique_ext"
 require "sidekiq_unique_jobs/on_conflict"
 require "sidekiq_unique_jobs/changelog"
 require "sidekiq_unique_jobs/digests"
+require "sidekiq_unique_jobs/expiring_digests"
 
 require "sidekiq_unique_jobs/config"
 require "sidekiq_unique_jobs/sidekiq_unique_jobs"
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: sidekiq-unique-jobs
 version: !ruby/object:Gem::Version
-  version: 7.1.25
+  version: 7.1.27
 platform: ruby
 authors:
 - Mikael Henriksson
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2022-06-14 00:00:00.000000000 Z
+date: 2022-07-30 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: brpoplpush-redis_script
@@ -116,6 +116,7 @@ files:
 - lib/sidekiq_unique_jobs/deprecation.rb
 - lib/sidekiq_unique_jobs/digests.rb
 - lib/sidekiq_unique_jobs/exceptions.rb
+- lib/sidekiq_unique_jobs/expiring_digests.rb
 - lib/sidekiq_unique_jobs/job.rb
 - lib/sidekiq_unique_jobs/json.rb
 - lib/sidekiq_unique_jobs/key.rb
@@ -144,6 +145,7 @@ files:
 - lib/sidekiq_unique_jobs/lua/delete_job_by_digest.lua
 - lib/sidekiq_unique_jobs/lua/find_digest_in_queues.lua
 - lib/sidekiq_unique_jobs/lua/lock.lua
+- lib/sidekiq_unique_jobs/lua/lock_until_expired.lua
 - lib/sidekiq_unique_jobs/lua/locked.lua
 - lib/sidekiq_unique_jobs/lua/queue.lua
 - lib/sidekiq_unique_jobs/lua/reap_orphans.lua