sidekiq-unique-jobs 8.0.11 → 8.0.12
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +27 -7
- data/README.md +6 -6
- data/lib/sidekiq_unique_jobs/config.rb +14 -1
- data/lib/sidekiq_unique_jobs/job.rb +1 -1
- data/lib/sidekiq_unique_jobs/lock/base_lock.rb +7 -3
- data/lib/sidekiq_unique_jobs/lock/until_and_while_executing.rb +7 -4
- data/lib/sidekiq_unique_jobs/lock/until_executing.rb +1 -1
- data/lib/sidekiq_unique_jobs/lock_ttl.rb +6 -2
- data/lib/sidekiq_unique_jobs/locksmith.rb +19 -1
- data/lib/sidekiq_unique_jobs/lua/shared/_find_digest_in_process_set.lua +8 -3
- data/lib/sidekiq_unique_jobs/lua/shared/_find_digest_in_queues.lua +11 -0
- data/lib/sidekiq_unique_jobs/lua/shared/_find_digest_in_sorted_set.lua +5 -1
- data/lib/sidekiq_unique_jobs/lua/unlock.lua +20 -12
- data/lib/sidekiq_unique_jobs/orphans/ruby_reaper.rb +32 -4
- data/lib/sidekiq_unique_jobs/rspec/matchers/have_valid_sidekiq_options.rb +3 -1
- data/lib/sidekiq_unique_jobs/script/client.rb +11 -3
- data/lib/sidekiq_unique_jobs/script/lua_error.rb +2 -0
- data/lib/sidekiq_unique_jobs/script/scripts.rb +42 -46
- data/lib/sidekiq_unique_jobs/version.rb +1 -1
- metadata +3 -3
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: ac54a1a32a5e8e0d11d10907799a1cc495da0439868857db6f7516e2334e8a2e
+  data.tar.gz: '0998be2173de200c826873e1d7e96aceb72f495b3d0b0d3e316196c52e0844e2'
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: e10ce9fe4a23b720f8ecd701bc139ac5f5b3457c53ca6077996abce19d04003b283943fcbb4eeee22284b9df870f477e33c74852e3ec838d50aff28e19440c38
+  data.tar.gz: daeead0b5f5b95dcc22987ad6ec3c2b27118f4166b04c7c478ad790d8a0d10629388866cfcd26b341f2bbfd2e48f7d4b1210583b28ea9287788b23ac3333c659
data/CHANGELOG.md
CHANGED

@@ -1,13 +1,30 @@
 # Changelog
 
-## [
+## [v8.0.11](https://github.com/mhenrixon/sidekiq-unique-jobs/tree/v8.0.11) (2025-05-25)
 
-[Full Changelog](https://github.com/mhenrixon/sidekiq-unique-jobs/compare/v8.0.10...
+[Full Changelog](https://github.com/mhenrixon/sidekiq-unique-jobs/compare/v8.0.10...v8.0.11)
+
+**Implemented enhancements:**
+
+- chore: address recent rubocop changes [\#880](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/880) ([mhenrixon](https://github.com/mhenrixon))
+- feat\(digest\): allow modern algorithm [\#853](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/853) ([mhenrixon](https://github.com/mhenrixon))
+
+**Closed issues:**
+
+- Your paypal link doesn't work [\#876](https://github.com/mhenrixon/sidekiq-unique-jobs/issues/876)
+- Replace MD5 with SHA256+ [\#848](https://github.com/mhenrixon/sidekiq-unique-jobs/issues/848)
+- NoMethodError: undefined method `\[\]' for true:TrueClass [\#643](https://github.com/mhenrixon/sidekiq-unique-jobs/issues/643)
 
 **Merged pull requests:**
 
--
--
+- Add support for lock\_ttl to be a Proc/class method [\#879](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/879) ([brayden-onesignal](https://github.com/brayden-onesignal))
+- Move from Sidekiq 8 beta to released version [\#872](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/872) ([jukra](https://github.com/jukra))
+- update Reject\#kill\_with\_options? for Ruby 3 kwargs [\#868](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/868) ([stathis-alexander](https://github.com/stathis-alexander))
+- Remove redundant include to locales \(for Sidekiq 8\) [\#867](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/867) ([jukra](https://github.com/jukra))
+- Support for Sidekiq 8 [\#866](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/866) ([jukra](https://github.com/jukra))
+- 📝 Improve README.md [\#860](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/860) ([jaredsmithse](https://github.com/jaredsmithse))
+- mention ttl and timeout unit \(seconds\) in README [\#859](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/859) ([fwolfst](https://github.com/fwolfst))
+- Add a note to README on `schedule_in` option for `reschedule` conflict strategy [\#849](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/849) ([vittorius](https://github.com/vittorius))
 
 ## [v8.0.10](https://github.com/mhenrixon/sidekiq-unique-jobs/tree/v8.0.10) (2024-02-22)
 
@@ -18,6 +35,11 @@
 - until\_and\_while\_executing and lock\_ttl: jobs silently dropped [\#788](https://github.com/mhenrixon/sidekiq-unique-jobs/issues/788)
 - Slow evalsha causing timeouts [\#668](https://github.com/mhenrixon/sidekiq-unique-jobs/issues/668)
 
+**Merged pull requests:**
+
+- tweak changelog for 8.0.9 [\#836](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/836) ([Earlopain](https://github.com/Earlopain))
+- Add digest scores for faster deletes in sorted sets [\#835](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/835) ([ezekg](https://github.com/ezekg))
+
 ## [v7.1.33](https://github.com/mhenrixon/sidekiq-unique-jobs/tree/v7.1.33) (2024-02-12)
 
 [Full Changelog](https://github.com/mhenrixon/sidekiq-unique-jobs/compare/v8.0.9...v7.1.33)
@@ -512,7 +534,7 @@
 
 **Merged pull requests:**
 
-- Update docs [\#644](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/644) ([
+- Update docs [\#644](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/644) ([ruzig](https://github.com/ruzig))
 
 ## [v7.0.13](https://github.com/mhenrixon/sidekiq-unique-jobs/tree/v7.0.13) (2021-09-27)
 
@@ -2166,7 +2188,6 @@
 - Attempt to constantize String `worker_class` arguments passed to client middleware [\#17](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/17) ([disbelief](https://github.com/disbelief))
 - Compatibility with Sidekiq 2.12.1 Scheduled Jobs [\#16](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/16) ([lsimoneau](https://github.com/lsimoneau))
 - Allow worker to specify which arguments to include in uniquing hash [\#12](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/12) ([sax](https://github.com/sax))
-- Add support for unique when using Sidekiq's delay function [\#11](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/11) ([eduardosasso](https://github.com/eduardosasso))
 - Adding the unique prefix option [\#8](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/8) ([KensoDev](https://github.com/KensoDev))
 - Remove unnecessary log messages [\#7](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/7) ([marclennox](https://github.com/marclennox))
 
@@ -2180,7 +2201,6 @@
 
 **Merged pull requests:**
 
-- Fix multiple bugs, cleaned up dependencies, and added a feature [\#4](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/4) ([kemper-blinq](https://github.com/kemper-blinq))
 - Dependency on sidekiq 2.2.0 and up [\#3](https://github.com/mhenrixon/sidekiq-unique-jobs/pull/3) ([philostler](https://github.com/philostler))
 
 ## [v2.2.1](https://github.com/mhenrixon/sidekiq-unique-jobs/tree/v2.2.1) (2012-08-19)
data/README.md
CHANGED

@@ -197,7 +197,7 @@ A lock is created when `UntilExecuting.perform_async` is called. Then it is eith
 
 ```ruby
 class UntilExecuting
-  include Sidekiq::
+  include Sidekiq::Worker
 
   sidekiq_options lock: :until_executing
 
@@ -219,7 +219,7 @@ A lock is created when `UntilExecuted.perform_async` is called. Then it is eithe
 
 ```ruby
 class UntilExecuted
-  include Sidekiq::
+  include Sidekiq::Worker
 
   sidekiq_options lock: :until_executed
 
@@ -237,9 +237,9 @@ This lock behaves identically to the [Until Executed](#until-executed) except fo
 
 ```ruby
 class UntilExpired
-  include Sidekiq::
+  include Sidekiq::Worker
 
-  sidekiq_options lock: :until_expired, lock_ttl: 1.day
+  sidekiq_options lock: :until_expired, lock_ttl: 1.day.to_i
 
   def perform
     # Do work
@@ -255,7 +255,7 @@ This lock is a combination of two locks (`:until_executing` and `:while_executin
 
 ```ruby
 class UntilAndWhileExecutingWorker
-  include Sidekiq::
+  include Sidekiq::Worker
 
   sidekiq_options lock: :until_and_while_executing,
                   lock_timeout: 2,
@@ -277,7 +277,7 @@ These locks are put on a queue without any type of locking mechanism, the lockin
 
 ```ruby
 class WhileExecutingWorker
-  include Sidekiq::
+  include Sidekiq::Worker
 
   sidekiq_options lock: :while_executing,
                   lock_timeout: 2,
data/lib/sidekiq_unique_jobs/config.rb
CHANGED

@@ -31,6 +31,10 @@ module SidekiqUniqueJobs
   #
   # @author Mauro Berlanda <mauro.berlanda@gmail.com>
   class Config < ThreadSafeConfig
+    def initialize(*)
+      super
+      @redis_version_mutex = Mutex.new
+    end
     #
     # @return [Hash<Symbol, SidekiqUniqueJobs::Lock::BaseLock] all available queued locks
     LOCKS_WHILE_ENQUEUED = {
@@ -326,11 +330,20 @@ module SidekiqUniqueJobs
     #
     # The current version of redis
     #
+    # Thread-safe: Uses mutex to prevent multiple threads from fetching version simultaneously
     #
     # @return [String] a version string eg. `5.0.1`
     #
     def redis_version
-
+      # Fast path: if already fetched, return immediately without locking
+      return current_redis_version if current_redis_version != REDIS_VERSION
+
+      # Slow path: fetch version with mutex protection
+      @redis_version_mutex.synchronize do
+        # Double-check inside mutex in case another thread just fetched it
+        self.current_redis_version = SidekiqUniqueJobs.fetch_redis_version if current_redis_version == REDIS_VERSION
+      end
+
       current_redis_version
     end
   end
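The `redis_version` change above is the double-checked locking pattern: a lock-free fast path once the value is cached, and a mutex-guarded slow path that re-checks before fetching. A standalone sketch (the `VersionCache` class and its fetcher block are hypothetical stand-ins, not the gem's API):

```ruby
# Double-checked lazy initialization, in the shape of the Config#redis_version diff.
class VersionCache
  UNKNOWN = "0.0.0" # sentinel, standing in for the gem's REDIS_VERSION default

  attr_reader :fetch_count

  def initialize(&fetcher)
    @mutex = Mutex.new
    @version = UNKNOWN
    @fetcher = fetcher
    @fetch_count = 0
  end

  def version
    # Fast path: no locking once the version has been cached
    return @version if @version != UNKNOWN

    @mutex.synchronize do
      # Double-check: another thread may have fetched while we waited for the mutex
      if @version == UNKNOWN
        @fetch_count += 1
        @version = @fetcher.call
      end
    end
    @version
  end
end

cache = VersionCache.new { "7.2.4" } # pretend this block runs INFO against Redis
results = 8.times.map { Thread.new { cache.version } }.map(&:value)
```

Even with eight threads racing, the fetcher block runs exactly once; every later call takes the fast path without touching the mutex.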
data/lib/sidekiq_unique_jobs/lock/base_lock.rb
CHANGED

@@ -134,9 +134,13 @@ module SidekiqUniqueJobs
    def callback_safely
      callback&.call
      item[JID]
-    rescue StandardError
-      reflect(:after_unlock_callback_failed, item)
-      raise
+    rescue StandardError => ex
+      reflect(:after_unlock_callback_failed, item, ex)
+      # Don't re-raise: lock is already unlocked, can't rollback
+      # Re-raising would cause job retry with lock already released
+      # leading to potential double execution
+      log_warn("After unlock callback failed: #{ex.class} - #{ex.message}")
+      item[JID]
    end
 
    def strategy_for(origin)
data/lib/sidekiq_unique_jobs/lock/until_and_while_executing.rb
CHANGED

@@ -45,9 +45,11 @@ module SidekiqUniqueJobs
      else
        reflect(:unlock_failed, item)
      end
-    rescue
+    rescue StandardError
      reflect(:execution_failed, item)
-
+      # Re-acquire the "until" lock to prevent duplicates while job is in retry
+      # Use non-blocking lock attempt to avoid hanging on shutdown
+      locksmith.lock(wait: 0)
 
      raise
    end
@@ -56,9 +58,10 @@ module SidekiqUniqueJobs
 
    def ensure_relocked
      yield
-    rescue
+    rescue StandardError
      reflect(:execution_failed, item)
-
+      # Re-acquire the "until" lock to prevent duplicates while job is in retry
+      locksmith.lock(wait: 0)
 
      raise
    end
data/lib/sidekiq_unique_jobs/lock_ttl.rb
CHANGED

@@ -40,13 +40,15 @@ module SidekiqUniqueJobs
    #
    # Calculates the time until the job is scheduled starting from now
    #
+    # @note Ensures result is never negative to prevent TTL calculation issues
    #
-    # @return [Integer] the number of seconds until job is scheduled
+    # @return [Integer] the number of seconds until job is scheduled (>= 0)
    #
    def time_until_scheduled
      return 0 unless scheduled_at
 
-
+      # Clamp to 0 to prevent negative values if job is already overdue
+      [0, scheduled_at.to_i - Time.now.utc.to_i].max
    end
 
    # The time a job is scheduled
@@ -93,6 +95,8 @@ module SidekiqUniqueJobs
        ttl.call(item[ARGS])
      when Symbol
        job_class.send(ttl, item[ARGS])
+      else
+        raise ArgumentError, "#{ttl.class} is not supported for lock_ttl"
      end
    end
  end
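Both `lock_ttl` fixes above are easy to exercise in isolation. A simplified sketch (method names and signatures are local to this example; the gem derives these values from the job item hash):

```ruby
# Clamp the seconds-until-scheduled at zero, as in time_until_scheduled above.
def time_until_scheduled(scheduled_at, now = Time.now.utc)
  return 0 unless scheduled_at

  # An overdue job must never yield a negative TTL
  [0, scheduled_at.to_i - now.to_i].max
end

# Resolve lock_ttl whether it is numeric, a Proc, or a Symbol naming a
# class method - and raise on anything else instead of failing silently.
def resolve_lock_ttl(ttl, args, job_class = nil)
  case ttl
  when Integer then ttl
  when Proc    then ttl.call(args)
  when Symbol  then job_class.send(ttl, args)
  else raise ArgumentError, "#{ttl.class} is not supported for lock_ttl"
  end
end

now      = Time.at(1_000_000).utc
overdue  = time_until_scheduled(Time.at(999_000).utc, now)     # => 0, not -1000
upcoming = time_until_scheduled(Time.at(1_000_060).utc, now)   # => 60
proc_ttl = resolve_lock_ttl(->(args) { args.first * 2 }, [30]) # => 60
```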
data/lib/sidekiq_unique_jobs/locksmith.rb
CHANGED

@@ -34,6 +34,11 @@ module SidekiqUniqueJobs
    CLOCK_DRIFT_FACTOR = 0.01
    NETWORK_FACTOR = 0.04
 
+    #
+    # @return [Integer] Maximum wait time for blocking Redis operations (in seconds)
+    #   Prevents blocking web requests indefinitely when used in client middleware
+    MAX_BLOCKING_WAIT = 5
+
    #
    # @!attribute [r] key
    #   @return [Key] the key used for locking
@@ -81,9 +86,11 @@ module SidekiqUniqueJobs
    #
    # Deletes the lock regardless of if it has a pttl set
    #
+    # rubocop:disable Naming/PredicateMethod
    def delete!
      call_script(:delete, key.to_a, argv).to_i.positive?
    end
+    # rubocop:enable Naming/PredicateMethod
 
    #
    # Create a lock for the Sidekiq job
@@ -259,8 +266,9 @@ module SidekiqUniqueJobs
      # NOTE: When debugging, change .value to .value!
      primed_jid = Concurrent::Promises
                   .future(conn) { |red_con| pop_queued(red_con, timeout) }
-                   .value
+                   .value(concurrent_timeout) # Timeout to prevent indefinite blocking
 
+      # If promise times out, primed_jid will be nil
      handle_primed(primed_jid, &block)
    end
 
@@ -307,6 +315,16 @@ module SidekiqUniqueJobs
    def brpoplpush(conn, wait)
      # passing timeout 0 to brpoplpush causes it to block indefinitely
      raise InvalidArgument, "wait must be an integer" unless wait.is_a?(Integer)
+      raise InvalidArgument, "wait must be positive" if wait.negative?
+
+      # Cap the wait time to prevent blocking requests too long
+      # This is especially important when called from client middleware
+      if wait > MAX_BLOCKING_WAIT
+        log_debug(
+          "Capping blocking wait from #{wait}s to #{MAX_BLOCKING_WAIT}s to prevent long request blocks",
+        )
+        wait = MAX_BLOCKING_WAIT
+      end
 
      conn.blmove(key.queued, key.primed, "RIGHT", "LEFT", wait)
    end
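The `brpoplpush` guard above condenses to a small pure function: validate the wait value, then clamp it. A sketch using plain `ArgumentError` in place of the gem's `InvalidArgument`:

```ruby
MAX_BLOCKING_WAIT = 5 # seconds, mirroring the constant introduced above

# Validate and cap a blocking-wait value before handing it to a blocking
# Redis command such as BLMOVE.
def capped_wait(wait)
  raise ArgumentError, "wait must be an integer" unless wait.is_a?(Integer)
  raise ArgumentError, "wait must be positive" if wait.negative?

  # Small waits pass through untouched; anything larger is clamped so a
  # client-side lock can never park a web request on Redis past the cap.
  wait > MAX_BLOCKING_WAIT ? MAX_BLOCKING_WAIT : wait
end
```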
data/lib/sidekiq_unique_jobs/lua/shared/_find_digest_in_process_set.lua
CHANGED

@@ -1,9 +1,11 @@
 local function find_digest_in_process_set(digest, threshold)
   local process_cursor = 0
   local job_cursor = 0
-  local pattern = "*" .. digest .. "*"
   local found = false
 
+  -- Cache digest transformation outside the loop - major performance win!
+  local digest_without_run = string.gsub(digest, ':RUN', '')
+
   log_debug("Searching in process list",
             "for digest:", digest,
             "cursor:", process_cursor)
@@ -26,8 +28,11 @@ local function find_digest_in_process_set(digest, threshold)
       log_debug("No entries in:", workers_key)
     else
       for i = 1, #jobs, 2 do
-        local jobstr = jobs[i +1]
-
+        local jobstr = jobs[i + 1]
+        -- Use cached digest transformation - avoid repeated string.gsub on digest
+        local jobstr_without_run = string.gsub(jobstr, ':RUN', '')
+
+        if string.find(jobstr_without_run, digest_without_run) then
           log_debug("Found digest", digest, "in:", workers_key)
           found = true
           break
data/lib/sidekiq_unique_jobs/lua/shared/_find_digest_in_queues.lua
CHANGED

@@ -32,8 +32,19 @@ local function find_digest_in_queues(digest)
           break
         end
       end
+
+      -- Short-circuit: Stop scanning this queue's batches after finding match
+      if found then
+        break
+      end
+
       index = index + per
     end
+
+    -- Short-circuit: Stop scanning remaining queues after finding match
+    if found then
+      break
+    end
   end
 
   cursor = next_cursor
data/lib/sidekiq_unique_jobs/lua/shared/_find_digest_in_sorted_set.lua
CHANGED

@@ -1,12 +1,16 @@
 local function find_digest_in_sorted_set(name, digest)
   local cursor = 0
-  local count =
+  local count = 50
   local pattern = "*" .. digest .. "*"
   local found = false
 
   log_debug("searching in:", name,
             "for digest:", digest,
             "cursor:", cursor)
+
+  -- Note: We must use pattern matching because sorted sets contain job JSON strings,
+  -- not just digests. The digest is embedded in the JSON as the "lock_digest" field.
+  -- ZSCORE won't work here as we need to search within the member content.
   repeat
     local pagination = redis.call("ZSCAN", name, cursor, "MATCH", pattern, "COUNT", count)
     local next_cursor = pagination[1]
data/lib/sidekiq_unique_jobs/lua/unlock.lua
CHANGED

@@ -42,12 +42,25 @@ local locked_count = redis.call("HLEN", locked)
 --------- Begin unlock.lua ---------
 log_debug("BEGIN unlock digest:", digest, "(job_id: " .. job_id ..")")
 
-
-if
-
+-- Always clean up this job's queued/primed entries first
+-- This prevents orphaned entries even if job doesn't hold the lock
+log_debug("LREM", queued, -1, job_id)
+redis.call("LREM", queued, -1, job_id)
+
+log_debug("LREM", primed, -1, job_id)
+redis.call("LREM", primed, -1, job_id)
+
+-- Check if this job actually holds the lock
+local holds_lock = redis.call("HEXISTS", locked, job_id) == 1
+log_debug("HEXISTS", locked, job_id, "=>", holds_lock)
+
+if not holds_lock then
+  -- Job doesn't hold the lock - check if this is an orphaned lock scenario
   if queued_count == 0 and primed_count == 0 and locked_count == 0 then
-    log_debug("Orphaned lock")
+    log_debug("Orphaned lock - cleaning up")
+    -- Continue with cleanup below
   else
+    -- Other jobs still hold locks for this digest
     local result = ""
     for i,v in ipairs(redis.call("HKEYS", locked)) do
       result = result .. v .. ","
@@ -55,17 +68,12 @@ if redis.call("HEXISTS", locked, job_id) == 0 then
     result = locked .. " (" .. result .. ")"
     log("Yielding to: " .. result)
     log_debug("Yielding to", result, locked, "by job", job_id)
-    return
+    -- Still return job_id to indicate cleanup completed
+    -- Caller already removed from queued/primed
+    return job_id
   end
 end
 
--- Just in case something went wrong
-log_debug("LREM", queued, -1, job_id)
-redis.call("LREM", queued, -1, job_id)
-
-log_debug("LREM", primed, -1, job_id)
-redis.call("LREM", primed, -1, job_id)
-
 local redis_version = toversion(redisversion)
 
 if lock_type ~= "until_expired" then
data/lib/sidekiq_unique_jobs/orphans/ruby_reaper.rb
CHANGED

@@ -144,7 +144,10 @@ module SidekiqUniqueJobs
      # 1. It checks the scheduled set
      # 2. It checks the retry set
      # 3. It goes through all queues
+      # 4. It checks active processes
      #
+      # Note: Uses early returns for short-circuit evaluation.
+      # We can't pipeline ZSCAN operations as they're iterative.
      #
      # @param [String] digest the digest to search for
      #
@@ -152,7 +155,17 @@ module SidekiqUniqueJobs
      # @return [false] when no job was found for this digest
      #
      def belongs_to_job?(digest)
-
+        # Short-circuit: Return immediately if found in scheduled set
+        return true if scheduled?(digest)
+
+        # Short-circuit: Return immediately if found in retry set
+        return true if retried?(digest)
+
+        # Short-circuit: Return immediately if found in any queue
+        return true if enqueued?(digest)
+
+        # Last check: active processes
+        active?(digest)
      end
 
      #
@@ -218,10 +231,12 @@ module SidekiqUniqueJobs
        workers.each_pair do |_tid, job|
          next unless (item = safe_load_json(job))
 
-
+          next unless (raw_payload = item[PAYLOAD])
+
+          payload = safe_load_json(raw_payload)
 
          return true if match?(digest, payload[LOCK_DIGEST])
-          return true if considered_active?(payload[CREATED_AT])
+          return true if considered_active?(time_from_payload_timestamp(payload[CREATED_AT]).to_f)
        end
      end
 
@@ -239,6 +254,15 @@ module SidekiqUniqueJobs
        max_score < time_f
      end
 
+      def time_from_payload_timestamp(timestamp)
+        if timestamp.is_a?(Float)
+          # < Sidekiq 8, timestamps were stored as fractional seconds since the epoch
+          Time.at(timestamp).utc
+        else
+          Time.at(timestamp / 1000, timestamp % 1000, :millisecond)
+        end
+      end
+
      #
      # Loops through all the redis queues and yields them one by one
      #
@@ -296,6 +320,9 @@ module SidekiqUniqueJobs
      #
      # Checks a sorted set for the existance of this digest
      #
+      # Note: Must use pattern matching because sorted sets contain job JSON strings,
+      # not just digests. The digest is embedded in the JSON as the "lock_digest" field.
+      # ZSCORE won't work here as we need to search within the member content.
      #
      # @param [String] key the key for the sorted set
      # @param [String] digest the digest to scan for
@@ -304,7 +331,8 @@ module SidekiqUniqueJobs
      # @return [false] when missing
      #
      def in_sorted_set?(key, digest)
-
+        # Increased count from 1 to 50 for better throughput
+        conn.zscan(key, match: "*#{digest}*", count: 50).to_a.any?
      end
    end
    # rubocop:enable Metrics/ClassLength
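The `time_from_payload_timestamp` helper above exists because Sidekiq changed its payload timestamp format: before Sidekiq 8, `created_at` was a Float of fractional epoch seconds; from Sidekiq 8 it is an Integer of epoch milliseconds. A standalone sketch of the same branching (with `.utc` applied to both branches for comparability); both paths should agree on the instant:

```ruby
# Normalize a Sidekiq payload timestamp to a Time, per the reaper diff above.
def time_from_payload_timestamp(timestamp)
  if timestamp.is_a?(Float)
    # < Sidekiq 8: fractional seconds since the epoch
    Time.at(timestamp).utc
  else
    # >= Sidekiq 8: integer milliseconds since the epoch
    Time.at(timestamp / 1000, timestamp % 1000, :millisecond).utc
  end
end

legacy = time_from_payload_timestamp(1_700_000_000.25)  # Float, seconds
modern = time_from_payload_timestamp(1_700_000_000_250) # Integer, milliseconds
```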
data/lib/sidekiq_unique_jobs/rspec/matchers/have_valid_sidekiq_options.rb
CHANGED

@@ -43,9 +43,11 @@ module SidekiqUniqueJobs
      #
      # @return [HaveValidSidekiqOptions] an RSpec matcher
      #
-
+      # rubocop:disable Naming/PredicatePrefix
+      def have_valid_sidekiq_options(*args)
        HaveValidSidekiqOptions.new(*args)
      end
+      # rubocop:enable Naming/PredicatePrefix
    end
  end
end
data/lib/sidekiq_unique_jobs/script/client.rb
CHANGED

@@ -27,20 +27,26 @@ module SidekiqUniqueJobs
      @scripts = Scripts.fetch(config.scripts_path)
    end
 
+    #
+    # Maximum number of retries for script execution errors
+    #
+    MAX_RETRIES = 3
+
    #
    # Execute a lua script with the provided script_name
    #
    # @note this method is recursive if we need to load a lua script
-    #   that wasn't previously loaded.
+    #   that wasn't previously loaded. Limited to MAX_RETRIES to prevent stack overflow.
    #
    # @param [Symbol] script_name the name of the script to execute
    # @param [Redis] conn the redis connection to use for execution
    # @param [Array<String>] keys script keys
    # @param [Array<Object>] argv script arguments
+    # @param [Integer] retries number of retries remaining (internal use)
    #
    # @return value from script
    #
-    def execute(script_name, conn, keys: [], argv: [])
+    def execute(script_name, conn, keys: [], argv: [], retries: MAX_RETRIES)
      result, elapsed = timed do
        scripts.execute(script_name, conn, keys: keys, argv: argv)
      end
@@ -48,8 +54,10 @@ module SidekiqUniqueJobs
      logger.debug("Executed #{script_name}.lua in #{elapsed}ms")
      result
    rescue ::RedisClient::CommandError => ex
+      raise if retries <= 0
+
      handle_error(script_name, conn, ex) do
-        execute(script_name, conn, keys: keys, argv: argv)
+        execute(script_name, conn, keys: keys, argv: argv, retries: retries - 1)
      end
    end
 
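The retry budget in `execute` above bounds what used to be open-ended recursion. A generic sketch of the same shape (names are hypothetical; the real rescue is for `RedisClient::CommandError`, with script reloading happening inside `handle_error`):

```ruby
MAX_SCRIPT_RETRIES = 3

# Retry a failing operation at most MAX_SCRIPT_RETRIES times by threading a
# shrinking budget through each recursive call, as in Script::Client#execute.
def execute_with_budget(attempts, retries: MAX_SCRIPT_RETRIES, &operation)
  attempts << retries # record the budget seen by each attempt
  operation.call
rescue RuntimeError
  raise if retries <= 0 # budget exhausted: surface the error to the caller

  # A recovery step (e.g. re-loading the Lua script) would go here
  execute_with_budget(attempts, retries: retries - 1, &operation)
end

attempts = []
begin
  execute_with_budget(attempts) { raise "NOSCRIPT No matching script" }
rescue RuntimeError
  exhausted = true
end
```

An operation that fails on every attempt is tried exactly `MAX_SCRIPT_RETRIES + 1` times, then the error propagates instead of recursing without bound.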
data/lib/sidekiq_unique_jobs/script/scripts.rb
CHANGED

@@ -11,42 +11,16 @@ module SidekiqUniqueJobs
      SCRIPT_PATHS = Concurrent::Map.new
 
      #
-      # Fetch a scripts configuration for path
+      # Fetch or create a scripts configuration for path
      #
-      #
-      #
-      # @return [Scripts] a collection of scripts
-      #
-      def self.fetch(root_path)
-        if (scripts = SCRIPT_PATHS.get(root_path))
-          return scripts
-        end
-
-        create(root_path)
-      end
-
-      #
-      # Create a new scripts collection based on path
+      # Uses Concurrent::Map#fetch_or_store for thread-safe lazy initialization
      #
      # @param [Pathname] root_path the path to scripts
      #
      # @return [Scripts] a collection of scripts
      #
-      def self.
-
-        store(scripts)
-      end
-
-      #
-      # Store the scripts collection in memory
-      #
-      # @param [Scripts] scripts the path to scripts
-      #
-      # @return [Scripts] the scripts instance that was stored
-      #
-      def self.store(scripts)
-        SCRIPT_PATHS.put(scripts.root_path, scripts)
-        scripts
+      def self.fetch(root_path)
+        SCRIPT_PATHS.fetch_or_store(root_path) { new(root_path) }
      end
 
      #
@@ -66,35 +40,57 @@ module SidekiqUniqueJobs
        @root_path = path
      end
 
+      #
+      # Fetch or load a script by name
+      #
+      # Uses Concurrent::Map#fetch_or_store for thread-safe lazy loading
+      #
+      # @param [Symbol, String] name the script name
+      # @param [Redis] conn the redis connection
+      #
+      # @return [Script] the loaded script
+      #
      def fetch(name, conn)
-
-          return script
-        end
-
-        load(name, conn)
+        scripts.fetch_or_store(name.to_sym) { load(name, conn) }
      end
 
+      #
+      # Load a script from disk, store in Redis, and cache in memory
+      #
+      # @param [Symbol, String] name the script name
+      # @param [Redis] conn the redis connection
+      #
+      # @return [Script] the loaded script
+      #
      def load(name, conn)
        script = Script.load(name, root_path, conn)
        scripts.put(name.to_sym, script)
-
        script
      end
 
+      #
+      # Delete a script from the collection
+      #
+      # @param [Script, Symbol, String] script the script or script name to delete
+      #
+      # @return [Script, nil] the deleted script
+      #
      def delete(script)
-
-
-        else
-          scripts.delete(script.to_sym)
-        end
+        key = script.is_a?(Script) ? script.name : script.to_sym
+        scripts.delete(key)
      end
 
+      #
+      # Kill a running Redis script
+      #
+      # @param [Redis] conn the redis connection
+      #
+      # @return [String] Redis response
+      #
      def kill(conn)
-
-
-
-        conn.script(:kill)
-      end
+        # Handle both namespaced and non-namespaced Redis connections
+        redis = conn.respond_to?(:namespace) ? conn.redis : conn
+        redis.script(:kill)
      end
 
      #
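The Scripts refactor above replaces a racy get-then-put with a single atomic `fetch_or_store`. A Mutex-backed stand-in (a sketch, not the gem's code; `Concurrent::Map` provides a similar primitive without a global lock) makes the contract visible: the block runs at most once per key, even under concurrent access:

```ruby
# Minimal fetch_or_store with one coarse lock around lookup and insert.
class AtomicCache
  def initialize
    @mutex = Mutex.new
    @store = {}
  end

  def fetch_or_store(key)
    @mutex.synchronize do
      # Lookup and insert happen under the same lock, so two callers can
      # never both miss the lookup and both run the build block.
      return @store[key] if @store.key?(key)

      @store[key] = yield
    end
  end
end

builds  = Hash.new(0)
cache   = AtomicCache.new
results = 10.times.map do
  Thread.new { cache.fetch_or_store(:unlock) { builds[:unlock] += 1; "cached-script" } }
end.map(&:value)
```

With the old get-then-put pattern, two threads could both see an empty cache and both build; here the second caller always observes the first caller's stored value.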
metadata
CHANGED

@@ -1,13 +1,13 @@
 --- !ruby/object:Gem::Specification
 name: sidekiq-unique-jobs
 version: !ruby/object:Gem::Version
-  version: 8.0.
+  version: 8.0.12
 platform: ruby
 authors:
 - Mikael Henriksson
 bindir: bin
 cert_chain: []
-date:
+date: 1980-01-02 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: concurrent-ruby
@@ -221,7 +221,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
 - !ruby/object:Gem::Version
   version: '0'
 requirements: []
-rubygems_version: 3.
+rubygems_version: 3.7.2
 specification_version: 4
 summary: Sidekiq middleware that prevents duplicates jobs
 test_files: []