sidekiq-unique-jobs 6.0.0.rc7 → 6.0.0.rc8


checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: c90e3f7dec1b958a7516b7fe9aa703622abba4e7713c1e18d3022c6f853822b2
- data.tar.gz: b3597746d75ce16841f2dc91bd5ba1377850f1341e4bece3478f803f61840979
+ metadata.gz: e9d567f37cb10954057a83832bfcaba08b50ee5306c0acd5072f00c2cecbc9be
+ data.tar.gz: a76383d74c088f76fb41a56166b5fec7d4cbec89ec4fe3a5eb82f831b18b43e2
  SHA512:
- metadata.gz: fd3fb117d369c9e5ce97d5ea175813e50f76ae59c41cc6aaf91accc9bc613a0cefffd8d20420d854a7378125b0108ddef603011bc0fede46fa0addd7785ef057
- data.tar.gz: 9d416d37175611a6de8d8dbe9e416218f5439a9760577d7b337df7b0b8f21da2d35c8a0a4ba4846d8490fe26fa467f21f258c5645701cf81fb9e87d276243434
+ metadata.gz: 41a3ebbddb28fd0e3b60493c481b454a8f5562d4e0fd69f20ee19c72cf68d90ab0e8c7492251fc0b19d6b3fb2693898b54c2a04198651150c7f4687d7c71b7cd
+ data.tar.gz: e62237661c82579a110af614781bbfaf4d0a7431c484d793c6032428e9158b55df7095df7848044b412edd387478fbce808432242736b16aaab8efd6ab8c0c5f
data/.reek.yml CHANGED
@@ -77,3 +77,4 @@ detectors:
  TooManyMethods:
    exclude:
    - SidekiqUniqueJobs::Locksmith
+   - SidekiqUniqueJobs::Lock::BaseLock
data/CHANGELOG.md CHANGED
@@ -13,6 +13,7 @@
  - Totally delete the hash that was growing out of proportion
  - Adds a sidekiq web extension for viewing and deleting unique digests
  - Renamed the configuration `unique:` to `lock:` (still backwards compatible)
+ - Added some very simplistic conflict strategies.

  ## v5.1.0

data/README.md CHANGED
@@ -9,23 +9,30 @@
  * [Support Me](#support-me)
  * [General Information](#general-information)
  * [Options](#options)
- * [Lock Expiration](#lock-expiration)
- * [Lock Timeout](#lock-timeout)
- * [Unique Across Queues](#unique-across-queues)
- * [Unique Across Workers](#unique-across-workers)
+ * [Lock Expiration](#lock-expiration)
+ * [Lock Timeout](#lock-timeout)
+ * [Unique Across Queues](#unique-across-queues)
+ * [Unique Across Workers](#unique-across-workers)
  * [Locks](#locks)
- * [Until Executing](#until-executing)
- * [Until Executed](#until-executed)
- * [Until Timeout](#until-timeout)
- * [Unique Until And While Executing](#unique-until-and-while-executing)
- * [While Executing](#while-executing)
+ * [Until Executing](#until-executing)
+ * [Until Executed](#until-executed)
+ * [Until Timeout](#until-timeout)
+ * [Unique Until And While Executing](#unique-until-and-while-executing)
+ * [While Executing](#while-executing)
  * [Conflict Strategy](#conflict-strategy)
+ * [Log](#log)
+ * [Raise](#raise)
+ * [Reject](#reject)
+ * [Replace](#replace)
+ * [Reschedule](#reschedule)
  * [Usage](#usage)
- * [Finer Control over Uniqueness](#finer-control-over-uniqueness)
- * [After Unlock Callback](#after-unlock-callback)
- * [Logging](#logging)
+ * [Finer Control over Uniqueness](#finer-control-over-uniqueness)
+ * [After Unlock Callback](#after-unlock-callback)
+ * [Logging](#logging)
  * [Debugging](#debugging)
- * [Sidekiq Web](#sidekiq-web)
+ * [Sidekiq Web](#sidekiq-web)
+ * [Show Unique Digests](#show-unique-digests)
+ * [Show keys for digest](#show-keys-for-digest)
  * [Communication](#communication)
  * [Testing](#testing)
  * [Contributing](#contributing)
@@ -108,7 +115,7 @@ This configuration option is slightly misleading. It doesn't disregard the queue
  class Worker
    include Sidekiq::Worker

-   sidekiq_options: unique_across_queues: true, queue: 'default'
+   sidekiq_options unique_across_queues: true, queue: 'default'

    def perform(args); end
  end
@@ -124,7 +131,7 @@ This configuration option is slightly misleading. It doesn't disregard the worke
  class WorkerOne
    include Sidekiq::Worker

-   sidekiq_options: unique_across_workers: true, queue: 'default'
+   sidekiq_options unique_across_workers: true, queue: 'default'

    def perform(args); end
  end
@@ -132,7 +139,7 @@ end
  class WorkerTwo
    include Sidekiq::Worker

-   sidekiq_options: unique_across_workers: true, queue: 'default'
+   sidekiq_options unique_across_workers: true, queue: 'default'

    def perform(args); end
  end
@@ -216,6 +223,12 @@ Decides how we handle conflict. We can either reject the job to the dead queue o

  The last one is log, which can be used with the locks `UntilExecuted` and `UntilExpired`. Now we write a log entry saying the job could not be pushed because it is a duplicate of another job with the same arguments.

+ ### Log
+
+ This strategy is intended to be used with `UntilExecuted` and `UntilExpired`. It will log a line noting that this job is a duplicate of another.
+
+ `sidekiq_options lock: :until_executed, on_conflict: :log`
+
  ### Raise

  This strategy is intended to be used with `WhileExecuting`. Basically it will allow us to let the server process crash with a specific error message and be retried without messing up the Sidekiq stats.
@@ -228,17 +241,22 @@ This strategy is intended to be used with `WhileExecuting` and will push the job

  `sidekiq_options lock: :while_executing, on_conflict: :reject`

- ### Reschedule
+ ### Replace

- This strategy is intended to be used with `WhileExecuting` and will delay the job to be tried again in 5 seconds. This will mess up the sidekiq stats but will prevent exceptions from being logged and confuse your sysadmins.
+ This strategy is intended to be used with client locks like `UntilExecuted`.
+ It will delete any existing job for these arguments from the retry set, the
+ schedule set and the queue, and then attempt to take the lock again.

- `sidekiq_options lock: :while_executing, on_conflict: :reschedule`
+ This is slightly dangerous and should probably only be used for jobs that are
+ always scheduled in the future. Currently the lock is only retried once.

- ### Log
+ `sidekiq_options lock: :until_executed, on_conflict: :replace`

- This strategy is intended to be used with `UntilExecuted` and `UntilExpired`. It will log a line about that this is job is a duplicate of another.
+ ### Reschedule

- `sidekiq_options lock: :until_executed, on_conflict: :log`
+ This strategy is intended to be used with `WhileExecuting` and will delay the job so that it is tried again in 5 seconds. This will mess up the Sidekiq stats but will prevent exceptions from being logged and from confusing your sysadmins.
+
+ `sidekiq_options lock: :while_executing, on_conflict: :reschedule`
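
A minimal worker sketch tying the strategies above together. The worker names and job arguments are hypothetical; only the `sidekiq_options` combinations come from the documentation in this hunk.

```ruby
require 'sidekiq'

# Hypothetical workers illustrating the on_conflict strategies documented above.
class ImportWorker
  include Sidekiq::Worker
  # Duplicate pushes are logged and dropped while the lock is held.
  sidekiq_options lock: :until_executed, on_conflict: :log

  def perform(account_id); end
end

class BillingWorker
  include Sidekiq::Worker
  # A conflicting job is sent to the dead set instead of raising.
  sidekiq_options lock: :while_executing, on_conflict: :reject

  def perform(invoice_id); end
end

class ReminderWorker
  include Sidekiq::Worker
  # Any job already queued or scheduled with the same arguments is deleted,
  # then the lock is attempted once more.
  sidekiq_options lock: :until_executed, on_conflict: :replace

  def perform(user_id, remind_at); end
end
```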

  ## Usage

@@ -379,6 +397,44 @@ SidekiqUniqueJobs.configure do |config|
  end
  ```

+ If you truly want to test the Sidekiq client push you could do something like the example below. Note that it will only work for the locks that are taken when the client pushes the job to Redis (UntilExecuted, UntilAndWhileExecuting and UntilExpired).
+
+ ```ruby
+ RSpec.describe Workers::CoolOne do
+   before do
+     SidekiqUniqueJobs.config.enabled = false
+   end
+
+   # ... your tests that don't test uniqueness
+
+   context 'when Sidekiq::Testing.disabled?' do
+     before do
+       Sidekiq::Testing.disable!
+       Sidekiq.redis(&:flushdb)
+     end
+
+     after do
+       Sidekiq.redis(&:flushdb)
+     end
+
+     it 'prevents duplicate jobs from being scheduled' do
+       SidekiqUniqueJobs.use_config(enabled: true) do
+         expect(described_class.perform_async(1)).not_to eq(nil)
+         expect(described_class.perform_async(1)).to eq(nil)
+       end
+     end
+   end
+ end
+ ```
+
+ I would strongly suggest you let this gem test uniqueness. If you care about how the gem is integration tested, have a look at the following specs:
+
+ - [spec/integration/sidekiq_unique_jobs/lock/until_and_while_executing_spec.rb](https://github.com/mhenrixon/sidekiq-unique-jobs/blob/master/spec/integration/sidekiq_unique_jobs/lock/until_and_while_executing_spec.rb)
+ - [spec/integration/sidekiq_unique_jobs/lock/until_executed_spec.rb](https://github.com/mhenrixon/sidekiq-unique-jobs/blob/master/spec/integration/sidekiq_unique_jobs/lock/until_executed_spec.rb)
+ - [spec/integration/sidekiq_unique_jobs/lock/until_expired_spec.rb](https://github.com/mhenrixon/sidekiq-unique-jobs/blob/master/spec/integration/sidekiq_unique_jobs/lock/until_expired_spec.rb)
+ - [spec/integration/sidekiq_unique_jobs/lock/while_executing_reject_spec.rb](https://github.com/mhenrixon/sidekiq-unique-jobs/blob/master/spec/integration/sidekiq_unique_jobs/lock/while_executing_reject_spec.rb)
+ - [spec/integration/sidekiq_unique_jobs/lock/while_executing_spec.rb](https://github.com/mhenrixon/sidekiq-unique-jobs/blob/master/spec/integration/sidekiq_unique_jobs/lock/while_executing_spec.rb)
+
  ## Contributing

  1. Fork it
@@ -22,10 +22,12 @@ module SidekiqUniqueJobs
  # Will call a conflict strategy if lock can't be achieved.
  # @return [String] the sidekiq job id
  def lock
+   @attempt = 0
+
    if (token = locksmith.lock(item[LOCK_TIMEOUT_KEY]))
      token
    else
-     strategy.call
+     call_strategy
    end
  end

@@ -62,6 +64,15 @@ module SidekiqUniqueJobs

  private

+ def call_strategy
+   @attempt += 1
+   strategy.call { lock if replace? }
+ end
+
+ def replace?
+   strategy.replace? && attempt < 2
+ end
+
  # The sidekiq job hash
  # @return [Hash] the Sidekiq job hash
  attr_reader :item
@@ -74,6 +85,10 @@ module SidekiqUniqueJobs
  # @return [Proc] the callback to use after unlock
  attr_reader :callback

+ # The current attempt to lock the job
+ # @return [Integer] the numerical value of the attempt
+ attr_reader :attempt
+
  # The interface to the locking mechanism
  # @return [SidekiqUniqueJobs::Locksmith]
  def locksmith
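
To make the new control flow easier to follow, here is a hedged, self-contained sketch of the attempt-guarded retry introduced in these hunks. The `Fake*` classes and `TinyLock` are illustrative stand-ins invented for this example, not part of the gem; only the `lock` / `call_strategy` / `replace?` shape comes from the diff. The only strategy shown in this diff that yields back into the block is `OnConflict::Replace`, further down.

```ruby
# frozen_string_literal: true

# Pretends the digest is "taken" until the conflicting job has been removed.
class FakeLocksmith
  def initialize(taken:)
    @taken = taken
  end

  def release!
    @taken = false
  end

  def lock(_timeout)
    @taken ? nil : 'jid-123'
  end
end

# Mirrors OnConflict::Replace: remove the conflicting job, then yield so the
# caller can try the lock again.
class FakeReplaceStrategy
  def initialize(locksmith)
    @locksmith = locksmith
  end

  def replace?
    true
  end

  def call
    @locksmith.release! # stands in for delete_job_by_digest + delete_lock
    yield if block_given?
  end
end

class TinyLock
  def initialize(locksmith, strategy)
    @locksmith = locksmith
    @strategy  = strategy
  end

  # Same shape as the lock method in the hunks above: try the lock, otherwise
  # hand the conflict to the strategy, which may yield back to retry.
  def lock(timeout = nil)
    @attempt = 0
    @locksmith.lock(timeout) || call_strategy(timeout)
  end

  private

  def call_strategy(timeout)
    @attempt += 1
    @strategy.call { lock(timeout) if replace? }
  end

  # Matches replace? in the diff: only retry while this is the first failed attempt.
  def replace?
    @strategy.replace? && @attempt < 2
  end
end

locksmith = FakeLocksmith.new(taken: true)
puts TinyLock.new(locksmith, FakeReplaceStrategy.new(locksmith)).lock
# => jid-123 (the conflicting job was replaced, then the lock succeeded)
```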
@@ -5,6 +5,7 @@ require_relative 'on_conflict/null_strategy'
  require_relative 'on_conflict/log'
  require_relative 'on_conflict/raise'
  require_relative 'on_conflict/reject'
+ require_relative 'on_conflict/replace'
  require_relative 'on_conflict/reschedule'

  module SidekiqUniqueJobs
@@ -0,0 +1,37 @@
+ # frozen_string_literal: true
+
+ module SidekiqUniqueJobs
+   module OnConflict
+     # Strategy to replace the original job on conflict
+     #
+     # @author Mikael Henriksson <mikael@zoolutions.se>
+     class Replace < OnConflict::Strategy
+       attr_reader :queue, :unique_digest
+
+       # @param [Hash] item sidekiq job hash
+       def initialize(item)
+         super
+         @queue         = item[QUEUE_KEY]
+         @unique_digest = item[UNIQUE_DIGEST_KEY]
+       end
+
+       # Replace the old job in the queue
+       # @yield to retry the lock after deleting the old one
+       def call(&block)
+         return unless delete_job_by_digest
+         delete_lock
+         block&.call
+       end
+
+       # Delete the job from either the schedule set, the retry set or the queue
+       def delete_job_by_digest
+         Scripts.call(:delete_job_by_digest, nil, keys: [queue, unique_digest])
+       end
+
+       # Delete the keys belonging to the job
+       def delete_lock
+         Scripts.call(:delete_by_digest, nil, keys: [UNIQUE_SET, unique_digest])
+       end
+     end
+   end
+ end
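
A hedged usage sketch of the class above, exercising it outside the middleware. The string keys in the item hash are assumptions about what the gem's `QUEUE_KEY` and `UNIQUE_DIGEST_KEY` constants resolve to, the digest value is made up, and running it requires a reachable Redis.

```ruby
require 'sidekiq_unique_jobs'

# Hypothetical job hash; in real use the middleware passes the full Sidekiq item.
item = {
  'queue'         => 'default',
  'unique_digest' => 'uniquejobs:0123456789abcdef',
}

strategy = SidekiqUniqueJobs::OnConflict::Replace.new(item)

# Deletes any queued, scheduled or retried job carrying that digest, drops the
# lock keys, then yields so the caller can take the lock again.
strategy.call { puts 'conflicting job removed - lock can be retried' }
```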
@@ -23,6 +23,10 @@ module SidekiqUniqueJobs
  def call
    fail NotImplementedError, 'needs to be implemented in child class'
  end
+
+ def replace?
+   is_a?(Replace)
+ end
  end
  end
  end
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module SidekiqUniqueJobs
-   VERSION = '6.0.0.rc7'
+   VERSION = '6.0.0.rc8'
  end
@@ -11,7 +11,7 @@ local run_grabbed_key = unique_digest .. ':RUN:GRABBED'
  local run_available_key = unique_digest .. ':RUN:AVAILABLE'
  local run_version_key = unique_digest .. ':RUN:VERSION'

- redis.call('SREM', unique_keys, unique_digest)
+ local count = redis.call('SREM', unique_keys, unique_digest)
  redis.call('DEL', exists_key)
  redis.call('DEL', grabbed_key)
  redis.call('DEL', available_key)
@@ -20,3 +20,5 @@ redis.call('DEL', run_exists_key)
  redis.call('DEL', run_grabbed_key)
  redis.call('DEL', run_available_key)
  redis.call('DEL', run_version_key)
+
+ return count
@@ -0,0 +1,58 @@
+ local queue = "queue:" .. KEYS[1]
+ local unique_digest = KEYS[2]
+
+ local function delete_from_sorted_set(name, digest)
+   local per = 50
+   local total = redis.call('zcard', name)
+   local index = 0
+   local result
+   -- redis.log(redis.LOG_DEBUG, "delete_from_sorted_set("..name..","..digest..")")
+   while (index < total) do
+     -- redis.log(redis.LOG_DEBUG, "delete_from_sorted_set("..name..","..digest..") - "..index.."-"..per)
+     local items = redis.call('ZRANGE', name, index, index + per -1)
+     for _, item in pairs(items) do
+       -- redis.log(redis.LOG_DEBUG, "delete_from_sorted_set("..name..","..digest..") - current item: " .. item)
+       if string.find(item, digest) then
+         -- redis.log(redis.LOG_DEBUG, "delete_from_sorted_set("..name..","..digest..") - deleting item")
+         redis.call('ZREM', name, item)
+         result = item
+         break
+       end
+     end
+     index = index + per
+   end
+   return result
+ end
+
+ local per = 50
+ local total = redis.call('LLEN', queue)
+ local index = 0
+ local result = nil
+
+ -- redis.log(redis.LOG_DEBUG, "delete_job_by_digest.lua - looping through: " .. queue)
+ while (index < total) do
+   -- redis.log(redis.LOG_DEBUG, "delete_job_by_digest.lua - " .. index .. "-" .. per)
+   local items = redis.call('LRANGE', queue, index, index + per -1)
+   for _, item in pairs(items) do
+     -- redis.log(redis.LOG_DEBUG, "delete_job_by_digest.lua - item: " .. item)
+     if string.find(item, unique_digest) then
+       -- redis.log(redis.LOG_DEBUG, "delete_job_by_digest.lua - found item with digest: " .. unique_digest .. " in: " ..queue)
+       redis.call('LREM', queue, 1, item)
+       result = item
+       break
+     end
+   end
+   index = index + per
+ end
+
+ if result then
+   return result
+ end
+
+ result = delete_from_sorted_set('schedule', unique_digest)
+ if result then
+   return result
+ end
+
+ result = delete_from_sorted_set('retry', unique_digest)
+ return result
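
For orientation, this is the script invoked by `OnConflict::Replace#delete_job_by_digest` earlier in this diff. A hedged sketch of calling it through the gem's `Scripts` helper, with made-up queue name and digest values:

```ruby
require 'sidekiq_unique_jobs'

# Mirrors the call in OnConflict::Replace above; nil for the connection
# argument matches that call as well. KEYS[1] is the bare queue name (the
# script prepends "queue:" itself), KEYS[2] is the job's unique digest.
deleted_payload = SidekiqUniqueJobs::Scripts.call(
  :delete_job_by_digest,
  nil,
  keys: ['default', 'uniquejobs:0123456789abcdef']
)
# Judging by the script, the return value is the deleted job's JSON payload,
# or nil when no job in the queue, schedule set or retry set matched.
```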
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: sidekiq-unique-jobs
  version: !ruby/object:Gem::Version
- version: 6.0.0.rc7
+ version: 6.0.0.rc8
  platform: ruby
  authors:
  - Mikael Henriksson
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2018-07-23 00:00:00.000000000 Z
+ date: 2018-07-24 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: concurrent-ruby
@@ -289,6 +289,7 @@ files:
  - lib/sidekiq_unique_jobs/on_conflict/null_strategy.rb
  - lib/sidekiq_unique_jobs/on_conflict/raise.rb
  - lib/sidekiq_unique_jobs/on_conflict/reject.rb
+ - lib/sidekiq_unique_jobs/on_conflict/replace.rb
  - lib/sidekiq_unique_jobs/on_conflict/reschedule.rb
  - lib/sidekiq_unique_jobs/on_conflict/strategy.rb
  - lib/sidekiq_unique_jobs/options_with_fallback.rb
@@ -311,6 +312,7 @@ files:
  - redis/create.lua
  - redis/delete.lua
  - redis/delete_by_digest.lua
+ - redis/delete_job_by_digest.lua
  - redis/release_lock.lua
  - redis/release_stale_locks.lua
  - redis/signal.lua