sidekiq-unique-jobs 5.0.5 → 5.0.6
Potentially problematic release: this version of sidekiq-unique-jobs has been flagged as potentially problematic.
- checksums.yaml +4 -4
- data/CHANGELOG.md +3 -0
- data/README.md +45 -1
- data/lib/sidekiq-unique-jobs.rb +1 -1
- data/lib/sidekiq_unique_jobs/constants.rb +1 -0
- data/lib/sidekiq_unique_jobs/unique_args.rb +15 -9
- data/lib/sidekiq_unique_jobs/version.rb +1 -1
- metadata +1 -1
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 36eb8099e1e3a20e4a3e710c7bd8fab715454205
+  data.tar.gz: f46d3ddbbc3a1a436ecb4399b59c8f0565d882b9
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 3251e6623db4af9da2dd4b35eba9f6f6a5b2343faa41d23688b2cc0499dd01a1c29bdccc3a2fa2d3974c026f8d8cb4a99b9b900f12475593c745704611451a63
+  data.tar.gz: c9f974e698449faf6e10c33f1c5d0ed01dcafe5c26c9c60da23037daf26d1267d42b2e753dc306413ea22dfb3b58e1633bc52331e410532cafd1b704dfb9ed95
data/CHANGELOG.md
CHANGED
data/README.md
CHANGED

@@ -41,6 +41,27 @@ sidekiq_options unique: :while_executing
 
 Is to make sure that a job can be scheduled any number of times but only executed a single time per argument provided to the job; we call this runtime uniqueness. This is probably most useful for background jobs that are fast to execute. (See mhenrixon/sidekiq-unique-jobs#111 for a great example of when this would be right.) While the job is executing/performing, no other jobs can be executed at the same time.
 
+The way it currently works is that the jobs can be put on the queue but any subsequent job will wait until the first one finishes.
+
+There is an example of this to try out in the `rails_example` application. Run `foreman start` in the root of the directory and open the url: `localhost:5000/work/duplicate_while_executing`.
+
+In the console you should see something like:
+
+```
+10:32:24 worker.1 | 2017-04-23T08:32:24.955Z 84404 TID-ougq4thko WhileExecutingWorker JID-400ec51c9523f41cd4a35058 INFO: start
+10:32:24 worker.1 | 2017-04-23T08:32:24.956Z 84404 TID-ougq8csew WhileExecutingWorker JID-8d6d9168368eedaed7f75763 INFO: start
+10:32:24 worker.1 | 2017-04-23T08:32:24.957Z 84404 TID-ougq8crt8 WhileExecutingWorker JID-affcd079094c9b26e8b9ba60 INFO: start
+10:32:24 worker.1 | 2017-04-23T08:32:24.959Z 84404 TID-ougq8cs8s WhileExecutingWorker JID-9e197460c067b22eb1b5d07f INFO: start
+10:32:24 worker.1 | 2017-04-23T08:32:24.959Z 84404 TID-ougq4thko WhileExecutingWorker JID-400ec51c9523f41cd4a35058 WhileExecutingWorker INFO: perform(1, 2)
+10:32:34 worker.1 | 2017-04-23T08:32:34.964Z 84404 TID-ougq4thko WhileExecutingWorker JID-400ec51c9523f41cd4a35058 INFO: done: 10.009 sec
+10:32:34 worker.1 | 2017-04-23T08:32:34.965Z 84404 TID-ougq8csew WhileExecutingWorker JID-8d6d9168368eedaed7f75763 WhileExecutingWorker INFO: perform(1, 2)
+10:32:44 worker.1 | 2017-04-23T08:32:44.965Z 84404 TID-ougq8crt8 WhileExecutingWorker JID-affcd079094c9b26e8b9ba60 WhileExecutingWorker INFO: perform(1, 2)
+10:32:44 worker.1 | 2017-04-23T08:32:44.965Z 84404 TID-ougq8csew WhileExecutingWorker JID-8d6d9168368eedaed7f75763 INFO: done: 20.009 sec
+10:32:54 worker.1 | 2017-04-23T08:32:54.970Z 84404 TID-ougq8cs8s WhileExecutingWorker JID-9e197460c067b22eb1b5d07f WhileExecutingWorker INFO: perform(1, 2)
+10:32:54 worker.1 | 2017-04-23T08:32:54.969Z 84404 TID-ougq8crt8 WhileExecutingWorker JID-affcd079094c9b26e8b9ba60 INFO: done: 30.012 sec
+10:33:04 worker.1 | 2017-04-23T08:33:04.973Z 84404 TID-ougq8cs8s WhileExecutingWorker JID-9e197460c067b22eb1b5d07f INFO: done: 40.014 sec
+```
+
 ### Until Executing
 
 ```ruby
@@ -81,8 +102,10 @@ That means it locks for any job with the same arguments to be persisted into red
 
 ### Uniqueness Scope
 
+
 - Queue specific locks
-- Across all queues.
+- Across all queues - [spec/jobs/unique_on_all_queues_job.rb](https://github.com/mhenrixon/sidekiq-unique-jobs/blob/master/spec/jobs/unique_on_all_queues_job.rb)
+- Across all workers - [spec/jobs/unique_across_workers_job.rb](https://github.com/mhenrixon/sidekiq-unique-jobs/blob/master/spec/jobs/unique_across_workers_job.rb)
 - Timed / Scheduled jobs
 
 ## Usage
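The "only one executes at a time per argument" behaviour described in the README changes above can be pictured as one mutex per argument digest. Below is a minimal standalone sketch of that idea in plain Ruby; it is illustrative only and is not the gem's implementation, which coordinates locks through Redis rather than in-process mutexes.

```ruby
require 'digest'
require 'json'

# Illustrative sketch only -- not the gem's code. One mutex per argument
# digest: any number of jobs with the same arguments can be enqueued, but a
# subsequent job waits until the running one finishes.
LOCK_TABLE = Mutex.new # guards creation of per-digest mutexes
LOCKS = {}

def while_executing(args)
  digest = Digest::MD5.hexdigest(JSON.generate(args))
  lock = LOCK_TABLE.synchronize { LOCKS[digest] ||= Mutex.new }
  lock.synchronize { yield } # a second job with the same args blocks here
end
```

Two threads calling `while_executing([1, 2]) { ... }` run their blocks strictly one after the other, which mirrors the staggered `done:` timestamps in the console log above.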
@@ -158,6 +181,27 @@ end
 
 The previous problems with unique args being strings on the server and symbols on the client are no longer an issue because the `UniqueArgs` class accounts for this and converts everything to JSON now. If you find an edge case please provide an example so that we can add coverage and fix it.
 
+It is also quite possible to ensure different types of unique args based on context. I can't vouch for the below example but see [#203](https://github.com/mhenrixon/sidekiq-unique-jobs/issues/203) for the discussion.
+
+```ruby
+class UniqueJobWithFilterMethod
+  include Sidekiq::Worker
+  sidekiq_options unique: :until_and_while_executing, unique_args: :unique_args
+
+  def self.unique_args(args)
+    if Sidekiq::ProcessSet.new.size > 1
+      # sidekiq runtime; uniqueness for the object (first arg)
+      args.first
+    else
+      # queuing from the app; uniqueness for all params
+      args
+    end
+  end
+end
+```
+
+
 ### After Unlock Callback
 
 If you are using :after_yield as your unlock ordering, Unique Job offers a callback to perform some work after the block is yielded.
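The `unique_args: :unique_args` option in the README example above names a class method that filters the raw job arguments before they are digested. The sketch below shows that dispatch in isolation; `FakeWorker` and `filtered_unique_args` are made-up names for illustration and stand in for the resolution the gem performs inside `UniqueArgs`, without requiring Sidekiq itself.

```ruby
# Made-up names for illustration; not the gem's API. When the unique_args
# option is a Symbol, the worker class method of that name is called with the
# job's args and its return value is what gets digested.
class FakeWorker
  def self.get_sidekiq_options
    { 'unique_args' => :unique_args }
  end

  # Only the first argument participates in uniqueness.
  def self.unique_args(args)
    [args.first]
  end
end

def filtered_unique_args(worker_class, args)
  filter = worker_class.get_sidekiq_options['unique_args']
  case filter
  when Symbol then worker_class.public_send(filter, args)
  when Proc   then filter.call(args)
  else args                      # no filter: all args are unique args
  end
end
```

With this sketch, `filtered_unique_args(FakeWorker, [1, 2, 3])` yields `[1]`, so jobs differing only in their trailing arguments share a digest.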
data/lib/sidekiq-unique-jobs.rb
CHANGED

data/lib/sidekiq_unique_jobs/constants.rb
CHANGED

@@ -17,5 +17,6 @@ module SidekiqUniqueJobs
   UNIQUE_PREFIX_KEY ||= 'unique_prefix'
   UNIQUE_DIGEST_KEY ||= 'unique_digest'
   UNIQUE_ON_ALL_QUEUES_KEY ||= 'unique_on_all_queues'
+  UNIQUE_ACROSS_WORKERS_KEY ||= 'unique_across_workers'
   UNIQUE_ARGS_ENABLED_KEY ||= 'unique_args_enabled'
 end
data/lib/sidekiq_unique_jobs/unique_args.rb
CHANGED

@@ -18,9 +18,9 @@ module SidekiqUniqueJobs
       new(item).unique_digest
     end
 
-    def initialize(
+    def initialize(item)
       Sidekiq::Logging.with_context(CLASS_NAME) do
-        @item =
+        @item = item
         @worker_class ||= worker_class_constantize(@item[CLASS_KEY])
         @item[UNIQUE_PREFIX_KEY] ||= unique_prefix
         @item[UNIQUE_ARGS_KEY] = unique_args(@item[ARGS_KEY])
@@ -43,15 +43,16 @@ module SidekiqUniqueJobs
     end
 
     def digestable_hash
-
-
-
-
-
+      @item.slice(CLASS_KEY, QUEUE_KEY, UNIQUE_ARGS_KEY).tap do |hash|
+        if unique_on_all_queues?
+          logger.debug { "#{__method__} deleting queue: #{@item[QUEUE_KEY]}" }
+          hash.delete(QUEUE_KEY)
+        end
+        if unique_across_workers?
+          logger.debug { "#{__method__} deleting class: #{@item[CLASS_KEY]}" }
+          hash.delete(CLASS_KEY)
         end
-        hash.delete(QUEUE_KEY)
       end
-      hash
     end
 
     def unique_args(args)
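The rewritten `digestable_hash` above is what widens or narrows the lock scope: dropping the queue key makes digests collide across queues, and the new across-workers path additionally drops the class key. Here is a standalone sketch of that idea in plain Ruby (plain string keys and an MD5-over-JSON digest are assumptions for illustration; this is not the gem's code):

```ruby
require 'digest'
require 'json'

# Standalone sketch, not the gem's implementation. Removing 'queue' from the
# digested hash makes the digest equal across queues; removing 'class' makes
# it equal across worker classes.
def unique_digest(item, all_queues: false, across_workers: false)
  hash = item.slice('class', 'queue', 'unique_args')
  hash.delete('queue') if all_queues
  hash.delete('class') if across_workers
  Digest::MD5.hexdigest(JSON.generate(hash))
end
```

With `across_workers: true`, two different worker classes enqueuing the same arguments on the same queue produce the same digest and therefore contend for the same lock.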
@@ -71,6 +72,11 @@ module SidekiqUniqueJobs
       @item[UNIQUE_ON_ALL_QUEUES_KEY] || @worker_class.get_sidekiq_options[UNIQUE_ON_ALL_QUEUES_KEY]
     end
 
+    def unique_across_workers?
+      return unless sidekiq_worker_class?
+      @item[UNIQUE_ACROSS_WORKERS_KEY] || @worker_class.get_sidekiq_options[UNIQUE_ACROSS_WORKERS_KEY]
+    end
+
     def unique_args_enabled?
       return unless sidekiq_worker_class?
       unique_args_method # && !unique_args_method.is_a?(Boolean)