sidekiq 5.1.1 → 5.2.2
Potentially problematic release.
- checksums.yaml +4 -4
- data/.travis.yml +5 -5
- data/Changes.md +39 -0
- data/Ent-Changes.md +11 -0
- data/Gemfile +9 -3
- data/LICENSE +1 -1
- data/Pro-Changes.md +30 -0
- data/lib/sidekiq/api.rb +59 -27
- data/lib/sidekiq/cli.rb +13 -6
- data/lib/sidekiq/client.rb +31 -30
- data/lib/sidekiq/delay.rb +1 -0
- data/lib/sidekiq/fetch.rb +1 -1
- data/lib/sidekiq/job_logger.rb +2 -1
- data/lib/sidekiq/job_retry.rb +7 -2
- data/lib/sidekiq/launcher.rb +17 -11
- data/lib/sidekiq/logging.rb +3 -3
- data/lib/sidekiq/manager.rb +0 -1
- data/lib/sidekiq/middleware/server/active_record.rb +2 -1
- data/lib/sidekiq/processor.rb +55 -11
- data/lib/sidekiq/rails.rb +4 -9
- data/lib/sidekiq/redis_connection.rb +10 -2
- data/lib/sidekiq/scheduled.rb +33 -4
- data/lib/sidekiq/testing.rb +4 -4
- data/lib/sidekiq/util.rb +1 -1
- data/lib/sidekiq/version.rb +1 -1
- data/lib/sidekiq/web/action.rb +1 -1
- data/lib/sidekiq/web/application.rb +24 -2
- data/lib/sidekiq/web/helpers.rb +4 -4
- data/lib/sidekiq/web/router.rb +10 -10
- data/lib/sidekiq/web.rb +4 -4
- data/lib/sidekiq/worker.rb +7 -7
- data/lib/sidekiq.rb +4 -5
- data/sidekiq.gemspec +3 -8
- data/web/assets/javascripts/application.js +0 -0
- data/web/assets/stylesheets/application.css +0 -0
- data/web/assets/stylesheets/bootstrap.css +2 -2
- data/web/locales/ar.yml +1 -0
- data/web/locales/en.yml +1 -0
- data/web/locales/es.yml +3 -3
- data/web/views/_footer.erb +3 -0
- data/web/views/layout.erb +1 -1
- data/web/views/queue.erb +1 -0
- data/web/views/retries.erb +4 -0
- metadata +4 -87
- data/lib/sidekiq/middleware/server/active_record_cache.rb +0 -11
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz: 
-  data.tar.gz: 
+  metadata.gz: 985b2f1c0778b1e4d2587eb2f182fc58723277c7
+  data.tar.gz: 2d20a6e15e248bea802b1ca9438c9507a1f94ab4
 SHA512:
-  metadata.gz: 
-  data.tar.gz: 
+  metadata.gz: 4e4be1e23c16eb53d43424a0a07e6b69ab2001ffbbaf28c7c09db1dac89d9f1e9ac3513c70ac5abd398efb12c8a712da20f563df5fd055c6855eb5d2d7abea05
+  data.tar.gz: 8947fc9287e9a1336ba57b6ef380e5f76d7ea7382fff3930acd914c6c08fed9c08940780c1537bbedf349ed29257b9c249f8356eed095d8537a4195e7426b676
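The checksums above can be reproduced locally with Ruby's standard library. A minimal sketch, assuming the gem archive has been unpacked so that `metadata.gz` and `data.tar.gz` exist as plain files; the helper name `checksums_for` is made up for illustration and is not part of RubyGems:

```ruby
# Recompute the digests that a gem's checksums.yaml records for a file.
# Inside a .gem archive the relevant files are metadata.gz and data.tar.gz.
require 'digest'

def checksums_for(path)
  bytes = File.binread(path)
  {
    'SHA1'   => Digest::SHA1.hexdigest(bytes),   # 40 hex chars
    'SHA512' => Digest::SHA512.hexdigest(bytes)  # 128 hex chars
  }
end
```

Comparing the computed values against the shipped checksums.yaml verifies the archive was not corrupted or tampered with in transit.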
data/.travis.yml
CHANGED
data/Changes.md
CHANGED
@@ -2,6 +2,45 @@
 
 [Sidekiq Changes](https://github.com/mperham/sidekiq/blob/master/Changes.md) | [Sidekiq Pro Changes](https://github.com/mperham/sidekiq/blob/master/Pro-Changes.md) | [Sidekiq Enterprise Changes](https://github.com/mperham/sidekiq/blob/master/Ent-Changes.md)
 
+5.2.2
+---------
+
+- Raise error for duplicate queue names in config to avoid unexpected fetch algorithm change [#3911]
+- Fix concurrency bug on JRuby [#3958, mattbooks]
+- Add "Kill All" button to the retries page [#3938]
+
+5.2.1
+-----------
+
+- Fix concurrent modification error during heartbeat [#3921]
+
+5.2.0
+-----------
+
+- **Decrease default concurrency from 25 to 10** [#3892]
+- Verify connection pool sizing upon startup [#3917]
+- Smoother scheduling for large Sidekiq clusters [#3889]
+- Switch Sidekiq::Testing impl from alias\_method to Module#prepend, for resiliency [#3852]
+- Update Sidekiq APIs to use SCAN for scalability [#3848, ffiller]
+- Remove concurrent-ruby gem dependency [#3830]
+- Optimize Web UI's bootstrap.css [#3914]
+
+5.1.3
+-----------
+
+- Fix version comparison so Ruby 2.2.10 works. [#3808, nateberkopec]
+
+5.1.2
+-----------
+
+- Add link to docs in Web UI footer
+- Fix crash on Ctrl-C in Windows [#3775, Bernica]
+- Remove `freeze` calls on String constants. This is superfluous with Ruby
+  2.3+ and `frozen_string_literal: true`. [#3759]
+- Fix use of AR middleware outside of Rails [#3787]
+- Sidekiq::Worker `sidekiq_retry_in` block can now return nil or 0 to use
+  the default backoff delay [#3796, dsalahutdinov]
+
 5.1.1
 -----------
 
data/Ent-Changes.md
CHANGED
@@ -4,6 +4,17 @@
 
 Please see [http://sidekiq.org/](http://sidekiq.org/) for more details and how to buy.
 
+HEAD
+-------------
+
+- Add support for sidekiqswarm memory monitoring on FreeBSD [#3884]
+
+1.7.1
+-------------
+
+- Fix Lua error in concurrent rate limiter under heavy contention
+- Remove superfluous `freeze` calls on Strings [#3759]
+
 1.7.0
 -------------
 
data/Gemfile
CHANGED
@@ -1,8 +1,14 @@
 source 'https://rubygems.org'
 gemspec
 
-
+# load testing
 #gem "hiredis"
-gem 'simplecov'
-gem 'minitest'
 #gem 'toxiproxy'
+
+group :test do
+  gem 'rails', '>= 5.0.1'
+  gem 'minitest'
+  gem 'rake'
+  gem 'redis-namespace'
+  gem 'simplecov'
+end
data/LICENSE
CHANGED
@@ -5,5 +5,5 @@ the LGPLv3 license. Please see <http://www.gnu.org/licenses/lgpl-3.0.html>
 for license text.
 
 Sidekiq Pro has a commercial-friendly license allowing private forks
-and modifications of Sidekiq. Please see 
+and modifications of Sidekiq. Please see https://sidekiq.org/products/pro.html for
 more detail. You can find the commercial license terms in COMM-LICENSE.
data/Pro-Changes.md
CHANGED
@@ -4,6 +4,36 @@
 
 Please see [http://sidekiq.org/](http://sidekiq.org/) for more details and how to buy.
 
+4.0.4
+---------
+
+- Update Sidekiq::Client patches to work with new Module#prepend
+  mechanism in Sidekiq 5.2.0. [#3930]
+
+4.0.3
+---------
+
+- Add at\_exit handler to push any saved jobs in `reliable_push` when exiting. [#3823]
+- Implement batch death callback. This is fired the first time a job within a batch dies. [#3841]
+```ruby
+batch = Sidekiq::Batch.new
+batch.on(:death, ...)
+```
+
+4.0.2
+---------
+
+- Remove super\_fetch edge case leading to an unnecessary `sleep(1)`
+  call and resulting latency [#3790]
+- Fix possible bad statsd metric call on super\_fetch startup
+- Remove superfluous `freeze` calls on Strings [#3759]
+
+4.0.1
+---------
+
+- Fix incompatibility with the statsd-ruby gem [#3740]
+- Add tags to Statsd metrics when using Datadog [#3744]
+
 4.0.0
 ---------
 
data/lib/sidekiq/api.rb
CHANGED
@@ -1,9 +1,24 @@
-# encoding: utf-8
 # frozen_string_literal: true
 require 'sidekiq'
 
 module Sidekiq
+
+  module RedisScanner
+    def sscan(conn, key)
+      cursor = '0'
+      result = []
+      loop do
+        cursor, values = conn.sscan(key, cursor)
+        result.push(*values)
+        break if cursor == '0'
+      end
+      result
+    end
+  end
+
   class Stats
+    include RedisScanner
+
     def initialize
       fetch_stats!
     end
@@ -51,33 +66,39 @@ module Sidekiq
     def fetch_stats!
       pipe1_res = Sidekiq.redis do |conn|
         conn.pipelined do
-          conn.get('stat:processed'
-          conn.get('stat:failed'
-          conn.zcard('schedule'
-          conn.zcard('retry'
-          conn.zcard('dead'
-          conn.scard('processes'
-          conn.lrange('queue:default'
-          conn.smembers('processes'.freeze)
-          conn.smembers('queues'.freeze)
+          conn.get('stat:processed')
+          conn.get('stat:failed')
+          conn.zcard('schedule')
+          conn.zcard('retry')
+          conn.zcard('dead')
+          conn.scard('processes')
+          conn.lrange('queue:default', -1, -1)
         end
       end
 
+      processes = Sidekiq.redis do |conn|
+        sscan(conn, 'processes')
+      end
+
+      queues = Sidekiq.redis do |conn|
+        sscan(conn, 'queues')
+      end
+
       pipe2_res = Sidekiq.redis do |conn|
         conn.pipelined do
-
-
+          processes.each {|key| conn.hget(key, 'busy') }
+          queues.each {|queue| conn.llen("queue:#{queue}") }
         end
       end
 
-      s = 
+      s = processes.size
       workers_size = pipe2_res[0...s].map(&:to_i).inject(0, &:+)
      enqueued = pipe2_res[s..-1].map(&:to_i).inject(0, &:+)
 
       default_queue_latency = if (entry = pipe1_res[6].first)
         job = Sidekiq.load_json(entry) rescue {}
         now = Time.now.to_f
-        thence = job['enqueued_at'
+        thence = job['enqueued_at'] || now
         now - thence
       else
         0
@@ -117,9 +138,11 @@ module Sidekiq
   end
 
   class Queues
+    include RedisScanner
+
     def lengths
       Sidekiq.redis do |conn|
-        queues = conn
+        queues = sscan(conn, 'queues')
 
         lengths = conn.pipelined do
           queues.each do |queue|
@@ -163,7 +186,7 @@ module Sidekiq
 
       while i < @days_previous
         date = @start_date - i
-        datestr = date.strftime("%Y-%m-%d"
+        datestr = date.strftime("%Y-%m-%d")
         keys << "stat:#{stat}:#{datestr}"
         dates << datestr
         i += 1
@@ -199,18 +222,19 @@ module Sidekiq
   #
   class Queue
     include Enumerable
+    extend RedisScanner
 
     ##
     # Return all known queues within Redis.
     #
     def self.all
-      Sidekiq.redis { |c| c
+      Sidekiq.redis { |c| sscan(c, 'queues') }.sort.map { |q| Sidekiq::Queue.new(q) }
     end
 
     attr_reader :name
 
     def initialize(name="default")
-      @name = name
+      @name = name.to_s
       @rname = "queue:#{name}"
     end
 
@@ -273,7 +297,7 @@ module Sidekiq
       Sidekiq.redis do |conn|
         conn.multi do
           conn.del(@rname)
-          conn.srem("queues"
+          conn.srem("queues", name)
         end
       end
     end
@@ -349,9 +373,9 @@ module Sidekiq
         job_args
       end
     else
-      if self['encrypt'
+      if self['encrypt']
         # no point in showing 150+ bytes of random garbage
-        args[-1] = '[encrypted data]'
+        args[-1] = '[encrypted data]'
       end
       args
     end
@@ -646,6 +670,12 @@ module Sidekiq
       each(&:retry)
     end
   end
+
+  def kill_all
+    while size > 0
+      each(&:kill)
+    end
+  end
 end
 
 ##
@@ -701,17 +731,18 @@ module Sidekiq
   #
   class ProcessSet
     include Enumerable
+    include RedisScanner
 
     def initialize(clean_plz=true)
-
+      cleanup if clean_plz
     end
 
     # Cleans up dead processes recorded in Redis.
     # Returns the number of processes cleaned.
-    def 
+    def cleanup
       count = 0
       Sidekiq.redis do |conn|
-        procs = conn
+        procs = sscan(conn, 'processes').sort
         heartbeats = conn.pipelined do
           procs.each do |key|
             conn.hget(key, 'info')
@@ -731,7 +762,7 @@ module Sidekiq
     end
 
     def each
-      procs = Sidekiq.redis { |conn| conn
+      procs = Sidekiq.redis { |conn| sscan(conn, 'processes') }.sort
 
       Sidekiq.redis do |conn|
         # We're making a tradeoff here between consuming more memory instead of
@@ -866,10 +897,11 @@ module Sidekiq
   #
   class Workers
     include Enumerable
+    include RedisScanner
 
     def each
       Sidekiq.redis do |conn|
-        procs = conn
+        procs = sscan(conn, 'processes')
         procs.sort.each do |key|
           valid, workers = conn.pipelined do
             conn.exists(key)
@@ -891,7 +923,7 @@ module Sidekiq
     # which can easily get out of sync with crashy processes.
     def size
       Sidekiq.redis do |conn|
-        procs = conn
+        procs = sscan(conn, 'processes')
         if procs.empty?
           0
         else
data/lib/sidekiq/cli.rb
CHANGED
@@ -1,4 +1,3 @@
-# encoding: utf-8
 # frozen_string_literal: true
 $stdout.sync = true
 
@@ -17,7 +16,7 @@ module Sidekiq
     include Singleton unless $TESTING
 
     PROCTITLES = [
-      proc { 'sidekiq'
+      proc { 'sidekiq' },
       proc { Sidekiq::VERSION },
       proc { |me, data| data['tag'] },
       proc { |me, data| "[#{Processor::WORKER_STATE.size} of #{data['concurrency']} busy]" },
@@ -65,7 +64,7 @@ module Sidekiq
       sigs.each do |sig|
         begin
           trap sig do
-            self_write.
+            self_write.write("#{sig}\n")
           end
         rescue ArgumentError
           puts "Signal #{sig} not supported"
@@ -81,6 +80,12 @@ module Sidekiq
       ver = Sidekiq.redis_info['redis_version']
       raise "You are using Redis v#{ver}, Sidekiq requires Redis v2.8.0 or greater" if ver < '2.8'
 
+      # Since the user can pass us a connection pool explicitly in the initializer, we
+      # need to verify the size is large enough or else Sidekiq's performance is dramatically slowed.
+      cursize = Sidekiq.redis_pool.size
+      needed = Sidekiq.options[:concurrency] + 2
+      raise "Your pool of #{cursize} Redis connections is too small, please increase the size to at least #{needed}" if cursize < needed
+
       # cache process identity
       Sidekiq.options[:identity] = identity
 
@@ -330,6 +335,8 @@ module Sidekiq
         opts[:tag] = arg
       end
 
+      # this index remains here for backwards compatibility but none of the Sidekiq
+      # family use this value anymore. it was used by Pro's original reliable_fetch.
       o.on '-i', '--index INT', "unique process index on this machine" do |arg|
         opts[:index] = Integer(arg.match(/\d+/)[0])
       end
@@ -429,9 +436,9 @@ module Sidekiq
     end
 
     def parse_queue(opts, q, weight=nil)
-      [
-
-
+      opts[:queues] ||= []
+      raise ArgumentError, "queues: #{q} cannot be defined twice" if opts[:queues].include?(q)
+      [weight.to_i, 1].max.times { opts[:queues] << q }
       opts[:strict] = false if weight.to_i > 0
     end
   end
data/lib/sidekiq/client.rb
CHANGED
@@ -68,18 +68,19 @@ module Sidekiq
     #
     def push(item)
       normed = normalize_item(item)
-      payload = process_single(item['class'
+      payload = process_single(item['class'], normed)
 
       if payload
         raw_push([payload])
-        payload['jid'
+        payload['jid']
       end
     end
 
     ##
-    # Push a large number of jobs to Redis.
-    #
-    #
+    # Push a large number of jobs to Redis. This method cuts out the redis
+    # network round trip latency. I wouldn't recommend pushing more than
+    # 1000 per call but YMMV based on network quality, size of job args, etc.
+    # A large number of jobs can cause a bit of Redis command processing latency.
     #
     # Takes the same arguments as #push except that args is expected to be
     # an Array of Arrays. All other keys are duplicated for each job. Each job
@@ -89,19 +90,19 @@ module Sidekiq
     # Returns an array of the of pushed jobs' jids. The number of jobs pushed can be less
     # than the number given if the middleware stopped processing for one or more jobs.
     def push_bulk(items)
-      arg = items['args'
+      arg = items['args'].first
       return [] unless arg # no jobs to push
       raise ArgumentError, "Bulk arguments must be an Array of Arrays: [[1], [2]]" if !arg.is_a?(Array)
 
       normed = normalize_item(items)
-      payloads = items['args'
-        copy = normed.merge('args'
-        result = process_single(items['class'
+      payloads = items['args'].map do |args|
+        copy = normed.merge('args' => args, 'jid' => SecureRandom.hex(12), 'enqueued_at' => Time.now.to_f)
+        result = process_single(items['class'], copy)
         result ? result : nil
       end.compact
 
       raw_push(payloads) if !payloads.empty?
-      payloads.collect { |payload| payload['jid'
+      payloads.collect { |payload| payload['jid'] }
     end
 
     # Allows sharding of jobs across any number of Redis instances. All jobs
@@ -144,14 +145,14 @@ module Sidekiq
     # Messages are enqueued to the 'default' queue.
     #
     def enqueue(klass, *args)
-      klass.client_push('class'
+      klass.client_push('class' => klass, 'args' => args)
     end
 
     # Example usage:
     #   Sidekiq::Client.enqueue_to(:queue_name, MyWorker, 'foo', 1, :bat => 'bar')
     #
     def enqueue_to(queue, klass, *args)
-      klass.client_push('queue'
+      klass.client_push('queue' => queue, 'class' => klass, 'args' => args)
     end
 
     # Example usage:
@@ -162,8 +163,8 @@ module Sidekiq
       now = Time.now.to_f
       ts = (int < 1_000_000_000 ? now + int : int)
 
-      item = { 'class'
-      item.delete('at'
+      item = { 'class' => klass, 'args' => args, 'at' => ts, 'queue' => queue }
+      item.delete('at') if ts <= now
 
       klass.client_push(item)
     end
@@ -188,25 +189,25 @@ module Sidekiq
     end
 
     def atomic_push(conn, payloads)
-      if payloads.first['at'
-        conn.zadd('schedule'
-          at = hash.delete('at'
+      if payloads.first['at']
+        conn.zadd('schedule', payloads.map do |hash|
+          at = hash.delete('at').to_s
           [at, Sidekiq.dump_json(hash)]
         end)
       else
-        q = payloads.first['queue'
+        q = payloads.first['queue']
         now = Time.now.to_f
         to_push = payloads.map do |entry|
-          entry['enqueued_at'
+          entry['enqueued_at'] = now
           Sidekiq.dump_json(entry)
         end
-        conn.sadd('queues'
+        conn.sadd('queues', q)
         conn.lpush("queue:#{q}", to_push)
       end
     end
 
     def process_single(worker_class, item)
-      queue = item['queue'
+      queue = item['queue']
 
       middleware.invoke(worker_class, item, queue, @redis_pool) do
         item
@@ -214,25 +215,25 @@ module Sidekiq
     end
 
     def normalize_item(item)
-      raise(ArgumentError, "Job must be a Hash with 'class' and 'args' keys: { 'class' => SomeWorker, 'args' => ['bob', 1, :foo => 'bar'] }") unless item.is_a?(Hash) && item.has_key?('class'
+      raise(ArgumentError, "Job must be a Hash with 'class' and 'args' keys: { 'class' => SomeWorker, 'args' => ['bob', 1, :foo => 'bar'] }") unless item.is_a?(Hash) && item.has_key?('class') && item.has_key?('args')
       raise(ArgumentError, "Job args must be an Array") unless item['args'].is_a?(Array)
-      raise(ArgumentError, "Job class must be either a Class or String representation of the class name") unless item['class'
-      raise(ArgumentError, "Job 'at' must be a Numeric timestamp") if item.has_key?('at'
+      raise(ArgumentError, "Job class must be either a Class or String representation of the class name") unless item['class'].is_a?(Class) || item['class'].is_a?(String)
+      raise(ArgumentError, "Job 'at' must be a Numeric timestamp") if item.has_key?('at') && !item['at'].is_a?(Numeric)
       #raise(ArgumentError, "Arguments must be native JSON types, see https://github.com/mperham/sidekiq/wiki/Best-Practices") unless JSON.load(JSON.dump(item['args'])) == item['args']
 
-      normalized_hash(item['class'
+      normalized_hash(item['class'])
         .each{ |key, value| item[key] = value if item[key].nil? }
 
-      item['class'
-      item['queue'
-      item['jid'
-      item['created_at'
+      item['class'] = item['class'].to_s
+      item['queue'] = item['queue'].to_s
+      item['jid'] ||= SecureRandom.hex(12)
+      item['created_at'] ||= Time.now.to_f
       item
     end
 
     def normalized_hash(item_class)
       if item_class.is_a?(Class)
-        raise(ArgumentError, "Message must include a Sidekiq::Worker class, not class name: #{item_class.ancestors.inspect}") if !item_class.respond_to?('get_sidekiq_options'
+        raise(ArgumentError, "Message must include a Sidekiq::Worker class, not class name: #{item_class.ancestors.inspect}") if !item_class.respond_to?('get_sidekiq_options')
         item_class.get_sidekiq_options
       else
         Sidekiq.default_worker_options
data/lib/sidekiq/delay.rb
CHANGED
data/lib/sidekiq/fetch.rb
CHANGED
data/lib/sidekiq/job_logger.rb
CHANGED
data/lib/sidekiq/job_retry.rb
CHANGED
@@ -1,3 +1,4 @@
+# frozen_string_literal: true
 require 'sidekiq/scheduled'
 require 'sidekiq/api'
 
@@ -204,7 +205,11 @@ module Sidekiq
     end
 
     def delay_for(worker, count, exception)
-      worker && worker.sidekiq_retry_in_block
+      if worker && worker.sidekiq_retry_in_block
+        custom_retry_in = retry_in(worker, count, exception).to_i
+        return custom_retry_in if custom_retry_in > 0
+      end
+
       seconds_to_delay(count)
     end
 
     # delayed_job uses the same basic formula
@@ -214,7 +219,7 @@ module Sidekiq
 
     def retry_in(worker, count, exception)
       begin
-        worker.sidekiq_retry_in_block.call(count, exception)
+        worker.sidekiq_retry_in_block.call(count, exception)
       rescue Exception => e
         handle_exception(e, { context: "Failure scheduling retry using the defined `sidekiq_retry_in` in #{worker.class.name}, falling back to default" })
         nil