sidekiq 5.1.1 → 5.2.1


checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz: 2fff631710094e52bd8a57c3ddd0367fa2466425
-  data.tar.gz: 1310f9de542f0c123d39d45b1a8c7e1c09f36d58
+  metadata.gz: 2b5307056552014ab284241c16aa83a8d334d31b
+  data.tar.gz: acc9588810366d8c6dbb03d3c5cbe140a169f501
 SHA512:
-  metadata.gz: c66ac1497ae96aa4c83b89966bfd79babd6ef7629e2916c3d0f441ca7e8f0a34c3d19cd86f37b1660965beded437f1059853a28547b0b5371456c5fe159648a2
-  data.tar.gz: 26bbe0b74f3d76e821f05e7811c1ae7dd9021cffad3fe9f9644e85b8ecc8f9b2ece98632cd4de17f27a3ffa2f3913ce8e60e6149c8dd5758fd409b4e93725347
+  metadata.gz: 8b051912f610e763585e847a991cee6e684d4b27111f91cfe395a1e02539424e931f5ba16141e30a06c3dcf9de3c54db578be47a9fcd629222b81d8295f7c12a
+  data.tar.gz: 836cc95385406201ad9e26363698d838381ef993e75034bfc030c4ef0d2821c80e3a4612c5a7ec8bfca1d7acdadd02bcf29f5b979d56ece6752686f6293216ef
data/.travis.yml CHANGED
@@ -7,8 +7,8 @@ before_install:
   - gem install bundler
   - gem update bundler
 rvm:
-  - 2.2.2
-  - 2.3.5
-  - 2.4.2
-  - 2.5.0
-  - jruby-9.1.14.0
+  - 2.2.10
+  - 2.3.7
+  - 2.4.4
+  - 2.5.1
+  - jruby-9.1.17.0
data/Changes.md CHANGED
@@ -2,6 +2,38 @@
 
 [Sidekiq Changes](https://github.com/mperham/sidekiq/blob/master/Changes.md) | [Sidekiq Pro Changes](https://github.com/mperham/sidekiq/blob/master/Pro-Changes.md) | [Sidekiq Enterprise Changes](https://github.com/mperham/sidekiq/blob/master/Ent-Changes.md)
 
+5.2.1
+-----------
+
+- Fix concurrent modification error during heartbeat [#3921]
+
+5.2.0
+-----------
+
+- **Decrease default concurrency from 25 to 10** [#3892]
+- Verify connection pool sizing upon startup [#3917]
+- Smoother scheduling for large Sidekiq clusters [#3889]
+- Switch Sidekiq::Testing impl from alias\_method to Module#prepend, for resiliency [#3852]
+- Update Sidekiq APIs to use SCAN for scalability [#3848, ffiller]
+- Remove concurrent-ruby gem dependency [#3830]
+- Optimize Web UI's bootstrap.css [#3914]
+
+5.1.3
+-----------
+
+- Fix version comparison so Ruby 2.2.10 works. [#3808, nateberkopec]
+
+5.1.2
+-----------
+
+- Add link to docs in Web UI footer
+- Fix crash on Ctrl-C in Windows [#3775, Bernica]
+- Remove `freeze` calls on String constants. This is superfluous with Ruby
+  2.3+ and `frozen_string_literal: true`. [#3759]
+- Fix use of AR middleware outside of Rails [#3787]
+- Sidekiq::Worker `sidekiq_retry_in` block can now return nil or 0 to use
+  the default backoff delay [#3796, dsalahutdinov]
+
 5.1.1
 -----------
 
data/Ent-Changes.md CHANGED
@@ -4,6 +4,17 @@
 
 Please see [http://sidekiq.org/](http://sidekiq.org/) for more details and how to buy.
 
+HEAD
+-------------
+
+- Add support for sidekiqswarm memory monitoring on FreeBSD [#3884]
+
+1.7.1
+-------------
+
+- Fix Lua error in concurrent rate limiter under heavy contention
+- Remove superfluous `freeze` calls on Strings [#3759]
+
 1.7.0
 -------------
 
data/Gemfile CHANGED
@@ -1,8 +1,14 @@
 source 'https://rubygems.org'
 gemspec
 
-gem 'rails', '>= 5.0.1'
+# load testing
 #gem "hiredis"
-gem 'simplecov'
-gem 'minitest'
 #gem 'toxiproxy'
+
+group :test do
+  gem 'rails', '>= 5.0.1'
+  gem 'minitest'
+  gem 'rake'
+  gem 'redis-namespace'
+  gem 'simplecov'
+end
data/LICENSE CHANGED
@@ -5,5 +5,5 @@ the LGPLv3 license. Please see <http://www.gnu.org/licenses/lgpl-3.0.html>
 for license text.
 
 Sidekiq Pro has a commercial-friendly license allowing private forks
-and modifications of Sidekiq. Please see http://sidekiq.org/pro/ for
+and modifications of Sidekiq. Please see https://sidekiq.org/products/pro.html for
 more detail. You can find the commercial license terms in COMM-LICENSE.
data/Pro-Changes.md CHANGED
@@ -4,6 +4,29 @@
 
 Please see [http://sidekiq.org/](http://sidekiq.org/) for more details and how to buy.
 
+4.0.3
+---------
+
+- Add at\_exit handler to push any saved jobs in `reliable_push` when exiting. [#3823]
+- Implement batch death callback. This is fired the first time a job within a batch dies. [#3841]
+```ruby
+batch = Sidekiq::Batch.new
+batch.on(:death, ...)
+```
+
+4.0.2
+---------
+
+- Remove super\_fetch edge case leading to an unnecessary `sleep(1)`
+  call and resulting latency [#3790]
+- Fix possible bad statsd metric call on super\_fetch startup
+- Remove superfluous `freeze` calls on Strings [#3759]
+
+4.0.1
+---------
+
+- Fix incompatibility with the statsd-ruby gem [#3740]
+
 4.0.0
 ---------
 
data/lib/sidekiq/api.rb CHANGED
@@ -1,9 +1,24 @@
-# encoding: utf-8
 # frozen_string_literal: true
 require 'sidekiq'
 
 module Sidekiq
+
+  module RedisScanner
+    def sscan(conn, key)
+      cursor = '0'
+      result = []
+      loop do
+        cursor, values = conn.sscan(key, cursor)
+        result.push(*values)
+        break if cursor == '0'
+      end
+      result
+    end
+  end
+
   class Stats
+    include RedisScanner
+
     def initialize
       fetch_stats!
     end
@@ -51,33 +66,39 @@ module Sidekiq
     def fetch_stats!
       pipe1_res = Sidekiq.redis do |conn|
         conn.pipelined do
-          conn.get('stat:processed'.freeze)
-          conn.get('stat:failed'.freeze)
-          conn.zcard('schedule'.freeze)
-          conn.zcard('retry'.freeze)
-          conn.zcard('dead'.freeze)
-          conn.scard('processes'.freeze)
-          conn.lrange('queue:default'.freeze, -1, -1)
-          conn.smembers('processes'.freeze)
-          conn.smembers('queues'.freeze)
+          conn.get('stat:processed')
+          conn.get('stat:failed')
+          conn.zcard('schedule')
+          conn.zcard('retry')
+          conn.zcard('dead')
+          conn.scard('processes')
+          conn.lrange('queue:default', -1, -1)
         end
       end
 
+      processes = Sidekiq.redis do |conn|
+        sscan(conn, 'processes')
+      end
+
+      queues = Sidekiq.redis do |conn|
+        sscan(conn, 'queues')
+      end
+
       pipe2_res = Sidekiq.redis do |conn|
        conn.pipelined do
-          pipe1_res[7].each {|key| conn.hget(key, 'busy'.freeze) }
-          pipe1_res[8].each {|queue| conn.llen("queue:#{queue}") }
+          processes.each {|key| conn.hget(key, 'busy') }
+          queues.each {|queue| conn.llen("queue:#{queue}") }
         end
       end
 
-      s = pipe1_res[7].size
+      s = processes.size
       workers_size = pipe2_res[0...s].map(&:to_i).inject(0, &:+)
       enqueued = pipe2_res[s..-1].map(&:to_i).inject(0, &:+)
 
       default_queue_latency = if (entry = pipe1_res[6].first)
         job = Sidekiq.load_json(entry) rescue {}
         now = Time.now.to_f
-        thence = job['enqueued_at'.freeze] || now
+        thence = job['enqueued_at'] || now
         now - thence
       else
         0
@@ -117,9 +138,11 @@ module Sidekiq
     end
 
     class Queues
+      include RedisScanner
+
       def lengths
         Sidekiq.redis do |conn|
-          queues = conn.smembers('queues'.freeze)
+          queues = sscan(conn, 'queues')
 
           lengths = conn.pipelined do
             queues.each do |queue|
@@ -163,7 +186,7 @@ module Sidekiq
 
       while i < @days_previous
         date = @start_date - i
-        datestr = date.strftime("%Y-%m-%d".freeze)
+        datestr = date.strftime("%Y-%m-%d")
        keys << "stat:#{stat}:#{datestr}"
        dates << datestr
        i += 1
@@ -199,18 +222,19 @@ module Sidekiq
   #
   class Queue
     include Enumerable
+    extend RedisScanner
 
     ##
     # Return all known queues within Redis.
     #
     def self.all
-      Sidekiq.redis { |c| c.smembers('queues'.freeze) }.sort.map { |q| Sidekiq::Queue.new(q) }
+      Sidekiq.redis { |c| sscan(c, 'queues') }.sort.map { |q| Sidekiq::Queue.new(q) }
     end
 
     attr_reader :name
 
     def initialize(name="default")
-      @name = name
+      @name = name.to_s
       @rname = "queue:#{name}"
     end
 
@@ -273,7 +297,7 @@ module Sidekiq
       Sidekiq.redis do |conn|
         conn.multi do
           conn.del(@rname)
-          conn.srem("queues".freeze, name)
+          conn.srem("queues", name)
         end
       end
     end
@@ -349,9 +373,9 @@ module Sidekiq
           job_args
         end
       else
-        if self['encrypt'.freeze]
+        if self['encrypt']
           # no point in showing 150+ bytes of random garbage
-          args[-1] = '[encrypted data]'.freeze
+          args[-1] = '[encrypted data]'
         end
         args
       end
@@ -701,17 +725,18 @@ module Sidekiq
   #
   class ProcessSet
     include Enumerable
+    include RedisScanner
 
     def initialize(clean_plz=true)
-      self.class.cleanup if clean_plz
+      cleanup if clean_plz
     end
 
     # Cleans up dead processes recorded in Redis.
     # Returns the number of processes cleaned.
-    def self.cleanup
+    def cleanup
       count = 0
       Sidekiq.redis do |conn|
-        procs = conn.smembers('processes').sort
+        procs = sscan(conn, 'processes').sort
         heartbeats = conn.pipelined do
           procs.each do |key|
             conn.hget(key, 'info')
@@ -731,7 +756,7 @@ module Sidekiq
     end
 
     def each
-      procs = Sidekiq.redis { |conn| conn.smembers('processes') }.sort
+      procs = Sidekiq.redis { |conn| sscan(conn, 'processes') }.sort
 
       Sidekiq.redis do |conn|
         # We're making a tradeoff here between consuming more memory instead of
@@ -866,10 +891,11 @@ module Sidekiq
   #
   class Workers
     include Enumerable
+    include RedisScanner
 
     def each
       Sidekiq.redis do |conn|
-        procs = conn.smembers('processes')
+        procs = sscan(conn, 'processes')
         procs.sort.each do |key|
           valid, workers = conn.pipelined do
             conn.exists(key)
@@ -891,7 +917,7 @@ module Sidekiq
     # which can easily get out of sync with crashy processes.
     def size
       Sidekiq.redis do |conn|
-        procs = conn.smembers('processes')
+        procs = sscan(conn, 'processes')
         if procs.empty?
           0
         else
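The api.rb changes above swap every `SMEMBERS` call for the new `RedisScanner#sscan`, which pages through a set with the cursor-based SSCAN command instead of fetching it in one blocking round trip. The sketch below exercises that same cursor loop against a stub connection (`FakeConn` is hypothetical, standing in for a redis-rb connection that pages a set out two members at a time):

```ruby
# Cursor loop as in Sidekiq::RedisScanner: keep calling SSCAN until the
# server hands back cursor "0", accumulating each batch of members.
module RedisScanner
  def sscan(conn, key)
    cursor = '0'
    result = []
    loop do
      # SSCAN returns [next_cursor, batch_of_values].
      cursor, values = conn.sscan(key, cursor)
      result.push(*values)
      break if cursor == '0'
    end
    result
  end
end

# Hypothetical stub mimicking redis-rb's sscan signature; pages the set
# out in batches of 2 and signals completion with cursor "0".
class FakeConn
  def initialize(members)
    @members = members
  end

  def sscan(_key, cursor)
    i = cursor.to_i
    batch = @members[i, 2] || []
    next_cursor = (i + 2 >= @members.size) ? '0' : (i + 2).to_s
    [next_cursor, batch]
  end
end

scanner = Object.new.extend(RedisScanner)
p scanner.sscan(FakeConn.new(%w[q1 q2 q3 q4 q5]), 'queues')
# => ["q1", "q2", "q3", "q4", "q5"]
```

The trade-off is several short commands instead of one long one, so a huge `processes` or `queues` set no longer stalls Redis for other clients.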
data/lib/sidekiq/cli.rb CHANGED
@@ -1,4 +1,3 @@
-# encoding: utf-8
 # frozen_string_literal: true
 $stdout.sync = true
 
@@ -17,7 +16,7 @@ module Sidekiq
     include Singleton unless $TESTING
 
     PROCTITLES = [
-      proc { 'sidekiq'.freeze },
+      proc { 'sidekiq' },
       proc { Sidekiq::VERSION },
       proc { |me, data| data['tag'] },
       proc { |me, data| "[#{Processor::WORKER_STATE.size} of #{data['concurrency']} busy]" },
@@ -65,7 +64,7 @@ module Sidekiq
       sigs.each do |sig|
         begin
           trap sig do
-            self_write.puts(sig)
+            self_write.write("#{sig}\n")
           end
         rescue ArgumentError
           puts "Signal #{sig} not supported"
@@ -81,6 +80,12 @@ module Sidekiq
       ver = Sidekiq.redis_info['redis_version']
       raise "You are using Redis v#{ver}, Sidekiq requires Redis v2.8.0 or greater" if ver < '2.8'
 
+      # Since the user can pass us a connection pool explicitly in the initializer, we
+      # need to verify the size is large enough or else Sidekiq's performance is dramatically slowed.
+      cursize = Sidekiq.redis_pool.size
+      needed = Sidekiq.options[:concurrency] + 2
+      raise "Your pool of #{cursize} Redis connections is too small, please increase the size to at least #{needed}" if cursize < needed
+
       # cache process identity
       Sidekiq.options[:identity] = identity
 
@@ -330,6 +335,8 @@ module Sidekiq
         opts[:tag] = arg
       end
 
+      # this index remains here for backwards compatibility but none of the Sidekiq
+      # family use this value anymore. it was used by Pro's original reliable_fetch.
      o.on '-i', '--index INT', "unique process index on this machine" do |arg|
        opts[:index] = Integer(arg.match(/\d+/)[0])
      end
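The new startup check above [#3917] is simple arithmetic: each worker thread needs its own Redis connection, plus two extra for the heartbeat and scheduler threads. A standalone sketch of that rule (`verify_pool_size!` is a hypothetical helper name, not Sidekiq API):

```ruby
# Mirrors the pool-sizing rule Sidekiq enforces at boot:
# pool must hold at least concurrency + 2 connections.
def verify_pool_size!(pool_size, concurrency)
  needed = concurrency + 2
  if pool_size < needed
    raise "Your pool of #{pool_size} Redis connections is too small, " \
          "please increase the size to at least #{needed}"
  end
  needed
end

verify_pool_size!(12, 10)    # 10 worker threads + heartbeat + scheduler: ok
# verify_pool_size!(5, 25)   # would raise at startup
```

With the default concurrency dropping to 10 in 5.2.0, a default pool of 12 connections satisfies this out of the box.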
data/lib/sidekiq/client.rb CHANGED
@@ -68,18 +68,19 @@ module Sidekiq
     #
     def push(item)
       normed = normalize_item(item)
-      payload = process_single(item['class'.freeze], normed)
+      payload = process_single(item['class'], normed)
 
       if payload
         raw_push([payload])
-        payload['jid'.freeze]
+        payload['jid']
       end
     end
 
     ##
-    # Push a large number of jobs to Redis. In practice this method is only
-    # useful if you are pushing thousands of jobs or more. This method
-    # cuts out the redis network round trip latency.
+    # Push a large number of jobs to Redis. This method cuts out the redis
+    # network round trip latency. I wouldn't recommend pushing more than
+    # 1000 per call but YMMV based on network quality, size of job args, etc.
+    # A large number of jobs can cause a bit of Redis command processing latency.
     #
     # Takes the same arguments as #push except that args is expected to be
     # an Array of Arrays. All other keys are duplicated for each job. Each job
@@ -89,19 +90,19 @@ module Sidekiq
     # Returns an array of the of pushed jobs' jids. The number of jobs pushed can be less
     # than the number given if the middleware stopped processing for one or more jobs.
     def push_bulk(items)
-      arg = items['args'.freeze].first
+      arg = items['args'].first
       return [] unless arg # no jobs to push
       raise ArgumentError, "Bulk arguments must be an Array of Arrays: [[1], [2]]" if !arg.is_a?(Array)
 
       normed = normalize_item(items)
-      payloads = items['args'.freeze].map do |args|
-        copy = normed.merge('args'.freeze => args, 'jid'.freeze => SecureRandom.hex(12), 'enqueued_at'.freeze => Time.now.to_f)
-        result = process_single(items['class'.freeze], copy)
+      payloads = items['args'].map do |args|
+        copy = normed.merge('args' => args, 'jid' => SecureRandom.hex(12), 'enqueued_at' => Time.now.to_f)
+        result = process_single(items['class'], copy)
         result ? result : nil
       end.compact
 
       raw_push(payloads) if !payloads.empty?
-      payloads.collect { |payload| payload['jid'.freeze] }
+      payloads.collect { |payload| payload['jid'] }
     end
 
     # Allows sharding of jobs across any number of Redis instances. All jobs
@@ -144,14 +145,14 @@ module Sidekiq
       # Messages are enqueued to the 'default' queue.
       #
       def enqueue(klass, *args)
-        klass.client_push('class'.freeze => klass, 'args'.freeze => args)
+        klass.client_push('class' => klass, 'args' => args)
       end
 
       # Example usage:
       #   Sidekiq::Client.enqueue_to(:queue_name, MyWorker, 'foo', 1, :bat => 'bar')
       #
       def enqueue_to(queue, klass, *args)
-        klass.client_push('queue'.freeze => queue, 'class'.freeze => klass, 'args'.freeze => args)
+        klass.client_push('queue' => queue, 'class' => klass, 'args' => args)
       end
 
       # Example usage:
@@ -162,8 +163,8 @@ module Sidekiq
         now = Time.now.to_f
         ts = (int < 1_000_000_000 ? now + int : int)
 
-        item = { 'class'.freeze => klass, 'args'.freeze => args, 'at'.freeze => ts, 'queue'.freeze => queue }
-        item.delete('at'.freeze) if ts <= now
+        item = { 'class' => klass, 'args' => args, 'at' => ts, 'queue' => queue }
+        item.delete('at') if ts <= now
 
         klass.client_push(item)
       end
@@ -188,25 +189,25 @@ module Sidekiq
     end
 
     def atomic_push(conn, payloads)
-      if payloads.first['at'.freeze]
-        conn.zadd('schedule'.freeze, payloads.map do |hash|
-          at = hash.delete('at'.freeze).to_s
+      if payloads.first['at']
+        conn.zadd('schedule', payloads.map do |hash|
+          at = hash.delete('at').to_s
           [at, Sidekiq.dump_json(hash)]
         end)
       else
-        q = payloads.first['queue'.freeze]
+        q = payloads.first['queue']
         now = Time.now.to_f
         to_push = payloads.map do |entry|
-          entry['enqueued_at'.freeze] = now
+          entry['enqueued_at'] = now
           Sidekiq.dump_json(entry)
         end
-        conn.sadd('queues'.freeze, q)
+        conn.sadd('queues', q)
         conn.lpush("queue:#{q}", to_push)
       end
     end
 
     def process_single(worker_class, item)
-      queue = item['queue'.freeze]
+      queue = item['queue']
 
       middleware.invoke(worker_class, item, queue, @redis_pool) do
         item
@@ -214,25 +215,25 @@ module Sidekiq
     end
 
     def normalize_item(item)
-      raise(ArgumentError, "Job must be a Hash with 'class' and 'args' keys: { 'class' => SomeWorker, 'args' => ['bob', 1, :foo => 'bar'] }") unless item.is_a?(Hash) && item.has_key?('class'.freeze) && item.has_key?('args'.freeze)
+      raise(ArgumentError, "Job must be a Hash with 'class' and 'args' keys: { 'class' => SomeWorker, 'args' => ['bob', 1, :foo => 'bar'] }") unless item.is_a?(Hash) && item.has_key?('class') && item.has_key?('args')
       raise(ArgumentError, "Job args must be an Array") unless item['args'].is_a?(Array)
-      raise(ArgumentError, "Job class must be either a Class or String representation of the class name") unless item['class'.freeze].is_a?(Class) || item['class'.freeze].is_a?(String)
-      raise(ArgumentError, "Job 'at' must be a Numeric timestamp") if item.has_key?('at'.freeze) && !item['at'].is_a?(Numeric)
+      raise(ArgumentError, "Job class must be either a Class or String representation of the class name") unless item['class'].is_a?(Class) || item['class'].is_a?(String)
+      raise(ArgumentError, "Job 'at' must be a Numeric timestamp") if item.has_key?('at') && !item['at'].is_a?(Numeric)
       #raise(ArgumentError, "Arguments must be native JSON types, see https://github.com/mperham/sidekiq/wiki/Best-Practices") unless JSON.load(JSON.dump(item['args'])) == item['args']
 
-      normalized_hash(item['class'.freeze])
+      normalized_hash(item['class'])
         .each{ |key, value| item[key] = value if item[key].nil? }
 
-      item['class'.freeze] = item['class'.freeze].to_s
-      item['queue'.freeze] = item['queue'.freeze].to_s
-      item['jid'.freeze] ||= SecureRandom.hex(12)
-      item['created_at'.freeze] ||= Time.now.to_f
+      item['class'] = item['class'].to_s
+      item['queue'] = item['queue'].to_s
+      item['jid'] ||= SecureRandom.hex(12)
+      item['created_at'] ||= Time.now.to_f
       item
     end
 
     def normalized_hash(item_class)
       if item_class.is_a?(Class)
-        raise(ArgumentError, "Message must include a Sidekiq::Worker class, not class name: #{item_class.ancestors.inspect}") if !item_class.respond_to?('get_sidekiq_options'.freeze)
+        raise(ArgumentError, "Message must include a Sidekiq::Worker class, not class name: #{item_class.ancestors.inspect}") if !item_class.respond_to?('get_sidekiq_options')
        item_class.get_sidekiq_options
      else
        Sidekiq.default_worker_options
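The heart of `push_bulk` is the fan-out in the middle of the method: one item hash whose `'args'` is an Array of Arrays becomes one payload per element, each with its own `jid` and `enqueued_at`. A standalone sketch of just that fan-out (no Redis, no middleware; `build_payloads` is a hypothetical name and the normalize/merge steps are simplified from the real method):

```ruby
require 'securerandom'

# Fan one bulk item out into per-job payloads, as push_bulk does before
# handing the batch to raw_push in a single Redis round trip.
def build_payloads(items)
  items['args'].map do |args|
    items.merge('args'       => args,
                'jid'        => SecureRandom.hex(12),
                'enqueued_at' => Time.now.to_f)
  end
end

payloads = build_payloads('class' => 'HardWorker', 'args' => [[1], [2], [3]])
p payloads.size                           # => 3
p payloads.map { |pl| pl['args'] }        # => [[1], [2], [3]]
```

This is why the revised comment caps the recommendation around 1000 jobs per call: all payloads are serialized and sent as one `LPUSH`, so a huge batch trades network round trips for Redis command processing latency.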
data/lib/sidekiq/delay.rb CHANGED
@@ -1,3 +1,4 @@
+# frozen_string_literal: true
 module Sidekiq
   module Extensions
 
data/lib/sidekiq/fetch.rb CHANGED
@@ -13,7 +13,7 @@ module Sidekiq
     end
 
     def queue_name
-      queue.sub(/.*queue:/, ''.freeze)
+      queue.sub(/.*queue:/, '')
     end
 
     def requeue
data/lib/sidekiq/job_logger.rb CHANGED
@@ -1,9 +1,10 @@
+# frozen_string_literal: true
 module Sidekiq
   class JobLogger
 
     def call(item, queue)
       start = Time.now
-      logger.info("start".freeze)
+      logger.info("start")
       yield
       logger.info("done: #{elapsed(start)} sec")
     rescue Exception
data/lib/sidekiq/job_retry.rb CHANGED
@@ -1,3 +1,4 @@
+# frozen_string_literal: true
 require 'sidekiq/scheduled'
 require 'sidekiq/api'
 
@@ -204,7 +205,11 @@ module Sidekiq
     end
 
     def delay_for(worker, count, exception)
-      worker && worker.sidekiq_retry_in_block && retry_in(worker, count, exception) || seconds_to_delay(count)
+      if worker && worker.sidekiq_retry_in_block
+        custom_retry_in = retry_in(worker, count, exception).to_i
+        return custom_retry_in if custom_retry_in > 0
+      end
+      seconds_to_delay(count)
     end
 
     # delayed_job uses the same basic formula
@@ -214,7 +219,7 @@ module Sidekiq
 
     def retry_in(worker, count, exception)
       begin
-        worker.sidekiq_retry_in_block.call(count, exception).to_i
+        worker.sidekiq_retry_in_block.call(count, exception)
       rescue Exception => e
         handle_exception(e, { context: "Failure scheduling retry using the defined `sidekiq_retry_in` in #{worker.class.name}, falling back to default" })
         nil
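The `delay_for` rewrite above is what lets a `sidekiq_retry_in` block return nil or 0 to fall back to the default backoff [#3796]: the block's result is coerced with `to_i` and only used when positive. A standalone sketch of that decision (`default_delay` is a hypothetical name mirroring Sidekiq's `seconds_to_delay`, whose delayed_job-style formula is `count**4 + 15 + rand(30) * (count + 1)`):

```ruby
# Default exponential backoff with jitter, as in Sidekiq's seconds_to_delay.
def default_delay(count)
  (count ** 4) + 15 + (rand(30) * (count + 1))
end

# New behavior: a custom block wins only when it yields a positive integer;
# nil or 0 (or no block at all) falls through to the default.
def delay_for(retry_in_block, count)
  if retry_in_block
    custom = retry_in_block.call(count).to_i
    return custom if custom > 0
  end
  default_delay(count)
end

delay_for(->(_count) { 10 }, 0)    # => 10 (custom delay honored)
delay_for(->(_count) { nil }, 0)   # default backoff, no crash
```

Under the old one-liner, a block returning nil made the `&&` chain fall through correctly, but a block returning 0 scheduled an immediate retry; the explicit `> 0` check makes both cases mean "use the default".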
data/lib/sidekiq/launcher.rb CHANGED
@@ -1,4 +1,3 @@
-# encoding: utf-8
 # frozen_string_literal: true
 require 'sidekiq/manager'
 require 'sidekiq/fetch'
@@ -14,6 +13,8 @@ module Sidekiq
 
     attr_accessor :manager, :poller, :fetcher
 
+    STATS_TTL = 5*365*24*60*60
+
     def initialize(options)
       @manager = Sidekiq::Manager.new(options)
       @poller = Sidekiq::Scheduled::Poller.new
@@ -73,19 +74,24 @@ module Sidekiq
       key = identity
       fails = procd = 0
       begin
-        Processor::FAILURE.update {|curr| fails = curr; 0 }
-        Processor::PROCESSED.update {|curr| procd = curr; 0 }
+        fails = Processor::FAILURE.reset
+        procd = Processor::PROCESSED.reset
+        curstate = Processor::WORKER_STATE.dup
 
-        workers_key = "#{key}:workers".freeze
-        nowdate = Time.now.utc.strftime("%Y-%m-%d".freeze)
+        workers_key = "#{key}:workers"
+        nowdate = Time.now.utc.strftime("%Y-%m-%d")
         Sidekiq.redis do |conn|
           conn.multi do
-            conn.incrby("stat:processed".freeze, procd)
+            conn.incrby("stat:processed", procd)
             conn.incrby("stat:processed:#{nowdate}", procd)
-            conn.incrby("stat:failed".freeze, fails)
+            conn.expire("stat:processed:#{nowdate}", STATS_TTL)
+
+            conn.incrby("stat:failed", fails)
             conn.incrby("stat:failed:#{nowdate}", fails)
+            conn.expire("stat:failed:#{nowdate}", STATS_TTL)
+
             conn.del(workers_key)
-            Processor::WORKER_STATE.each_pair do |tid, hash|
+            curstate.each_pair do |tid, hash|
              conn.hset(workers_key, tid, Sidekiq.dump_json(hash))
            end
            conn.expire(workers_key, 60)
@@ -97,7 +103,7 @@ module Sidekiq
          conn.multi do
            conn.sadd('processes', key)
            conn.exists(key)
-            conn.hmset(key, 'info', to_json, 'busy', Processor::WORKER_STATE.size, 'beat', Time.now.to_f, 'quiet', @done)
+            conn.hmset(key, 'info', to_json, 'busy', curstate.size, 'beat', Time.now.to_f, 'quiet', @done)
            conn.expire(key, 60)
            conn.rpop("#{key}-signals")
          end
@@ -113,8 +119,8 @@ module Sidekiq
           # ignore all redis/network issues
           logger.error("heartbeat: #{e.message}")
           # don't lose the counts if there was a network issue
-          Processor::PROCESSED.increment(procd)
-          Processor::FAILURE.increment(fails)
+          Processor::PROCESSED.incr(procd)
+          Processor::FAILURE.incr(fails)
        end
      end
 
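The `curstate = Processor::WORKER_STATE.dup` line above is the 5.2.1 fix for the concurrent modification error during heartbeat [#3921]: instead of iterating the live `WORKER_STATE` hash while worker threads add and remove entries, the heartbeat takes a shallow snapshot once and iterates that. A minimal illustration of the pattern, with plain hashes standing in for the shared worker-state structure:

```ruby
# Shared state, normally mutated by worker threads.
state = { 'tid-1' => { 'queue' => 'default' } }

# Heartbeat takes a snapshot once...
snapshot = state.dup

# ...so a mutation happening "mid-heartbeat" cannot change what it iterates.
state['tid-2'] = { 'queue' => 'critical' }

p snapshot.keys   # => ["tid-1"]
p state.keys      # => ["tid-1", "tid-2"]
```

The snapshot is also reused for the `busy` count in `hmset`, so the count and the per-thread entries written to Redis describe the same instant.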
data/lib/sidekiq/logging.rb CHANGED
@@ -33,9 +33,9 @@ module Sidekiq
     def self.job_hash_context(job_hash)
       # If we're using a wrapper class, like ActiveJob, use the "wrapped"
       # attribute to expose the underlying thing.
-      klass = job_hash['wrapped'.freeze] || job_hash["class".freeze]
-      bid = job_hash['bid'.freeze]
-      "#{klass} JID-#{job_hash['jid'.freeze]}#{" BID-#{bid}" if bid}"
+      klass = job_hash['wrapped'] || job_hash["class"]
+      bid = job_hash['bid']
+      "#{klass} JID-#{job_hash['jid']}#{" BID-#{bid}" if bid}"
     end
 
     def self.with_job_hash_context(job_hash, &block)
data/lib/sidekiq/manager.rb CHANGED
@@ -1,4 +1,3 @@
-# encoding: utf-8
 # frozen_string_literal: true
 require 'sidekiq/util'
 require 'sidekiq/processor'