sidekiq 6.0.1 → 6.0.2


checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 792692164ce00b070d352bda8869518e3d20d92cfdb70cda88456b53d5a4a627
-  data.tar.gz: b489c2ecc7b3708dd578a0184a9da58ac32d36e737a3b704d094e1ef375e93e1
+  metadata.gz: 9dd138f78183ff31972192acbcee3ea7aad67403dfdd478500f3bbbfebf14698
+  data.tar.gz: ea4ab3b7c40bf358a80df9043013a63ee7c0562322ce5f212f938cecdc93c62f
 SHA512:
-  metadata.gz: 80d7bd6a6fc9ef0a6026c9c5bb8aaf727db7323e1f1d9e057550681329991ddf82de030df70dc80e8f3a3608fc1f999ecb5d251765a5bcf09f23f764d86365a7
-  data.tar.gz: 3148011168f88fbc4fa67f244a56be0b0e7da36d5009f4d465e0df2a7df93ddf8a99fc1c57c43fa0365a0cdb1de9fb770306daf070bcea5f330c1f3522210bf9
+  metadata.gz: c34d01bffdf5af462b98afa03e4b71356b8afffd873533eb953305314e1c0fd0ea4a4c0eea53c7169b61aca51420a50d5eda9127ad815c46c4053954677be24d
+  data.tar.gz: fdfe2b1704bc7d3a071756ca983c9b81bf4534ceb5f532bec59ec87fe4c3094c95120a7b1471163327f55e132c9773a3fd05d9a6211c6c1a1342c58872e8d980
data/Changes.md CHANGED
@@ -2,7 +2,13 @@
 
 [Sidekiq Changes](https://github.com/mperham/sidekiq/blob/master/Changes.md) | [Sidekiq Pro Changes](https://github.com/mperham/sidekiq/blob/master/Pro-Changes.md) | [Sidekiq Enterprise Changes](https://github.com/mperham/sidekiq/blob/master/Ent-Changes.md)
 
-HEAD
+6.0.2
+---------
+
+- Fix Sidekiq Enterprise's rolling restart functionality, broken by refactoring in 6.0.0. [#4334]
+- More internal refactoring and performance tuning [fatkodima]
+
+6.0.1
 ---------
 
 - **Performance tuning**, Sidekiq should be 10-15% faster now [#4303, 4299,
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    sidekiq (6.0.1)
+    sidekiq (6.0.2)
       connection_pool (>= 2.2.2)
       rack (>= 2.0.0)
       rack-protection (>= 2.0.0)
data/README.md CHANGED
@@ -19,7 +19,8 @@ Performance
 
 Version          | Latency | Garbage created for 10k jobs | Time to process 100k jobs | Throughput | Ruby
 -----------------|---------|---------|---------|------------------------|-----
-Sidekiq 6.0.0    | 3 ms    | 156 MB  | 19 sec  | **5200 jobs/sec**      | MRI 2.6.3
+Sidekiq 6.0.2    | 3 ms    | 156 MB  | 14.0 sec| **7100 jobs/sec**      | MRI 2.6.3
+Sidekiq 6.0.0    | 3 ms    | 156 MB  | 19 sec  | 5200 jobs/sec          | MRI 2.6.3
 Sidekiq 4.0.0    | 10 ms   | 151 MB  | 22 sec  | 4500 jobs/sec          |
 Sidekiq 3.5.1    | 22 ms   | 1257 MB | 125 sec | 800 jobs/sec           |
 Resque 1.25.2    | -       | -       | 420 sec | 240 jobs/sec           |
@@ -80,8 +80,8 @@ module Sidekiq
       }
 
       s = processes.size
-      workers_size = pipe2_res[0...s].map(&:to_i).inject(0, &:+)
-      enqueued = pipe2_res[s..-1].map(&:to_i).inject(0, &:+)
+      workers_size = pipe2_res[0...s].sum(&:to_i)
+      enqueued = pipe2_res[s..-1].sum(&:to_i)
 
       default_queue_latency = if (entry = pipe1_res[6].first)
         job = begin
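The hunk above is a pure refactor: `map(&:to_i).inject(0, &:+)` becomes `sum(&:to_i)` (`Enumerable#sum`, Ruby 2.4+), which totals in one pass without building an intermediate array. A minimal sketch, with hypothetical pipeline replies standing in for the real Redis data:

```ruby
# Hypothetical Redis pipeline replies: first `s` entries are per-process
# worker counts, the rest are per-queue lengths, all returned as Strings.
pipe2_res = ["3", "7", "0", "5"]
s = 2

# Old style: builds a temporary Integer array, then folds it.
old_workers = pipe2_res[0...s].map(&:to_i).inject(0, &:+)

# New style: single pass, no intermediate array.
workers_size = pipe2_res[0...s].sum(&:to_i)
enqueued     = pipe2_res[s..-1].sum(&:to_i)

puts workers_size # 10
puts enqueued     # 5
```

Both forms return 0 for an empty slice, so behavior is unchanged on edge cases.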
@@ -438,12 +438,18 @@ module Sidekiq
 
     def uncompress_backtrace(backtrace)
       if backtrace.is_a?(Array)
-        # Handle old jobs with previous backtrace format
+        # Handle old jobs with raw Array backtrace format
        backtrace
       else
         decoded = Base64.decode64(backtrace)
         uncompressed = Zlib::Inflate.inflate(decoded)
-        Marshal.load(uncompressed)
+        begin
+          Sidekiq.load_json(uncompressed)
+        rescue
+          # Handle old jobs with marshalled backtrace format
+          # TODO Remove in 7.x
+          Marshal.load(uncompressed)
+        end
       end
     end
   end
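The decode above tries the new JSON encoding first and falls back to `Marshal` for blobs written by older versions. A self-contained round-trip sketch, with `JSON.parse`/`JSON.generate` standing in for `Sidekiq.load_json`/`Sidekiq.dump_json`:

```ruby
require "base64"
require "json"
require "zlib"

# Decode a compressed backtrace blob: JSON first (6.0.2+ format),
# Marshal as a fallback for jobs written by older releases.
def decode_backtrace(blob)
  uncompressed = Zlib::Inflate.inflate(Base64.decode64(blob))
  begin
    JSON.parse(uncompressed)
  rescue
    Marshal.load(uncompressed)
  end
end

bt = ["app.rb:1:in `call'"]
new_blob = Base64.encode64(Zlib::Deflate.deflate(JSON.generate(bt)))
old_blob = Base64.encode64(Zlib::Deflate.deflate(Marshal.dump(bt)))

decode_backtrace(new_blob) # => ["app.rb:1:in `call'"]
decode_backtrace(old_blob) # => ["app.rb:1:in `call'"]
```

JSON is preferred because `Marshal.load` on untrusted data can instantiate arbitrary objects, while a backtrace is only ever an array of strings.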
@@ -471,8 +477,9 @@ module Sidekiq
     end
 
     def reschedule(at)
-      delete
-      @parent.schedule(at, item)
+      Sidekiq.redis do |conn|
+        conn.zincrby(@parent.name, at - @score, Sidekiq.dump_json(@item))
+      end
     end
 
     def add_to_queue
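The new `reschedule` replaces a delete-plus-re-add with a single ZINCRBY whose delta lands the member exactly on the new score: one atomic command instead of two. The arithmetic can be sketched with a plain Hash standing in for the Redis sorted set:

```ruby
# Hash stands in for the sorted set here: member => score.
# ZINCRBY adds a delta to the member's score, so a delta of
# (at - old_score) moves the job exactly to the new time `at`.
scores = {"job-json" => 1_700_000_000.0}

at        = 1_700_000_600.0
old_score = scores["job-json"]

scores["job-json"] += (at - old_score) # conn.zincrby(name, at - @score, member)

scores["job-json"] # => 1700000600.0
```

Because the member (the serialized job) is unchanged, ZINCRBY updates it in place; there is no window in which the job exists in neither the old nor the new position.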
@@ -554,7 +561,7 @@ module Sidekiq
     end
 
     def scan(match, count = 100)
-      return to_enum(:scan, match) unless block_given?
+      return to_enum(:scan, match, count) unless block_given?
 
       match = "*#{match}*" unless match.include?("*")
       Sidekiq.redis do |conn|
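This one-character-class of bug is worth spelling out: `to_enum` replays the method call with exactly the arguments it was given, so omitting `count` silently reverted enumerator-based callers to the default page size. A minimal stand-in class (hypothetical, not Sidekiq's):

```ruby
# Minimal illustration of the to_enum fix: the enumerator re-invokes
# the method, so every argument must be forwarded or it is lost.
class Pager
  def scan(match, count = 100)
    # Without `count` here, Pager.new.scan("foo", 5).to_a would
    # silently run with count = 100.
    return to_enum(:scan, match, count) unless block_given?
    yield "#{match}/page_size=#{count}"
  end
end

Pager.new.scan("foo", 5).to_a # => ["foo/page_size=5"]
```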
@@ -648,11 +655,13 @@ module Sidekiq
       Sidekiq.redis do |conn|
         elements = conn.zrangebyscore(name, score, score)
         elements.each do |element|
-          message = Sidekiq.load_json(element)
-          if message["jid"] == jid
-            ret = conn.zrem(name, element)
-            @_size -= 1 if ret
-            break ret
+          if element.index(jid)
+            message = Sidekiq.load_json(element)
+            if message["jid"] == jid
+              ret = conn.zrem(name, element)
+              @_size -= 1 if ret
+              break ret
+            end
           end
         end
       end
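The `element.index(jid)` guard is a cheap substring test that lets most non-matching payloads be skipped without a JSON parse; only candidates that contain the jid somewhere are parsed and checked exactly. A sketch with hypothetical payloads:

```ruby
require "json"

# Prefilter with String#index (cheap), confirm with JSON.parse (exact).
jid = "b4a577edbccf1d805744efa9"
elements = [
  JSON.generate("jid" => "0" * 24, "args" => [1]),
  JSON.generate("jid" => jid,      "args" => [2]),
]

match = elements.find { |el| el.index(jid) && JSON.parse(el)["jid"] == jid }

JSON.parse(match)["args"] # => [2]
```

The exact `message["jid"] == jid` check is still required because the jid string could, in principle, appear inside a job's arguments rather than its `jid` field.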
@@ -786,30 +795,28 @@ module Sidekiq
     end
 
     def each
-      procs = Sidekiq.redis { |conn| conn.sscan_each("processes").to_a }.sort
+      result = Sidekiq.redis { |conn|
+        procs = conn.sscan_each("processes").to_a.sort
 
-      Sidekiq.redis do |conn|
         # We're making a tradeoff here between consuming more memory instead of
         # making more roundtrips to Redis, but if you have hundreds or thousands of workers,
         # you'll be happier this way
-        result = conn.pipelined {
+        conn.pipelined do
           procs.each do |key|
             conn.hmget(key, "info", "busy", "beat", "quiet")
           end
-        }
+        end
+      }
 
-      result.each do |info, busy, at_s, quiet|
-        # If a process is stopped between when we query Redis for `procs` and
-        # when we query for `result`, we will have an item in `result` that is
-        # composed of `nil` values.
-        next if info.nil?
+      result.each do |info, busy, at_s, quiet|
+        # If a process is stopped between when we query Redis for `procs` and
+        # when we query for `result`, we will have an item in `result` that is
+        # composed of `nil` values.
+        next if info.nil?
 
-        hash = Sidekiq.load_json(info)
-        yield Process.new(hash.merge("busy" => busy.to_i, "beat" => at_s.to_f, "quiet" => quiet))
-      end
+        hash = Sidekiq.load_json(info)
+        yield Process.new(hash.merge("busy" => busy.to_i, "beat" => at_s.to_f, "quiet" => quiet))
       end
-
-      nil
     end
 
     # This method is not guaranteed accurate since it does not prune the set
@@ -953,7 +960,7 @@ module Sidekiq
         procs.each do |key|
           conn.hget(key, "busy")
         end
-      }.map(&:to_i).inject(:+)
+      }.sum(&:to_i)
     end
   end
 end
@@ -41,6 +41,8 @@ module Sidekiq
 
     self_read, self_write = IO.pipe
     sigs = %w[INT TERM TTIN TSTP]
+    # USR1 and USR2 don't work on the JVM
+    sigs << "USR2" unless jruby?
     sigs.each do |sig|
       trap sig do
         self_write.puts(sig)
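The surrounding code is the classic self-pipe pattern: trap handlers do almost nothing (just write the signal name to a pipe) and the main loop reads signal names outside of handler context, where it is safe to take locks and log. A runnable sketch of the same setup, assuming a POSIX platform:

```ruby
# Self-pipe signal handling sketch: handlers only write to the pipe;
# real work happens when the main loop reads the name back out.
self_read, self_write = IO.pipe

sigs = %w[INT TERM TTIN TSTP]
# "USR2" is skipped on JRuby because the JVM reserves the USR signals.
sigs << "USR2" unless RUBY_PLATFORM == "java"
sigs.each do |sig|
  trap(sig) { self_write.puts(sig) }
end

Process.kill("TTIN", Process.pid) # deliver a signal to ourselves
got = self_read.gets.strip        # => "TTIN"
```

Keeping handler bodies tiny matters because Ruby forbids mutex acquisition (and therefore most logging) inside a trap context.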
@@ -74,7 +74,7 @@ module Sidekiq
     # The global retry handler requires only the barest of data.
     # We want to be able to retry as much as possible so we don't
     # require the worker to be instantiated.
-    def global(msg, queue)
+    def global(jobstr, queue)
       yield
     rescue Handled => ex
       raise ex
@@ -85,6 +85,7 @@ module Sidekiq
       # ignore, will be pushed back onto queue during hard_shutdown
       raise Sidekiq::Shutdown if exception_caused_by_shutdown?(e)
 
+      msg = Sidekiq.load_json(jobstr)
       if msg["retry"]
         attempt_retry(nil, msg, queue, e)
       else
@@ -106,7 +107,7 @@ module Sidekiq
     # exception so the global block does not reprocess the error. The
     # Skip exception is unwrapped within Sidekiq::Processor#process before
     # calling the handle_exception handlers.
-    def local(worker, msg, queue)
+    def local(worker, jobstr, queue)
       yield
     rescue Handled => ex
       raise ex
@@ -117,6 +118,7 @@ module Sidekiq
       # ignore, will be pushed back onto queue during hard_shutdown
       raise Sidekiq::Shutdown if exception_caused_by_shutdown?(e)
 
+      msg = Sidekiq.load_json(jobstr)
       if msg["retry"].nil?
         msg["retry"] = worker.class.get_sidekiq_options["retry"]
       end
@@ -252,7 +254,7 @@ module Sidekiq
     end
 
     def compress_backtrace(backtrace)
-      serialized = Marshal.dump(backtrace)
+      serialized = Sidekiq.dump_json(backtrace)
       compressed = Zlib::Deflate.deflate(serialized)
       Base64.encode64(compressed)
     end
@@ -4,21 +4,6 @@ require "fileutils"
 require "sidekiq/api"
 
 class Sidekiq::Monitor
-  CMD = File.basename($PROGRAM_NAME)
-
-  attr_reader :stage
-
-  def self.print_usage
-    puts "#{CMD} - monitor Sidekiq from the command line."
-    puts
-    puts "Usage: #{CMD} <section>"
-    puts
-    puts "       <section> (optional) view a specific section of the status output"
-    puts "       Valid sections are: #{Sidekiq::Monitor::Status::VALID_SECTIONS.join(", ")}"
-    puts
-    puts "Set REDIS_URL to the location of your Redis server if not monitoring localhost."
-  end
-
   class Status
     VALID_SECTIONS = %w[all version overview processes queues]
     COL_PAD = 2
@@ -111,16 +111,19 @@ module Sidekiq
       nil
     end
 
-    def dispatch(job_hash, queue)
+    def dispatch(job_hash, queue, jobstr)
       # since middleware can mutate the job hash
-      # we clone here so we report the original
+      # we need to clone it to report the original
       # job structure to the Web UI
-      pristine = json_clone(job_hash)
+      # or to push back to redis when retrying.
+      # To avoid costly and, most of the time, useless cloning here,
+      # we pass the original JSON String to the respective methods
+      # to re-parse it there if we need access to the original, untouched job
 
       @job_logger.prepare(job_hash) do
-        @retrier.global(pristine, queue) do
+        @retrier.global(jobstr, queue) do
           @job_logger.call(job_hash, queue) do
-            stats(pristine, queue) do
+            stats(jobstr, queue) do
               # Rails 5 requires a Reloader to wrap code execution. In order to
               # constantize the worker and instantiate an instance, we have to call
               # the Reloader. It handles code loading, db connection management, etc.
@@ -129,7 +132,7 @@ module Sidekiq
           klass = constantize(job_hash["class"])
           worker = klass.new
           worker.jid = job_hash["jid"]
-          @retrier.local(worker, pristine, queue) do
+          @retrier.local(worker, jobstr, queue) do
             yield worker
           end
         end
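The idea running through the `dispatch` hunks: instead of deep-cloning the job hash on every execution, keep the raw JSON string and re-parse it only on the paths that actually need a pristine copy (retry, Web UI). Parsing is a complete deep clone by construction, since the parser builds fresh objects. A self-contained sketch with a hypothetical job payload:

```ruby
require "json"

# Keep the raw JSON string; re-parse lazily for a pristine copy.
jobstr = JSON.generate("class" => "HardWorker", "args" => [1, 2], "retry" => true)

job_hash = JSON.parse(jobstr)
job_hash["args"] << "mutated_by_middleware" # middleware may mutate freely

# Only taken on the failure path, so the happy path pays no clone cost.
pristine = JSON.parse(jobstr)

pristine["args"] # => [1, 2]
```

The tradeoff is that the failure path now pays a second JSON parse, but failures are rare compared to successful executions, so the common case gets cheaper.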
@@ -156,7 +159,7 @@ module Sidekiq
 
       ack = false
       begin
-        dispatch(job_hash, queue) do |worker|
+        dispatch(job_hash, queue, jobstr) do |worker|
           Sidekiq.server_middleware.invoke(worker, job_hash, queue) do
             execute_job(worker, job_hash["args"])
           end
@@ -247,8 +250,8 @@ module Sidekiq
     FAILURE = Counter.new
     WORKER_STATE = SharedWorkerState.new
 
-    def stats(job_hash, queue)
-      WORKER_STATE.set(tid, {queue: queue, payload: job_hash, run_at: Time.now.to_i})
+    def stats(jobstr, queue)
+      WORKER_STATE.set(tid, {queue: queue, payload: jobstr, run_at: Time.now.to_i})
 
       begin
         yield
@@ -273,30 +276,5 @@ module Sidekiq
         constant.const_get(name, false)
       end
     end
-
-    # Deep clone the arguments passed to the worker so that if
-    # the job fails, what is pushed back onto Redis hasn't
-    # been mutated by the worker.
-    def json_clone(obj)
-      if Integer === obj || Float === obj || TrueClass === obj || FalseClass === obj || NilClass === obj
-        return obj
-      elsif String === obj
-        return obj.dup
-      elsif Array === obj
-        duped = Array.new(obj.size)
-        obj.each_with_index do |value, index|
-          duped[index] = json_clone(value)
-        end
-      elsif Hash === obj
-        duped = obj.dup
-        duped.each_pair do |key, value|
-          duped[key] = json_clone(value)
-        end
-      else
-        duped = obj.dup
-      end
-
-      duped
-    end
   end
 end
@@ -11,8 +11,6 @@ module Sidekiq
   module Util
     include ExceptionHandler
 
-    EXPIRY = 60 * 60 * 24
-
     def watchdog(last_words)
       yield
     rescue Exception => ex
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module Sidekiq
-  VERSION = "6.0.1"
+  VERSION = "6.0.2"
 end
@@ -5,7 +5,6 @@ module Sidekiq
   extend WebRouter
 
   CONTENT_LENGTH = "Content-Length"
-  CONTENT_TYPE = "Content-Type"
   REDIS_KEYS = %w[redis_version uptime_in_days connected_clients used_memory_human used_memory_peak_human]
   CSP_HEADER = [
     "default-src 'self' https: http:",
@@ -307,7 +306,7 @@ module Sidekiq
 
     resp[1] = resp[1].dup
 
-    resp[1][CONTENT_LENGTH] = resp[2].inject(0) { |l, p| l + p.bytesize }.to_s
+    resp[1][CONTENT_LENGTH] = resp[2].sum(&:bytesize).to_s
 
     resp
   end
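Same `sum` refactor as in the API stats, applied to the Rack response body. Content-Length must count bytes, not characters, which is why the block uses `bytesize`; a quick demonstration with a multibyte chunk:

```ruby
# A Rack body is an Enumerable of String chunks. Content-Length is the
# total bytesize, which differs from character count for multibyte text.
body = ["<h1>", "Größe", "</h1>"]

content_length = body.sum(&:bytesize).to_s # => "16"
body.sum(&:size)                           # 14 characters: not what HTTP needs
```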
@@ -156,12 +156,6 @@ module Sidekiq
       @stats ||= Sidekiq::Stats.new
     end
 
-    def retries_with_score(score)
-      Sidekiq.redis { |conn|
-        conn.zrangebyscore("retry", score, score)
-      }.map { |msg| Sidekiq.load_json(msg) }
-    end
-
     def redis_connection
       Sidekiq.redis do |conn|
         c = conn.connection
@@ -13,7 +13,7 @@ de:
   Retries: Versuche
   Enqueued: In der Warteschlange
   Worker: Arbeiter
-  LivePoll: Live Poll
+  LivePoll: Echtzeitabfrage
   StopPolling: Abfrage stoppen
   Queue: Warteschlange
   Class: Klasse
@@ -33,12 +33,13 @@ de:
   NextRetry: Nächster Versuch
   RetryCount: Anzahl der Versuche
   RetryNow: Jetzt erneut versuchen
-  Kill: Töten
+  Kill: Vernichten
   LastRetry: Letzter Versuch
   OriginallyFailed: Ursprünglich fehlgeschlagen
   AreYouSure: Bist du sicher?
   DeleteAll: Alle löschen
   RetryAll: Alle erneut versuchen
+  KillAll: Alle vernichten
   NoRetriesFound: Keine erneuten Versuche gefunden
   Error: Fehler
   ErrorClass: Fehlerklasse
@@ -67,3 +68,14 @@ de:
   Thread: Thread
   Threads: Threads
   Jobs: Jobs
+  Paused: Pausiert
+  Stop: Stopp
+  Quiet: Leise
+  StopAll: Alle stoppen
+  QuietAll: Alle leise
+  PollingInterval: Abfrageintervall
+  Plugins: Erweiterungen
+  NotYetEnqueued: Noch nicht in der Warteschlange
+  CreatedAt: Erstellt
+  BackToApp: Zurück zur Anwendung
+  Latency: Latenz
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: sidekiq
 version: !ruby/object:Gem::Version
-  version: 6.0.1
+  version: 6.0.2
 platform: ruby
 authors:
 - Mike Perham
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2019-10-02 00:00:00.000000000 Z
+date: 2019-10-12 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: redis
@@ -214,8 +214,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
     - !ruby/object:Gem::Version
       version: '0'
 requirements: []
-rubyforge_project:
-rubygems_version: 2.7.6
+rubygems_version: 3.0.3
 signing_key:
 specification_version: 4
 summary: Simple, efficient background processing for Ruby