sidekiq 6.4.0 → 6.4.1

This version of sidekiq has been flagged as potentially problematic.

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 9622b2851203b0c5a80695ab7801ca77e15dc63a641ae79132cda9a2fcbe0cc6
-  data.tar.gz: dd943a02d2cf910f51866d02254f3a7845cb159735bd54d223dd7127a1fbd2fa
+  metadata.gz: 99c9e264c092b88ea726be158fafe5bbab91f82f4b5864dee406280622e98e4b
+  data.tar.gz: acd72bd99929d7c9d129cb9662276cc5adb7214de07cd4fc8accf6b9d521994a
 SHA512:
-  metadata.gz: 65bcd542866d8699ecf5958d81a15fd322cd1c51dfc1dbc6a8b15402b70862510c134e2b0ed9f9dbcdbb4dea3a59c1d12878ff926145e503079a59896f279ff3
-  data.tar.gz: 36143e85dc7fd4611f8a43fe39788dc590725ac63a827fbc174916da2d59fdadb3fb64fa0d819c4ef90694f6f292ea15f7fb58fb29f368c0a06c5b90f1b4bca7
+  metadata.gz: 622c25276c017302c1a9d144e9366043ba359b2c3b0c57d4e7baad8f9de2e9c9969a86c91acdbefcf736af92e297c4e1fbe2008aa41e0c1accadda77dd0724f5
+  data.tar.gz: 7e64012a5368cb0158ecaa50cdea6447709a64dd3a2816b36a31e7f17d70fffff81bd8d317c0cc1f9a6317adcffad9c200c48f9ca4bf208afba819ff7a07738e
data/Changes.md CHANGED
@@ -2,6 +2,14 @@
 
 [Sidekiq Changes](https://github.com/mperham/sidekiq/blob/main/Changes.md) | [Sidekiq Pro Changes](https://github.com/mperham/sidekiq/blob/main/Pro-Changes.md) | [Sidekiq Enterprise Changes](https://github.com/mperham/sidekiq/blob/main/Ent-Changes.md)
 
+HEAD
+---------
+
+- Fix pipeline/multi deprecations in redis-rb 4.6
+- Fix sidekiq.yml YAML load errors on Ruby 3.1 [#5141]
+- Sharding support for `perform_bulk` [#5129]
+- Refactor job logger for SPEEEEEEED
+
 6.4.0
 ---------
 
@@ -27,7 +35,6 @@ bin/rails generate sidekiq:job ProcessOrderJob
 ```
 - Fix job retries losing CurrentAttributes [#5090]
 - Tweak shutdown to give long-running threads time to cleanup [#5095]
-- Add keyword arguments support in extensions
 
 6.3.1
 ---------
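The first HEAD entry above is the redis-rb 4.6 deprecation fix: every `pipelined`/`multi` block now sends commands to the yielded object instead of the outer connection. A minimal sketch of the new calling convention, using a hypothetical `MockConn` stand-in so it runs without a Redis server:

```ruby
# Hypothetical stand-in for a redis-rb connection; the real client (>= 4.6)
# yields a pipeline object and warns if you keep calling the connection.
class MockConn
  def initialize
    @buffer = []
  end

  # Mimics redis-rb >= 4.6: the block receives a dedicated pipeline object
  # and the queued commands' results come back when the block exits.
  def pipelined
    yield self
    @buffer
  end

  def get(key)
    @buffer << [:get, key]
  end

  def zcard(key)
    @buffer << [:zcard, key]
  end
end

conn = MockConn.new
results = conn.pipelined do |pipeline|
  pipeline.get("stat:processed") # new style: command goes to the block arg
  pipeline.zcard("retry")
end
```

With a real redis-rb 4.6 connection, the old `conn.get(...)` form inside the block still works but emits a deprecation warning; 6.4.1 switches every call site to the block argument.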
data/README.md CHANGED
@@ -80,6 +80,11 @@ Useful resources:
 Every Friday morning is Sidekiq happy hour: I video chat and answer questions.
 See the [Sidekiq support page](https://sidekiq.org/support.html) for details.
 
+Contributing
+-----------------
+
+Please see [the contributing guidelines](https://github.com/mperham/sidekiq/blob/main/.github/contributing.md).
+
 
 License
 -----------------
data/bin/sidekiq CHANGED
@@ -4,7 +4,7 @@
 # RUBYOPT=-w bundle exec sidekiq
 $TESTING = false
 
-require_relative '../lib/sidekiq/cli'
+require_relative "../lib/sidekiq/cli"
 
 def integrate_with_systemd
   return unless ENV["NOTIFY_SOCKET"]
@@ -32,8 +32,8 @@ begin
 rescue => e
   raise e if $DEBUG
   if Sidekiq.error_handlers.length == 0
-    STDERR.puts e.message
-    STDERR.puts e.backtrace.join("\n")
+    warn e.message
+    warn e.backtrace.join("\n")
   else
     cli.handle_exception e
   end
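The `STDERR.puts` → `warn` change above is behavior-preserving for error output: `Kernel#warn` writes to `$stderr` (so it also respects reassignment, e.g. in tests), though it is suppressed under `ruby -W0`. A quick standalone check, capturing `$stderr` with StringIO:

```ruby
require "stringio"

# Capture $stderr to show that Kernel#warn writes there, just as
# STDERR.puts did in the old code. warn appends a newline if missing.
captured = StringIO.new
orig = $stderr
$stderr = captured
warn "boom"
warn %w[line1 line2].join("\n")
$stderr = orig
```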
data/bin/sidekiqload CHANGED
@@ -4,19 +4,18 @@
 # RUBYOPT=-w bundle exec sidekiq
 $TESTING = false
 
-#require 'ruby-prof'
-require 'bundler/setup'
+# require "ruby-prof"
+require "bundler/setup"
 Bundler.require(:default, :load_test)
 
-require_relative '../lib/sidekiq/cli'
-require_relative '../lib/sidekiq/launcher'
-
-include Sidekiq::Util
+require_relative "../lib/sidekiq/cli"
+require_relative "../lib/sidekiq/launcher"
 
 Sidekiq.configure_server do |config|
   config.options[:concurrency] = 10
-  config.redis = { db: 13, port: 6380, driver: :hiredis }
-  config.options[:queues] << 'default'
+  config.redis = {db: 13, port: 6380}
+  # config.redis = { db: 13, port: 6380, driver: :hiredis}
+  config.options[:queues] << "default"
   config.logger.level = Logger::ERROR
   config.average_scheduled_poll_interval = 2
   config.reliable! if defined?(Sidekiq::Pro)
@@ -29,9 +28,9 @@ class LoadWorker
     1
   end
 
-  def perform(idx, ts=nil)
-    puts(Time.now.to_f - ts) if ts != nil
-    #raise idx.to_s if idx % 100 == 1
+  def perform(idx, ts = nil)
+    puts(Time.now.to_f - ts) if !ts.nil?
+    # raise idx.to_s if idx % 100 == 1
   end
 end
 
@@ -39,43 +38,41 @@ end
 # brew install toxiproxy
 # gem install toxiproxy
 # run `toxiproxy-server` in a separate terminal window.
-require 'toxiproxy'
+require "toxiproxy"
 # simulate a non-localhost network for realer-world conditions.
 # adding 1ms of network latency has an ENORMOUS impact on benchmarks
 Toxiproxy.populate([{
-  "name": "redis",
-  "listen": "127.0.0.1:6380",
-  "upstream": "127.0.0.1:6379"
+  name: "redis",
+  listen: "127.0.0.1:6380",
+  upstream: "127.0.0.1:6379"
 }])
 
 self_read, self_write = IO.pipe
-%w(INT TERM TSTP TTIN).each do |sig|
-  begin
-    trap sig do
-      self_write.puts(sig)
-    end
-  rescue ArgumentError
-    puts "Signal #{sig} not supported"
+%w[INT TERM TSTP TTIN].each do |sig|
+  trap sig do
+    self_write.puts(sig)
   end
+rescue ArgumentError
+  puts "Signal #{sig} not supported"
 end
 
-Sidekiq.redis {|c| c.flushdb}
+Sidekiq.redis { |c| c.flushdb }
 def handle_signal(launcher, sig)
   Sidekiq.logger.debug "Got #{sig} signal"
   case sig
-  when 'INT'
+  when "INT"
     # Handle Ctrl-C in JRuby like MRI
     # http://jira.codehaus.org/browse/JRUBY-4637
     raise Interrupt
-  when 'TERM'
+  when "TERM"
     # Heroku sends TERM and then waits 30 seconds for process to exit.
     raise Interrupt
-  when 'TSTP'
+  when "TSTP"
     Sidekiq.logger.info "Received TSTP, no longer accepting new work"
     launcher.quiet
-  when 'TTIN'
+  when "TTIN"
     Thread.list.each do |thread|
-      Sidekiq.logger.warn "Thread TID-#{(thread.object_id ^ ::Process.pid).to_s(36)} #{thread['label']}"
+      Sidekiq.logger.warn "Thread TID-#{(thread.object_id ^ ::Process.pid).to_s(36)} #{thread["label"]}"
       if thread.backtrace
         Sidekiq.logger.warn thread.backtrace.join("\n")
       else
@@ -89,7 +86,7 @@ def Process.rss
   `ps -o rss= -p #{Process.pid}`.chomp.to_i
 end
 
-iter = 10
+iter = 50
 count = 10_000
 
 iter.times do
@@ -99,40 +96,41 @@ iter.times do
   count.times do |idx|
     arr[idx][0] = idx
   end
-  Sidekiq::Client.push_bulk('class' => LoadWorker, 'args' => arr)
+  Sidekiq::Client.push_bulk("class" => LoadWorker, "args" => arr)
 end
-Sidekiq.logger.error "Created #{count*iter} jobs"
+Sidekiq.logger.error "Created #{count * iter} jobs"
 
 start = Time.now
 
 Monitoring = Thread.new do
-  watchdog("monitor thread") do
-    while true
+  while true
+    sleep 0.2
+    qsize = Sidekiq.redis do |conn|
+      conn.llen "queue:default"
+    end
+    total = qsize
+    # Sidekiq.logger.error("RSS: #{Process.rss} Pending: #{total}")
+    if total == 0
+      Sidekiq.logger.error("Done, #{iter * count} jobs in #{Time.now - start} sec")
+      Sidekiq.logger.error("Now here's the latency for three jobs")
+
+      LoadWorker.perform_async(1, Time.now.to_f)
+      LoadWorker.perform_async(2, Time.now.to_f)
+      LoadWorker.perform_async(3, Time.now.to_f)
+
       sleep 0.2
-      qsize = Sidekiq.redis do |conn|
-        conn.llen "queue:default"
-      end
-      total = qsize
-      #Sidekiq.logger.error("RSS: #{Process.rss} Pending: #{total}")
-      if total == 0
-        Sidekiq.logger.error("Done, #{iter * count} jobs in #{Time.now - start} sec")
-        Sidekiq.logger.error("Now here's the latency for three jobs")
-
-        LoadWorker.perform_async(1, Time.now.to_f)
-        LoadWorker.perform_async(2, Time.now.to_f)
-        LoadWorker.perform_async(3, Time.now.to_f)
-
-        sleep 0.2
-        exit(0)
-      end
+      exit(0)
     end
   end
 end
 
 begin
-  #RubyProf::exclude_threads = [ Monitoring ]
-  #RubyProf.start
-  fire_event(:startup)
+  # RubyProf::exclude_threads = [ Monitoring ]
+  # RubyProf.start
+  events = Sidekiq.options[:lifecycle_events][:startup]
+  events.each(&:call)
+  events.clear
+
   Sidekiq.logger.error "Simulating 1ms of latency between Sidekiq and redis"
   Toxiproxy[:redis].downstream(:latency, latency: 1).apply do
     launcher = Sidekiq::Launcher.new(Sidekiq.options)
@@ -144,14 +142,14 @@ begin
     end
   end
 rescue SystemExit => e
-  #Sidekiq.logger.error("Profiling...")
-  #result = RubyProf.stop
-  #printer = RubyProf::GraphHtmlPrinter.new(result)
-  #printer.print(File.new("output.html", "w"), :min_percent => 1)
+  # Sidekiq.logger.error("Profiling...")
+  # result = RubyProf.stop
+  # printer = RubyProf::GraphHtmlPrinter.new(result)
+  # printer.print(File.new("output.html", "w"), :min_percent => 1)
   # normal
 rescue => e
   raise e if $DEBUG
-  STDERR.puts e.message
-  STDERR.puts e.backtrace.join("\n")
+  warn e.message
+  warn e.backtrace.join("\n")
   exit 1
 end
data/bin/sidekiqmon CHANGED
@@ -1,6 +1,6 @@
 #!/usr/bin/env ruby
 
-require 'sidekiq/monitor'
+require "sidekiq/monitor"
 
 section = "all"
 section = ARGV[0] if ARGV.size == 1
data/lib/sidekiq/api.rb CHANGED
@@ -54,14 +54,14 @@ module Sidekiq
     # O(1) redis calls
     def fetch_stats_fast!
       pipe1_res = Sidekiq.redis { |conn|
-        conn.pipelined do
-          conn.get("stat:processed")
-          conn.get("stat:failed")
-          conn.zcard("schedule")
-          conn.zcard("retry")
-          conn.zcard("dead")
-          conn.scard("processes")
-          conn.lrange("queue:default", -1, -1)
+        conn.pipelined do |pipeline|
+          pipeline.get("stat:processed")
+          pipeline.get("stat:failed")
+          pipeline.zcard("schedule")
+          pipeline.zcard("retry")
+          pipeline.zcard("dead")
+          pipeline.scard("processes")
+          pipeline.lrange("queue:default", -1, -1)
         end
       }
 
@@ -101,9 +101,9 @@ module Sidekiq
       }
 
       pipe2_res = Sidekiq.redis { |conn|
-        conn.pipelined do
-          processes.each { |key| conn.hget(key, "busy") }
-          queues.each { |queue| conn.llen("queue:#{queue}") }
+        conn.pipelined do |pipeline|
+          processes.each { |key| pipeline.hget(key, "busy") }
+          queues.each { |queue| pipeline.llen("queue:#{queue}") }
         end
       }
 
@@ -147,9 +147,9 @@ module Sidekiq
       Sidekiq.redis do |conn|
         queues = conn.sscan_each("queues").to_a
 
-        lengths = conn.pipelined {
+        lengths = conn.pipelined { |pipeline|
           queues.each do |queue|
-            conn.llen("queue:#{queue}")
+            pipeline.llen("queue:#{queue}")
           end
         }
 
@@ -287,9 +287,9 @@ module Sidekiq
 
     def clear
       Sidekiq.redis do |conn|
-        conn.multi do
-          conn.unlink(@rname)
-          conn.srem("queues", name)
+        conn.multi do |transaction|
+          transaction.unlink(@rname)
+          transaction.srem("queues", name)
         end
       end
     end
@@ -355,8 +355,12 @@ module Sidekiq
       # Unwrap known wrappers so they show up in a human-friendly manner in the Web UI
       @display_args ||= case klass
       when /\ASidekiq::Extensions::Delayed/
-        safe_load(args[0], args) do |_, _, arg|
-          arg
+        safe_load(args[0], args) do |_, _, arg, kwarg|
+          if !kwarg || kwarg.empty?
+            arg
+          else
+            [arg, kwarg]
+          end
         end
       when "ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper"
         job_args = self["wrapped"] ? args[0]["arguments"] : []
@@ -519,9 +523,9 @@ module Sidekiq
 
     def remove_job
       Sidekiq.redis do |conn|
-        results = conn.multi {
-          conn.zrangebyscore(parent.name, score, score)
-          conn.zremrangebyscore(parent.name, score, score)
+        results = conn.multi { |transaction|
+          transaction.zrangebyscore(parent.name, score, score)
+          transaction.zremrangebyscore(parent.name, score, score)
         }.first
 
         if results.size == 1
@@ -542,9 +546,9 @@ module Sidekiq
         yield msg if msg
 
         # push the rest back onto the sorted set
-        conn.multi do
+        conn.multi do |transaction|
           nonmatched.each do |message|
-            conn.zadd(parent.name, score.to_f.to_s, message)
+            transaction.zadd(parent.name, score.to_f.to_s, message)
           end
         end
       end
@@ -731,10 +735,10 @@ module Sidekiq
     def kill(message, opts = {})
       now = Time.now.to_f
       Sidekiq.redis do |conn|
-        conn.multi do
-          conn.zadd(name, now.to_s, message)
-          conn.zremrangebyscore(name, "-inf", now - self.class.timeout)
-          conn.zremrangebyrank(name, 0, - self.class.max_jobs)
+        conn.multi do |transaction|
+          transaction.zadd(name, now.to_s, message)
+          transaction.zremrangebyscore(name, "-inf", now - self.class.timeout)
+          transaction.zremrangebyrank(name, 0, - self.class.max_jobs)
         end
       end
 
@@ -782,9 +786,9 @@ module Sidekiq
       count = 0
       Sidekiq.redis do |conn|
         procs = conn.sscan_each("processes").to_a.sort
-        heartbeats = conn.pipelined {
+        heartbeats = conn.pipelined { |pipeline|
           procs.each do |key|
-            conn.hget(key, "info")
+            pipeline.hget(key, "info")
           end
         }
 
@@ -806,9 +810,9 @@ module Sidekiq
         # We're making a tradeoff here between consuming more memory instead of
         # making more roundtrips to Redis, but if you have hundreds or thousands of workers,
         # you'll be happier this way
-        conn.pipelined do
+        conn.pipelined do |pipeline|
           procs.each do |key|
-            conn.hmget(key, "info", "busy", "beat", "quiet", "rss", "rtt_us")
+            pipeline.hmget(key, "info", "busy", "beat", "quiet", "rss", "rtt_us")
           end
         end
       }
@@ -922,9 +926,9 @@ module Sidekiq
     def signal(sig)
       key = "#{identity}-signals"
       Sidekiq.redis do |c|
-        c.multi do
-          c.lpush(key, sig)
-          c.expire(key, 60)
+        c.multi do |transaction|
+          transaction.lpush(key, sig)
+          transaction.expire(key, 60)
         end
       end
     end
@@ -958,9 +962,9 @@ module Sidekiq
       Sidekiq.redis do |conn|
         procs = conn.sscan_each("processes").to_a
         procs.sort.each do |key|
-          valid, workers = conn.pipelined {
-            conn.exists?(key)
-            conn.hgetall("#{key}:workers")
+          valid, workers = conn.pipelined { |pipeline|
+            pipeline.exists?(key)
+            pipeline.hgetall("#{key}:workers")
           }
           next unless valid
          workers.each_pair do |tid, json|
@@ -988,9 +992,9 @@ module Sidekiq
         if procs.empty?
           0
         else
-          conn.pipelined {
+          conn.pipelined { |pipeline|
            procs.each do |key|
-              conn.hget(key, "busy")
+              pipeline.hget(key, "busy")
            end
          }.sum(&:to_i)
        end
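The `multi` call sites in api.rb follow the same new convention as `pipelined`, with the added property that MULTI queues all commands and returns one reply per command, in order, when the block exits. A sketch with a hypothetical mock (the canned replies stand in for real Redis replies):

```ruby
# Hypothetical mock of a redis-rb transaction: MULTI queues commands and
# returns one canned reply per command, in order, when the block exits.
class MockMulti
  def initialize
    @replies = []
  end

  def multi
    yield self
    @replies
  end

  def lpush(key, val)
    @replies << 1    # canned reply: new list length
  end

  def expire(key, ttl)
    @replies << true # canned reply: TTL was set
  end
end

# Mirrors the signal method above: both commands run atomically and the
# replies come back as an array.
replies = MockMulti.new.multi do |transaction|
  transaction.lpush("host:1234-signals", "TSTP")
  transaction.expire("host:1234-signals", 60)
end
```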
data/lib/sidekiq/cli.rb CHANGED
@@ -115,8 +115,8 @@ module Sidekiq
       begin
         launcher.run
 
-        while (readable_io = IO.select([self_read]))
-          signal = readable_io.first[0].gets.strip
+        while (readable_io = self_read.wait_readable)
+          signal = readable_io.gets.strip
           handle_signal(signal)
         end
       rescue Interrupt
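The `IO.select` → `wait_readable` swap above simplifies the signal loop: `wait_readable` blocks until the pipe has data and returns the IO itself, so there is no nested-array unwrapping. A standalone sketch of the same self-pipe pattern:

```ruby
require "io/wait" # IO#wait_readable lives here on older Rubies

# Simulate the CLI's self-pipe: a trap handler would write the signal
# name; the main loop blocks on wait_readable and reads it back.
self_read, self_write = IO.pipe
self_write.puts("TSTP")

readable_io = self_read.wait_readable # returns self_read once data arrives
signal = readable_io.gets.strip
```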
@@ -382,7 +382,7 @@ module Sidekiq
     def parse_config(path)
       erb = ERB.new(File.read(path))
       erb.filename = File.expand_path(path)
-      opts = YAML.load(erb.result) || {}
+      opts = load_yaml(erb.result) || {}
 
       if opts.respond_to? :deep_symbolize_keys!
         opts.deep_symbolize_keys!
@@ -398,6 +398,14 @@ module Sidekiq
       opts
     end
 
+    def load_yaml(src)
+      if Psych::VERSION > "4.0"
+        YAML.safe_load(src, permitted_classes: [Symbol], aliases: true)
+      else
+        YAML.load(src)
+      end
+    end
+
     def parse_queues(opts, queues_and_weights)
       queues_and_weights.each { |queue_and_weight| parse_queue(opts, *queue_and_weight) }
     end
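The `load_yaml` guard above exists because Psych 4 (the default in Ruby 3.1) made `YAML.load` safe-by-default, which rejects the symbol keys and anchors/aliases common in sidekiq.yml (#5141). A standalone sketch of the same branch; the config fragment here is made up:

```ruby
require "yaml"

# A made-up sidekiq.yml-style fragment using symbol keys and an alias,
# both of which Psych 4's safe-by-default YAML.load rejects by default.
src = <<~YML
  :queues: &q
    - default
  :production:
    :queues: *q
YML

opts =
  if Psych::VERSION > "4.0"
    # Psych 4: explicitly permit Symbol and aliases, as Sidekiq 6.4.1 does
    YAML.safe_load(src, permitted_classes: [Symbol], aliases: true)
  else
    # Psych 3: plain load already accepts both
    YAML.load(src)
  end
```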
@@ -103,7 +103,7 @@ module Sidekiq
 
       normed = normalize_item(items)
       payloads = args.map.with_index { |job_args, index|
-        copy = normed.merge("args" => job_args, "jid" => SecureRandom.hex(12), "enqueued_at" => Time.now.to_f)
+        copy = normed.merge("args" => job_args, "jid" => SecureRandom.hex(12))
         copy["at"] = (at.is_a?(Array) ? at[index] : at) if at
 
         result = process_single(items["class"], copy)
@@ -189,8 +189,23 @@ module Sidekiq
 
     def raw_push(payloads)
       @redis_pool.with do |conn|
-        conn.pipelined do
-          atomic_push(conn, payloads)
+        retryable = true
+        begin
+          conn.pipelined do |pipeline|
+            atomic_push(pipeline, payloads)
+          end
+        rescue Redis::BaseError => ex
+          # 2550 Failover can cause the server to become a replica, need
+          # to disconnect and reopen the socket to get back to the primary.
+          # 4495 Use the same logic if we have a "Not enough replicas" error from the primary
+          # 4985 Use the same logic when a blocking command is force-unblocked
+          # The retry logic is copied from sidekiq.rb
+          if retryable && ex.message =~ /READONLY|NOREPLICAS|UNBLOCKED/
+            conn.disconnect!
+            retryable = false
+            retry
+          end
+          raise
         end
       end
       true
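The retry block added to `raw_push` follows the failover pattern already in sidekiq.rb: on a READONLY/NOREPLICAS/UNBLOCKED error the socket is dropped and the push retried exactly once against the new primary. A stubbed sketch (no Redis required; `FlakyConn` and its behavior are hypothetical):

```ruby
# Hypothetical connection that acts like a demoted primary on first use:
# the first pipelined call raises READONLY, subsequent calls succeed.
class FlakyConn
  attr_reader :disconnects

  def initialize
    @calls = 0
    @disconnects = 0
  end

  def pipelined
    @calls += 1
    raise "READONLY You can't write against a read only replica." if @calls == 1
    yield self
    :pushed
  end

  def rpush(*) end

  def disconnect!
    @disconnects += 1
  end
end

# The retry shape from raw_push: exactly one reconnect-and-retry, then re-raise.
def push_with_retry(conn)
  retryable = true
  begin
    conn.pipelined { |pipeline| pipeline.rpush("queue:default", "{}") }
  rescue RuntimeError => ex
    if retryable && ex.message =~ /READONLY|NOREPLICAS|UNBLOCKED/
      conn.disconnect!
      retryable = false
      retry
    end
    raise
  end
end

conn = FlakyConn.new
result = push_with_retry(conn)
```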
data/lib/sidekiq/delay.rb CHANGED
@@ -3,7 +3,7 @@
 module Sidekiq
   module Extensions
     def self.enable_delay!
-      Sidekiq.logger.error "Sidekiq's Delayed Extensions will be removed in Sidekiq 7.0. #{caller(1..1).first}"
+      warn "Sidekiq's Delayed Extensions will be removed in Sidekiq 7.0", uplevel: 1
 
       if defined?(::ActiveSupport)
         require "sidekiq/extensions/active_record"
@@ -16,8 +16,8 @@ module Sidekiq
     include Sidekiq::Worker
 
     def perform(yml)
-      (target, method_name, args, kwargs) = YAML.load(yml)
-      msg = kwargs.empty? ? target.public_send(method_name, *args) : target.public_send(method_name, *args, **kwargs)
+      (target, method_name, args) = YAML.load(yml)
+      msg = target.public_send(method_name, *args)
       # The email method can return nil, which causes ActionMailer to return
       # an undeliverable empty message.
       if msg
@@ -18,8 +18,8 @@ module Sidekiq
     include Sidekiq::Worker
 
     def perform(yml)
-      (target, method_name, args, kwargs) = YAML.load(yml)
-      kwargs.empty? ? target.__send__(method_name, *args) : target.__send__(method_name, *args, **kwargs)
+      (target, method_name, args) = YAML.load(yml)
+      target.__send__(method_name, *args)
     end
   end
 
@@ -16,8 +16,8 @@ module Sidekiq
     include Sidekiq::Worker
 
     def perform(yml)
-      (target, method_name, args, kwargs) = YAML.load(yml)
-      kwargs.empty? ? target.__send__(method_name, *args) : target.__send__(method_name, *args, **kwargs)
+      (target, method_name, args) = YAML.load(yml)
+      target.__send__(method_name, *args)
     end
   end
 
@@ -13,13 +13,13 @@ module Sidekiq
       @opts = options
     end
 
-    def method_missing(name, *args, **kwargs)
+    def method_missing(name, *args)
       # Sidekiq has a limitation in that its message must be JSON.
       # JSON can't round trip real Ruby objects so we use YAML to
       # serialize the objects to a String. The YAML will be converted
       # to JSON and then deserialized on the other side back into a
       # Ruby object.
-      obj = [@target, name, args, kwargs]
+      obj = [@target, name, args]
       marshalled = ::YAML.dump(obj)
       if marshalled.size > SIZE_LIMIT
         ::Sidekiq.logger.warn { "#{@target}.#{name} job argument is #{marshalled.bytesize} bytes, you should refactor it to reduce the size" }
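The kwargs removal above reverts the delay extensions to their original three-element payload, matching the changelog line about the 6.4.0 keyword-arguments change being pulled. The round-trip itself is plain YAML over `[target, method_name, args]`; a standalone sketch with made-up values:

```ruby
require "yaml"

# The proxy serializes [target, method_name, args] to YAML because JSON
# cannot round-trip arbitrary Ruby objects. The values here are made up.
obj = ["UserMailer", "welcome_email", [42]]
marshalled = YAML.dump(obj)

# Worker side: deserialize and (in the real code) public_send the method.
target, method_name, args = YAML.load(marshalled)
```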
data/lib/sidekiq/fetch.rb CHANGED
@@ -59,9 +59,9 @@ module Sidekiq
       end
 
       Sidekiq.redis do |conn|
-        conn.pipelined do
+        conn.pipelined do |pipeline|
           jobs_to_requeue.each do |queue, jobs|
-            conn.rpush(queue, jobs)
+            pipeline.rpush(queue, jobs)
           end
         end
       end
@@ -12,46 +12,34 @@ module Sidekiq
 
       yield
 
-      with_elapsed_time_context(start) do
-        @logger.info("done")
-      end
+      Sidekiq::Context.add(:elapsed, elapsed(start))
+      @logger.info("done")
     rescue Exception
-      with_elapsed_time_context(start) do
-        @logger.info("fail")
-      end
+      Sidekiq::Context.add(:elapsed, elapsed(start))
+      @logger.info("fail")
 
       raise
     end
 
     def prepare(job_hash, &block)
-      level = job_hash["log_level"]
-      if level
-        @logger.log_at(level) do
-          Sidekiq::Context.with(job_hash_context(job_hash), &block)
-        end
-      else
-        Sidekiq::Context.with(job_hash_context(job_hash), &block)
-      end
-    end
-
-    def job_hash_context(job_hash)
       # If we're using a wrapper class, like ActiveJob, use the "wrapped"
       # attribute to expose the underlying thing.
       h = {
         class: job_hash["display_class"] || job_hash["wrapped"] || job_hash["class"],
         jid: job_hash["jid"]
       }
-      h[:bid] = job_hash["bid"] if job_hash["bid"]
-      h[:tags] = job_hash["tags"] if job_hash["tags"]
-      h
-    end
-
-    def with_elapsed_time_context(start, &block)
-      Sidekiq::Context.with(elapsed_time_context(start), &block)
-    end
+      h[:bid] = job_hash["bid"] if job_hash.has_key?("bid")
+      h[:tags] = job_hash["tags"] if job_hash.has_key?("tags")
 
-    def elapsed_time_context(start)
-      {elapsed: elapsed(start).to_s}
+      Thread.current[:sidekiq_context] = h
+      level = job_hash["log_level"]
+      if level
+        @logger.log_at(level, &block)
+      else
+        yield
+      end
+    ensure
+      Thread.current[:sidekiq_context] = nil
     end
 
     private
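The `prepare` refactor above replaces the layered `Context.with` wrappers with one `Thread.current` write plus an `ensure`, which is the "Refactor job logger for SPEEEEEEED" changelog entry. The essential shape, reduced to a standalone sketch:

```ruby
# Minimal sketch of the refactored context handling: set the per-job hash
# once, run the job, and always clear it in ensure (even on error).
def with_job_context(h)
  Thread.current[:sidekiq_context] = h
  yield
ensure
  Thread.current[:sidekiq_context] = nil
end

seen = nil
with_job_context({class: "HardWorker", jid: "abc123"}) do
  seen = Thread.current[:sidekiq_context][:jid]
end
after = Thread.current[:sidekiq_context]
```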
@@ -84,9 +84,9 @@ module Sidekiq
       # Note we don't stop the heartbeat thread; if the process
       # doesn't actually exit, it'll reappear in the Web UI.
       Sidekiq.redis do |conn|
-        conn.pipelined do
-          conn.srem("processes", identity)
-          conn.unlink("#{identity}:workers")
+        conn.pipelined do |pipeline|
+          pipeline.srem("processes", identity)
+          pipeline.unlink("#{identity}:workers")
         end
       end
     rescue
@@ -107,14 +107,14 @@ module Sidekiq
       nowdate = Time.now.utc.strftime("%Y-%m-%d")
       begin
         Sidekiq.redis do |conn|
-          conn.pipelined do
-            conn.incrby("stat:processed", procd)
-            conn.incrby("stat:processed:#{nowdate}", procd)
-            conn.expire("stat:processed:#{nowdate}", STATS_TTL)
-
-            conn.incrby("stat:failed", fails)
-            conn.incrby("stat:failed:#{nowdate}", fails)
-            conn.expire("stat:failed:#{nowdate}", STATS_TTL)
+          conn.pipelined do |pipeline|
+            pipeline.incrby("stat:processed", procd)
+            pipeline.incrby("stat:processed:#{nowdate}", procd)
+            pipeline.expire("stat:processed:#{nowdate}", STATS_TTL)
+
+            pipeline.incrby("stat:failed", fails)
+            pipeline.incrby("stat:failed:#{nowdate}", fails)
+            pipeline.expire("stat:failed:#{nowdate}", STATS_TTL)
           end
         end
       rescue => ex
@@ -138,20 +138,20 @@ module Sidekiq
       nowdate = Time.now.utc.strftime("%Y-%m-%d")
 
       Sidekiq.redis do |conn|
-        conn.multi do
-          conn.incrby("stat:processed", procd)
-          conn.incrby("stat:processed:#{nowdate}", procd)
-          conn.expire("stat:processed:#{nowdate}", STATS_TTL)
+        conn.multi do |transaction|
+          transaction.incrby("stat:processed", procd)
+          transaction.incrby("stat:processed:#{nowdate}", procd)
+          transaction.expire("stat:processed:#{nowdate}", STATS_TTL)
 
-          conn.incrby("stat:failed", fails)
-          conn.incrby("stat:failed:#{nowdate}", fails)
-          conn.expire("stat:failed:#{nowdate}", STATS_TTL)
+          transaction.incrby("stat:failed", fails)
+          transaction.incrby("stat:failed:#{nowdate}", fails)
+          transaction.expire("stat:failed:#{nowdate}", STATS_TTL)
 
-          conn.unlink(workers_key)
+          transaction.unlink(workers_key)
           curstate.each_pair do |tid, hash|
-            conn.hset(workers_key, tid, Sidekiq.dump_json(hash))
+            transaction.hset(workers_key, tid, Sidekiq.dump_json(hash))
           end
-          conn.expire(workers_key, 60)
+          transaction.expire(workers_key, 60)
         end
       end
 
@@ -161,17 +161,17 @@ module Sidekiq
       kb = memory_usage(::Process.pid)
 
       _, exists, _, _, msg = Sidekiq.redis { |conn|
-        conn.multi {
-          conn.sadd("processes", key)
-          conn.exists?(key)
-          conn.hmset(key, "info", to_json,
+        conn.multi { |transaction|
+          transaction.sadd("processes", key)
+          transaction.exists?(key)
+          transaction.hmset(key, "info", to_json,
             "busy", curstate.size,
             "beat", Time.now.to_f,
             "rtt_us", rtt,
             "quiet", @done,
             "rss", kb)
-          conn.expire(key, 60)
-          conn.rpop("#{key}-signals")
+          transaction.expire(key, 60)
+          transaction.rpop("#{key}-signals")
         }
       }
 
@@ -16,6 +16,10 @@ module Sidekiq
     def self.current
       Thread.current[:sidekiq_context] ||= {}
     end
+
+    def self.add(k, v)
+      Thread.current[:sidekiq_context][k] = v
+    end
   end
 
   module LoggingUtils
@@ -16,22 +16,22 @@ module Sidekiq
 
       case type
       when "zset"
-        total_size, items = conn.multi {
-          conn.zcard(key)
+        total_size, items = conn.multi { |transaction|
+          transaction.zcard(key)
           if rev
-            conn.zrevrange(key, starting, ending, with_scores: true)
+            transaction.zrevrange(key, starting, ending, with_scores: true)
           else
-            conn.zrange(key, starting, ending, with_scores: true)
+            transaction.zrange(key, starting, ending, with_scores: true)
           end
         }
         [current_page, total_size, items]
       when "list"
-        total_size, items = conn.multi {
-          conn.llen(key)
+        total_size, items = conn.multi { |transaction|
+          transaction.llen(key)
           if rev
-            conn.lrange(key, -ending - 1, -starting - 1)
+            transaction.lrange(key, -ending - 1, -starting - 1)
           else
-            conn.lrange(key, starting, ending)
+            transaction.lrange(key, starting, ending)
           end
         }
         items.reverse! if rev
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module Sidekiq
-  VERSION = "6.4.0"
+  VERSION = "6.4.1"
 end
@@ -242,7 +242,7 @@ module Sidekiq
       queue class args retry_count retried_at failed_at
       jid error_message error_class backtrace
       error_backtrace enqueued_at retry wrapped
-      created_at tags
+      created_at tags display_class
     ])
 
     def retry_extra_items(retry_job)
data/lib/sidekiq/web.rb CHANGED
@@ -148,9 +148,9 @@ module Sidekiq
 
       ::Rack::Builder.new do
         use Rack::Static, urls: ["/stylesheets", "/images", "/javascripts"],
-          root: ASSETS,
-          cascade: true,
-          header_rules: rules
+        root: ASSETS,
+        cascade: true,
+        header_rules: rules
         m.each { |middleware, block| use(*middleware, &block) }
         use Sidekiq::Web::CsrfProtection unless $TESTING
         run WebApplication.new(klass)
@@ -236,8 +236,10 @@ module Sidekiq
 
     def perform_bulk(args, batch_size: 1_000)
       hash = @opts.transform_keys(&:to_s)
+      pool = Thread.current[:sidekiq_via_pool] || @klass.get_sidekiq_options["pool"] || Sidekiq.redis_pool
+      client = Sidekiq::Client.new(pool)
       result = args.each_slice(batch_size).flat_map do |slice|
-        Sidekiq::Client.push_bulk(hash.merge("class" => @klass, "args" => slice))
+        client.push_bulk(hash.merge("class" => @klass, "args" => slice))
       end
 
       result.is_a?(Enumerator::Lazy) ? result.force : result
@@ -312,12 +314,8 @@ module Sidekiq
       #
       #   SomeWorker.perform_bulk([[1], [2], [3]])
       #
-      def perform_bulk(items, batch_size: 1_000)
-        result = items.each_slice(batch_size).flat_map do |slice|
-          Sidekiq::Client.push_bulk("class" => self, "args" => slice)
-        end
-
-        result.is_a?(Enumerator::Lazy) ? result.force : result
+      def perform_bulk(*args, **kwargs)
+        Setter.new(self, {}).perform_bulk(*args, **kwargs)
       end
 
       # +interval+ must be a timestamp, numeric or something that acts
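With the change above, the class-level `perform_bulk` delegates to `Setter` so that shard-specific pools apply to bulk pushes too (#5129). The slicing/flattening behavior it delegates is easy to show standalone; `fake_push_bulk` is a made-up stand-in for `Sidekiq::Client#push_bulk`:

```ruby
require "securerandom"

# Stand-in for Sidekiq::Client#push_bulk: returns one fake jid per job.
def fake_push_bulk(slice)
  slice.map { SecureRandom.hex(12) }
end

# The batching shape used by Setter#perform_bulk: slice, push, flatten,
# and force a lazy enumerator so callers always get a plain Array of jids.
def perform_bulk_sketch(items, batch_size: 1_000)
  result = items.each_slice(batch_size).flat_map { |slice| fake_push_bulk(slice) }
  result.is_a?(Enumerator::Lazy) ? result.force : result
end

jids = perform_bulk_sketch([[1], [2], [3]], batch_size: 2)
```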
data/lib/sidekiq.rb CHANGED
@@ -103,6 +103,7 @@ module Sidekiq
       # to disconnect and reopen the socket to get back to the primary.
       # 4495 Use the same logic if we have a "Not enough replicas" error from the primary
       # 4985 Use the same logic when a blocking command is force-unblocked
+      # The same retry logic is also used in client.rb
       if retryable && ex.message =~ /READONLY|NOREPLICAS|UNBLOCKED/
         conn.disconnect!
         retryable = false
@@ -202,6 +202,8 @@ table .table-checkbox label {
 
 .navbar .navbar-brand .status {
   color: #585454;
+  display: inline-block;
+  width: 75px;
 }
 
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: sidekiq
 version: !ruby/object:Gem::Version
-  version: 6.4.0
+  version: 6.4.1
 platform: ruby
 authors:
 - Mike Perham
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2022-01-20 00:00:00.000000000 Z
+date: 2022-02-07 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: redis
@@ -193,7 +193,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubygems_version: 3.1.4
+rubygems_version: 3.2.32
 signing_key:
 specification_version: 4
 summary: Simple, efficient background processing for Ruby