sidekiq 7.1.6 → 7.2.1



checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: b50ab03f32263dd24ee37ce2ddadc7a4e5e226860146c1a677262dd3a417329d
-  data.tar.gz: 0af1051781796465d5ce5dd0d0e454016fdc04b7cb5fcf0b8d62e447faff11ed
+  metadata.gz: 34d69bb09eb124a8fd4710af9972bdb6a0cd74429bc3e61cdccdae683a4a0203
+  data.tar.gz: 8a37c7f172e4efabb061f980a73547d7245585574133b07d159f0412e49dd362
 SHA512:
-  metadata.gz: f2e84e49ceb024e8cb24a7213ffe81e40ed9423d646e39778ed51049bf89a04c28890a21e93eeb38bb9daefe88b94b150d71a0cdf80ecdad712a2167dd0a3421
-  data.tar.gz: 39ffc3e4eee3e2c27a3f774f2a2120c50a52ab9e2049b0fc925123c622553bf4fd19512c16e63882031909bba6ea5b2d15cf856256a3c453d080e89253196fc6
+  metadata.gz: dbb63070d419ad3d2b8192b2864ff4608288c00fe51c7e79e4a3faafd33665cce0c92ece38f9cd2a30ae4b7ff7e66debaf894198b054b88c5d0b5d5ee3790153
+  data.tar.gz: 8ba7a30ebdd19734a7cc64031a6eb63e4f317de0ae652bed324c272fd57f422186add097b38ab88aaa2a0a925de6b0845d18739691cbd9e9f2e90f3d79bb9e0a
data/Changes.md CHANGED
@@ -2,6 +2,49 @@
 
 [Sidekiq Changes](https://github.com/sidekiq/sidekiq/blob/main/Changes.md) | [Sidekiq Pro Changes](https://github.com/sidekiq/sidekiq/blob/main/Pro-Changes.md) | [Sidekiq Enterprise Changes](https://github.com/sidekiq/sidekiq/blob/main/Ent-Changes.md)
 
+7.2.1
+----------
+
+- Add `Sidekiq::Work` type which replaces the raw Hash as the third parameter in
+  `Sidekiq::WorkSet#each { |pid, tid, hash| ... }` [#6145]
+- **DEPRECATED**: direct access to the attributes within the `hash` block parameter above.
+  The `Sidekiq::Work` instance contains accessor methods to get at the same data, e.g.
+```ruby
+work["queue"] # Old
+work.queue # New
+```
+- Fix Ruby 3.3 warnings around `base64` gem [#6151, earlopain]
+
+7.2.0
+----------
+
+- `sidekiq_retries_exhausted` can return `:discard` to avoid the deadset
+  and all death handlers [#6091]
+- Metrics filtering by job class in Web UI [#5974]
+- Better readability and formatting for numbers within the Web UI [#6080]
+- Add explicit error if user code tries to nest test modes [#6078]
+```ruby
+Sidekiq::Testing.inline! # global setting
+Sidekiq::Testing.fake! do # override within block
+  # ok
+  Sidekiq::Testing.inline! do # can't override the override
+    # not ok, nested
+  end
+end
+```
+- **SECURITY** Forbid inline JavaScript execution in Web UI [#6074]
+- Adjust redis-client adapter to avoid `method_missing` [#6083]
+  This can result in app code breaking if your app's Redis API usage was
+  depending on Sidekiq's adapter to correct invalid redis-client API usage.
+  One example:
+```ruby
+# bad, not redis-client native
+# Unsupported command argument type: TrueClass (TypeError)
+Sidekiq.redis { |c| c.set("key", "value", nx: true, ex: 15) }
+# good
+Sidekiq.redis { |c| c.set("key", "value", "nx", "ex", 15) }
+```
+
 7.1.6
 ----------
 
@@ -150,6 +193,11 @@ end
 - Job Execution metrics!!!
 - See `docs/7.0-Upgrade.md` for release notes
 
+6.5.{10,11,12}
+----------
+
+- Fixes for Rails 7.1 [#6067, #6070]
+
 6.5.9
 ----------
 
data/bin/multi_queue_bench ADDED
@@ -0,0 +1,268 @@
+#!/usr/bin/env ruby
+
+#
+# bin/multi_queue_bench is a helpful script to load test and
+# performance tune Sidekiq's core. It's a configurable script,
+# which accepts the following parameters as ENV variables.
+#
+# QUEUES
+# Number of queues to consume from. Default is 8
+#
+# PROCESSES
+# The number of processes this benchmark will create. Each process consumes
+# from one of the available queues. When there are more processes than
+# queues, queues are assigned to processes in round robin. Default is 8
+#
+# ELEMENTS
+# Number of jobs to push to each queue. Default is 1000
+#
+# ITERATIONS
+# Each queue pushes ITERATIONS times ELEMENTS jobs. Default is 1000
+#
+# PORT
+# The port of the Dragonfly instance. Default is 6379
+#
+# IP
+# The ip of the Dragonfly instance. Default is 127.0.0.1
+#
+# Example Usage:
+#
+# > RUBY_YJIT_ENABLE=1 THREADS=10 PROCESSES=8 QUEUES=8 bin/multi_queue_bench
+#
+# None of this script is considered a public API and may change over time.
+#
+
+# Quiet some warnings we see when running in warning mode:
+# RUBYOPT=-w bundle exec sidekiq
+$TESTING = false
+puts RUBY_DESCRIPTION
+
+require "bundler/setup"
+Bundler.require(:default, :load_test)
+
+class LoadWorker
+  include Sidekiq::Job
+  sidekiq_options retry: 1
+  sidekiq_retry_in do |x|
+    1
+  end
+
+  def perform(idx, ts = nil)
+    puts(Time.now.to_f - ts) if !ts.nil?
+    # raise idx.to_s if idx % 100 == 1
+  end
+end
+
+def Process.rss
+  `ps -o rss= -p #{Process.pid}`.chomp.to_i
+end
+
+$iterations = ENV["ITERATIONS"] ? Integer(ENV["ITERATIONS"]) : 1_000
+$elements = ENV["ELEMENTS"] ? Integer(ENV["ELEMENTS"]) : 1_000
+$port = ENV["PORT"] ? Integer(ENV["PORT"]) : 6379
+$ip = ENV["IP"] ? String(ENV["IP"]) : "127.0.0.1"
+
+class Loader
+  def initialize
+    @iter = $iterations
+    @count = $elements
+  end
+
+  def configure(queue)
+    @x = Sidekiq.configure_embed do |config|
+      config.redis = {db: 0, host: $ip, port: $port}
+      config.concurrency = Integer(ENV.fetch("THREADS", "30"))
+      config.queues = queue
+      config.logger.level = Logger::WARN
+      config.average_scheduled_poll_interval = 2
+      config.reliable! if defined?(Sidekiq::Pro)
+    end
+
+    @self_read, @self_write = IO.pipe
+    %w[INT TERM TSTP TTIN].each do |sig|
+      trap sig do
+        @self_write.puts(sig)
+      end
+    rescue ArgumentError
+      puts "Signal #{sig} not supported"
+    end
+  end
+
+  def handle_signal(sig)
+    launcher = @x
+    Sidekiq.logger.debug "Got #{sig} signal"
+    case sig
+    when "INT"
+      # Handle Ctrl-C in JRuby like MRI
+      # http://jira.codehaus.org/browse/JRUBY-4637
+      raise Interrupt
+    when "TERM"
+      # Heroku sends TERM and then waits 30 seconds for process to exit.
+      raise Interrupt
+    when "TSTP"
+      Sidekiq.logger.info "Received TSTP, no longer accepting new work"
+      launcher.quiet
+    when "TTIN"
+      Thread.list.each do |thread|
+        Sidekiq.logger.warn "Thread TID-#{(thread.object_id ^ ::Process.pid).to_s(36)} #{thread["label"]}"
+        if thread.backtrace
+          Sidekiq.logger.warn thread.backtrace.join("\n")
+        else
+          Sidekiq.logger.warn "<no backtrace available>"
+        end
+      end
+    end
+  end
+
+  def setup(queue)
+    Sidekiq.logger.error("Setup RSS: #{Process.rss}")
+    Sidekiq.logger.error("Pushing work to queue: #{queue}")
+    start = Time.now
+    @iter.times do
+      arr = Array.new(@count) { |idx| [idx] }
+      # push_bulk always prepends "queue:", so we pass "q1", "q2", etc. instead of "queue:q1"
+      Sidekiq::Client.push_bulk("class" => LoadWorker, "args" => arr, "queue" => queue)
+    end
+  end
+
+  def monitor_single(queue)
+    q = "queue:#{queue}"
+    @monitor_single = Thread.new do
+      GC.start
+      loop do
+        sleep 0.2
+        total = Sidekiq.redis do |conn|
+          conn.llen q
+        end
+
+        if total == 0
+          sleep 0.1
+          @x.stop
+          Process.kill("INT", $$)
+          break
+        end
+      end
+    end
+  end
+
+  def monitor_all(queues)
+    @monitor_all = Thread.new do
+      GC.start
+      loop do
+        sleep 0.2
+        qsize = 0
+        queues.each do |q|
+          tmp = Sidekiq.redis do |conn|
+            conn.llen q
+          end
+          qsize = qsize + tmp
+        end
+        total = qsize
+
+        if total == 0
+          ending = Time.now - @start
+          size = @iter * @count * queues.length
+          Sidekiq.logger.error("Done, #{size} jobs in #{ending} sec, #{(size / ending).to_i} jobs/sec")
+          Sidekiq.logger.error("Ending RSS: #{Process.rss}")
+
+          sleep 0.1
+          @x.stop
+          Process.kill("INT", $$)
+          break
+        end
+      end
+    end
+  end
+
+  def run(queues, queue, monitor_all_queues)
+    # Sidekiq.logger.warn("Consuming from #{queue}")
+    if monitor_all_queues
+      monitor_all(queues)
+    else
+      monitor_single(queue)
+    end
+
+    @start = Time.now
+    @x.run
+
+    while (readable_io = IO.select([@self_read]))
+      signal = readable_io.first[0].gets.strip
+      handle_signal(signal)
+    end
+    # normal
+  rescue Interrupt
+  rescue => e
+    raise e if $DEBUG
+    warn e.message
+    warn e.backtrace.join("\n")
+    exit 1
+  ensure
+    @x.stop
+  end
+end
+
+def setup(queue)
+  ll = Loader.new
+  ll.configure(queue)
+  ll.setup(queue)
+end
+
+def consume(queues, queue, monitor_all_queues)
+  ll = Loader.new
+  ll.configure(queue)
+  ll.run(queues, queue, monitor_all_queues)
+end
+
+# We assign one queue to each sidekiq process
+def run(number_of_processes, total_queues)
+  read_stream, write_stream = IO.pipe
+
+  queues = []
+  (0..total_queues - 1).each do |idx|
+    queues.push("queue:q#{idx}")
+  end
+
+  Sidekiq.logger.info("Queues are: #{queues}")
+
+  # Produce
+  start = Time.now
+  (0..total_queues - 1).each do |idx|
+    Process.fork do
+      queue_num = "q#{idx}"
+      setup(queue_num)
+    end
+  end
+
+  queue_sz = $iterations * $elements * total_queues
+  Process.waitall
+
+  ending = Time.now - start
+  # Sidekiq.logger.info("Pushed #{queue_sz} in #{ending} secs")
+
+  # Consume
+  (0..number_of_processes - 1).each do |idx|
+    Process.fork do
+      # The first process only consumes from its own queue but monitors all queues.
+      # It works as a synchronization point. Once all processes finish
+      # (that is, when all queues are emptied) it prints the stats.
+      if idx == 0
+        queue = "q#{idx}"
+        consume(queues, queue, true)
+      else
+        queue = "q#{idx % total_queues}"
+        consume(queues, queue, false)
+      end
+    end
+  end
+
+  Process.waitall
+  write_stream.close
+  results = read_stream.read
+  read_stream.close
+end
+
+$total_processes = ENV["PROCESSES"] ? Integer(ENV["PROCESSES"]) : 8
+$total_queues = ENV["QUEUES"] ? Integer(ENV["QUEUES"]) : 8
+
+run($total_processes, $total_queues)
data/lib/sidekiq/api.rb CHANGED
@@ -4,7 +4,6 @@ require "sidekiq"
 
 require "zlib"
 require "set"
-require "base64"
 
 require "sidekiq/metrics/query"
 
@@ -491,8 +490,8 @@ module Sidekiq
     end
 
     def uncompress_backtrace(backtrace)
-      decoded = Base64.decode64(backtrace)
-      uncompressed = Zlib::Inflate.inflate(decoded)
+      strict_base64_decoded = backtrace.unpack1("m0")
+      uncompressed = Zlib::Inflate.inflate(strict_base64_decoded)
       Sidekiq.load_json(uncompressed)
     end
   end
@@ -679,7 +678,7 @@ module Sidekiq
       range_start = page * page_size + offset_size
      range_end = range_start + page_size - 1
      elements = Sidekiq.redis { |conn|
-       conn.zrange name, range_start, range_end, withscores: true
+       conn.zrange name, range_start, range_end, "withscores"
      }
      break if elements.empty?
      page -= 1
@@ -706,7 +705,7 @@ module Sidekiq
      end
 
      elements = Sidekiq.redis { |conn|
-       conn.zrange(name, begin_score, end_score, "BYSCORE", withscores: true)
+       conn.zrange(name, begin_score, end_score, "BYSCORE", "withscores")
      }
 
      elements.each_with_object([]) do |element, result|
@@ -881,7 +880,7 @@ module Sidekiq
     # @api private
     def cleanup
       # dont run cleanup more than once per minute
-      return 0 unless Sidekiq.redis { |conn| conn.set("process_cleanup", "1", nx: true, ex: 60) }
+      return 0 unless Sidekiq.redis { |conn| conn.set("process_cleanup", "1", "NX", "EX", "60") }
 
       count = 0
       Sidekiq.redis do |conn|
@@ -1110,11 +1109,11 @@ module Sidekiq
 
      procs.zip(all_works).each do |key, workers|
        workers.each_pair do |tid, json|
-         results << [key, tid, Sidekiq.load_json(json)] unless json.empty?
+         results << [key, tid, Sidekiq::Work.new(key, tid, Sidekiq.load_json(json))] unless json.empty?
        end
      end
 
-     results.sort_by { |(_, _, hsh)| hsh["run_at"] }.each(&block)
+     results.sort_by { |(_, _, hsh)| hsh.raw("run_at") }.each(&block)
     end
 
     # Note that #size is only as accurate as Sidekiq's heartbeat,
@@ -1138,6 +1137,59 @@ module Sidekiq
       end
     end
   end
+
+  # Sidekiq::Work represents a job which is currently executing.
+  class Work
+    attr_reader :process_id
+    attr_reader :thread_id
+
+    def initialize(pid, tid, hsh)
+      @process_id = pid
+      @thread_id = tid
+      @hsh = hsh
+      @job = nil
+    end
+
+    def queue
+      @hsh["queue"]
+    end
+
+    def run_at
+      Time.at(@hsh["run_at"])
+    end
+
+    def job
+      @job ||= Sidekiq::JobRecord.new(@hsh["payload"])
+    end
+
+    def payload
+      @hsh["payload"]
+    end
+
+    # deprecated
+    def [](key)
+      kwargs = {uplevel: 1}
+      kwargs[:category] = :deprecated if RUBY_VERSION > "3.0" # TODO
+      warn("Direct access to `Sidekiq::Work` attributes is deprecated, please use `#payload`, `#queue`, `#run_at` or `#job` instead", **kwargs)
+
+      @hsh[key]
+    end
+
+    # :nodoc:
+    # @api private
+    def raw(name)
+      @hsh[name]
+    end
+
+    def method_missing(*all)
+      @hsh.send(*all)
+    end
+
+    def respond_to_missing?(name)
+      @hsh.respond_to?(name)
+    end
+  end
+
   # Since "worker" is a nebulous term, we've deprecated the use of this class name.
   # Is "worker" a process, a type of job, a thread? Undefined!
   # WorkSet better describes the data.
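The `Sidekiq::Work#[]` shim above follows a common deprecation pattern: keep Hash-style access working while steering callers toward accessors. A generic, self-contained sketch of the same pattern (class and key names here are illustrative, not Sidekiq's):

```ruby
# A minimal stand-in for the deprecation shim: accessors are the new API,
# and Hash-style #[] still works but emits a deprecation warning.
class WorkLike
  def initialize(hsh)
    @hsh = hsh
  end

  def queue
    @hsh["queue"]
  end

  # deprecated Hash-style access
  def [](key)
    kwargs = {uplevel: 1}
    # :category is a Ruby 3.0+ keyword; deprecation warnings are silenced
    # unless Warning[:deprecated] is enabled.
    kwargs[:category] = :deprecated if RUBY_VERSION > "3.0"
    warn("Direct attribute access is deprecated, use the accessor instead", **kwargs)
    @hsh[key]
  end
end

w = WorkLike.new({"queue" => "default"})
w.queue      # => "default", the new accessor
w["queue"]   # => "default", still works but warns
```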
@@ -258,9 +258,9 @@ module Sidekiq
       @logger = logger
     end
 
-    private def arity(handler)
-      return handler.arity if handler.is_a?(Proc)
-      handler.method(:call).arity
+    private def parameter_size(handler)
+      target = handler.is_a?(Proc) ? handler : handler.method(:call)
+      target.parameters.size
     end
 
     # INTERNAL USE ONLY
@@ -269,7 +269,7 @@ module Sidekiq
        p ["!!!!!", ex]
      end
      @options[:error_handlers].each do |handler|
-       if arity(handler) == 2
+       if parameter_size(handler) == 2
          # TODO Remove in 8.0
          logger.info { "DEPRECATION: Sidekiq exception handlers now take three arguments, see #{handler}" }
          handler.call(ex, {_config: self}.merge(ctx))
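The `arity` → `parameters.size` switch above matters for handlers with optional arguments; a minimal standalone sketch (not Sidekiq's code) of the difference:

```ruby
# A handler with an optional argument reports a NEGATIVE arity, so an
# `arity == 2` check misclassifies it. parameters.size counts every
# declared parameter regardless of optionality.
old_style = ->(ex, ctx) { }               # plain two-argument handler
new_style = ->(ex, ctx, config = nil) { } # third argument is optional

old_style.arity           # => 2
new_style.arity           # => -3, negative because of the optional arg
new_style.parameters.size # => 3, counts every declared parameter
```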
@@ -44,7 +44,7 @@ module Sidekiq
 
       @pool.with do |c|
         # only allow one deploy mark for a given label for the next minute
-        lock = c.set("deploylock-#{label}", stamp, nx: true, ex: 60)
+        lock = c.set("deploylock-#{label}", stamp, "nx", "ex", "60")
         if lock
           c.multi do |pipe|
             pipe.hsetnx(key, stamp, label)
@@ -1,7 +1,6 @@
 # frozen_string_literal: true
 
 require "zlib"
-require "base64"
 require "sidekiq/component"
 
 module Sidekiq
@@ -226,7 +225,7 @@ module Sidekiq
     end
 
     def retries_exhausted(jobinst, msg, exception)
-      begin
+      rv = begin
        block = jobinst&.sidekiq_retries_exhausted_block
 
        # the sidekiq_retries_exhausted_block can be defined in a wrapped class (ActiveJob for instance)
@@ -239,6 +238,7 @@ module Sidekiq
        handle_exception(e, {context: "Error calling retries_exhausted", job: msg})
      end
 
+      return if rv == :discard # poof!
      send_to_morgue(msg) unless msg["dead"] == false
 
      @capsule.config.death_handlers.each do |handler|
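The new `:discard` short-circuit above can be sketched in isolation (method and argument names here are hypothetical; the real code calls the job's `sidekiq_retries_exhausted_block` and routes errors through `handle_exception`):

```ruby
# Reduced model of the 7.2.0 flow: the exhausted-block's return value is
# captured, and :discard returns early, before the job would be sent to
# the dead set or any death handlers would run.
def retries_exhausted(msg, exhausted_block)
  rv = begin
    exhausted_block&.call(msg)
  rescue
    nil # the real code logs this via handle_exception and continues
  end

  return :discarded if rv == :discard # poof!
  :sent_to_morgue
end

retries_exhausted({"class" => "HardJob"}, ->(_msg) { :discard }) # => :discarded
retries_exhausted({"class" => "HardJob"}, nil)                   # => :sent_to_morgue
```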
@@ -294,7 +294,7 @@ module Sidekiq
     def compress_backtrace(backtrace)
       serialized = Sidekiq.dump_json(backtrace)
       compressed = Zlib::Deflate.deflate(serialized)
-      Base64.encode64(compressed)
+      [compressed].pack("m0") # Base64.strict_encode64
     end
   end
 end
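The `pack("m0")` / `unpack1("m0")` pair used in the encode and decode hunks is strict Base64 from Ruby's core `Array`/`String` API, which is why the `base64` gem (a bundled gem from Ruby 3.4 onward) can be dropped. A self-contained round trip, with stdlib `JSON` standing in for `Sidekiq.dump_json`/`load_json`:

```ruby
require "zlib"
require "json"

# Compress + strict-Base64-encode a backtrace, then reverse it.
def encode_backtrace(backtrace)
  serialized = JSON.generate(backtrace)
  compressed = Zlib::Deflate.deflate(serialized)
  [compressed].pack("m0") # Base64.strict_encode64: single line, no trailing newline
end

def decode_backtrace(encoded)
  decoded = encoded.unpack1("m0") # Base64.strict_decode64
  JSON.parse(Zlib::Inflate.inflate(decoded))
end

bt = ["app/jobs/hard_job.rb:10:in `perform'"]
decode_backtrace(encode_backtrace(bt)) == bt # => true
encode_backtrace(bt).include?("\n")          # => false, unlike Base64.encode64
```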
@@ -20,7 +20,8 @@ module Sidekiq
     end
 
     # Get metric data for all jobs from the last hour
-    def top_jobs(minutes: 60)
+    # +class_filter+: return only results for classes matching filter
+    def top_jobs(class_filter: nil, minutes: 60)
       result = Result.new
 
       time = @time
@@ -39,6 +40,7 @@ module Sidekiq
      redis_results.each do |hash|
        hash.each do |k, v|
          kls, metric = k.split("|")
+          next if class_filter && !class_filter.match?(kls)
          result.job_results[kls].add_metric metric, time, v.to_i
        end
        time -= 60
@@ -117,6 +119,7 @@ module Sidekiq
 
     def total_avg(metric = "ms")
       completed = totals["p"] - totals["f"]
+      return 0 if completed.zero?
       totals[metric].to_f / completed
     end
 
@@ -103,12 +103,16 @@ module Sidekiq
     def reset
       @lock.synchronize {
         array = [@totals, @jobs, @grams]
-        @totals = Hash.new(0)
-        @jobs = Hash.new(0)
-        @grams = Hash.new { |hash, key| hash[key] = Histogram.new(key) }
+        reset_instance_variables
         array
       }
     end
+
+    def reset_instance_variables
+      @totals = Hash.new(0)
+      @jobs = Hash.new(0)
+      @grams = Hash.new { |hash, key| hash[key] = Histogram.new(key) }
+    end
   end
 
   class Middleware
@@ -19,9 +19,9 @@ module Sidekiq
       total_size, items = conn.multi { |transaction|
         transaction.zcard(key)
         if rev
-          transaction.zrange(key, starting, ending, "REV", withscores: true)
+          transaction.zrange(key, starting, ending, "REV", "withscores")
         else
-          transaction.zrange(key, starting, ending, withscores: true)
+          transaction.zrange(key, starting, ending, "withscores")
         end
       }
       [current_page, total_size, items]
data/lib/sidekiq/rails.rb CHANGED
@@ -56,10 +56,10 @@ module Sidekiq
       # This is the integration code necessary so that if a job uses `Rails.logger.info "Hello"`,
       # it will appear in the Sidekiq console with all of the job context.
       unless ::Rails.logger == config.logger || ::ActiveSupport::Logger.logger_outputs_to?(::Rails.logger, $stdout)
-        if ::Rails::VERSION::STRING < "7.1"
-          ::Rails.logger.extend(::ActiveSupport::Logger.broadcast(config.logger))
-        else
+        if ::Rails.logger.respond_to?(:broadcast_to)
           ::Rails.logger.broadcast_to(config.logger)
+        else
+          ::Rails.logger.extend(::ActiveSupport::Logger.broadcast(config.logger))
         end
       end
     end
@@ -21,6 +21,22 @@ module Sidekiq
       @client.call("EVALSHA", sha, keys.size, *keys, *argv)
     end
 
+    # this is the set of Redis commands used by Sidekiq. Not guaranteed
+    # to be comprehensive, we use this as a performance enhancement to
+    # avoid calling method_missing on most commands
+    USED_COMMANDS = %w[bitfield bitfield_ro del exists expire flushdb
+      get hdel hget hgetall hincrby hlen hmget hset hsetnx incr incrby
+      lindex llen lmove lpop lpush lrange lrem mget mset ping pttl
+      publish rpop rpush sadd scard script set sismember smembers
+      srem ttl type unlink zadd zcard zincrby zrange zrem
+      zremrangebyrank zremrangebyscore]
+
+    USED_COMMANDS.each do |name|
+      define_method(name) do |*args|
+        @client.call(name, *args)
+      end
+    end
+
     private
 
     # this allows us to use methods like `conn.hmset(...)` instead of having to use
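The effect of pre-defining wrappers for a known command list can be sketched with a stub client (all class names here are illustrative, not Sidekiq's):

```ruby
# A stub that records what reaches the underlying client.
class StubClient
  attr_reader :calls

  def initialize
    @calls = []
  end

  def call(*args)
    @calls << args
    "OK"
  end
end

# Generated wrappers for a known command list: each call dispatches
# through a real method, skipping method_missing entirely.
class MiniAdapter
  USED_COMMANDS = %w[get set del]

  def initialize(client)
    @client = client
  end

  USED_COMMANDS.each do |name|
    define_method(name) do |*args|
      @client.call(name, *args)
    end
  end
end

client = StubClient.new
adapter = MiniAdapter.new(client)
adapter.set("key", "value", "nx", "ex", 15)
client.calls.first        # => ["set", "key", "value", "nx", "ex", 15]
adapter.respond_to?(:set) # => true, a real method, not a method_missing catch-all
```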
@@ -39,9 +39,8 @@ module Sidekiq
       uri.password = redacted
       scrubbed_options[:url] = uri.to_s
     end
-    if scrubbed_options[:password]
-      scrubbed_options[:password] = redacted
-    end
+    scrubbed_options[:password] = redacted if scrubbed_options[:password]
+    scrubbed_options[:sentinel_password] = redacted if scrubbed_options[:sentinel_password]
     scrubbed_options[:sentinels]&.each do |sentinel|
       sentinel[:password] = redacted if sentinel[:password]
     end
@@ -193,7 +193,7 @@ module Sidekiq
     # should never depend on sidekiq/api.
     def cleanup
       # dont run cleanup more than once per minute
-      return 0 unless redis { |conn| conn.set("process_cleanup", "1", nx: true, ex: 60) }
+      return 0 unless redis { |conn| conn.set("process_cleanup", "1", "NX", "EX", "60") }
 
       count = 0
       redis do |conn|
@@ -5,6 +5,7 @@ require "sidekiq"
 
 module Sidekiq
   class Testing
+    class TestModeAlreadySetError < RuntimeError; end
     class << self
       attr_accessor :__global_test_mode
 
@@ -12,8 +13,13 @@ module Sidekiq
       # all threads. Calling with a block only affects the current Thread.
       def __set_test_mode(mode)
         if block_given?
+          # Reentrant testing modes will lead to a rat's nest of code which is
+          # hard to reason about. You can set the testing mode once globally and
+          # you can override that global setting once per-thread.
+          raise TestModeAlreadySetError, "Nesting test modes is not supported" if __local_test_mode
+
+          self.__local_test_mode = mode
           begin
-            self.__local_test_mode = mode
             yield
           ensure
             self.__local_test_mode = nil
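The guard above allows one global mode plus at most one per-thread override, and raises on nesting. An illustrative stand-in (class and key names are hypothetical) that mirrors the structure, including raising *before* the begin/ensure so a rejected nested call cannot clobber the outer override:

```ruby
# One global mode, at most one thread-local override, nesting raises.
class ModeGuard
  AlreadySet = Class.new(RuntimeError)

  def initialize(global = nil)
    @global = global
  end

  def local
    Thread.current[:mode_guard_local]
  end

  def with(mode)
    # Raise before touching thread-local state, so a rejected nested
    # call leaves the outer override intact.
    raise AlreadySet, "Nesting test modes is not supported" if local
    Thread.current[:mode_guard_local] = mode
    begin
      yield
    ensure
      Thread.current[:mode_guard_local] = nil
    end
  end

  def current
    local || @global
  end
end

guard = ModeGuard.new(:inline)
guard.with(:fake) do
  guard.current # => :fake inside the single allowed override
end
guard.current # => :inline again once the block exits
```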
@@ -272,7 +278,7 @@ module Sidekiq
     def perform_one
       raise(EmptyQueueError, "perform_one called with empty job queue") if jobs.empty?
       next_job = jobs.first
-      Queues.delete_for(next_job["jid"], queue, to_s)
+      Queues.delete_for(next_job["jid"], next_job["queue"], to_s)
       process_job(next_job)
     end
 
@@ -1,6 +1,6 @@
 # frozen_string_literal: true
 
 module Sidekiq
-  VERSION = "7.1.6"
+  VERSION = "7.2.1"
   MAJOR = 7
 end