sidekiq 6.4.1 → 6.4.2

Potentially problematic release: this version of sidekiq might be problematic.

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 99c9e264c092b88ea726be158fafe5bbab91f82f4b5864dee406280622e98e4b
- data.tar.gz: acd72bd99929d7c9d129cb9662276cc5adb7214de07cd4fc8accf6b9d521994a
+ metadata.gz: 70d899b76bdd764a2ec7e5f23b6d056494b58897daaf49db672daad3e0f60237
+ data.tar.gz: daae0430ecaf8eb7f172c70e628581c20a17e1a7cb1a58170b9082fc3dde1ea2
  SHA512:
- metadata.gz: 622c25276c017302c1a9d144e9366043ba359b2c3b0c57d4e7baad8f9de2e9c9969a86c91acdbefcf736af92e297c4e1fbe2008aa41e0c1accadda77dd0724f5
- data.tar.gz: 7e64012a5368cb0158ecaa50cdea6447709a64dd3a2816b36a31e7f17d70fffff81bd8d317c0cc1f9a6317adcffad9c200c48f9ca4bf208afba819ff7a07738e
+ metadata.gz: 930a0feb7ff47473eb995f8ba695468498afecb311c25296c93042ecfa00cd2bf48e19efd5ba775b4895c065705a344043e0c0e5d7b4115c2dceb1ec7cf341f9
+ data.tar.gz: 14ab645609e7c9f3fa954432097cd46d6e9e74e5bddc9ed1541697d13f3399add407ffb5af23c89c82651793c97e2c54b8fd4f845ca20287ce6bf033fd9caed2
data/Changes.md CHANGED
@@ -2,7 +2,22 @@
 
  [Sidekiq Changes](https://github.com/mperham/sidekiq/blob/main/Changes.md) | [Sidekiq Pro Changes](https://github.com/mperham/sidekiq/blob/main/Pro-Changes.md) | [Sidekiq Enterprise Changes](https://github.com/mperham/sidekiq/blob/main/Ent-Changes.md)
 
- HEAD
+ 6.4.2
+ ---------
+
+ - Strict argument checking now runs after client-side middleware [#5246]
+ - Fix page events with live polling [#5184]
+ - Many under-the-hood changes to remove all usage of the term "worker"
+ from the Sidekiq codebase and APIs. This mostly involved RDoc and local
+ variable names but a few constants and public APIs were changed. The old
+ APIs will be removed in Sidekiq 7.0.
+ ```
+ Sidekiq::DEFAULT_WORKER_OPTIONS -> Sidekiq.default_job_options
+ Sidekiq.default_worker_options -> Sidekiq.default_job_options
+ Sidekiq::Queues["default"].jobs_by_worker(HardJob) -> Sidekiq::Queues["default"].jobs_by_class(HardJob)
+ ```
+
+ 6.4.1
  ---------
 
  - Fix pipeline/multi deprecations in redis-rb 4.6
@@ -319,6 +334,13 @@ See the [Logging wiki page](https://github.com/mperham/sidekiq/wiki/Logging) for
  - Integrate the StandardRB code formatter to ensure consistent code
  styling. [#4114, gearnode]
 
+ 5.2.10
+ ---------
+
+ - Backport fix for CVE-2022-23837.
+ - Migrate to `exists?` for redis-rb.
+ - Lock redis-rb to <4.6 to avoid deprecations.
+
  5.2.9
  ---------
 
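Note on the 6.4.2 entry above: the old "worker" names keep working throughout 6.x and are slated for removal in 7.0. A minimal migration sketch, assuming the renamed methods mirror their predecessors (HardJob and the option values are illustrative):

```ruby
# Illustrative sketch, not taken from this diff; assumes sidekiq ~> 6.4.2.
require "sidekiq"
require "sidekiq/testing" # defaults to fake mode and provides Sidekiq::Queues

class HardJob
  include Sidekiq::Job # Sidekiq::Worker still works; Job is the preferred name
  def perform(*); end
end

# Old: Sidekiq.default_worker_options = ... (removed in 7.0)
Sidekiq.default_job_options = {"retry" => 10, "backtrace" => true}

HardJob.perform_async(1)
# Old: Sidekiq::Queues["default"].jobs_by_worker(HardJob)
enqueued = Sidekiq::Queues["default"].jobs_by_class(HardJob)
puts enqueued.size # jobs queued for HardJob in fake mode
```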
data/README.md CHANGED
@@ -36,7 +36,7 @@ Sidekiq 6.0 supports Rails 5.0+ but does not require it.
  Installation
  -----------------
 
- gem install sidekiq
+ bundle add sidekiq
 
 
  Getting Started
data/bin/sidekiqload CHANGED
@@ -36,7 +36,6 @@ end
 
  # brew tap shopify/shopify
  # brew install toxiproxy
- # gem install toxiproxy
  # run `toxiproxy-server` in a separate terminal window.
  require "toxiproxy"
  # simulate a non-localhost network for realer-world conditions.
@@ -90,12 +89,7 @@ iter = 50
  count = 10_000
 
  iter.times do
- arr = Array.new(count) do
- []
- end
- count.times do |idx|
- arr[idx][0] = idx
- end
+ arr = Array.new(count) { |idx| [idx] }
  Sidekiq::Client.push_bulk("class" => LoadWorker, "args" => arr)
  end
  Sidekiq.logger.error "Created #{count * iter} jobs"
data/lib/sidekiq/api.rb CHANGED
@@ -354,31 +354,31 @@ module Sidekiq
  def display_args
  # Unwrap known wrappers so they show up in a human-friendly manner in the Web UI
  @display_args ||= case klass
- when /\ASidekiq::Extensions::Delayed/
- safe_load(args[0], args) do |_, _, arg, kwarg|
- if !kwarg || kwarg.empty?
- arg
- else
- [arg, kwarg]
- end
- end
- when "ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper"
- job_args = self["wrapped"] ? args[0]["arguments"] : []
- if (self["wrapped"] || args[0]) == "ActionMailer::DeliveryJob"
- # remove MailerClass, mailer_method and 'deliver_now'
- job_args.drop(3)
- elsif (self["wrapped"] || args[0]) == "ActionMailer::MailDeliveryJob"
- # remove MailerClass, mailer_method and 'deliver_now'
- job_args.drop(3).first["args"]
- else
- job_args
- end
- else
- if self["encrypt"]
- # no point in showing 150+ bytes of random garbage
- args[-1] = "[encrypted data]"
- end
- args
+ when /\ASidekiq::Extensions::Delayed/
+ safe_load(args[0], args) do |_, _, arg, kwarg|
+ if !kwarg || kwarg.empty?
+ arg
+ else
+ [arg, kwarg]
+ end
+ end
+ when "ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper"
+ job_args = self["wrapped"] ? args[0]["arguments"] : []
+ if (self["wrapped"] || args[0]) == "ActionMailer::DeliveryJob"
+ # remove MailerClass, mailer_method and 'deliver_now'
+ job_args.drop(3)
+ elsif (self["wrapped"] || args[0]) == "ActionMailer::MailDeliveryJob"
+ # remove MailerClass, mailer_method and 'deliver_now'
+ job_args.drop(3).first["args"]
+ else
+ job_args
+ end
+ else
+ if self["encrypt"]
+ # no point in showing 150+ bytes of random garbage
+ args[-1] = "[encrypted data]"
+ end
+ args
  end
  end
 
@@ -964,7 +964,7 @@ module Sidekiq
  procs.sort.each do |key|
  valid, workers = conn.pipelined { |pipeline|
  pipeline.exists?(key)
- pipeline.hgetall("#{key}:workers")
+ pipeline.hgetall("#{key}:work")
  }
  next unless valid
  workers.each_pair do |tid, json|
data/lib/sidekiq/cli.rb CHANGED
@@ -20,7 +20,7 @@ module Sidekiq
  attr_accessor :launcher
  attr_accessor :environment
 
- def parse(args = ARGV)
+ def parse(args = ARGV.dup)
  setup_options(args)
  initialize_logger
  validate!
@@ -115,8 +115,8 @@ module Sidekiq
  begin
  launcher.run
 
- while (readable_io = self_read.wait_readable)
- signal = readable_io.gets.strip
+ while self_read.wait_readable
+ signal = self_read.gets.strip
  handle_signal(signal)
  end
  rescue Interrupt
@@ -295,7 +295,7 @@ module Sidekiq
  (File.directory?(options[:require]) && !File.exist?("#{options[:require]}/config/application.rb"))
  logger.info "=================================================================="
  logger.info " Please point Sidekiq to a Rails application or a Ruby file "
- logger.info " to load your worker classes with -r [DIR|FILE]."
+ logger.info " to load your job classes with -r [DIR|FILE]."
  logger.info "=================================================================="
  logger.info @parser
  die(1)
@@ -336,7 +336,7 @@ module Sidekiq
  parse_queue opts, queue, weight
  end
 
- o.on "-r", "--require [PATH|DIR]", "Location of Rails application with workers or file to require" do |arg|
+ o.on "-r", "--require [PATH|DIR]", "Location of Rails application with jobs or file to require" do |arg|
  opts[:require] = arg
  end
 
@@ -15,7 +15,7 @@ module Sidekiq
  # client.middleware do |chain|
  # chain.use MyClientMiddleware
  # end
- # client.push('class' => 'SomeWorker', 'args' => [1,2,3])
+ # client.push('class' => 'SomeJob', 'args' => [1,2,3])
  #
  # All client instances default to the globally-defined
  # Sidekiq.client_middleware but you can change as necessary.
@@ -49,16 +49,16 @@ module Sidekiq
  # The main method used to push a job to Redis. Accepts a number of options:
  #
  # queue - the named queue to use, default 'default'
- # class - the worker class to call, required
+ # class - the job class to call, required
  # args - an array of simple arguments to the perform method, must be JSON-serializable
  # at - timestamp to schedule the job (optional), must be Numeric (e.g. Time.now.to_f)
  # retry - whether to retry this job if it fails, default true or an integer number of retries
  # backtrace - whether to save any error backtrace, default false
  #
  # If class is set to the class name, the jobs' options will be based on Sidekiq's default
- # worker options. Otherwise, they will be based on the job class's options.
+ # job options. Otherwise, they will be based on the job class's options.
  #
- # Any options valid for a worker class's sidekiq_options are also available here.
+ # Any options valid for a job class's sidekiq_options are also available here.
  #
  # All options must be strings, not symbols. NB: because we are serializing to JSON, all
  # symbols in 'args' will be converted to strings. Note that +backtrace: true+ can take quite a bit of
@@ -67,13 +67,15 @@ module Sidekiq
  # Returns a unique Job ID. If middleware stops the job, nil will be returned instead.
  #
  # Example:
- # push('queue' => 'my_queue', 'class' => MyWorker, 'args' => ['foo', 1, :bat => 'bar'])
+ # push('queue' => 'my_queue', 'class' => MyJob, 'args' => ['foo', 1, :bat => 'bar'])
  #
  def push(item)
  normed = normalize_item(item)
- payload = process_single(item["class"], normed)
-
+ payload = middleware.invoke(normed["class"], normed, normed["queue"], @redis_pool) do
+ normed
+ end
  if payload
+ verify_json(payload)
  raw_push([payload])
  payload["jid"]
  end
@@ -101,12 +103,17 @@ module Sidekiq
  raise ArgumentError, "Job 'at' must be a Numeric or an Array of Numeric timestamps" if at && (Array(at).empty? || !Array(at).all? { |entry| entry.is_a?(Numeric) })
  raise ArgumentError, "Job 'at' Array must have same size as 'args' Array" if at.is_a?(Array) && at.size != args.size
 
+ jid = items.delete("jid")
+ raise ArgumentError, "Explicitly passing 'jid' when pushing more than one job is not supported" if jid && args.size > 1
+
  normed = normalize_item(items)
  payloads = args.map.with_index { |job_args, index|
  copy = normed.merge("args" => job_args, "jid" => SecureRandom.hex(12))
  copy["at"] = (at.is_a?(Array) ? at[index] : at) if at
-
- result = process_single(items["class"], copy)
+ result = middleware.invoke(copy["class"], copy, copy["queue"], @redis_pool) do
+ verify_json(copy)
+ copy
+ end
  result || nil
  }.compact
 
@@ -119,8 +126,8 @@ module Sidekiq
  #
  # pool = ConnectionPool.new { Redis.new }
  # Sidekiq::Client.via(pool) do
- # SomeWorker.perform_async(1,2,3)
- # SomeOtherWorker.perform_async(1,2,3)
+ # SomeJob.perform_async(1,2,3)
+ # SomeOtherJob.perform_async(1,2,3)
  # end
  #
  # Generally this is only needed for very large Sidekiq installs processing
@@ -145,10 +152,10 @@ module Sidekiq
  end
 
  # Resque compatibility helpers. Note all helpers
- # should go through Worker#client_push.
+ # should go through Sidekiq::Job#client_push.
  #
  # Example usage:
- # Sidekiq::Client.enqueue(MyWorker, 'foo', 1, :bat => 'bar')
+ # Sidekiq::Client.enqueue(MyJob, 'foo', 1, :bat => 'bar')
  #
  # Messages are enqueued to the 'default' queue.
  #
@@ -157,14 +164,14 @@ module Sidekiq
  end
 
  # Example usage:
- # Sidekiq::Client.enqueue_to(:queue_name, MyWorker, 'foo', 1, :bat => 'bar')
+ # Sidekiq::Client.enqueue_to(:queue_name, MyJob, 'foo', 1, :bat => 'bar')
  #
  def enqueue_to(queue, klass, *args)
  klass.client_push("queue" => queue, "class" => klass, "args" => args)
  end
 
  # Example usage:
- # Sidekiq::Client.enqueue_to_in(:queue_name, 3.minutes, MyWorker, 'foo', 1, :bat => 'bar')
+ # Sidekiq::Client.enqueue_to_in(:queue_name, 3.minutes, MyJob, 'foo', 1, :bat => 'bar')
  #
  def enqueue_to_in(queue, interval, klass, *args)
  int = interval.to_f
@@ -178,7 +185,7 @@ module Sidekiq
  end
 
  # Example usage:
- # Sidekiq::Client.enqueue_in(3.minutes, MyWorker, 'foo', 1, :bat => 'bar')
+ # Sidekiq::Client.enqueue_in(3.minutes, MyJob, 'foo', 1, :bat => 'bar')
  #
  def enqueue_in(interval, klass, *args)
  klass.perform_in(interval, *args)
@@ -228,13 +235,5 @@ module Sidekiq
  conn.lpush("queue:#{queue}", to_push)
  end
  end
-
- def process_single(worker_class, item)
- queue = item["queue"]
-
- middleware.invoke(worker_class, item, queue, @redis_pool) do
- item
- end
- end
  end
  end
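The push/push_bulk rewrite above runs the client middleware chain first and only then calls verify_json on whatever payload the chain returns, which is the changelog's "strict argument checking now runs after client-side middleware". A rough sketch of why the ordering matters; CoerceArgs and TimestampJob are illustrative names, not part of this diff:

```ruby
# Sketch only; assumes sidekiq ~> 6.4.2 and a reachable Redis.
require "sidekiq"

class TimestampJob
  include Sidekiq::Job
  def perform(ts); end
end

# Hypothetical client middleware that coerces Time arguments into floats
# before the payload is serialized for Redis.
class CoerceArgs
  def call(job_class, job, queue, redis_pool)
    job["args"] = job["args"].map { |a| a.is_a?(Time) ? a.to_f : a }
    yield
  end
end

Sidekiq.configure_client do |config|
  config.client_middleware { |chain| chain.add CoerceArgs }
end
Sidekiq.strict_args! # raise on arguments that are not native JSON types

# Before 6.4.2 the strict check ran ahead of client middleware, so the Time
# below would raise; now the check sees the coerced float instead.
TimestampJob.perform_async(Time.now)
```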
@@ -10,7 +10,7 @@ module Sidekiq
  def initialize(performable, target, options = {})
  @performable = performable
  @target = target
- @opts = options
+ @opts = options.transform_keys(&:to_s)
  end
 
  def method_missing(name, *args)
@@ -25,11 +25,11 @@ module Sidekiq
  #
  # A job looks like:
  #
- # { 'class' => 'HardWorker', 'args' => [1, 2, 'foo'], 'retry' => true }
+ # { 'class' => 'HardJob', 'args' => [1, 2, 'foo'], 'retry' => true }
  #
  # The 'retry' option also accepts a number (in place of 'true'):
  #
- # { 'class' => 'HardWorker', 'args' => [1, 2, 'foo'], 'retry' => 5 }
+ # { 'class' => 'HardJob', 'args' => [1, 2, 'foo'], 'retry' => 5 }
  #
  # The job will be retried this number of times before giving up. (If simply
  # 'true', Sidekiq retries 25 times)
@@ -53,11 +53,11 @@ module Sidekiq
  #
  # Sidekiq.options[:max_retries] = 7
  #
- # or limit the number of retries for a particular worker and send retries to
+ # or limit the number of retries for a particular job and send retries to
  # a low priority queue with:
  #
- # class MyWorker
- # include Sidekiq::Worker
+ # class MyJob
+ # include Sidekiq::Job
  # sidekiq_options retry: 10, retry_queue: 'low'
  # end
  #
@@ -76,7 +76,7 @@ module Sidekiq
 
  # The global retry handler requires only the barest of data.
  # We want to be able to retry as much as possible so we don't
- # require the worker to be instantiated.
+ # require the job to be instantiated.
  def global(jobstr, queue)
  yield
  rescue Handled => ex
@@ -103,14 +103,14 @@ module Sidekiq
  end
 
  # The local retry support means that any errors that occur within
- # this block can be associated with the given worker instance.
+ # this block can be associated with the given job instance.
  # This is required to support the `sidekiq_retries_exhausted` block.
  #
  # Note that any exception from the block is wrapped in the Skip
  # exception so the global block does not reprocess the error. The
  # Skip exception is unwrapped within Sidekiq::Processor#process before
  # calling the handle_exception handlers.
- def local(worker, jobstr, queue)
+ def local(jobinst, jobstr, queue)
  yield
  rescue Handled => ex
  raise ex
@@ -123,11 +123,11 @@ module Sidekiq
 
  msg = Sidekiq.load_json(jobstr)
  if msg["retry"].nil?
- msg["retry"] = worker.class.get_sidekiq_options["retry"]
+ msg["retry"] = jobinst.class.get_sidekiq_options["retry"]
  end
 
  raise e unless msg["retry"]
- attempt_retry(worker, msg, queue, e)
+ attempt_retry(jobinst, msg, queue, e)
  # We've handled this error associated with this job, don't
  # need to handle it at the global level
  raise Skip
@@ -135,10 +135,10 @@ module Sidekiq
 
  private
 
- # Note that +worker+ can be nil here if an error is raised before we can
- # instantiate the worker instance. All access must be guarded and
+ # Note that +jobinst+ can be nil here if an error is raised before we can
+ # instantiate the job instance. All access must be guarded and
  # best effort.
- def attempt_retry(worker, msg, queue, exception)
+ def attempt_retry(jobinst, msg, queue, exception)
  max_retry_attempts = retry_attempts_from(msg["retry"], @max_retries)
 
  msg["queue"] = (msg["retry_queue"] || queue)
@@ -170,7 +170,7 @@ module Sidekiq
  end
 
  if count < max_retry_attempts
- delay = delay_for(worker, count, exception)
+ delay = delay_for(jobinst, count, exception)
  # Logging here can break retries if the logging device raises ENOSPC #3979
  # logger.debug { "Failure! Retry #{count} in #{delay} seconds" }
  retry_at = Time.now.to_f + delay
@@ -180,13 +180,13 @@ module Sidekiq
  end
  else
  # Goodbye dear message, you (re)tried your best I'm sure.
- retries_exhausted(worker, msg, exception)
+ retries_exhausted(jobinst, msg, exception)
  end
  end
 
- def retries_exhausted(worker, msg, exception)
+ def retries_exhausted(jobinst, msg, exception)
  begin
- block = worker&.sidekiq_retries_exhausted_block
+ block = jobinst&.sidekiq_retries_exhausted_block
  block&.call(msg, exception)
  rescue => e
  handle_exception(e, {context: "Error calling retries_exhausted", job: msg})
@@ -215,19 +215,19 @@ module Sidekiq
  end
  end
 
- def delay_for(worker, count, exception)
+ def delay_for(jobinst, count, exception)
  jitter = rand(10) * (count + 1)
- if worker&.sidekiq_retry_in_block
- custom_retry_in = retry_in(worker, count, exception).to_i
+ if jobinst&.sidekiq_retry_in_block
+ custom_retry_in = retry_in(jobinst, count, exception).to_i
  return custom_retry_in + jitter if custom_retry_in > 0
  end
  (count**4) + 15 + jitter
  end
 
- def retry_in(worker, count, exception)
- worker.sidekiq_retry_in_block.call(count, exception)
+ def retry_in(jobinst, count, exception)
+ jobinst.sidekiq_retry_in_block.call(count, exception)
  rescue Exception => e
- handle_exception(e, {context: "Failure scheduling retry using the defined `sidekiq_retry_in` in #{worker.class.name}, falling back to default"})
+ handle_exception(e, {context: "Failure scheduling retry using the defined `sidekiq_retry_in` in #{jobinst.class.name}, falling back to default"})
  nil
  end
 
@@ -12,16 +12,19 @@ module Sidekiq
  raise(ArgumentError, "Job class must be either a Class or String representation of the class name: `#{item}`") unless item["class"].is_a?(Class) || item["class"].is_a?(String)
  raise(ArgumentError, "Job 'at' must be a Numeric timestamp: `#{item}`") if item.key?("at") && !item["at"].is_a?(Numeric)
  raise(ArgumentError, "Job tags must be an Array: `#{item}`") if item["tags"] && !item["tags"].is_a?(Array)
+ end
 
+ def verify_json(item)
+ job_class = item["wrapped"] || item["class"]
  if Sidekiq.options[:on_complex_arguments] == :raise
  msg = <<~EOM
- Job arguments to #{item["class"]} must be native JSON types, see https://github.com/mperham/sidekiq/wiki/Best-Practices.
+ Job arguments to #{job_class} must be native JSON types, see https://github.com/mperham/sidekiq/wiki/Best-Practices.
  To disable this error, remove `Sidekiq.strict_args!` from your initializer.
  EOM
  raise(ArgumentError, msg) unless json_safe?(item)
  elsif Sidekiq.options[:on_complex_arguments] == :warn
  Sidekiq.logger.warn <<~EOM unless json_safe?(item)
- Job arguments to #{item["class"]} do not serialize to JSON safely. This will raise an error in
+ Job arguments to #{job_class} do not serialize to JSON safely. This will raise an error in
  Sidekiq 7.0. See https://github.com/mperham/sidekiq/wiki/Best-Practices or raise an error today
  by calling `Sidekiq.strict_args!` during Sidekiq initialization.
  EOM
@@ -39,20 +42,19 @@ module Sidekiq
 
  raise(ArgumentError, "Job must include a valid queue name") if item["queue"].nil? || item["queue"] == ""
 
+ item["jid"] ||= SecureRandom.hex(12)
  item["class"] = item["class"].to_s
  item["queue"] = item["queue"].to_s
- item["jid"] ||= SecureRandom.hex(12)
  item["created_at"] ||= Time.now.to_f
-
  item
  end
 
  def normalized_hash(item_class)
  if item_class.is_a?(Class)
- raise(ArgumentError, "Message must include a Sidekiq::Worker class, not class name: #{item_class.ancestors.inspect}") unless item_class.respond_to?(:get_sidekiq_options)
+ raise(ArgumentError, "Message must include a Sidekiq::Job class, not class name: #{item_class.ancestors.inspect}") unless item_class.respond_to?(:get_sidekiq_options)
  item_class.get_sidekiq_options
  else
- Sidekiq.default_worker_options
+ Sidekiq.default_job_options
  end
  end
 
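The verify_json hook above is what Sidekiq.strict_args! toggles between raising and warning. A small sketch of the two outcomes (SomeJob is an illustrative class name):

```ruby
# Sketch only; assumes sidekiq ~> 6.4.x and a reachable Redis.
require "sidekiq"

class SomeJob
  include Sidekiq::Job
  def perform(*); end
end

Sidekiq.strict_args! # sets Sidekiq.options[:on_complex_arguments] to :raise

# Native JSON types (String, numbers, true/false/nil, Array, Hash with
# string keys) round-trip cleanly and pass the check:
SomeJob.perform_async("user_42", 5, {"force" => true})

# Symbols, Time, Date, model instances, etc. do not survive JSON
# serialization; strict mode raises ArgumentError here, and without it
# Sidekiq 6.x logs the warning shown above about Sidekiq 7.0.
SomeJob.perform_async(:user_42, Time.now)
```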
@@ -15,7 +15,7 @@ module Sidekiq
  proc { "sidekiq" },
  proc { Sidekiq::VERSION },
  proc { |me, data| data["tag"] },
- proc { |me, data| "[#{Processor::WORKER_STATE.size} of #{data["concurrency"]} busy]" },
+ proc { |me, data| "[#{Processor::WORK_STATE.size} of #{data["concurrency"]} busy]" },
  proc { |me, data| "stopping" if me.stopping? }
  ]
 
@@ -43,9 +43,7 @@ module Sidekiq
  @poller.terminate
  end
 
- # Shuts down the process. This method does not
- # return until all work is complete and cleaned up.
- # It can take up to the timeout to complete.
+ # Shuts down this Sidekiq instance. Waits up to the deadline for all jobs to complete.
  def stop
  deadline = ::Process.clock_gettime(::Process::CLOCK_MONOTONIC) + @options[:timeout]
 
@@ -55,7 +53,7 @@ module Sidekiq
 
  @manager.stop(deadline)
 
- # Requeue everything in case there was a worker who grabbed work while stopped
+ # Requeue everything in case there was a thread which fetched a job while the process was stopped.
  # This call is a no-op in Sidekiq but necessary for Sidekiq Pro.
  strategy = @options[:fetch]
  strategy.bulk_requeue([], @options)
@@ -86,7 +84,7 @@ module Sidekiq
  Sidekiq.redis do |conn|
  conn.pipelined do |pipeline|
  pipeline.srem("processes", identity)
- pipeline.unlink("#{identity}:workers")
+ pipeline.unlink("#{identity}:work")
  end
  end
  rescue
@@ -132,9 +130,8 @@ module Sidekiq
  begin
  fails = Processor::FAILURE.reset
  procd = Processor::PROCESSED.reset
- curstate = Processor::WORKER_STATE.dup
+ curstate = Processor::WORK_STATE.dup
 
- workers_key = "#{key}:workers"
  nowdate = Time.now.utc.strftime("%Y-%m-%d")
 
  Sidekiq.redis do |conn|
@@ -146,12 +143,16 @@ module Sidekiq
  transaction.incrby("stat:failed", fails)
  transaction.incrby("stat:failed:#{nowdate}", fails)
  transaction.expire("stat:failed:#{nowdate}", STATS_TTL)
+ end
 
- transaction.unlink(workers_key)
+ # work is the current set of executing jobs
+ work_key = "#{key}:work"
+ conn.pipelined do |transaction|
+ transaction.unlink(work_key)
  curstate.each_pair do |tid, hash|
- transaction.hset(workers_key, tid, Sidekiq.dump_json(hash))
+ transaction.hset(work_key, tid, Sidekiq.dump_json(hash))
  end
- transaction.expire(workers_key, 60)
+ transaction.expire(work_key, 60)
  end
  end
 
@@ -214,7 +215,7 @@ module Sidekiq
  Last RTT readings were #{RTT_READINGS.buffer.inspect}, ideally these should be < 1000.
  Ensure Redis is running in the same AZ or datacenter as Sidekiq.
  If these values are close to 100,000, that means your Sidekiq process may be
- CPU overloaded; see https://github.com/mperham/sidekiq/discussions/5039
+ CPU-saturated; reduce your concurrency and/or see https://github.com/mperham/sidekiq/discussions/5039
  EOM
  RTT_READINGS.reset
  end
@@ -35,24 +35,10 @@ module Sidekiq
  nil
  end
 
- def debug?
- level <= 0
- end
-
- def info?
- level <= 1
- end
-
- def warn?
- level <= 2
- end
-
- def error?
- level <= 3
- end
-
- def fatal?
- level <= 4
+ LEVELS.each do |level, numeric_level|
+ define_method("#{level}?") do
+ local_level.nil? ? super() : local_level <= numeric_level
+ end
  end
 
  def local_level
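The logger hunk above replaces five hand-written predicates with one loop over the level map (the removed methods spell the map out: debug is 0 through fatal is 4), so each predicate now honors a local level override before falling back to the global level. A standalone approximation, simplified to avoid the real class's super() fallback:

```ruby
# Standalone approximation of the generated predicates; the real methods live
# in Sidekiq's logger and defer to ::Logger via super() when no override is set.
LEVELS = {"debug" => 0, "info" => 1, "warn" => 2, "error" => 3, "fatal" => 4}

class DemoLogger
  attr_accessor :level       # global numeric level
  attr_accessor :local_level # per-call override (Sidekiq keeps it per thread)

  LEVELS.each do |name, numeric_level|
    # defines #debug?, #info?, #warn?, #error?, #fatal?
    define_method("#{name}?") do
      (local_level || level) <= numeric_level
    end
  end
end

log = DemoLogger.new
log.level = LEVELS["info"]
p log.debug?                      # => false
log.local_level = LEVELS["debug"] # temporarily verbose
p log.debug?                      # => true
```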
@@ -50,7 +50,7 @@ module Sidekiq
  return if @done
  @done = true
 
- logger.info { "Terminating quiet workers" }
+ logger.info { "Terminating quiet threads" }
  @workers.each { |x| x.terminate }
  fire_event(:quiet, reverse: true)
  end
@@ -65,7 +65,7 @@ module Sidekiq
  sleep PAUSE_TIME
  return if @workers.empty?
 
- logger.info { "Pausing to allow workers to finish..." }
+ logger.info { "Pausing to allow jobs to finish..." }
  wait_for(deadline) { @workers.empty? }
  return if @workers.empty?
 
@@ -96,7 +96,7 @@ module Sidekiq
  private
 
  def hard_shutdown
- # We've reached the timeout and we still have busy workers.
+ # We've reached the timeout and we still have busy threads.
  # They must die but their jobs shall live on.
  cleanup = nil
  @plock.synchronize do
@@ -106,12 +106,12 @@ module Sidekiq
  if cleanup.size > 0
  jobs = cleanup.map { |p| p.job }.compact
 
- logger.warn { "Terminating #{cleanup.size} busy worker threads" }
- logger.warn { "Work still in progress #{jobs.inspect}" }
+ logger.warn { "Terminating #{cleanup.size} busy threads" }
+ logger.warn { "Jobs still in progress #{jobs.inspect}" }
 
  # Re-enqueue unfinished jobs
  # NOTE: You may notice that we may push a job back to redis before
- # the worker thread is terminated. This is ok because Sidekiq's
+ # the thread is terminated. This is ok because Sidekiq's
  # contract says that jobs are run AT LEAST once. Process termination
  # is delayed until we're certain the jobs are back in Redis because
  # it is worse to lose a job than to run it twice.