sidekiq 6.0.6 → 6.1.3

Potentially problematic release: this version of sidekiq might be problematic.

Files changed (53)
  1. checksums.yaml +4 -4
  2. data/.github/ISSUE_TEMPLATE/bug_report.md +20 -0
  3. data/.github/workflows/ci.yml +41 -0
  4. data/5.0-Upgrade.md +1 -1
  5. data/Changes.md +48 -1
  6. data/Ent-Changes.md +54 -1
  7. data/Gemfile +1 -1
  8. data/Gemfile.lock +97 -112
  9. data/Pro-Changes.md +34 -3
  10. data/README.md +2 -6
  11. data/bin/sidekiq +26 -2
  12. data/lib/sidekiq.rb +5 -3
  13. data/lib/sidekiq/api.rb +16 -10
  14. data/lib/sidekiq/cli.rb +24 -8
  15. data/lib/sidekiq/client.rb +16 -11
  16. data/lib/sidekiq/extensions/action_mailer.rb +3 -2
  17. data/lib/sidekiq/extensions/active_record.rb +4 -3
  18. data/lib/sidekiq/extensions/class_methods.rb +5 -4
  19. data/lib/sidekiq/fetch.rb +20 -20
  20. data/lib/sidekiq/job_retry.rb +1 -0
  21. data/lib/sidekiq/launcher.rb +47 -13
  22. data/lib/sidekiq/logger.rb +3 -2
  23. data/lib/sidekiq/manager.rb +4 -4
  24. data/lib/sidekiq/middleware/chain.rb +1 -1
  25. data/lib/sidekiq/processor.rb +4 -4
  26. data/lib/sidekiq/rails.rb +16 -18
  27. data/lib/sidekiq/redis_connection.rb +18 -13
  28. data/lib/sidekiq/sd_notify.rb +1 -1
  29. data/lib/sidekiq/systemd.rb +1 -15
  30. data/lib/sidekiq/testing.rb +1 -1
  31. data/lib/sidekiq/version.rb +1 -1
  32. data/lib/sidekiq/web.rb +15 -7
  33. data/lib/sidekiq/web/application.rb +2 -4
  34. data/lib/sidekiq/web/csrf_protection.rb +156 -0
  35. data/lib/sidekiq/web/helpers.rb +15 -6
  36. data/lib/sidekiq/web/router.rb +1 -1
  37. data/lib/sidekiq/worker.rb +2 -5
  38. data/sidekiq.gemspec +2 -3
  39. data/web/assets/javascripts/application.js +3 -8
  40. data/web/assets/stylesheets/application-dark.css +60 -33
  41. data/web/assets/stylesheets/application.css +19 -6
  42. data/web/locales/fr.yml +3 -3
  43. data/web/locales/pl.yml +4 -4
  44. data/web/locales/ru.yml +4 -0
  45. data/web/locales/vi.yml +83 -0
  46. data/web/views/busy.erb +4 -2
  47. data/web/views/morgue.erb +1 -1
  48. data/web/views/queues.erb +1 -1
  49. data/web/views/retries.erb +1 -1
  50. data/web/views/scheduled.erb +1 -1
  51. metadata +13 -25
  52. data/.circleci/config.yml +0 -60
  53. data/.github/issue_template.md +0 -11
data/Pro-Changes.md CHANGED
@@ -2,13 +2,44 @@
 
 [Sidekiq Changes](https://github.com/mperham/sidekiq/blob/master/Changes.md) | [Sidekiq Pro Changes](https://github.com/mperham/sidekiq/blob/master/Pro-Changes.md) | [Sidekiq Enterprise Changes](https://github.com/mperham/sidekiq/blob/master/Ent-Changes.md)
 
-Please see [http://sidekiq.org](http://sidekiq.org) for more details and how to buy.
+Please see [sidekiq.org](https://sidekiq.org/) for more details and how to buy.
 
-HEAD
+5.2.1
 ---------
 
-- Update `constantize` for batch callbacks. [#4469]
+- Propagate death callbacks to parent batches [#4774]
+- Allow customization of Batch linger to quickly reclaim memory in Redis [#4772]
+- Fix disappearing processes in Busy due to super_fetch initialization when used in
+  tandem with `SIDEKIQ_PRELOAD_APP=1` in `sidekiqswarm`. [#4733]
+
+5.2.0
+---------
+
+- The Sidekiq Pro and Enterprise gem servers now `bundle install` much faster with **Bundler 2.2+** [#4158]
+- Fix issue with reliable push and multiple shards [#4669]
+- Fix Pro memory leak due to fetch refactoring in Sidekiq 6.1 [#4652]
+- Gracefully handle poison pill jobs [#4633]
+- Remove support for multi-shard batches [#4642]
+- Rename `Sidekiq::Rack::BatchStatus` to `Sidekiq::Pro::BatchStatus` [#4655]
+
+5.1.1
+---------
 
+- Fix broken basic fetcher [#4616]
+
+5.1.0
+---------
+
+- Remove old Statsd metrics with `WorkerName` in the name [#4377]
+```
+job.WorkerName.count -> job.count with tag worker:WorkerName
+job.WorkerName.perform -> job.perform with tag worker:WorkerName
+job.WorkerName.failure -> job.failure with tag worker:WorkerName
+```
+- Remove `concurrent-ruby` gem dependency [#4586]
+- Update `constantize` for batch callbacks. [#4469]
+- Add queue tag to `jobs.recovered.fetch` metric [#4594]
+- Refactor Pro's fetch infrastructure [#4602]
 
 5.0.1
 ---------
data/README.md CHANGED
@@ -2,11 +2,7 @@ Sidekiq
 ==============
 
 [![Gem Version](https://badge.fury.io/rb/sidekiq.svg)](https://rubygems.org/gems/sidekiq)
-[![Code Climate](https://codeclimate.com/github/mperham/sidekiq.svg)](https://codeclimate.com/github/mperham/sidekiq)
-[![Test Coverage](https://codeclimate.com/github/mperham/sidekiq/badges/coverage.svg)](https://codeclimate.com/github/mperham/sidekiq/coverage)
-[![Build Status](https://circleci.com/gh/mperham/sidekiq/tree/master.svg?style=svg)](https://circleci.com/gh/mperham/sidekiq/tree/master)
-[![Gitter Chat](https://badges.gitter.im/mperham/sidekiq.svg)](https://gitter.im/mperham/sidekiq)
-
+![Build](https://github.com/mperham/sidekiq/workflows/CI/badge.svg)
 
 Simple, efficient background processing for Ruby.
 
@@ -94,4 +90,4 @@ Please see [LICENSE](https://github.com/mperham/sidekiq/blob/master/LICENSE) for
 Author
 -----------------
 
-Mike Perham, [@mperham@mastodon.xyz](https://mastodon.xyz/@mperham) / [@sidekiq](https://twitter.com/sidekiq), [https://www.mikeperham.com](https://www.mikeperham.com) / [https://www.contribsys.com](https://www.contribsys.com)
+Mike Perham, [@getajobmike](https://twitter.com/getajobmike) / [@sidekiq](https://twitter.com/sidekiq), [https://www.mikeperham.com](https://www.mikeperham.com) / [https://www.contribsys.com](https://www.contribsys.com)
data/bin/sidekiq CHANGED
@@ -6,13 +6,37 @@ $TESTING = false
 
 require_relative '../lib/sidekiq/cli'
 
+def integrate_with_systemd
+  return unless ENV["NOTIFY_SOCKET"]
+
+  Sidekiq.configure_server do |config|
+    Sidekiq.logger.info "Enabling systemd notification integration"
+    require "sidekiq/sd_notify"
+    config.on(:startup) do
+      Sidekiq::SdNotify.ready
+    end
+    config.on(:shutdown) do
+      Sidekiq::SdNotify.stopping
+    end
+    Sidekiq.start_watchdog if Sidekiq::SdNotify.watchdog?
+  end
+end
+
 begin
   cli = Sidekiq::CLI.instance
   cli.parse
+
+  integrate_with_systemd
+
   cli.run
 rescue => e
   raise e if $DEBUG
-  STDERR.puts e.message
-  STDERR.puts e.backtrace.join("\n")
+  if Sidekiq.error_handlers.length == 0
+    STDERR.puts e.message
+    STDERR.puts e.backtrace.join("\n")
+  else
+    cli.handle_exception e
+  end
+
   exit 1
 end
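For context on what the systemd hooks above boil down to: `Sidekiq::SdNotify.ready` and `.stopping` speak the sd_notify protocol, which is plain-text datagrams written to the UNIX socket systemd advertises in `NOTIFY_SOCKET` for `Type=notify` units. A minimal sketch of that protocol (not Sidekiq's vendored implementation; abstract sockets are ignored for brevity):

```ruby
require "socket"

# Send a single sd_notify state string ("READY=1", "STOPPING=1", "WATCHDOG=1")
# to the datagram socket systemd passes in NOTIFY_SOCKET.
def sd_notify(state)
  socket_path = ENV["NOTIFY_SOCKET"]
  return if socket_path.nil? || socket_path.empty?

  Addrinfo.unix(socket_path, Socket::SOCK_DGRAM).connect do |sock|
    sock.write(state)
  end
end

sd_notify("READY=1")     # roughly what Sidekiq::SdNotify.ready does on :startup
sd_notify("STOPPING=1")  # roughly what Sidekiq::SdNotify.stopping does on :shutdown
```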
data/lib/sidekiq.rb CHANGED
@@ -20,6 +20,7 @@ module Sidekiq
     labels: [],
     concurrency: 10,
     require: ".",
+    strict: true,
     environment: nil,
     timeout: 25,
     poll_interval_average: nil,
@@ -95,10 +96,11 @@ module Sidekiq
       retryable = true
       begin
         yield conn
-      rescue Redis::CommandError => ex
+      rescue Redis::BaseError => ex
         # 2550 Failover can cause the server to become a replica, need
         # to disconnect and reopen the socket to get back to the primary.
-        if retryable && ex.message =~ /READONLY/
+        # 4495 Use the same logic if we have a "Not enough replicas" error from the primary
+        if retryable && ex.message =~ /READONLY|NOREPLICAS/
           conn.disconnect!
           retryable = false
           retry
@@ -196,7 +198,7 @@ module Sidekiq
   end
 
   def self.logger
-    @logger ||= Sidekiq::Logger.new(STDOUT, level: Logger::INFO)
+    @logger ||= Sidekiq::Logger.new($stdout, level: Logger::INFO)
  end
 
   def self.logger=(logger)
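The `Redis::BaseError` hunk above is a reconnect-once-and-retry pattern for failovers: `READONLY` appears when the old primary has become a replica, `NOREPLICAS` when the primary cannot satisfy `min-replicas-to-write`. A standalone sketch of the same idea outside Sidekiq (assumes the `redis` gem; illustration only):

```ruby
require "redis"

# Run a block against a connection; on the errors a failover produces,
# drop the dead socket and retry exactly once so we reconnect to the new primary.
def with_failover_retry(conn)
  retryable = true
  begin
    yield conn
  rescue Redis::BaseError => ex
    if retryable && ex.message =~ /READONLY|NOREPLICAS/
      conn.disconnect!
      retryable = false
      retry
    end
    raise
  end
end

with_failover_retry(Redis.new) { |c| c.lpush("queue:default", "some-job-json") }
```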
data/lib/sidekiq/api.rb CHANGED
@@ -85,10 +85,10 @@ module Sidekiq
 
       default_queue_latency = if (entry = pipe1_res[6].first)
         job = begin
-                Sidekiq.load_json(entry)
-              rescue
-                {}
-              end
+          Sidekiq.load_json(entry)
+        rescue
+          {}
+        end
         now = Time.now.to_f
         thence = job["enqueued_at"] || now
         now - thence
@@ -791,19 +791,22 @@ module Sidekiq
        # you'll be happier this way
        conn.pipelined do
          procs.each do |key|
-           conn.hmget(key, "info", "busy", "beat", "quiet")
+           conn.hmget(key, "info", "busy", "beat", "quiet", "rss")
          end
        end
      }
 
-     result.each do |info, busy, at_s, quiet|
+     result.each do |info, busy, at_s, quiet, rss|
        # If a process is stopped between when we query Redis for `procs` and
        # when we query for `result`, we will have an item in `result` that is
        # composed of `nil` values.
        next if info.nil?
 
        hash = Sidekiq.load_json(info)
-       yield Process.new(hash.merge("busy" => busy.to_i, "beat" => at_s.to_f, "quiet" => quiet))
+       yield Process.new(hash.merge("busy" => busy.to_i,
+         "beat" => at_s.to_f,
+         "quiet" => quiet,
+         "rss" => rss))
      end
    end
 
@@ -916,12 +919,13 @@ module Sidekiq
   class Workers
     include Enumerable
 
-    def each
+    def each(&block)
+      results = []
       Sidekiq.redis do |conn|
         procs = conn.sscan_each("processes").to_a
         procs.sort.each do |key|
           valid, workers = conn.pipelined {
-            conn.exists(key)
+            conn.exists?(key)
             conn.hgetall("#{key}:workers")
           }
           next unless valid
@@ -930,10 +934,12 @@ module Sidekiq
             p = hsh["payload"]
             # avoid breaking API, this is a side effect of the JSON optimization in #4316
             hsh["payload"] = Sidekiq.load_json(p) if p.is_a?(String)
-            yield key, tid, hsh
+            results << [key, tid, hsh]
           end
         end
       end
+
+      results.sort_by { |(_, _, hsh)| hsh["run_at"] }.each(&block)
     end
 
     # Note that #size is only as accurate as Sidekiq's heartbeat,
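Taken together, the `api.rb` changes above surface per-process memory usage (`rss`, reported by the heartbeat) and yield in-progress work oldest-first. A hedged usage sketch against the public API (field names as they appear in the diff; the `rss` units come from the heartbeat and are typically kilobytes):

```ruby
require "sidekiq/api"

# One row per Sidekiq process, now including the heartbeat-reported RSS.
Sidekiq::ProcessSet.new.each do |process|
  puts "#{process["hostname"]} pid=#{process["pid"]} busy=#{process["busy"]} rss=#{process["rss"]}"
end

# In-progress jobs; 6.1 sorts these by "run_at", so the longest-running job comes first.
Sidekiq::Workers.new.each do |process_id, thread_id, work|
  payload = work["payload"]
  puts "#{process_id}/#{thread_id} #{payload["class"]} on #{work["queue"]} since #{Time.at(work["run_at"])}"
end
```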
data/lib/sidekiq/cli.rb CHANGED
@@ -33,8 +33,9 @@ module Sidekiq
     # Code within this method is not tested because it alters
     # global process state irreversibly. PRs which improve the
     # test coverage of Sidekiq::CLI are welcomed.
-    def run
-      boot_system
+    def run(boot_app: true)
+      boot_application if boot_app
+
       if environment == "development" && $stdout.tty? && Sidekiq.log_formatter.is_a?(Sidekiq::Logger::Formatters::Pretty)
         print_banner
       end
@@ -43,7 +44,7 @@ module Sidekiq
       self_read, self_write = IO.pipe
       sigs = %w[INT TERM TTIN TSTP]
       # USR1 and USR2 don't work on the JVM
-      sigs << "USR2" unless jruby?
+      sigs << "USR2" if Sidekiq.pro? && !jruby?
       sigs.each do |sig|
         trap sig do
           self_write.puts(sig)
@@ -54,13 +55,26 @@ module Sidekiq
 
       logger.info "Running in #{RUBY_DESCRIPTION}"
       logger.info Sidekiq::LICENSE
-      logger.info "Upgrade to Sidekiq Pro for more features and support: http://sidekiq.org" unless defined?(::Sidekiq::Pro)
+      logger.info "Upgrade to Sidekiq Pro for more features and support: https://sidekiq.org" unless defined?(::Sidekiq::Pro)
 
       # touch the connection pool so it is created before we
       # fire startup and start multithreading.
-      ver = Sidekiq.redis_info["redis_version"]
+      info = Sidekiq.redis_info
+      ver = info["redis_version"]
       raise "You are connecting to Redis v#{ver}, Sidekiq requires Redis v4.0.0 or greater" if ver < "4"
 
+      maxmemory_policy = info["maxmemory_policy"]
+      if maxmemory_policy != "noeviction"
+        logger.warn <<~EOM
+
+
+          WARNING: Your Redis instance will evict Sidekiq data under heavy load.
+          The 'noeviction' maxmemory policy is recommended (current policy: '#{maxmemory_policy}').
+          See: https://github.com/mperham/sidekiq/wiki/Using-Redis#memory
+
+        EOM
+      end
+
       # Since the user can pass us a connection pool explicitly in the initializer, we
       # need to verify the size is large enough or else Sidekiq's performance is dramatically slowed.
       cursize = Sidekiq.redis_pool.size
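The new startup warning reads `maxmemory_policy` from the same `Sidekiq.redis_info` hash used for the version check. If you want to verify your own Redis before upgrading, a quick check from a console (a sketch using only the call shown above):

```ruby
# INFO-derived hash; Sidekiq wants "noeviction" so queued jobs are never evicted under memory pressure.
policy = Sidekiq.redis_info["maxmemory_policy"]
if policy != "noeviction"
  warn "maxmemory-policy is #{policy.inspect}; set `maxmemory-policy noeviction` in redis.conf"
end
```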
@@ -228,8 +242,7 @@ module Sidekiq
       opts = parse_config(opts[:config_file]).merge(opts) if opts[:config_file]
 
       # set defaults
-      opts[:queues] = ["default"] if opts[:queues].nil? || opts[:queues].empty?
-      opts[:strict] = true if opts[:strict].nil?
+      opts[:queues] = ["default"] if opts[:queues].nil?
       opts[:concurrency] = Integer(ENV["RAILS_MAX_THREADS"]) if opts[:concurrency].nil? && ENV["RAILS_MAX_THREADS"]
 
       # merge with defaults
@@ -240,7 +253,7 @@ module Sidekiq
       Sidekiq.options
     end
 
-    def boot_system
+    def boot_application
       ENV["RACK_ENV"] = ENV["RAILS_ENV"] = environment
 
       if File.directory?(options[:require])
@@ -368,6 +381,8 @@ module Sidekiq
       end
 
       opts = opts.merge(opts.delete(environment.to_sym) || {})
+      opts.delete(:strict)
+
       parse_queues(opts, opts.delete(:queues) || [])
 
       opts
@@ -379,6 +394,7 @@ module Sidekiq
 
     def parse_queue(opts, queue, weight = nil)
       opts[:queues] ||= []
+      opts[:strict] = true if opts[:strict].nil?
      raise ArgumentError, "queues: #{queue} cannot be defined twice" if opts[:queues].include?(queue)
      [weight.to_i, 1].max.times { opts[:queues] << queue }
      opts[:strict] = false if weight.to_i > 0
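The net effect of moving the `strict` default into `parse_queue` (plus the `opts.delete(:strict)` above) is that strict ordering is decided purely by whether any queue carries a weight. A standalone sketch mirroring that logic (a copy for illustration, not Sidekiq's public API; the duplicate-queue check is omitted for brevity):

```ruby
# Mirrors Sidekiq::CLI#parse_queue: a weight > 0 duplicates the queue entry
# and flips strict ordering off.
def parse_queue(opts, queue, weight = nil)
  opts[:queues] ||= []
  opts[:strict] = true if opts[:strict].nil?
  [weight.to_i, 1].max.times { opts[:queues] << queue }
  opts[:strict] = false if weight.to_i > 0
end

opts = {}
parse_queue(opts, "critical", 2)  # e.g. `sidekiq -q critical,2 -q default`
parse_queue(opts, "default")
p opts[:queues]  # => ["critical", "critical", "default"]
p opts[:strict]  # => false
```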
data/lib/sidekiq/client.rb CHANGED
@@ -19,7 +19,7 @@ module Sidekiq
     #
     def middleware(&block)
       @chain ||= Sidekiq.client_middleware
-      if block_given?
+      if block
         @chain = @chain.dup
         yield @chain
       end
@@ -90,16 +90,17 @@ module Sidekiq
     # Returns an array of the of pushed jobs' jids. The number of jobs pushed can be less
     # than the number given if the middleware stopped processing for one or more jobs.
     def push_bulk(items)
-      arg = items["args"].first
-      return [] unless arg # no jobs to push
-      raise ArgumentError, "Bulk arguments must be an Array of Arrays: [[1], [2]]" unless arg.is_a?(Array)
+      args = items["args"]
+      raise ArgumentError, "Bulk arguments must be an Array of Arrays: [[1], [2]]" unless args.is_a?(Array) && args.all?(Array)
+      return [] if args.empty? # no jobs to push
 
       at = items.delete("at")
       raise ArgumentError, "Job 'at' must be a Numeric or an Array of Numeric timestamps" if at && (Array(at).empty? || !Array(at).all?(Numeric))
+      raise ArgumentError, "Job 'at' Array must have same size as 'args' Array" if at.is_a?(Array) && at.size != args.size
 
       normed = normalize_item(items)
-      payloads = items["args"].map.with_index { |args, index|
-        copy = normed.merge("args" => args, "jid" => SecureRandom.hex(12), "enqueued_at" => Time.now.to_f)
+      payloads = args.map.with_index { |job_args, index|
+        copy = normed.merge("args" => job_args, "jid" => SecureRandom.hex(12), "enqueued_at" => Time.now.to_f)
         copy["at"] = (at.is_a?(Array) ? at[index] : at) if at
 
         result = process_single(items["class"], copy)
@@ -218,16 +219,20 @@ module Sidekiq
       end
     end
 
+    def validate(item)
+      raise(ArgumentError, "Job must be a Hash with 'class' and 'args' keys: `#{item}`") unless item.is_a?(Hash) && item.key?("class") && item.key?("args")
+      raise(ArgumentError, "Job args must be an Array: `#{item}`") unless item["args"].is_a?(Array)
+      raise(ArgumentError, "Job class must be either a Class or String representation of the class name: `#{item}`") unless item["class"].is_a?(Class) || item["class"].is_a?(String)
+      raise(ArgumentError, "Job 'at' must be a Numeric timestamp: `#{item}`") if item.key?("at") && !item["at"].is_a?(Numeric)
+      raise(ArgumentError, "Job tags must be an Array: `#{item}`") if item["tags"] && !item["tags"].is_a?(Array)
+    end
+
     def normalize_item(item)
       # 6.0.0 push_bulk bug, #4321
       # TODO Remove after a while...
       item.delete("at") if item.key?("at") && item["at"].nil?
 
-      raise(ArgumentError, "Job must be a Hash with 'class' and 'args' keys: { 'class' => SomeWorker, 'args' => ['bob', 1, :foo => 'bar'] }") unless item.is_a?(Hash) && item.key?("class") && item.key?("args")
-      raise(ArgumentError, "Job args must be an Array") unless item["args"].is_a?(Array)
-      raise(ArgumentError, "Job class must be either a Class or String representation of the class name") unless item["class"].is_a?(Class) || item["class"].is_a?(String)
-      raise(ArgumentError, "Job 'at' must be a Numeric timestamp") if item.key?("at") && !item["at"].is_a?(Numeric)
-      raise(ArgumentError, "Job tags must be an Array") if item["tags"] && !item["tags"].is_a?(Array)
+      validate(item)
       # raise(ArgumentError, "Arguments must be native JSON types, see https://github.com/mperham/sidekiq/wiki/Best-Practices") unless JSON.load(JSON.dump(item['args'])) == item['args']
 
       # merge in the default sidekiq_options for the item's class and/or wrapped element
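The reworked `push_bulk` above validates the whole `"args"` value up front (it must be an Array of Arrays) and, when `"at"` is an Array, requires it to be the same length as `"args"`; `normalize_item` now delegates to the extracted `validate` helper. A hedged usage sketch (`SomeWorker` is a hypothetical `Sidekiq::Worker` class):

```ruby
# Enqueue three jobs in one round trip; each inner array is one job's args.
Sidekiq::Client.push_bulk(
  "class" => "SomeWorker",
  "args" => [[1], [2], [3]]
)

# Scheduled bulk push: an "at" Array must now match the "args" Array in size,
# otherwise an ArgumentError is raised before anything is enqueued.
now = Time.now.to_f
Sidekiq::Client.push_bulk(
  "class" => "SomeWorker",
  "args" => [[1], [2]],
  "at" => [now + 3600, now + 7200]
)
```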
data/lib/sidekiq/extensions/action_mailer.rb CHANGED
@@ -5,9 +5,10 @@ require "sidekiq/extensions/generic_proxy"
 module Sidekiq
   module Extensions
     ##
-    # Adds 'delay', 'delay_for' and `delay_until` methods to ActionMailer to offload arbitrary email
-    # delivery to Sidekiq. Example:
+    # Adds +delay+, +delay_for+ and +delay_until+ methods to ActionMailer to offload arbitrary email
+    # delivery to Sidekiq.
     #
+    # @example
     #    UserMailer.delay.send_welcome_email(new_user)
     #    UserMailer.delay_for(5.days).send_welcome_email(new_user)
     #    UserMailer.delay_until(5.days.from_now).send_welcome_email(new_user)
data/lib/sidekiq/extensions/active_record.rb CHANGED
@@ -5,10 +5,11 @@ require "sidekiq/extensions/generic_proxy"
 module Sidekiq
   module Extensions
     ##
-    # Adds 'delay', 'delay_for' and `delay_until` methods to ActiveRecord to offload instance method
-    # execution to Sidekiq. Examples:
+    # Adds +delay+, +delay_for+ and +delay_until+ methods to ActiveRecord to offload instance method
+    # execution to Sidekiq.
     #
-    #    User.recent_signups.each { |user| user.delay.mark_as_awesome }
+    # @example
+    #    User.recent_signups.each { |user| user.delay.mark_as_awesome }
     #
     # Please note, this is not recommended as this will serialize the entire
     # object to Redis. Your Sidekiq jobs should pass IDs, not entire instances.
data/lib/sidekiq/extensions/class_methods.rb CHANGED
@@ -5,11 +5,12 @@ require "sidekiq/extensions/generic_proxy"
 module Sidekiq
   module Extensions
     ##
-    # Adds 'delay', 'delay_for' and `delay_until` methods to all Classes to offload class method
-    # execution to Sidekiq. Examples:
+    # Adds `delay`, `delay_for` and `delay_until` methods to all Classes to offload class method
+    # execution to Sidekiq.
     #
-    #    User.delay.delete_inactive
-    #    Wikipedia.delay.download_changes_for(Date.today)
+    # @example
+    #    User.delay.delete_inactive
+    #    Wikipedia.delay.download_changes_for(Date.today)
     #
     class DelayedClass
       include Sidekiq::Worker
data/lib/sidekiq/fetch.rb CHANGED
@@ -25,8 +25,10 @@ module Sidekiq
     }
 
     def initialize(options)
-      @strictly_ordered_queues = !!options[:strict]
-      @queues = options[:queues].map { |q| "queue:#{q}" }
+      raise ArgumentError, "missing queue list" unless options[:queues]
+      @options = options
+      @strictly_ordered_queues = !!@options[:strict]
+      @queues = @options[:queues].map { |q| "queue:#{q}" }
       if @strictly_ordered_queues
         @queues.uniq!
         @queues << TIMEOUT
@@ -38,24 +40,7 @@ module Sidekiq
       UnitOfWork.new(*work) if work
     end
 
-    # Creating the Redis#brpop command takes into account any
-    # configured queue weights. By default Redis#brpop returns
-    # data from the first queue that has pending elements. We
-    # recreate the queue command each time we invoke Redis#brpop
-    # to honor weights and avoid queue starvation.
-    def queues_cmd
-      if @strictly_ordered_queues
-        @queues
-      else
-        queues = @queues.shuffle!.uniq
-        queues << TIMEOUT
-        queues
-      end
-    end
-
-    # By leaving this as a class method, it can be pluggable and used by the Manager actor. Making it
-    # an instance method will make it async to the Fetcher actor
-    def self.bulk_requeue(inprogress, options)
+    def bulk_requeue(inprogress, options)
       return if inprogress.empty?
 
       Sidekiq.logger.debug { "Re-queueing terminated jobs" }
@@ -76,5 +61,20 @@ module Sidekiq
     rescue => ex
       Sidekiq.logger.warn("Failed to requeue #{inprogress.size} jobs: #{ex.message}")
     end
+
+    # Creating the Redis#brpop command takes into account any
+    # configured queue weights. By default Redis#brpop returns
+    # data from the first queue that has pending elements. We
+    # recreate the queue command each time we invoke Redis#brpop
+    # to honor weights and avoid queue starvation.
+    def queues_cmd
+      if @strictly_ordered_queues
+        @queues
+      else
+        queues = @queues.shuffle!.uniq
+        queues << TIMEOUT
+        queues
+      end
+    end
   end
 end