sidekiq 6.0.4 → 6.1.1


Potentially problematic release.

Files changed (45)
  1. checksums.yaml +4 -4
  2. data/.circleci/config.yml +13 -24
  3. data/Changes.md +58 -0
  4. data/Ent-Changes.md +20 -1
  5. data/Gemfile +1 -1
  6. data/Gemfile.lock +113 -104
  7. data/Pro-Changes.md +20 -1
  8. data/README.md +2 -5
  9. data/bin/sidekiq +26 -2
  10. data/lib/sidekiq.rb +9 -7
  11. data/lib/sidekiq/api.rb +9 -6
  12. data/lib/sidekiq/cli.rb +18 -9
  13. data/lib/sidekiq/client.rb +17 -10
  14. data/lib/sidekiq/extensions/active_record.rb +3 -2
  15. data/lib/sidekiq/extensions/class_methods.rb +5 -4
  16. data/lib/sidekiq/fetch.rb +20 -20
  17. data/lib/sidekiq/job_logger.rb +1 -1
  18. data/lib/sidekiq/launcher.rb +34 -7
  19. data/lib/sidekiq/logger.rb +9 -9
  20. data/lib/sidekiq/manager.rb +3 -3
  21. data/lib/sidekiq/monitor.rb +2 -2
  22. data/lib/sidekiq/processor.rb +5 -5
  23. data/lib/sidekiq/rails.rb +16 -18
  24. data/lib/sidekiq/redis_connection.rb +18 -13
  25. data/lib/sidekiq/sd_notify.rb +149 -0
  26. data/lib/sidekiq/systemd.rb +24 -0
  27. data/lib/sidekiq/testing.rb +1 -1
  28. data/lib/sidekiq/version.rb +1 -1
  29. data/lib/sidekiq/web.rb +16 -8
  30. data/lib/sidekiq/web/application.rb +6 -8
  31. data/lib/sidekiq/web/csrf_protection.rb +153 -0
  32. data/lib/sidekiq/web/helpers.rb +4 -7
  33. data/lib/sidekiq/web/router.rb +2 -4
  34. data/lib/sidekiq/worker.rb +4 -7
  35. data/sidekiq.gemspec +2 -3
  36. data/web/assets/javascripts/application.js +25 -27
  37. data/web/assets/stylesheets/application-dark.css +132 -124
  38. data/web/assets/stylesheets/application.css +5 -0
  39. data/web/locales/fr.yml +2 -2
  40. data/web/locales/ja.yml +2 -0
  41. data/web/locales/lt.yml +83 -0
  42. data/web/locales/pl.yml +4 -4
  43. data/web/locales/vi.yml +83 -0
  44. data/web/views/layout.erb +1 -1
  45. metadata +14 -23
data/Pro-Changes.md CHANGED
@@ -2,7 +2,26 @@
 
  [Sidekiq Changes](https://github.com/mperham/sidekiq/blob/master/Changes.md) | [Sidekiq Pro Changes](https://github.com/mperham/sidekiq/blob/master/Pro-Changes.md) | [Sidekiq Enterprise Changes](https://github.com/mperham/sidekiq/blob/master/Ent-Changes.md)
 
- Please see [http://sidekiq.org/](http://sidekiq.org/) for more details and how to buy.
+ Please see [sidekiq.org](https://sidekiq.org/) for more details and how to buy.
+
+ 5.1.1
+ ---------
+
+ - Fix broken basic fetcher [#4616]
+
+ 5.1.0
+ ---------
+
+ - Remove old Statsd metrics with `WorkerName` in the name [#4377]
+ ```
+ job.WorkerName.count -> job.count with tag worker:WorkerName
+ job.WorkerName.perform -> job.perform with tag worker:WorkerName
+ job.WorkerName.failure -> job.failure with tag worker:WorkerName
+ ```
+ - Remove `concurrent-ruby` gem dependency [#4586]
+ - Update `constantize` for batch callbacks. [#4469]
+ - Add queue tag to `jobs.recovered.fetch` metric [#4594]
+ - Refactor Pro's fetch infrastructure [#4602]
 
  5.0.1
  ---------
data/README.md CHANGED
@@ -2,11 +2,8 @@ Sidekiq
  ==============
 
  [![Gem Version](https://badge.fury.io/rb/sidekiq.svg)](https://rubygems.org/gems/sidekiq)
- [![Code Climate](https://codeclimate.com/github/mperham/sidekiq.svg)](https://codeclimate.com/github/mperham/sidekiq)
- [![Test Coverage](https://codeclimate.com/github/mperham/sidekiq/badges/coverage.svg)](https://codeclimate.com/github/mperham/sidekiq/coverage)
+ [![Codecov](https://codecov.io/gh/mperham/sidekiq/branch/master/graph/badge.svg)](https://codecov.io/gh/mperham/sidekiq)
  [![Build Status](https://circleci.com/gh/mperham/sidekiq/tree/master.svg?style=svg)](https://circleci.com/gh/mperham/sidekiq/tree/master)
- [![Gitter Chat](https://badges.gitter.im/mperham/sidekiq.svg)](https://gitter.im/mperham/sidekiq)
-
 
  Simple, efficient background processing for Ruby.
 
@@ -94,4 +91,4 @@ Please see [LICENSE](https://github.com/mperham/sidekiq/blob/master/LICENSE) for
  Author
  -----------------
 
- Mike Perham, [@mperham@mastodon.xyz](https://mastodon.xyz/@mperham) / [@sidekiq](https://twitter.com/sidekiq), [https://www.mikeperham.com](https://www.mikeperham.com) / [https://www.contribsys.com](https://www.contribsys.com)
+ Mike Perham, [@getajobmike](https://twitter.com/getajobmike) / [@sidekiq](https://twitter.com/sidekiq), [https://www.mikeperham.com](https://www.mikeperham.com) / [https://www.contribsys.com](https://www.contribsys.com)
data/bin/sidekiq CHANGED
@@ -6,13 +6,37 @@ $TESTING = false
 
  require_relative '../lib/sidekiq/cli'
 
+ def integrate_with_systemd
+ return unless ENV["NOTIFY_SOCKET"]
+
+ Sidekiq.configure_server do |config|
+ Sidekiq.logger.info "Enabling systemd notification integration"
+ require "sidekiq/sd_notify"
+ config.on(:startup) do
+ Sidekiq::SdNotify.ready
+ end
+ config.on(:shutdown) do
+ Sidekiq::SdNotify.stopping
+ end
+ Sidekiq.start_watchdog if Sidekiq::SdNotify.watchdog?
+ end
+ end
+
  begin
  cli = Sidekiq::CLI.instance
  cli.parse
+
+ integrate_with_systemd
+
  cli.run
  rescue => e
  raise e if $DEBUG
- STDERR.puts e.message
- STDERR.puts e.backtrace.join("\n")
+ if Sidekiq.error_handlers.length == 0
+ STDERR.puts e.message
+ STDERR.puts e.backtrace.join("\n")
+ else
+ cli.handle_exception e
+ end
+
  exit 1
  end
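
The new rescue branch in `bin/sidekiq` only prints to STDERR when no error handlers are registered; otherwise the exception is routed through `cli.handle_exception`. A minimal sketch of registering a handler from application code (the reporting call is a placeholder, not part of this release):

```ruby
# Hypothetical app-side initializer; Sidekiq.error_handlers is the documented hook.
Sidekiq.configure_server do |config|
  config.error_handlers << proc do |ex, ctx|
    # Swap in your error-reporting service of choice.
    warn "Sidekiq failure: #{ex.class}: #{ex.message} (context: #{ctx.inspect})"
  end
end
```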
data/lib/sidekiq.rb CHANGED
@@ -20,6 +20,7 @@ module Sidekiq
  labels: [],
  concurrency: 10,
  require: ".",
+ strict: true,
  environment: nil,
  timeout: 25,
  poll_interval_average: nil,
@@ -30,16 +31,16 @@ module Sidekiq
  startup: [],
  quiet: [],
  shutdown: [],
- heartbeat: [],
+ heartbeat: []
  },
  dead_max_jobs: 10_000,
  dead_timeout_in_seconds: 180 * 24 * 60 * 60, # 6 months
- reloader: proc { |&block| block.call },
+ reloader: proc { |&block| block.call }
  }
 
  DEFAULT_WORKER_OPTIONS = {
  "retry" => true,
- "queue" => "default",
+ "queue" => "default"
  }
 
  FAKE_INFO = {
@@ -47,7 +48,7 @@ module Sidekiq
  "uptime_in_days" => "9999",
  "connected_clients" => "9999",
  "used_memory_human" => "9P",
- "used_memory_peak_human" => "9P",
+ "used_memory_peak_human" => "9P"
  }
 
  def self.❨╯°□°❩╯︵┻━┻
@@ -95,10 +96,11 @@ module Sidekiq
  retryable = true
  begin
  yield conn
- rescue Redis::CommandError => ex
+ rescue Redis::BaseError => ex
  # 2550 Failover can cause the server to become a replica, need
  # to disconnect and reopen the socket to get back to the primary.
- if retryable && ex.message =~ /READONLY/
+ # 4495 Use the same logic if we have a "Not enough replicas" error from the primary
+ if retryable && ex.message =~ /READONLY|NOREPLICAS/
  conn.disconnect!
  retryable = false
  retry
@@ -154,7 +156,7 @@ module Sidekiq
 
  def self.default_worker_options=(hash)
  # stringify
- @default_worker_options = default_worker_options.merge(Hash[hash.map { |k, v| [k.to_s, v] }])
+ @default_worker_options = default_worker_options.merge(hash.transform_keys(&:to_s))
  end
 
  def self.default_worker_options
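
`Sidekiq.default_worker_options=` now stringifies keys with `Hash#transform_keys`, so symbol and string keys are interchangeable. A short usage sketch (the option values are illustrative):

```ruby
require "sidekiq"

# Symbol keys are stringified before merging into the built-in defaults.
Sidekiq.default_worker_options = { retry: 3, backtrace: true }

Sidekiq.default_worker_options["retry"]     # => 3
Sidekiq.default_worker_options["backtrace"] # => true
Sidekiq.default_worker_options["queue"]     # => "default" (built-in default preserved)
```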
data/lib/sidekiq/api.rb CHANGED
@@ -105,7 +105,7 @@ module Sidekiq
 
  default_queue_latency: default_queue_latency,
  workers_size: workers_size,
- enqueued: enqueued,
+ enqueued: enqueued
  }
  end
 
@@ -273,7 +273,7 @@ module Sidekiq
  def clear
  Sidekiq.redis do |conn|
  conn.multi do
- conn.del(@rname)
+ conn.unlink(@rname)
  conn.srem("queues", name)
  end
  end
@@ -562,7 +562,7 @@ module Sidekiq
 
  def clear
  Sidekiq.redis do |conn|
- conn.del(name)
+ conn.unlink(name)
  end
  end
  alias_method :💣, :clear
@@ -916,12 +916,13 @@ module Sidekiq
  class Workers
  include Enumerable
 
- def each
+ def each(&block)
+ results = []
  Sidekiq.redis do |conn|
  procs = conn.sscan_each("processes").to_a
  procs.sort.each do |key|
  valid, workers = conn.pipelined {
- conn.exists(key)
+ conn.exists?(key)
  conn.hgetall("#{key}:workers")
  }
  next unless valid
@@ -930,10 +931,12 @@ module Sidekiq
  p = hsh["payload"]
  # avoid breaking API, this is a side effect of the JSON optimization in #4316
  hsh["payload"] = Sidekiq.load_json(p) if p.is_a?(String)
- yield key, tid, hsh
+ results << [key, tid, hsh]
  end
  end
  end
+
+ results.sort_by { |(_, _, hsh)| hsh["run_at"] }.each(&block)
  end
 
  # Note that #size is only as accurate as Sidekiq's heartbeat,
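
`Sidekiq::Workers#each` now buffers the entries and yields them sorted by each job's `run_at`; iteration from the caller's side is unchanged. A hedged sketch of reading the busy set:

```ruby
require "sidekiq/api"

# Yields [process_identity, thread_id, work_hash]; work_hash carries
# "queue", "payload" (the job hash) and "run_at" (epoch seconds).
Sidekiq::Workers.new.each do |process_id, thread_id, work|
  puts "#{process_id}/#{thread_id} on #{work["queue"]} since #{Time.at(work["run_at"])}"
end
```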
data/lib/sidekiq/cli.rb CHANGED
@@ -33,8 +33,9 @@ module Sidekiq
  # Code within this method is not tested because it alters
  # global process state irreversibly. PRs which improve the
  # test coverage of Sidekiq::CLI are welcomed.
- def run
- boot_system
+ def run(boot_app: true)
+ boot_application if boot_app
+
  if environment == "development" && $stdout.tty? && Sidekiq.log_formatter.is_a?(Sidekiq::Logger::Formatters::Pretty)
  print_banner
  end
@@ -54,7 +55,7 @@ module Sidekiq
 
  logger.info "Running in #{RUBY_DESCRIPTION}"
  logger.info Sidekiq::LICENSE
- logger.info "Upgrade to Sidekiq Pro for more features and support: http://sidekiq.org" unless defined?(::Sidekiq::Pro)
+ logger.info "Upgrade to Sidekiq Pro for more features and support: https://sidekiq.org" unless defined?(::Sidekiq::Pro)
 
  # touch the connection pool so it is created before we
  # fire startup and start multithreading.
@@ -163,7 +164,7 @@ module Sidekiq
  Sidekiq.logger.warn "<no backtrace available>"
  end
  end
- },
+ }
  }
  UNHANDLED_SIGNAL_HANDLER = ->(cli) { Sidekiq.logger.info "No signal handler registered, ignoring" }
  SIGNAL_HANDLERS.default = UNHANDLED_SIGNAL_HANDLER
@@ -182,7 +183,11 @@ module Sidekiq
  end
 
  def set_environment(cli_env)
- @environment = cli_env || ENV["RAILS_ENV"] || ENV["RACK_ENV"] || "development"
+ # See #984 for discussion.
+ # APP_ENV is now the preferred ENV term since it is not tech-specific.
+ # Both Sinatra 2.0+ and Sidekiq support this term.
+ # RAILS_ENV and RACK_ENV are there for legacy support.
+ @environment = cli_env || ENV["APP_ENV"] || ENV["RAILS_ENV"] || ENV["RACK_ENV"] || "development"
  end
 
  def symbolize_keys_deep!(hash)
@@ -224,8 +229,7 @@ module Sidekiq
  opts = parse_config(opts[:config_file]).merge(opts) if opts[:config_file]
 
  # set defaults
- opts[:queues] = ["default"] if opts[:queues].nil? || opts[:queues].empty?
- opts[:strict] = true if opts[:strict].nil?
+ opts[:queues] = ["default"] if opts[:queues].nil?
  opts[:concurrency] = Integer(ENV["RAILS_MAX_THREADS"]) if opts[:concurrency].nil? && ENV["RAILS_MAX_THREADS"]
 
  # merge with defaults
@@ -236,7 +240,7 @@ module Sidekiq
  Sidekiq.options
  end
 
- def boot_system
+ def boot_application
  ENV["RACK_ENV"] = ENV["RAILS_ENV"] = environment
 
  if File.directory?(options[:require])
@@ -364,6 +368,8 @@ module Sidekiq
  end
 
  opts = opts.merge(opts.delete(environment.to_sym) || {})
+ opts.delete(:strict)
+
  parse_queues(opts, opts.delete(:queues) || [])
 
  opts
@@ -375,13 +381,16 @@ module Sidekiq
 
  def parse_queue(opts, queue, weight = nil)
  opts[:queues] ||= []
+ opts[:strict] = true if opts[:strict].nil?
  raise ArgumentError, "queues: #{queue} cannot be defined twice" if opts[:queues].include?(queue)
  [weight.to_i, 1].max.times { opts[:queues] << queue }
  opts[:strict] = false if weight.to_i > 0
  end
 
  def rails_app?
- defined?(::Rails)
+ defined?(::Rails) && ::Rails.respond_to?(:application)
  end
  end
  end
+
+ require "sidekiq/systemd"
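
Strict queue ordering is now seeded inside `parse_queue`: it defaults to true and flips to false as soon as any queue carries an explicit weight, and each queue name is pushed once per weight unit. A rough illustration of the options produced for `sidekiq -q critical,2 -q default` (not a public API, just the parsing outcome implied by this diff):

```ruby
opts = { queues: [] }

# "critical,2": pushed twice; an explicit weight disables strict ordering
opts[:strict] = true if opts[:strict].nil?
2.times { opts[:queues] << "critical" }
opts[:strict] = false

# "default": no weight, pushed once
opts[:queues] << "default"

opts # => {:queues=>["critical", "critical", "default"], :strict=>false}
```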
data/lib/sidekiq/client.rb CHANGED
@@ -90,16 +90,17 @@ module Sidekiq
  # Returns an array of the of pushed jobs' jids. The number of jobs pushed can be less
  # than the number given if the middleware stopped processing for one or more jobs.
  def push_bulk(items)
- arg = items["args"].first
- return [] unless arg # no jobs to push
- raise ArgumentError, "Bulk arguments must be an Array of Arrays: [[1], [2]]" unless arg.is_a?(Array)
+ args = items["args"]
+ raise ArgumentError, "Bulk arguments must be an Array of Arrays: [[1], [2]]" unless args.is_a?(Array) && args.all?(Array)
+ return [] if args.empty? # no jobs to push
 
  at = items.delete("at")
  raise ArgumentError, "Job 'at' must be a Numeric or an Array of Numeric timestamps" if at && (Array(at).empty? || !Array(at).all?(Numeric))
+ raise ArgumentError, "Job 'at' Array must have same size as 'args' Array" if at.is_a?(Array) && at.size != args.size
 
  normed = normalize_item(items)
- payloads = items["args"].map.with_index { |args, index|
- copy = normed.merge("args" => args, "jid" => SecureRandom.hex(12), "enqueued_at" => Time.now.to_f)
+ payloads = args.map.with_index { |job_args, index|
+ copy = normed.merge("args" => job_args, "jid" => SecureRandom.hex(12), "enqueued_at" => Time.now.to_f)
  copy["at"] = (at.is_a?(Array) ? at[index] : at) if at
 
  result = process_single(items["class"], copy)
@@ -218,16 +219,20 @@ module Sidekiq
  end
  end
 
+ def validate(item)
+ raise(ArgumentError, "Job must be a Hash with 'class' and 'args' keys: `#{item}`") unless item.is_a?(Hash) && item.key?("class") && item.key?("args")
+ raise(ArgumentError, "Job args must be an Array: `#{item}`") unless item["args"].is_a?(Array)
+ raise(ArgumentError, "Job class must be either a Class or String representation of the class name: `#{item}`") unless item["class"].is_a?(Class) || item["class"].is_a?(String)
+ raise(ArgumentError, "Job 'at' must be a Numeric timestamp: `#{item}`") if item.key?("at") && !item["at"].is_a?(Numeric)
+ raise(ArgumentError, "Job tags must be an Array: `#{item}`") if item["tags"] && !item["tags"].is_a?(Array)
+ end
+
  def normalize_item(item)
  # 6.0.0 push_bulk bug, #4321
  # TODO Remove after a while...
  item.delete("at") if item.key?("at") && item["at"].nil?
 
- raise(ArgumentError, "Job must be a Hash with 'class' and 'args' keys: { 'class' => SomeWorker, 'args' => ['bob', 1, :foo => 'bar'] }") unless item.is_a?(Hash) && item.key?("class") && item.key?("args")
- raise(ArgumentError, "Job args must be an Array") unless item["args"].is_a?(Array)
- raise(ArgumentError, "Job class must be either a Class or String representation of the class name") unless item["class"].is_a?(Class) || item["class"].is_a?(String)
- raise(ArgumentError, "Job 'at' must be a Numeric timestamp") if item.key?("at") && !item["at"].is_a?(Numeric)
- raise(ArgumentError, "Job tags must be an Array") if item["tags"] && !item["tags"].is_a?(Array)
+ validate(item)
  # raise(ArgumentError, "Arguments must be native JSON types, see https://github.com/mperham/sidekiq/wiki/Best-Practices") unless JSON.load(JSON.dump(item['args'])) == item['args']
 
  # merge in the default sidekiq_options for the item's class and/or wrapped element
@@ -236,6 +241,8 @@ module Sidekiq
  defaults = defaults.merge(item["wrapped"].get_sidekiq_options) if item["wrapped"].respond_to?("get_sidekiq_options")
  item = defaults.merge(item)
 
+ raise(ArgumentError, "Job must include a valid queue name") if item["queue"].nil? || item["queue"] == ""
+
  item["class"] = item["class"].to_s
  item["queue"] = item["queue"].to_s
  item["jid"] ||= SecureRandom.hex(12)
data/lib/sidekiq/extensions/active_record.rb CHANGED
@@ -6,9 +6,10 @@ module Sidekiq
  module Extensions
  ##
  # Adds 'delay', 'delay_for' and `delay_until` methods to ActiveRecord to offload instance method
- # execution to Sidekiq. Examples:
+ # execution to Sidekiq.
  #
- # User.recent_signups.each { |user| user.delay.mark_as_awesome }
+ # @example
+ # User.recent_signups.each { |user| user.delay.mark_as_awesome }
  #
  # Please note, this is not recommended as this will serialize the entire
  # object to Redis. Your Sidekiq jobs should pass IDs, not entire instances.
data/lib/sidekiq/extensions/class_methods.rb CHANGED
@@ -5,11 +5,12 @@ require "sidekiq/extensions/generic_proxy"
  module Sidekiq
  module Extensions
  ##
- # Adds 'delay', 'delay_for' and `delay_until` methods to all Classes to offload class method
- # execution to Sidekiq. Examples:
+ # Adds `delay`, `delay_for` and `delay_until` methods to all Classes to offload class method
+ # execution to Sidekiq.
  #
- # User.delay.delete_inactive
- # Wikipedia.delay.download_changes_for(Date.today)
+ # @example
+ # User.delay.delete_inactive
+ # Wikipedia.delay.download_changes_for(Date.today)
  #
  class DelayedClass
  include Sidekiq::Worker
data/lib/sidekiq/fetch.rb CHANGED
@@ -25,8 +25,10 @@ module Sidekiq
  }
 
  def initialize(options)
- @strictly_ordered_queues = !!options[:strict]
- @queues = options[:queues].map { |q| "queue:#{q}" }
+ raise ArgumentError, "missing queue list" unless options[:queues]
+ @options = options
+ @strictly_ordered_queues = !!@options[:strict]
+ @queues = @options[:queues].map { |q| "queue:#{q}" }
  if @strictly_ordered_queues
  @queues.uniq!
  @queues << TIMEOUT
@@ -38,24 +40,7 @@ module Sidekiq
  UnitOfWork.new(*work) if work
  end
 
- # Creating the Redis#brpop command takes into account any
- # configured queue weights. By default Redis#brpop returns
- # data from the first queue that has pending elements. We
- # recreate the queue command each time we invoke Redis#brpop
- # to honor weights and avoid queue starvation.
- def queues_cmd
- if @strictly_ordered_queues
- @queues
- else
- queues = @queues.shuffle!.uniq
- queues << TIMEOUT
- queues
- end
- end
-
- # By leaving this as a class method, it can be pluggable and used by the Manager actor. Making it
- # an instance method will make it async to the Fetcher actor
- def self.bulk_requeue(inprogress, options)
+ def bulk_requeue(inprogress, options)
  return if inprogress.empty?
 
  Sidekiq.logger.debug { "Re-queueing terminated jobs" }
@@ -76,5 +61,20 @@ module Sidekiq
  rescue => ex
  Sidekiq.logger.warn("Failed to requeue #{inprogress.size} jobs: #{ex.message}")
  end
+
+ # Creating the Redis#brpop command takes into account any
+ # configured queue weights. By default Redis#brpop returns
+ # data from the first queue that has pending elements. We
+ # recreate the queue command each time we invoke Redis#brpop
+ # to honor weights and avoid queue starvation.
+ def queues_cmd
+ if @strictly_ordered_queues
+ @queues
+ else
+ queues = @queues.shuffle!.uniq
+ queues << TIMEOUT
+ queues
+ end
+ end
  end
  end
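
`BasicFetch` is now constructed with the server options and `bulk_requeue` is an instance method, so a replacement fetch strategy is supplied as an object rather than a class (the launcher below falls back to `BasicFetch.new(options)` when `options[:fetch]` is unset). A rough sketch of plugging one in, assuming only the duck type visible in this diff (`MyFetch` is hypothetical):

```ruby
class MyFetch
  def initialize(options)
    @queues = options[:queues].map { |q| "queue:#{q}" }
  end

  # Return a unit of work or nil; BasicFetch::UnitOfWork wraps [queue, job].
  def retrieve_work
    work = Sidekiq.redis { |conn| conn.brpop(*@queues, 2) }
    Sidekiq::BasicFetch::UnitOfWork.new(*work) if work
  end

  # Invoked on hard shutdown with the jobs still in progress.
  def bulk_requeue(inprogress, options)
    inprogress.each(&:requeue)
  end
end

Sidekiq.configure_server do |config|
  config.options[:fetch] = MyFetch.new(config.options)
end
```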
data/lib/sidekiq/job_logger.rb CHANGED
@@ -39,7 +39,7 @@ module Sidekiq
  # attribute to expose the underlying thing.
  h = {
  class: job_hash["wrapped"] || job_hash["class"],
- jid: job_hash["jid"],
+ jid: job_hash["jid"]
  }
  h[:bid] = job_hash["bid"] if job_hash["bid"]
  h[:tags] = job_hash["tags"] if job_hash["tags"]
data/lib/sidekiq/launcher.rb CHANGED
@@ -16,12 +16,13 @@ module Sidekiq
  proc { Sidekiq::VERSION },
  proc { |me, data| data["tag"] },
  proc { |me, data| "[#{Processor::WORKER_STATE.size} of #{data["concurrency"]} busy]" },
- proc { |me, data| "stopping" if me.stopping? },
+ proc { |me, data| "stopping" if me.stopping? }
  ]
 
  attr_accessor :manager, :poller, :fetcher
 
  def initialize(options)
+ options[:fetch] ||= BasicFetch.new(options)
  @manager = Sidekiq::Manager.new(options)
  @poller = Sidekiq::Scheduled::Poller.new
  @done = false
@@ -56,7 +57,7 @@ module Sidekiq
 
  # Requeue everything in case there was a worker who grabbed work while stopped
  # This call is a no-op in Sidekiq but necessary for Sidekiq Pro.
- strategy = (@options[:fetch] || Sidekiq::BasicFetch)
+ strategy = @options[:fetch]
  strategy.bulk_requeue([], @options)
 
  clear_heartbeat
@@ -83,7 +84,7 @@ module Sidekiq
  Sidekiq.redis do |conn|
  conn.pipelined do
  conn.srem("processes", identity)
- conn.del("#{identity}:workers")
+ conn.unlink("#{identity}:workers")
  end
  end
  rescue
@@ -96,6 +97,32 @@ module Sidekiq
 
  end
 
+ def self.flush_stats
+ fails = Processor::FAILURE.reset
+ procd = Processor::PROCESSED.reset
+ return if fails + procd == 0
+
+ nowdate = Time.now.utc.strftime("%Y-%m-%d")
+ begin
+ Sidekiq.redis do |conn|
+ conn.pipelined do
+ conn.incrby("stat:processed", procd)
+ conn.incrby("stat:processed:#{nowdate}", procd)
+ conn.expire("stat:processed:#{nowdate}", STATS_TTL)
+
+ conn.incrby("stat:failed", fails)
+ conn.incrby("stat:failed:#{nowdate}", fails)
+ conn.expire("stat:failed:#{nowdate}", STATS_TTL)
+ end
+ end
+ rescue => ex
+ # we're exiting the process, things might be shut down so don't
+ # try to handle the exception
+ Sidekiq.logger.warn("Unable to flush stats: #{ex}")
+ end
+ end
+ at_exit(&method(:flush_stats))
+
  def ❤
  key = identity
  fails = procd = 0
@@ -118,7 +145,7 @@ module Sidekiq
  conn.incrby("stat:failed:#{nowdate}", fails)
  conn.expire("stat:failed:#{nowdate}", STATS_TTL)
 
- conn.del(workers_key)
+ conn.unlink(workers_key)
  curstate.each_pair do |tid, hash|
  conn.hset(workers_key, tid, Sidekiq.dump_json(hash))
  end
@@ -131,7 +158,7 @@ module Sidekiq
  _, exists, _, _, msg = Sidekiq.redis { |conn|
  conn.multi {
  conn.sadd("processes", key)
- conn.exists(key)
+ conn.exists?(key)
  conn.hmset(key, "info", to_json, "busy", curstate.size, "beat", Time.now.to_f, "quiet", @done)
  conn.expire(key, 60)
  conn.rpop("#{key}-signals")
@@ -146,7 +173,7 @@ module Sidekiq
  ::Process.kill(msg, ::Process.pid)
  rescue => e
  # ignore all redis/network issues
- logger.error("heartbeat: #{e.message}")
+ logger.error("heartbeat: #{e}")
  # don't lose the counts if there was a network issue
  Processor::PROCESSED.incr(procd)
  Processor::FAILURE.incr(fails)
@@ -163,7 +190,7 @@ module Sidekiq
  "concurrency" => @options[:concurrency],
  "queues" => @options[:queues].uniq,
  "labels" => @options[:labels],
- "identity" => identity,
+ "identity" => identity
  }
  end
  end