sidekiq 6.0.0 → 6.1.2
This version of sidekiq has been flagged as potentially problematic.
- checksums.yaml +4 -4
- data/.github/ISSUE_TEMPLATE/bug_report.md +20 -0
- data/.github/workflows/ci.yml +41 -0
- data/6.0-Upgrade.md +3 -1
- data/Changes.md +163 -1
- data/Ent-Changes.md +33 -2
- data/Gemfile +2 -2
- data/Gemfile.lock +109 -113
- data/Pro-Changes.md +39 -2
- data/README.md +4 -6
- data/bin/sidekiq +26 -2
- data/bin/sidekiqload +8 -4
- data/bin/sidekiqmon +4 -5
- data/lib/generators/sidekiq/worker_generator.rb +11 -1
- data/lib/sidekiq/api.rb +130 -94
- data/lib/sidekiq/cli.rb +40 -24
- data/lib/sidekiq/client.rb +33 -12
- data/lib/sidekiq/extensions/action_mailer.rb +3 -2
- data/lib/sidekiq/extensions/active_record.rb +4 -3
- data/lib/sidekiq/extensions/class_methods.rb +5 -4
- data/lib/sidekiq/fetch.rb +26 -26
- data/lib/sidekiq/job_logger.rb +12 -4
- data/lib/sidekiq/job_retry.rb +23 -10
- data/lib/sidekiq/launcher.rb +35 -10
- data/lib/sidekiq/logger.rb +108 -12
- data/lib/sidekiq/manager.rb +4 -4
- data/lib/sidekiq/middleware/chain.rb +12 -3
- data/lib/sidekiq/monitor.rb +3 -18
- data/lib/sidekiq/paginator.rb +7 -2
- data/lib/sidekiq/processor.rb +22 -24
- data/lib/sidekiq/rails.rb +16 -18
- data/lib/sidekiq/redis_connection.rb +21 -13
- data/lib/sidekiq/scheduled.rb +13 -12
- data/lib/sidekiq/sd_notify.rb +149 -0
- data/lib/sidekiq/systemd.rb +24 -0
- data/lib/sidekiq/testing.rb +13 -1
- data/lib/sidekiq/util.rb +0 -2
- data/lib/sidekiq/version.rb +1 -1
- data/lib/sidekiq/web/application.rb +23 -24
- data/lib/sidekiq/web/csrf_protection.rb +158 -0
- data/lib/sidekiq/web/helpers.rb +25 -16
- data/lib/sidekiq/web/router.rb +2 -4
- data/lib/sidekiq/web.rb +16 -8
- data/lib/sidekiq/worker.rb +8 -11
- data/lib/sidekiq.rb +22 -8
- data/sidekiq.gemspec +3 -4
- data/web/assets/javascripts/application.js +25 -27
- data/web/assets/javascripts/dashboard.js +2 -2
- data/web/assets/stylesheets/application-dark.css +143 -0
- data/web/assets/stylesheets/application.css +16 -6
- data/web/locales/de.yml +14 -2
- data/web/locales/en.yml +2 -0
- data/web/locales/fr.yml +2 -2
- data/web/locales/ja.yml +2 -0
- data/web/locales/lt.yml +83 -0
- data/web/locales/pl.yml +4 -4
- data/web/locales/ru.yml +4 -0
- data/web/locales/vi.yml +83 -0
- data/web/views/_job_info.erb +2 -1
- data/web/views/busy.erb +6 -3
- data/web/views/dead.erb +2 -2
- data/web/views/layout.erb +1 -0
- data/web/views/morgue.erb +5 -2
- data/web/views/queue.erb +10 -1
- data/web/views/queues.erb +9 -1
- data/web/views/retries.erb +5 -2
- data/web/views/retry.erb +2 -2
- data/web/views/scheduled.erb +5 -2
- metadata +21 -29
- data/.circleci/config.yml +0 -61
- data/.github/issue_template.md +0 -11
data/Pro-Changes.md
CHANGED
@@ -2,14 +2,51 @@

 [Sidekiq Changes](https://github.com/mperham/sidekiq/blob/master/Changes.md) | [Sidekiq Pro Changes](https://github.com/mperham/sidekiq/blob/master/Pro-Changes.md) | [Sidekiq Enterprise Changes](https://github.com/mperham/sidekiq/blob/master/Ent-Changes.md)

-Please see [
+Please see [sidekiq.org](https://sidekiq.org/) for more details and how to buy.
+
+5.2.0
+---------
+
+- The Sidekiq Pro and Enterprise gem servers now `bundle install` much faster with **Bundler 2.2+** [#4158]
+- Fix issue with reliable push and multiple shards [#4669]
+- Fix Pro memory leak due to fetch refactoring in Sidekiq 6.1 [#4652]
+- Gracefully handle poison pill jobs [#4633]
+- Remove support for multi-shard batches [#4642]
+- Rename `Sidekiq::Rack::BatchStatus` to `Sidekiq::Pro::BatchStatus` [#4655]
+
+5.1.1
+---------
+
+- Fix broken basic fetcher [#4616]
+
+5.1.0
+---------
+
+- Remove old Statsd metrics with `WorkerName` in the name [#4377]
+```
+job.WorkerName.count -> job.count with tag worker:WorkerName
+job.WorkerName.perform -> job.perform with tag worker:WorkerName
+job.WorkerName.failure -> job.failure with tag worker:WorkerName
+```
+- Remove `concurrent-ruby` gem dependency [#4586]
+- Update `constantize` for batch callbacks. [#4469]
+- Add queue tag to `jobs.recovered.fetch` metric [#4594]
+- Refactor Pro's fetch infrastructure [#4602]
+
+5.0.1
+---------
+
+- Rejigger batch failures UI to add direct links to retries and scheduled jobs [#4209]
+- Delete batch data with `UNLINK` [#4155]
+- Fix bug where a scheduled job can lose its scheduled time when using reliable push [#4267]
+- Sidekiq::JobSet#scan and #find_job APIs have been promoted to Sidekiq OSS. [#4259]

 5.0.0
 ---------

 - There is no significant migration from Sidekiq Pro 4.0 to 5.0
   but make sure you read the [update notes for Sidekiq
-  6.0](/mperham/sidekiq/blob/master/6.0-Upgrade.md).
+  6.0](https://github.com/mperham/sidekiq/blob/master/6.0-Upgrade.md).
 - Removed various deprecated APIs and associated warnings.
 - **BREAKING CHANGE** Remove the `Sidekiq::Batch::Status#dead_jobs` API in favor of
   `Sidekiq::Batch::Status#dead_jids`. [#4217]
data/README.md
CHANGED
@@ -2,10 +2,7 @@ Sidekiq
 ==============

 [![Gem Version](https://badge.fury.io/rb/sidekiq.svg)](https://rubygems.org/gems/sidekiq)
-
-[![Build Status](https://circleci.com/gh/mperham/sidekiq/tree/master.svg?style=svg)](https://circleci.com/gh/mperham/sidekiq/tree/master)
-[![Gitter Chat](https://badges.gitter.im/mperham/sidekiq.svg)](https://gitter.im/mperham/sidekiq)
-
+![Build](https://github.com/mperham/sidekiq/workflows/CI/badge.svg)

 Simple, efficient background processing for Ruby.

@@ -18,7 +15,8 @@ Performance

 Version | Latency | Garbage created for 10k jobs | Time to process 100k jobs | Throughput | Ruby
 -----------------|------|---------|---------|------------------------|-----
-Sidekiq 6.0.
+Sidekiq 6.0.2 | 3 ms | 156 MB | 14.0 sec| **7100 jobs/sec** | MRI 2.6.3
+Sidekiq 6.0.0 | 3 ms | 156 MB | 19 sec | 5200 jobs/sec | MRI 2.6.3
 Sidekiq 4.0.0 | 10 ms | 151 MB | 22 sec | 4500 jobs/sec |
 Sidekiq 3.5.1 | 22 ms | 1257 MB | 125 sec | 800 jobs/sec |
 Resque 1.25.2 | - | - | 420 sec | 240 jobs/sec |
@@ -92,4 +90,4 @@ Please see [LICENSE](https://github.com/mperham/sidekiq/blob/master/LICENSE) for
 Author
 -----------------

-Mike Perham, [@
+Mike Perham, [@getajobmike](https://twitter.com/getajobmike) / [@sidekiq](https://twitter.com/sidekiq), [https://www.mikeperham.com](https://www.mikeperham.com) / [https://www.contribsys.com](https://www.contribsys.com)
data/bin/sidekiq
CHANGED
@@ -6,13 +6,37 @@ $TESTING = false

 require_relative '../lib/sidekiq/cli'

+def integrate_with_systemd
+  return unless ENV["NOTIFY_SOCKET"]
+
+  Sidekiq.configure_server do |config|
+    Sidekiq.logger.info "Enabling systemd notification integration"
+    require "sidekiq/sd_notify"
+    config.on(:startup) do
+      Sidekiq::SdNotify.ready
+    end
+    config.on(:shutdown) do
+      Sidekiq::SdNotify.stopping
+    end
+    Sidekiq.start_watchdog if Sidekiq::SdNotify.watchdog?
+  end
+end
+
 begin
   cli = Sidekiq::CLI.instance
   cli.parse
+
+  integrate_with_systemd
+
   cli.run
 rescue => e
   raise e if $DEBUG
-
-
+  if Sidekiq.error_handlers.length == 0
+    STDERR.puts e.message
+    STDERR.puts e.backtrace.join("\n")
+  else
+    cli.handle_exception e
+  end
+
   exit 1
 end
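The new `integrate_with_systemd` hook relies on `sidekiq/sd_notify` (vendored in this release as `data/lib/sidekiq/sd_notify.rb`). The underlying protocol is simple: write a datagram such as `READY=1` or `STOPPING=1` to the unix socket named by `NOTIFY_SOCKET`. A minimal sketch, using a throwaway socket in place of systemd's (the `sd_notify` helper below is illustrative, not Sidekiq's actual API):

```ruby
require "socket"
require "tmpdir"

# Write a state datagram to the socket systemd names in NOTIFY_SOCKET.
def sd_notify(state)
  path = ENV["NOTIFY_SOCKET"]
  return unless path

  Addrinfo.unix(path, :DGRAM).connect do |sock|
    sock.write(state)
  end
end

# Demo: a throwaway datagram socket stands in for systemd's listener.
path = File.join(Dir.mktmpdir, "notify.sock")
server = Socket.new(:UNIX, :DGRAM)
server.bind(Addrinfo.unix(path))
ENV["NOTIFY_SOCKET"] = path

sd_notify("READY=1")
message, = server.recvfrom(64)
puts message  # → READY=1
```

With `Type=notify` in the unit file, systemd holds the service in "activating" state until this `READY=1` datagram arrives, which is why Sidekiq sends it from the `:startup` lifecycle event.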
data/bin/sidekiqload
CHANGED
@@ -5,7 +5,8 @@
 $TESTING = false

 #require 'ruby-prof'
-
+require 'bundler/setup'
+Bundler.require(:default, :load_test)

 require_relative '../lib/sidekiq/cli'
 require_relative '../lib/sidekiq/launcher'
@@ -102,17 +103,20 @@ iter.times do
 end
 Sidekiq.logger.error "Created #{count*iter} jobs"

+start = Time.now
+
 Monitoring = Thread.new do
   watchdog("monitor thread") do
     while true
-      sleep 0.
+      sleep 0.2
       qsize = Sidekiq.redis do |conn|
         conn.llen "queue:default"
       end
       total = qsize
-      Sidekiq.logger.error("RSS: #{Process.rss} Pending: #{total}")
+      #Sidekiq.logger.error("RSS: #{Process.rss} Pending: #{total}")
       if total == 0
-        Sidekiq.logger.error("Done,
+        Sidekiq.logger.error("Done, #{iter * count} jobs in #{Time.now - start} sec")
+        Sidekiq.logger.error("Now here's the latency for three jobs")

         LoadWorker.perform_async(1, Time.now.to_f)
         LoadWorker.perform_async(2, Time.now.to_f)
data/lib/generators/sidekiq/worker_generator.rb
CHANGED
@@ -16,7 +16,9 @@ module Sidekiq
       end

       def create_test_file
-
+        return unless test_framework
+
         if test_framework == :rspec
           create_worker_spec
         else
           create_worker_test
@@ -42,6 +44,14 @@ module Sidekiq
         )
         template "worker_test.rb.erb", template_file
       end
+
+      def file_name
+        @_file_name ||= super.sub(/_?worker\z/i, "")
+      end
+
+      def test_framework
+        ::Rails.application.config.generators.options[:rails][:test_framework]
+      end
     end
   end
 end
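The `file_name` override above keeps the generator from producing doubled names like `hard_worker_worker.rb`. A quick sketch of what the added regexp does (the sample names are illustrative):

```ruby
# sub(/_?worker\z/i, "") strips a trailing "worker" (plus an optional
# leading underscore) from the generator's name, case-insensitively.
names = %w[HardWorker payments_worker audit]
stripped = names.map { |n| n.sub(/_?worker\z/i, "") }
puts stripped.inspect  # → ["Hard", "payments", "audit"]
```

A name without the suffix (`audit`) passes through unchanged, so `rails g sidekiq:worker Audit` and `rails g sidekiq:worker AuditWorker` generate the same file.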
data/lib/sidekiq/api.rb
CHANGED
@@ -2,23 +2,11 @@

 require "sidekiq"

-
-
-    def sscan(conn, key)
-      cursor = "0"
-      result = []
-      loop do
-        cursor, values = conn.sscan(key, cursor)
-        result.push(*values)
-        break if cursor == "0"
-      end
-      result
-    end
-  end
+require "zlib"
+require "base64"

+module Sidekiq
   class Stats
-    include RedisScanner
-
     def initialize
       fetch_stats!
     end
@@ -77,11 +65,11 @@ module Sidekiq
     }

     processes = Sidekiq.redis { |conn|
-
+      conn.sscan_each("processes").to_a
     }

     queues = Sidekiq.redis { |conn|
-
+      conn.sscan_each("queues").to_a
     }

     pipe2_res = Sidekiq.redis { |conn|
@@ -92,8 +80,8 @@ module Sidekiq
     }

     s = processes.size
-    workers_size = pipe2_res[0...s].
-    enqueued = pipe2_res[s..-1].
+    workers_size = pipe2_res[0...s].sum(&:to_i)
+    enqueued = pipe2_res[s..-1].sum(&:to_i)

     default_queue_latency = if (entry = pipe1_res[6].first)
       job = begin
@@ -117,7 +105,7 @@ module Sidekiq

       default_queue_latency: default_queue_latency,
       workers_size: workers_size,
-      enqueued: enqueued
+      enqueued: enqueued
     }
   end

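The pipelined Redis replies summed above are strings (or `nil` for missing keys), so the truncated reduce chains were replaced with `sum(&:to_i)`. A small sketch of why that is safe (the values are illustrative):

```ruby
# Redis pipelines return string replies; nil.to_i is 0, so a missing
# key simply contributes nothing to the total.
pipe2_res = ["3", "1", nil, "4", "2"]
s = 2  # pretend the first two replies are per-process "busy" counts
workers_size = pipe2_res[0...s].sum(&:to_i)
enqueued = pipe2_res[s..-1].sum(&:to_i)
puts workers_size  # → 4
puts enqueued      # → 6
```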
@@ -142,11 +130,9 @@ module Sidekiq
   end

   class Queues
-    include RedisScanner
-
     def lengths
       Sidekiq.redis do |conn|
-        queues =
+        queues = conn.sscan_each("queues").to_a

         lengths = conn.pipelined {
           queues.each do |queue|
@@ -154,13 +140,8 @@ module Sidekiq
           end
         }

-
-        array_of_arrays
-          memo[queue] = lengths[i]
-          i += 1
-        }.sort_by { |_, size| size }
-
-        Hash[array_of_arrays.reverse]
+        array_of_arrays = queues.zip(lengths).sort_by { |_, size| -size }
+        Hash[array_of_arrays]
       end
     end
   end
@@ -182,18 +163,12 @@ module Sidekiq
     private

     def date_stat_hash(stat)
-      i = 0
       stat_hash = {}
-
-
-
-
-
-        datestr = date.strftime("%Y-%m-%d")
-        keys << "stat:#{stat}:#{datestr}"
-        dates << datestr
-        i += 1
-      end
+      dates = @start_date.downto(@start_date - @days_previous + 1).map { |date|
+        date.strftime("%Y-%m-%d")
+      }
+
+      keys = dates.map { |datestr| "stat:#{stat}:#{datestr}" }

       begin
         Sidekiq.redis do |conn|
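The rewritten `date_stat_hash` builds its date range functionally instead of with a counter loop. A sketch with a fixed start date (the date and stat name are illustrative):

```ruby
require "date"

# Mirror of the new dates/keys construction in Sidekiq::Stats::History.
start_date = Date.new(2020, 11, 5)
days_previous = 3
dates = start_date.downto(start_date - days_previous + 1).map { |date|
  date.strftime("%Y-%m-%d")
}
keys = dates.map { |datestr| "stat:processed:#{datestr}" }
puts dates.inspect  # → ["2020-11-05", "2020-11-04", "2020-11-03"]
puts keys.first     # → stat:processed:2020-11-05
```

`Date#downto` walks backwards one day at a time, so the keys come out newest-first, matching the order the history graphs consume.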
@@ -225,13 +200,12 @@ module Sidekiq
   #
   class Queue
     include Enumerable
-    extend RedisScanner

     ##
     # Return all known queues within Redis.
     #
     def self.all
-      Sidekiq.redis { |c|
+      Sidekiq.redis { |c| c.sscan_each("queues").to_a }.sort.map { |q| Sidekiq::Queue.new(q) }
     end

     attr_reader :name
@@ -299,7 +273,7 @@ module Sidekiq
     def clear
       Sidekiq.redis do |conn|
         conn.multi do
-          conn.
+          conn.unlink(@rname)
           conn.srem("queues", name)
         end
       end
@@ -349,7 +323,7 @@ module Sidekiq
         end
       when "ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper"
         job_class = @item["wrapped"] || args[0]
-        if job_class == "ActionMailer::DeliveryJob"
+        if job_class == "ActionMailer::DeliveryJob" || job_class == "ActionMailer::MailDeliveryJob"
           # MailerClass#mailer_method
           args[0]["arguments"][0..1].join("#")
         else
@@ -372,6 +346,9 @@ module Sidekiq
       if (self["wrapped"] || args[0]) == "ActionMailer::DeliveryJob"
         # remove MailerClass, mailer_method and 'deliver_now'
         job_args.drop(3)
+      elsif (self["wrapped"] || args[0]) == "ActionMailer::MailDeliveryJob"
+        # remove MailerClass, mailer_method and 'deliver_now'
+        job_args.drop(3).first["args"]
       else
         job_args
       end
@@ -400,6 +377,20 @@ module Sidekiq
       Time.at(self["created_at"] || self["enqueued_at"] || 0).utc
     end

+    def tags
+      self["tags"] || []
+    end
+
+    def error_backtrace
+      # Cache nil values
+      if defined?(@error_backtrace)
+        @error_backtrace
+      else
+        value = self["error_backtrace"]
+        @error_backtrace = value && uncompress_backtrace(value)
+      end
+    end
+
     attr_reader :queue

     def latency
@@ -433,6 +424,23 @@ module Sidekiq
       Sidekiq.logger.warn "Unable to load YAML: #{ex.message}" unless Sidekiq.options[:environment] == "development"
       default
     end
+
+    def uncompress_backtrace(backtrace)
+      if backtrace.is_a?(Array)
+        # Handle old jobs with raw Array backtrace format
+        backtrace
+      else
+        decoded = Base64.decode64(backtrace)
+        uncompressed = Zlib::Inflate.inflate(decoded)
+        begin
+          Sidekiq.load_json(uncompressed)
+        rescue
+          # Handle old jobs with marshalled backtrace format
+          # TODO Remove in 7.x
+          Marshal.load(uncompressed)
+        end
+      end
+    end
   end

   class SortedEntry < Job
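The new `error_backtrace` / `uncompress_backtrace` pair reads backtraces that the retry machinery now stores compressed (JSON, then deflate, then Base64) to shrink retry payloads in Redis. A round-trip sketch of that encoding; the writer side shown here is inferred from the reader above, not copied from Sidekiq:

```ruby
require "zlib"
require "base64"
require "json"

backtrace = ["app/jobs/hard_job.rb:12:in `perform'", "lib/runner.rb:3"]

# Writer side (assumed): JSON-encode, deflate, Base64-wrap for Redis.
stored = Base64.encode64(Zlib::Deflate.deflate(JSON.generate(backtrace)))

# Reader side, mirroring uncompress_backtrace above.
restored = JSON.parse(Zlib::Inflate.inflate(Base64.decode64(stored)))
puts restored == backtrace  # → true
```

The `rescue Marshal.load` branch in the real method exists only to read jobs written by earlier 6.x releases that marshalled the array instead of JSON-encoding it.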
@@ -458,8 +466,9 @@ module Sidekiq
     end

     def reschedule(at)
-
-
+      Sidekiq.redis do |conn|
+        conn.zincrby(@parent.name, at.to_f - @score, Sidekiq.dump_json(@item))
+      end
     end

     def add_to_queue
@@ -503,7 +512,7 @@ module Sidekiq
         else
           # multiple jobs with the same score
           # find the one with the right JID and push it
-
+          matched, nonmatched = results.partition { |message|
             if message.index(jid)
               msg = Sidekiq.load_json(message)
               msg["jid"] == jid
@@ -512,12 +521,12 @@ module Sidekiq
             end
           }

-          msg =
+          msg = matched.first
           yield msg if msg

           # push the rest back onto the sorted set
           conn.multi do
-
+            nonmatched.each do |message|
               conn.zadd(parent.name, score.to_f.to_s, message)
             end
           end
@@ -540,9 +549,20 @@ module Sidekiq
       Sidekiq.redis { |c| c.zcard(name) }
     end

+    def scan(match, count = 100)
+      return to_enum(:scan, match, count) unless block_given?
+
+      match = "*#{match}*" unless match.include?("*")
+      Sidekiq.redis do |conn|
+        conn.zscan_each(name, match: match, count: count) do |entry, score|
+          yield SortedEntry.new(self, score, entry)
+        end
+      end
+    end
+
     def clear
       Sidekiq.redis do |conn|
-        conn.
+        conn.unlink(name)
       end
     end
     alias_method :💣, :clear
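The new `JobSet#scan` uses the common Ruby idiom of returning an `Enumerator` when no block is given, and wraps a bare term in `*` so Redis `ZSCAN MATCH` treats it as a substring glob. A sketch of both behaviors with the Redis call stubbed out by an in-memory array (the entry strings and `ENTRIES` constant are illustrative):

```ruby
# Stand-in for the sorted-set members; the real method scans Redis via
# conn.zscan_each(name, match: match, count: count).
ENTRIES = ["jid-111 HardJob", "jid-222 EasyJob"].freeze

def scan(match, count = 100)
  return to_enum(:scan, match, count) unless block_given?

  match = "*#{match}*" unless match.include?("*")
  pattern = Regexp.new(match.gsub("*", ".*"))  # mimic Redis glob matching
  ENTRIES.each { |e| yield e if e.match?(pattern) }
end

puts scan("Hard").first          # Enumerator form → jid-111 HardJob
scan("jid-222") { |e| puts e }   # block form → jid-222 EasyJob
```

Because matching happens server-side in Redis, this is much cheaper than pulling every entry over the wire and filtering in Ruby.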
@@ -576,28 +596,40 @@ module Sidekiq
       end
     end

+    ##
+    # Fetch jobs that match a given time or Range. Job ID is an
+    # optional second argument.
     def fetch(score, jid = nil)
+      begin_score, end_score =
+        if score.is_a?(Range)
+          [score.first, score.last]
+        else
+          [score, score]
+        end
+
       elements = Sidekiq.redis { |conn|
-        conn.zrangebyscore(name,
+        conn.zrangebyscore(name, begin_score, end_score, with_scores: true)
       }

       elements.each_with_object([]) do |element, result|
-
-
-
-        else
-          result << entry
-        end
+        data, job_score = element
+        entry = SortedEntry.new(self, job_score, data)
+        result << entry if jid.nil? || entry.jid == jid
       end
     end

     ##
     # Find the job with the given JID within this sorted set.
-    #
-    # This is a slow, inefficient operation. Do not use under
-    # normal conditions. Sidekiq Pro contains a faster version.
+    # This is a slower O(n) operation. Do not use for app logic.
     def find_job(jid)
-
+      Sidekiq.redis do |conn|
+        conn.zscan_each(name, match: "*#{jid}*", count: 100) do |entry, score|
+          job = JSON.parse(entry)
+          matched = job["jid"] == jid
+          return SortedEntry.new(self, score, entry) if matched
+        end
+      end
+      nil
     end

     def delete_by_value(name, value)
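`fetch` now accepts either a single score or a `Range` of scores. The normalization added above is a small pattern worth noting on its own (sketch; the epoch values are illustrative):

```ruby
# Mirror of the begin_score/end_score normalization in #fetch:
# a Range maps to its endpoints, a single score to [score, score].
def bounds(score)
  if score.is_a?(Range)
    [score.first, score.last]
  else
    [score, score]
  end
end

puts bounds(1599859200.0..1599862800.0).inspect  # → [1599859200.0, 1599862800.0]
puts bounds(1599859200.0).inspect                # → [1599859200.0, 1599859200.0]
```

Collapsing both cases to a `[min, max]` pair lets a single `ZRANGEBYSCORE` call serve both "jobs at exactly this time" and "jobs within this window".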
@@ -612,11 +644,13 @@ module Sidekiq
       Sidekiq.redis do |conn|
         elements = conn.zrangebyscore(name, score, score)
         elements.each do |element|
-
-
-
-
-
+          if element.index(jid)
+            message = Sidekiq.load_json(element)
+            if message["jid"] == jid
+              ret = conn.zrem(name, element)
+              @_size -= 1 if ret
+              break ret
+            end
           end
         end
       end
@@ -720,7 +754,6 @@ module Sidekiq
   #
   class ProcessSet
     include Enumerable
-    include RedisScanner

     def initialize(clean_plz = true)
       cleanup if clean_plz
@@ -731,7 +764,7 @@ module Sidekiq
     def cleanup
       count = 0
       Sidekiq.redis do |conn|
-        procs =
+        procs = conn.sscan_each("processes").to_a.sort
         heartbeats = conn.pipelined {
           procs.each do |key|
             conn.hget(key, "info")
@@ -741,40 +774,37 @@ module Sidekiq
         # the hash named key has an expiry of 60 seconds.
         # if it's not found, that means the process has not reported
         # in to Redis and probably died.
-        to_prune =
-
-
-        end
+        to_prune = procs.select.with_index { |proc, i|
+          heartbeats[i].nil?
+        }
         count = conn.srem("processes", to_prune) unless to_prune.empty?
       end
       count
     end

     def each
-
+      result = Sidekiq.redis { |conn|
+        procs = conn.sscan_each("processes").to_a.sort

-      Sidekiq.redis do |conn|
         # We're making a tradeoff here between consuming more memory instead of
         # making more roundtrips to Redis, but if you have hundreds or thousands of workers,
         # you'll be happier this way
-
+        conn.pipelined do
           procs.each do |key|
             conn.hmget(key, "info", "busy", "beat", "quiet")
           end
-
+        end
+      }

-
-
-
-
-
+      result.each do |info, busy, at_s, quiet|
+        # If a process is stopped between when we query Redis for `procs` and
+        # when we query for `result`, we will have an item in `result` that is
+        # composed of `nil` values.
+        next if info.nil?

-
-
-        end
+        hash = Sidekiq.load_json(info)
+        yield Process.new(hash.merge("busy" => busy.to_i, "beat" => at_s.to_f, "quiet" => quiet))
       end
-
-      nil
     end

     # This method is not guaranteed accurate since it does not prune the set
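The rewritten `cleanup` pairs each process key with its pipelined heartbeat reply by index; `select.with_index` then keeps exactly the keys whose heartbeat came back `nil`. A sketch with sample data (the key names and payloads are illustrative):

```ruby
procs = ["process-a", "process-b", "process-c"]
heartbeats = ["{...info...}", nil, "{...info...}"]  # nil = hash expired

# A process whose heartbeat hash is gone has outlived its 60s TTL
# without reporting in, so it is pruned from the "processes" set.
to_prune = procs.select.with_index { |_proc, i| heartbeats[i].nil? }
puts to_prune.inspect  # → ["process-b"]
```

`select` with no block returns an `Enumerator`, and chaining `.with_index` gives the block both the element and its position, which is what lets the two parallel arrays be correlated without a counter variable.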
@@ -885,22 +915,28 @@ module Sidekiq
   #
   class Workers
     include Enumerable
-    include RedisScanner

-    def each
+    def each(&block)
+      results = []
       Sidekiq.redis do |conn|
-        procs =
+        procs = conn.sscan_each("processes").to_a
         procs.sort.each do |key|
           valid, workers = conn.pipelined {
-            conn.exists(key)
+            conn.exists?(key)
             conn.hgetall("#{key}:workers")
           }
           next unless valid
           workers.each_pair do |tid, json|
-
+            hsh = Sidekiq.load_json(json)
+            p = hsh["payload"]
+            # avoid breaking API, this is a side effect of the JSON optimization in #4316
+            hsh["payload"] = Sidekiq.load_json(p) if p.is_a?(String)
+            results << [key, tid, hsh]
           end
         end
       end
+
+      results.sort_by { |(_, _, hsh)| hsh["run_at"] }.each(&block)
     end

     # Note that #size is only as accurate as Sidekiq's heartbeat,
@@ -911,7 +947,7 @@ module Sidekiq
     # which can easily get out of sync with crashy processes.
     def size
       Sidekiq.redis do |conn|
-        procs =
+        procs = conn.sscan_each("processes").to_a
         if procs.empty?
           0
         else
@@ -919,7 +955,7 @@ module Sidekiq
           procs.each do |key|
             conn.hget(key, "busy")
           end
-        }.
+        }.sum(&:to_i)
         end
       end
     end