sidekiq 8.0.0.beta1 → 8.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: f7280923fe38a56b42bb33c639f9e0a365c1f585f928e8cdfd3ce06fe989fc7e
- data.tar.gz: e8c6a01e393390f1a3ab3a48c04b3621efd5621c6b184240c3e8316602e24e9c
+ metadata.gz: e6ce813e475bd69e3cb05f0ebe216e1dcbd6b4865584a4761c889a97761d5fe1
+ data.tar.gz: 855e4b4db0b7c080f9a4347c338eff1c4b62be91e233d0e20447b15a3251756a
  SHA512:
- metadata.gz: 076a8f70170dfdc6ec78430ca41b182ed986453bdfbcfe24b7dd5ef1d8f2321b560cb94201c2f2c3a30c2ce42a31e38ce14151f0ebbee90bc97a2463b427747a
- data.tar.gz: 8cc4828931a73da67ae40edb14508b82624781b9732280ce9e8db615c8bed1039b89b8b50db54c35bbffd4a010f904598be2bb4aa037228bbd4b4b325b2b9421
+ metadata.gz: 20f3dfd6b5e9aba7f8d2a0ef0fb9bef6ee7918aa31a8deee0bc592dcd957e44c08ca5d6a30e16064c81a5f68bc571fe8585c2e3ba453a81db0d565862e67454b
+ data.tar.gz: cdfea6f26e591ac0fb7c4bfe8874d8272615b27bfdd91daef13d1620e902fd5ba8a3c818d70a5729a89104c6ae7ac576c0f0df65717f514999d5c1c651605bd8
data/Changes.md CHANGED
@@ -2,29 +2,51 @@
 
  [Sidekiq Changes](https://github.com/sidekiq/sidekiq/blob/main/Changes.md) | [Sidekiq Pro Changes](https://github.com/sidekiq/sidekiq/blob/main/Pro-Changes.md) | [Sidekiq Enterprise Changes](https://github.com/sidekiq/sidekiq/blob/main/Ent-Changes.md)
 
- HEAD / main
+ 8.0.0
  ----------
 
  - **WARNING** The underlying class name for Active Jobs has changed from `ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper` to `Sidekiq::ActiveJob::Wrapper`.
- - **WARNING** The `created_at` and `enqueued_at` attributes are now stored as
- integer milliseconds, rather than epoch floats. This is meant to avoid precision
- issues with JSON and JavaScript's 53-bit Floats. Example:
- `"created_at" => 1234567890.123456` -> `"created_at" => 1234567890123`.
+ The old name will still work in 8.x.
+ - **WARNING** The `created_at`, `enqueued_at`, `failed_at` and `retried_at` attributes are now stored as epoch milliseconds, rather than epoch floats.
+ This is meant to avoid precision issues with JSON and JavaScript's 53-bit Floats.
+ Example: `"created_at" => 1234567890.123456` -> `"created_at" => 1234567890123`.
  - **NEW FEATURE** Job Profiling is now supported with [Vernier](https://vernier.prof)
  which makes it really easy to performance tune your slow jobs.
  The Web UI contains a new **Profiles** tab to view any collected profile data.
  Please read the new [Profiling](https://github.com/sidekiq/sidekiq/wiki/Profiling) wiki page for details.
+ - **NEW FEATURE** Job Metrics now store up to 72 hours of data and the Web UI allows display of 24/48/72 hours. [#6614]
  - CurrentAttribute support now uses `ActiveJob::Arguments` to serialize the context object, supporting Symbols and GlobalID.
  The change should be backwards compatible. [#6510]
  - Freshen up `Sidekiq::Web` to simplify the code and improve security [#6532]
  The CSS has been rewritten from scratch to remove the Bootstrap framework.
+ - Add `on_cancel` callback for iterable jobs [#6607]
+ - Add `cursor` reader to get the current cursor inside iterable jobs [#6606]
  - Default error logging has been modified to use Ruby's `Exception#detailed_message` and `#full_message` APIs.
  - CI now runs against Redis, Dragonfly and Valkey.
+ - Job tags now allow custom CSS display [#6595]
  - The Web UI's language picker now shows options in the native language
  - Remove global variable usage within the codebase
+ - Colorize and adjust logging for easier reading
  - Adjust Sidekiq's default thread priority to -1 for a 50ms timeslice.
  This can help avoid TimeoutErrors when Sidekiq is overloaded. [#6543]
- - Support: Redis 7.2+, Ruby 3.2+, Rails 7.0+
+ - Use `Logger#with_level`, remove Sidekiq's custom impl
+ - Remove `base64` gem dependency
+ - Support: (Dragonfly 1.27+, Valkey 7.2+, Redis 7.2+), Ruby 3.2+, Rails 7.0+
+
+ 7.3.10
+ ----------
+
+ - Deprecate Redis :password as a String to avoid log disclosure. [#6625]
+ Use a Proc instead: `config.redis = { password: ->(username) { "password" } }`
+
+ 7.3.9
+ ----------
+
+ - Only require activejob if necessary [#6584]
+ You might get `uninitialized constant Sidekiq::ActiveJob` if you
+ `require 'sidekiq'` before `require 'rails'`.
+ - Fix iterable job cancellation [#6589]
+ - Web UI accessibility improvements [#6604]
 
  7.3.8
  ----------
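The `created_at`/`enqueued_at` change above is pure representation arithmetic. A minimal sketch of the changelog example in plain Ruby (the conversion code here is illustrative, not Sidekiq's internal migration logic):

```ruby
require "time"

# Pre-8.0 representation: epoch float with sub-second precision.
created_float = 1234567890.123456

# 8.0 representation: integer epoch milliseconds. JavaScript Numbers are
# IEEE 754 doubles with a 53-bit integer mantissa, so integer milliseconds
# round-trip exactly through JSON while float microseconds may not.
created_ms = (created_float * 1000).round

puts created_ms # => 1234567890123
puts Time.at(created_ms / 1000, created_ms % 1000, :millisecond).utc.iso8601(3)
# => "2009-02-13T23:31:30.123Z"
```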
data/README.md CHANGED
@@ -13,7 +13,7 @@ same process. Sidekiq can be used by any Ruby application.
  Requirements
  -----------------
 
- - Redis: Redis 7.2+, Valkey 7.2+ or Dragonfly 1.13+
+ - Redis: Redis 7.2+, Valkey 7.2+ or Dragonfly 1.27+
  - Ruby: MRI 3.2+ or JRuby 9.4+.
 
  Sidekiq 8.0 supports Rails and Active Job 7.0+.
data/lib/sidekiq/api.rb CHANGED
@@ -441,6 +441,18 @@ module Sidekiq
  self["bid"]
  end
 
+ def failed_at
+ if self["failed_at"]
+ time_from_timestamp(self["failed_at"])
+ end
+ end
+
+ def retried_at
+ if self["retried_at"]
+ time_from_timestamp(self["retried_at"])
+ end
+ end
+
  def enqueued_at
  if self["enqueued_at"]
  time_from_timestamp(self["enqueued_at"])
@@ -11,12 +11,12 @@ module Sidekiq
  # This capsule will pull jobs from the "single" queue and process
  # the jobs with one thread, meaning the jobs will be processed serially.
  #
- # Sidekiq.configure_server do |config|
- # config.capsule("single-threaded") do |cap|
- # cap.concurrency = 1
- # cap.queues = %w(single)
+ # Sidekiq.configure_server do |config|
+ # config.capsule("single-threaded") do |cap|
+ # cap.concurrency = 1
+ # cap.queues = %w(single)
+ # end
  # end
- # end
  class Capsule
  include Sidekiq::Component
  extend Forwardable
@@ -23,6 +23,16 @@ module Sidekiq
  module Component # :nodoc:
  attr_reader :config
 
+ # This is epoch milliseconds, appropriate for persistence
+ def real_ms
+ ::Process.clock_gettime(::Process::CLOCK_REALTIME, :millisecond)
+ end
+
+ # used for time difference and relative comparisons, not persistence.
+ def mono_ms
+ ::Process.clock_gettime(::Process::CLOCK_MONOTONIC, :millisecond)
+ end
+
  def watchdog(last_words)
  yield
  rescue Exception => ex
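The two helpers added above differ only in the clock id passed to `Process.clock_gettime`. A standalone sketch of the distinction: CLOCK_REALTIME tracks wall-clock time and is safe to persist, while CLOCK_MONOTONIC only moves forward (it never jumps when the system clock is adjusted), which makes it the right choice for measuring durations:

```ruby
# Wall-clock time as integer epoch milliseconds (persistable).
real_ms = Process.clock_gettime(Process::CLOCK_REALTIME, :millisecond)

# Monotonic time for measuring an elapsed duration.
start = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
sleep 0.05
elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond) - start

puts real_ms  # integer epoch ms, e.g. 1739520000123
puts elapsed  # roughly 50
```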
@@ -46,6 +46,7 @@ module Sidekiq
  # def on_start
  # def on_resume
  # def on_stop
+ # def on_cancel
  # def on_complete
  # def around_iteration
  #
@@ -64,6 +64,10 @@ module Sidekiq
  @_cancelled
  end
 
+ def cursor
+ @_cursor.freeze
+ end
+
  # A hook to override that will be called when the job starts iterating.
  #
  # It is called only once, for the first time.
@@ -91,6 +95,11 @@ module Sidekiq
  def on_stop
  end
 
+ # A hook to override that will be called when the job is cancelled.
+ #
+ def on_cancel
+ end
+
  # A hook to override that will be called when the job finished iterating.
  #
  def on_complete
@@ -182,6 +191,7 @@ module Sidekiq
 
  def iterate_with_enumerator(enumerator, arguments)
  if is_cancelled?
+ on_cancel
  logger.info { "Job cancelled" }
  return true
  end
@@ -200,6 +210,7 @@ module Sidekiq
  state_flushed_at = ::Process.clock_gettime(::Process::CLOCK_MONOTONIC)
  if cancelled
  @_cancelled = true
+ on_cancel
  logger.info { "Job cancelled" }
  return true
  end
@@ -26,16 +26,16 @@ module Sidekiq
  # If we're using a wrapper class, like ActiveJob, use the "wrapped"
  # attribute to expose the underlying thing.
  h = {
- class: job_hash["display_class"] || job_hash["wrapped"] || job_hash["class"],
- jid: job_hash["jid"]
+ jid: job_hash["jid"],
+ class: job_hash["display_class"] || job_hash["wrapped"] || job_hash["class"]
  }
  h[:bid] = job_hash["bid"] if job_hash.has_key?("bid")
  h[:tags] = job_hash["tags"] if job_hash.has_key?("tags")
 
  Thread.current[:sidekiq_context] = h
  level = job_hash["log_level"]
- if level && @logger.respond_to?(:log_at)
- @logger.log_at(level, &block)
+ if level
+ @logger.with_level(level, &block)
  else
  yield
  end
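The removed `log_at` shim is replaced by the stdlib `Logger#with_level` (logger 1.5+, bundled with Ruby 3.2), which temporarily changes the level for the duration of a block and restores it afterwards. A minimal usage sketch:

```ruby
require "logger"
require "stringio"

out = StringIO.new
logger = Logger.new(out, level: Logger::INFO)

logger.debug("hidden")                  # below INFO, not emitted
logger.with_level(Logger::DEBUG) do
  logger.debug("visible inside block")  # level temporarily lowered
end
logger.debug("hidden again")            # level restored after the block

puts out.string.include?("visible inside block") # => true
```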
@@ -139,6 +139,10 @@ module Sidekiq
 
  private
 
+ def now_ms
+ ::Process.clock_gettime(::Process::CLOCK_REALTIME, :millisecond)
+ end
+
  # Note that +jobinst+ can be nil here if an error is raised before we can
  # instantiate the job instance. All access must be guarded and
  # best effort.
@@ -156,10 +160,10 @@ module Sidekiq
  msg["error_message"] = m
  msg["error_class"] = exception.class.name
  count = if msg["retry_count"]
- msg["retried_at"] = Time.now.to_f
+ msg["retried_at"] = now_ms
  msg["retry_count"] += 1
  else
- msg["failed_at"] = Time.now.to_f
+ msg["failed_at"] = now_ms
  msg["retry_count"] = 0
  end
 
@@ -177,7 +181,7 @@ module Sidekiq
  return retries_exhausted(jobinst, msg, exception) if count >= max_retry_attempts
 
  rf = msg["retry_for"]
- return retries_exhausted(jobinst, msg, exception) if rf && ((msg["failed_at"] + rf) < Time.now.to_f)
+ return retries_exhausted(jobinst, msg, exception) if rf && (time_for(msg["failed_at"]) + rf) < Time.now
 
  strategy, delay = delay_for(jobinst, count, exception, msg)
  case strategy
@@ -197,6 +201,14 @@ module Sidekiq
  end
  end
 
+ def time_for(item)
+ if item.is_a?(Float)
+ Time.at(item)
+ else
+ Time.at(item / 1000, item % 1000)
+ end
+ end
+
  # returns (strategy, seconds)
  def delay_for(jobinst, count, exception, msg)
  rv = begin
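The `time_for` helper above branches on whether a stored timestamp is a legacy epoch float or the new integer epoch milliseconds. A close standalone restatement of that dispatch; note the explicit `:millisecond` unit in the integer branch here, since `Time.at`'s second positional argument defaults to microseconds:

```ruby
require "time"

# Convert either timestamp format back to a Time, as the retry handler must.
def to_time(item)
  if item.is_a?(Float)
    Time.at(item)                                    # legacy epoch float
  else
    Time.at(item / 1000, item % 1000, :millisecond)  # new epoch milliseconds
  end
end

puts to_time(1234567890.5).utc.iso8601      # => "2009-02-13T23:31:30Z"
puts to_time(1234567890123).utc.iso8601(3)  # => "2009-02-13T23:31:30.123Z"
```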
@@ -22,88 +22,41 @@ module Sidekiq
  end
  end
 
- module LoggingUtils
- LEVELS = {
- "debug" => 0,
- "info" => 1,
- "warn" => 2,
- "error" => 3,
- "fatal" => 4
- }
- LEVELS.default_proc = proc do |_, level|
- puts("Invalid log level: #{level.inspect}")
- nil
- end
-
- LEVELS.each do |level, numeric_level|
- define_method(:"#{level}?") do
- local_level.nil? ? super() : local_level <= numeric_level
- end
- end
-
- def local_level
- Thread.current[:sidekiq_log_level]
- end
-
- def local_level=(level)
- case level
- when Integer
- Thread.current[:sidekiq_log_level] = level
- when Symbol, String
- Thread.current[:sidekiq_log_level] = LEVELS[level.to_s]
- when nil
- Thread.current[:sidekiq_log_level] = nil
- else
- raise ArgumentError, "Invalid log level: #{level.inspect}"
- end
- end
-
- def level
- local_level || super
- end
-
- # Change the thread-local level for the duration of the given block.
- def log_at(level)
- old_local_level = local_level
- self.local_level = level
- yield
- ensure
- self.local_level = old_local_level
- end
- end
-
  class Logger < ::Logger
- include LoggingUtils
-
  module Formatters
+ COLORS = {
+ "DEBUG" => "\e[1;32mDEBUG\e[0m", # green
+ "INFO" => "\e[1;34mINFO \e[0m", # blue
+ "WARN" => "\e[1;33mWARN \e[0m", # yellow
+ "ERROR" => "\e[1;31mERROR\e[0m", # red
+ "FATAL" => "\e[1;35mFATAL\e[0m" # pink
+ }
  class Base < ::Logger::Formatter
  def tid
  Thread.current["sidekiq_tid"] ||= (Thread.current.object_id ^ ::Process.pid).to_s(36)
  end
 
  def format_context(ctxt = Sidekiq::Context.current)
- if ctxt.size > 0
- ctxt.map { |k, v|
- case v
- when Array
- "#{k}=#{v.join(",")}"
- else
- "#{k}=#{v}"
- end
- }.join(" ")
- end
+ (ctxt.size == 0) ? "" : " #{ctxt.map { |k, v|
+ case v
+ when Array
+ "#{k}=#{v.join(",")}"
+ else
+ "#{k}=#{v}"
+ end
+ }.join(" ")}"
  end
  end
 
  class Pretty < Base
  def call(severity, time, program_name, message)
- "#{time.utc.iso8601(3)} pid=#{::Process.pid} tid=#{tid} #{format_context} #{severity}: #{message}\n"
+ "#{Formatters::COLORS[severity]} #{time.utc.iso8601(3)} pid=#{::Process.pid} tid=#{tid}#{format_context}: #{message}\n"
  end
  end
 
  class WithoutTimestamp < Pretty
  def call(severity, time, program_name, message)
- "pid=#{::Process.pid} tid=#{tid} #{format_context} #{severity}: #{message}\n"
+ "#{Formatters::COLORS[severity]} pid=#{::Process.pid} tid=#{tid} #{format_context}: #{message}\n"
  end
  end
 
@@ -10,7 +10,7 @@ module Sidekiq
  # Caller sets a set of attributes to act as filters. {#fetch} will call
  # Redis and return a Hash of results.
  #
- # NB: all metrics and times/dates are UTC only. We specifically do not
+ # NB: all metrics and times/dates are UTC only. We explicitly do not
  # support timezones.
  class Query
  def initialize(pool: nil, now: Time.now)
@@ -19,23 +19,46 @@ module Sidekiq
  @klass = nil
  end
 
+ ROLLUPS = {
+ # minutely aggregates per minute
+ minutely: [60, ->(time) { time.strftime("j|%y%m%d|%-H:%M") }],
+ # hourly aggregates every 10 minutes so we'll have six data points per hour
+ hourly: [600, ->(time) {
+ m = time.min
+ mins = (m < 10) ? "0" : m.to_s[0]
+ time.strftime("j|%y%m%d|%-H:#{mins}")
+ }]
+ }
+
  # Get metric data for all jobs from the last hour
  # +class_filter+: return only results for classes matching filter
- def top_jobs(class_filter: nil, minutes: 60)
- result = Result.new
-
+ # +minutes+: the number of fine-grained minute buckets to retrieve
+ # +hours+: the number of coarser-grained 10-minute buckets to retrieve, in hours
+ def top_jobs(class_filter: nil, minutes: nil, hours: nil)
  time = @time
+ minutes = 60 unless minutes || hours
+
+ # DoS protection, sanity check
+ minutes = 60 if minutes && minutes > 480
+ hours = 72 if hours && hours > 72
+
+ granularity = hours ? :hourly : :minutely
+ result = Result.new(granularity)
+ result.ends_at = time
+ count = hours ? hours * 6 : minutes
+ stride, keyproc = ROLLUPS[granularity]
+
  redis_results = @pool.with do |conn|
  conn.pipelined do |pipe|
- minutes.times do |idx|
- key = "j|#{time.strftime("%Y%m%d")}|#{time.hour}:#{time.min}"
+ count.times do |idx|
+ key = keyproc.call(time)
  pipe.hgetall key
- result.prepend_bucket time
- time -= 60
+ time -= stride
  end
  end
  end
 
+ result.starts_at = time
  time = @time
  redis_results.each do |hash|
  hash.each do |k, v|
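The `ROLLUPS` table above pairs a stride in seconds with a lambda that renders the Redis key for a bucket. A standalone sketch showing the keys those lambdas produce (lambdas copied from the diff; the sample times are arbitrary):

```ruby
rollups = {
  minutely: [60, ->(time) { time.strftime("j|%y%m%d|%-H:%M") }],
  hourly: [600, ->(time) {
    m = time.min
    mins = (m < 10) ? "0" : m.to_s[0]  # keep only the tens digit of the minute
    time.strftime("j|%y%m%d|%-H:#{mins}")
  }]
}

t = Time.utc(2025, 2, 14, 8, 43)
puts rollups[:minutely][1].call(t)  # => "j|250214|8:43" (one-minute bucket)
puts rollups[:hourly][1].call(t)    # => "j|250214|8:4"  (ten-minute bucket)
```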
@@ -43,63 +66,66 @@ module Sidekiq
  next if class_filter && !class_filter.match?(kls)
  result.job_results[kls].add_metric metric, time, v.to_i
  end
- time -= 60
+ time -= stride
  end
 
- result.marks = fetch_marks(result.starts_at..result.ends_at)
-
+ result.marks = fetch_marks(result.starts_at..result.ends_at, granularity)
  result
  end
 
- def for_job(klass, minutes: 60)
- result = Result.new
-
+ def for_job(klass, minutes: nil, hours: nil)
  time = @time
+ minutes = 60 unless minutes || hours
+
+ # DoS protection, sanity check
+ minutes = 60 if minutes && minutes > 480
+ hours = 72 if hours && hours > 72
+
+ granularity = hours ? :hourly : :minutely
+ result = Result.new(granularity)
+ result.ends_at = time
+ count = hours ? hours * 6 : minutes
+ stride, keyproc = ROLLUPS[granularity]
+
  redis_results = @pool.with do |conn|
  conn.pipelined do |pipe|
- minutes.times do |idx|
- key = "j|#{time.strftime("%Y%m%d")}|#{time.hour}:#{time.min}"
+ count.times do |idx|
+ key = keyproc.call(time)
  pipe.hmget key, "#{klass}|ms", "#{klass}|p", "#{klass}|f"
- result.prepend_bucket time
- time -= 60
+ time -= stride
  end
  end
  end
 
+ result.starts_at = time
  time = @time
  @pool.with do |conn|
  redis_results.each do |(ms, p, f)|
  result.job_results[klass].add_metric "ms", time, ms.to_i if ms
  result.job_results[klass].add_metric "p", time, p.to_i if p
  result.job_results[klass].add_metric "f", time, f.to_i if f
- result.job_results[klass].add_hist time, Histogram.new(klass).fetch(conn, time).reverse
- time -= 60
+ result.job_results[klass].add_hist time, Histogram.new(klass).fetch(conn, time).reverse if minutes
+ time -= stride
  end
  end
 
- result.marks = fetch_marks(result.starts_at..result.ends_at)
-
+ result.marks = fetch_marks(result.starts_at..result.ends_at, granularity)
  result
  end
 
- class Result < Struct.new(:starts_at, :ends_at, :size, :buckets, :job_results, :marks)
- def initialize
+ class Result < Struct.new(:granularity, :starts_at, :ends_at, :size, :job_results, :marks)
+ def initialize(granularity = :minutely)
  super
- self.buckets = []
+ self.granularity = granularity
  self.marks = []
- self.job_results = Hash.new { |h, k| h[k] = JobResult.new }
- end
-
- def prepend_bucket(time)
- buckets.unshift time.strftime("%H:%M")
- self.ends_at ||= time
- self.starts_at = time
+ self.job_results = Hash.new { |h, k| h[k] = JobResult.new(granularity) }
  end
  end
 
- class JobResult < Struct.new(:series, :hist, :totals)
- def initialize
+ class JobResult < Struct.new(:granularity, :series, :hist, :totals)
+ def initialize(granularity = :minutely)
  super
+ self.granularity = granularity
  self.series = Hash.new { |h, k| h[k] = Hash.new(0) }
  self.hist = Hash.new { |h, k| h[k] = [] }
  self.totals = Hash.new(0)
@@ -107,14 +133,14 @@ module Sidekiq
 
  def add_metric(metric, time, value)
  totals[metric] += value
- series[metric][time.strftime("%H:%M")] += value
+ series[metric][Query.bkt_time_s(time, granularity)] += value
 
  # Include timing measurements in seconds for convenience
  add_metric("s", time, value / 1000.0) if metric == "ms"
  end
 
  def add_hist(time, hist_result)
- hist[time.strftime("%H:%M")] = hist_result
+ hist[Query.bkt_time_s(time, granularity)] = hist_result
  end
 
  def total_avg(metric = "ms")
@@ -131,22 +157,24 @@ module Sidekiq
  end
  end
 
- class MarkResult < Struct.new(:time, :label)
- def bucket
- time.strftime("%H:%M")
- end
+ MarkResult = Struct.new(:time, :label, :bucket)
+
+ def self.bkt_time_s(time, granularity)
+ # truncate time to ten minutes ("8:40", not "8:43") or one minute
+ truncation = (granularity == :hourly) ? 600 : 60
+ Time.at(time.to_i - time.to_i % truncation).utc.iso8601
  end
 
  private
 
- def fetch_marks(time_range)
+ def fetch_marks(time_range, granularity)
  [].tap do |result|
  marks = @pool.with { |c| c.hgetall("#{@time.strftime("%Y%m%d")}-marks") }
 
  marks.each do |timestamp, label|
  time = Time.parse(timestamp)
  if time_range.cover? time
- result << MarkResult.new(time, label)
+ result << MarkResult.new(time, label, Query.bkt_time_s(time, granularity))
  end
  end
  end
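`bkt_time_s` above truncates a time down to its bucket boundary and renders an ISO-8601 label. Restated standalone:

```ruby
require "time"

# Truncate to the bucket boundary: 600s (10 min) for hourly rollups, 60s otherwise.
def bkt_time_s(time, granularity)
  truncation = (granularity == :hourly) ? 600 : 60
  Time.at(time.to_i - time.to_i % truncation).utc.iso8601
end

t = Time.utc(2025, 2, 14, 8, 43, 27)
puts bkt_time_s(t, :minutely)  # => "2025-02-14T08:43:00Z"
puts bkt_time_s(t, :hourly)    # => "2025-02-14T08:40:00Z"
```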
@@ -25,7 +25,10 @@ module Sidekiq
  #
  # To store this data, we use Redis' BITFIELD command to store unsigned 16-bit counters
  # per bucket per klass per minute. It's unlikely that most people will be executing more
- # than 1000 job/sec for a full minute of a specific type.
+ # than 1000 job/sec for a full minute of a specific type (i.e. overflow 65,536).
+ #
+ # Histograms are only stored at the fine-grained level, they are not rolled up
+ # for longer-term buckets.
  class Histogram
  include Enumerable
 
@@ -82,15 +85,15 @@ module Sidekiq
  end
 
  def fetch(conn, now = Time.now)
- window = now.utc.strftime("%d-%H:%-M")
- key = "#{@klass}-#{window}"
+ window = now.utc.strftime("%-d-%-H:%-M")
+ key = "h|#{@klass}-#{window}"
  conn.bitfield_ro(key, *FETCH)
  end
 
  def persist(conn, now = Time.now)
  buckets, @buckets = @buckets, []
- window = now.utc.strftime("%d-%H:%-M")
- key = "#{@klass}-#{window}"
+ window = now.utc.strftime("%-d-%-H:%-M")
+ key = "h|#{@klass}-#{window}"
  cmd = [key, "OVERFLOW", "SAT"]
  buckets.each_with_index do |counter, idx|
  val = counter.value
@@ -19,13 +19,13 @@ module Sidekiq
  end
 
  def track(queue, klass)
- start = ::Process.clock_gettime(::Process::CLOCK_MONOTONIC, :millisecond)
+ start = mono_ms
  time_ms = 0
  begin
  begin
  yield
  ensure
- finish = ::Process.clock_gettime(::Process::CLOCK_MONOTONIC, :millisecond)
+ finish = mono_ms
  time_ms = finish - start
  end
  # We don't track time for failed jobs as they can have very unpredictable
@@ -51,7 +51,7 @@ module Sidekiq
  end
 
  # LONG_TERM = 90 * 24 * 60 * 60
- # MID_TERM = 7 * 24 * 60 * 60
+ MID_TERM = 3 * 24 * 60 * 60
  SHORT_TERM = 8 * 60 * 60
 
  def flush(time = Time.now)
@@ -62,8 +62,10 @@ module Sidekiq
 
  now = time.utc
  # nowdate = now.strftime("%Y%m%d")
- # nowhour = now.strftime("%Y%m%d|%-H")
- nowmin = now.strftime("%Y%m%d|%-H:%-M")
+ # "250214|8:4" is the 10 minute bucket for Feb 14 2025, 08:43
+ nowmid = now.strftime("%y%m%d|%-H:%M")[0..-2]
+ # "250214|8:43" is the 1 minute bucket for Feb 14 2025, 08:43
+ nowshort = now.strftime("%y%m%d|%-H:%M")
  count = 0
 
  redis do |conn|
@@ -81,8 +83,8 @@ module Sidekiq
  # daily or hourly rollups.
  [
  # ["j", jobs, nowdate, LONG_TERM],
- # ["j", jobs, nowhour, MID_TERM],
- ["j", jobs, nowmid, MID_TERM],
+ ["j", jobs, nowmid, MID_TERM],
+ ["j", jobs, nowshort, SHORT_TERM]
  ].each do |prefix, data, bucket, ttl|
  conn.pipelined do |xa|
  stats = "#{prefix}|#{bucket}"
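The flush path above derives both bucket names from a single `strftime`: the 10-minute ("mid-term") bucket is simply the 1-minute bucket with its final minute digit dropped. A standalone sketch:

```ruby
now = Time.utc(2025, 2, 14, 8, 43)

nowshort = now.strftime("%y%m%d|%-H:%M")  # 1-minute bucket
nowmid = nowshort[0..-2]                  # 10-minute bucket (drop last digit)

puts nowshort  # => "250214|8:43"
puts nowmid    # => "250214|8:4"
```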