solid_queue 1.1.4 → 1.1.5

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 778d9495c79b9b8af00416fd1980250fe6c3213043cbfd31eba5eb5e2795ad13
-  data.tar.gz: 14f929171d65334300648a5f80e59b9f8245034119cfd436d37dae545210c97b
+  metadata.gz: 86df23afff2a24d62b03768997b2544497806d78064f03fa40b34f830516eada
+  data.tar.gz: c62fc25c9715937f0326d5b23159e34222fe6d4055e18045c201739cc48861f0
 SHA512:
-  metadata.gz: cbbd8113e179c49ffcfe707aa38c96f31ac812822d380e7a6739daced30ee029046ec19cf2ea70c81e19c0f1a819412454f1f4eb1a4cfad7cf5fa8b0f88e3021
-  data.tar.gz: c324b1127bbad96191d3c2e268058dd1cec7c29cffe45a511b0899f3e028e86b7ad0cf0638c93100393316bde1e429ab28c828a381c0692b8558c100f296b6f6
+  metadata.gz: 396da409f95384a6aacd42533865de189e7cb44a98e16c18fabde73d0addfb4086630a6affe7e5a49bbb9bc8e1f632fed0a86026d622c722ed940b61cac90aba
+  data.tar.gz: a602f595cd387355c090c4b0826c18c754943150e26e58de7ff05da1c34298c860845411a1399ac7659224e83b445188a7debea6a77da7fdac47d29e7c38c6b1
data/README.md CHANGED
@@ -440,7 +440,7 @@ class MyJob < ApplicationJob
 
 When a job includes these controls, we'll ensure that, at most, the number of jobs (indicated as `to`) that yield the same `key` will be performed concurrently, and this guarantee will last for `duration` for each job enqueued. Note that there's no guarantee about _the order of execution_, only about jobs being performed at the same time (overlapping).
 
-The concurrency limits use the concept of semaphores when enqueuing, and work as follows: when a job is enqueued, we check if it specifies concurrency controls. If it does, we check the semaphore for the computed concurrency key. If the semaphore is open, we claim it and we set the job as _ready_. Ready means it can be picked up by workers for execution. When the job finishes executing (be it successfully or unsuccessfully, resulting in a failed execution), we signal the semaphore and try to unblock the next job with the same key, if any. Unblocking the next job doesn't mean running that job right away, but moving it from _blocked_ to _ready_. Since something can happen that prevents the first job from releasing the semaphore and unblocking the next job (for example, someone pulling a plug in the machine where the worker is running), we have the `duration` as a failsafe. Jobs that have been blocked for more than duration are candidates to be released, but only as many of them as the concurrency rules allow, as each one would need to go through the semaphore dance check. This means that the `duration` is not really about the job that's enqueued or being run, it's about the jobs that are blocked waiting.
+The concurrency limits use the concept of semaphores when enqueuing, and work as follows: when a job is enqueued, we check if it specifies concurrency controls. If it does, we check the semaphore for the computed concurrency key. If the semaphore is open, we claim it and we set the job as _ready_. Ready means it can be picked up by workers for execution. When the job finishes executing (be it successfully or unsuccessfully, resulting in a failed execution), we signal the semaphore and try to unblock the next job with the same key, if any. Unblocking the next job doesn't mean running that job right away, but moving it from _blocked_ to _ready_. Since something can happen that prevents the first job from releasing the semaphore and unblocking the next job (for example, someone pulling a plug in the machine where the worker is running), we have the `duration` as a failsafe. Jobs that have been blocked for more than duration are candidates to be released, but only as many of them as the concurrency rules allow, as each one would need to go through the semaphore dance check. This means that the `duration` is not really about the job that's enqueued or being run, it's about the jobs that are blocked waiting. It's important to note that after one or more candidate jobs are unblocked (either because a job finishes or because `duration` expires and a semaphore is released), the `duration` timer for the still blocked jobs is reset. This happens indirectly via the expiration time of the semaphore, which is updated.
 
 
 For example:
@@ -473,7 +473,7 @@ class Bundle::RebundlePostingsJob < ApplicationJob
 
 In this case, if we have a `Box::MovePostingsByContactToDesignatedBoxJob` job enqueued for a contact record with id `123` and another `Bundle::RebundlePostingsJob` job enqueued simultaneously for a bundle record that references contact `123`, only one of them will be allowed to proceed. The other one will stay blocked until the first one finishes (or 15 minutes pass, whatever happens first).
 
-Note that the `duration` setting depends indirectly on the value for `concurrency_maintenance_interval` that you set for your dispatcher(s), as that'd be the frequency with which blocked jobs are checked and unblocked. In general, you should set `duration` in a way that all your jobs would finish well under that duration and think of the concurrency maintenance task as a failsafe in case something goes wrong.
+Note that the `duration` setting depends indirectly on the value for `concurrency_maintenance_interval` that you set for your dispatcher(s), as that'd be the frequency with which blocked jobs are checked and unblocked (at which point, only one job per concurrency key, at most, is unblocked). In general, you should set `duration` in a way that all your jobs would finish well under that duration and think of the concurrency maintenance task as a failsafe in case something goes wrong.
 
 Jobs are unblocked in order of priority but queue order is not taken into account for unblocking jobs. That means that if you have a group of jobs that share a concurrency group but are in different queues, or jobs of the same class that you enqueue in different queues, the queue order you set for a worker is not taken into account when unblocking blocked ones. The reason is that a job that runs unblocks the next one, and the job itself doesn't know about a particular worker's queue order (you could even have different workers with different queue orders), it can only know about priority. Once blocked jobs are unblocked and available for polling, they'll be picked up by a worker following its queue order.
 
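The semaphore flow the README describes (claim on enqueue, signal on finish, unblock the next waiter with the same key) can be sketched in plain Ruby. This is an illustrative toy, not Solid Queue's implementation: the gem stores semaphores and blocked executions in database tables, and the class and method names below are invented for the example.

```ruby
# Toy model of per-key concurrency limits: `limit` plays the role of `to:`.
class ToySemaphoreQueue
  attr_reader :ready

  def initialize(limit:)
    @limit = limit
    @claims = Hash.new(0)                     # open semaphore claims per key
    @ready = []                               # jobs workers may pick up
    @blocked = Hash.new { |h, k| h[k] = [] }  # jobs waiting per key
  end

  def enqueue(job, key)
    if @claims[key] < @limit
      @claims[key] += 1   # semaphore open: claim it, job becomes ready
      @ready << job
    else
      @blocked[key] << job # semaphore closed: job waits, blocked
    end
  end

  def finish(key)
    @claims[key] -= 1      # signal the semaphore...
    if (next_job = @blocked[key].shift)
      @claims[key] += 1    # ...and move the next blocked job to ready
      @ready << next_job
    end
  end
end
```

With `limit: 1`, enqueuing two jobs under the same key leaves the second blocked until `finish` is called for the first, mirroring the blocked-to-ready transition described above.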
data/app/models/solid_queue/semaphore.rb CHANGED
@@ -12,7 +12,7 @@ module SolidQueue
     class << self
       def unblock(limit)
         SolidQueue.instrument(:release_many_blocked, limit: limit) do |payload|
-          expired.distinct.limit(limit).pluck(:concurrency_key).then do |concurrency_keys|
+          expired.order(:concurrency_key).distinct.limit(limit).pluck(:concurrency_key).then do |concurrency_keys|
             payload[:size] = release_many releasable(concurrency_keys)
           end
         end
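The change adds `order(:concurrency_key)` before `distinct.limit(limit)`, so the subset of expired keys picked up each maintenance run is deterministic rather than whatever the database happens to return first. A plain-Ruby analogy (the key names here are made up for illustration):

```ruby
# Pretend these are the concurrency keys of expired semaphores, with duplicates.
expired_keys = ["reports", "mailers", "reports", "billing"]

# Roughly what order(:concurrency_key).distinct.limit(2).pluck(:concurrency_key)
# now does: a stable, sorted selection of distinct keys under the limit.
picked = expired_keys.uniq.sort.take(2)
# Without the sort, which two keys survive the limit is unspecified.
```

Every run now considers the same keys first, instead of potentially starving some keys behind others indefinitely.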
data/app/models/solid_queue/claimed_execution.rb CHANGED
@@ -39,7 +39,10 @@ class SolidQueue::ClaimedExecution < SolidQueue::Execution
     def fail_all_with(error)
       SolidQueue.instrument(:fail_many_claimed) do |payload|
         includes(:job).tap do |executions|
-          executions.each { |execution| execution.failed_with(error) }
+          executions.each do |execution|
+            execution.failed_with(error)
+            execution.unblock_next_job
+          end
 
           payload[:process_ids] = executions.map(&:process_id).uniq
           payload[:job_ids] = executions.map(&:job_id).uniq
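The new loop means that when claimed executions are failed in bulk (for example, after their worker process died), each one also releases its concurrency slot instead of leaving jobs with the same key blocked until `duration` expires. A hypothetical standalone sketch of the new shape (the `Execution` struct and its flags are invented for illustration, not the gem's model):

```ruby
# Stand-in for a claimed execution: records whether it was failed and
# whether it released its concurrency semaphore afterwards.
Execution = Struct.new(:job_id, :failed, :unblocked) do
  def failed_with(_error)
    self.failed = true
  end

  def unblock_next_job
    self.unblocked = true
  end
end

executions = [Execution.new(1, false, false), Execution.new(2, false, false)]

executions.each do |execution|
  execution.failed_with(RuntimeError.new("process died"))
  execution.unblock_next_job # new in 1.1.5: also signal the semaphore
end
```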
@@ -67,7 +70,7 @@ class SolidQueue::ClaimedExecution < SolidQueue::Execution
       raise result.error
     end
   ensure
-    job.unblock_next_blocked_job
+    unblock_next_job
   end
 
   def release
@@ -90,6 +93,10 @@ class SolidQueue::ClaimedExecution < SolidQueue::Execution
     end
   end
 
+  def unblock_next_job
+    job.unblock_next_blocked_job
+  end
+
   private
     def execute
       ActiveJob::Base.execute(job.arguments.merge("provider_job_id" => job.id))
data/lib/solid_queue/engine.rb CHANGED
@@ -37,13 +37,5 @@ module SolidQueue
         include ActiveJob::ConcurrencyControls
       end
     end
-
-    initializer "solid_queue.include_interruptible_concern" do
-      if Gem::Version.new(RUBY_VERSION) >= Gem::Version.new("3.2")
-        SolidQueue::Processes::Base.include SolidQueue::Processes::Interruptible
-      else
-        SolidQueue::Processes::Base.include SolidQueue::Processes::OgInterruptible
-      end
-    end
   end
 end
data/lib/solid_queue/processes/base.rb CHANGED
@@ -4,7 +4,7 @@ module SolidQueue
   module Processes
     class Base
       include Callbacks # Defines callbacks needed by other concerns
-      include AppExecutor, Registrable, Procline
+      include AppExecutor, Registrable, Interruptible, Procline
 
       attr_reader :name
 
data/lib/solid_queue/processes/interruptible.rb CHANGED
@@ -2,36 +2,36 @@
 
 module SolidQueue::Processes
   module Interruptible
-    include SolidQueue::AppExecutor
-
     def wake_up
       interrupt
     end
 
     private
+      SELF_PIPE_BLOCK_SIZE = 11
 
       def interrupt
-        queue << true
+        self_pipe[:writer].write_nonblock(".")
+      rescue Errno::EAGAIN, Errno::EINTR
+        # Ignore writes that would block and retry
+        # if another signal arrived while writing
+        retry
      end
 
-      # Sleeps for 'time'. Can be interrupted asynchronously and return early via wake_up.
-      # @param time [Numeric, Duration] the time to sleep. 0 returns immediately.
       def interruptible_sleep(time)
-        # Invoking this from the main thread may result in significant slowdown.
-        # Utilizing asynchronous execution (Futures) addresses this performance issue.
-        Concurrent::Promises.future(time) do |timeout|
-          queue.clear unless queue.pop(timeout:).nil?
-        end.on_rejection! do |e|
-          wrapped_exception = RuntimeError.new("Interruptible#interruptible_sleep - #{e.class}: #{e.message}")
-          wrapped_exception.set_backtrace(e.backtrace)
-          handle_thread_error(wrapped_exception)
-        end.value
+        if time > 0 && self_pipe[:reader].wait_readable(time)
+          loop { self_pipe[:reader].read_nonblock(SELF_PIPE_BLOCK_SIZE) }
+        end
+      rescue Errno::EAGAIN, Errno::EINTR
+      end
 
-        nil
+      # Self-pipe for signal-handling (http://cr.yp.to/docs/selfpipe.html)
+      def self_pipe
+        @self_pipe ||= create_self_pipe
       end
 
-      def queue
-        @queue ||= Queue.new
+      def create_self_pipe
+        reader, writer = IO.pipe
+        { reader: reader, writer: writer }
       end
   end
 end
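This rewrite replaces the `Queue`/`Concurrent::Promises` implementation with the classic self-pipe technique: `interruptible_sleep` blocks on `IO#wait_readable` with a timeout, and `wake_up` cuts the sleep short by writing a byte to the pipe. A minimal standalone demo of the same idea (timing values are arbitrary, and this is a sketch rather than the module itself):

```ruby
require "io/wait"

reader, writer = IO.pipe

# A second thread plays the role of wake_up: it interrupts the sleep
# by writing a single byte to the self-pipe.
waker = Thread.new do
  sleep 0.1
  writer.write_nonblock(".")
end

started = Process.clock_gettime(Process::CLOCK_MONOTONIC)

# The interruptible sleep: wait up to 5 seconds, but return as soon as
# the pipe becomes readable, then drain any pending wake-up bytes.
begin
  if reader.wait_readable(5)
    loop { reader.read_nonblock(11) }
  end
rescue Errno::EAGAIN, Errno::EINTR
  # Pipe drained; nothing left to read.
end

elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
waker.join
```

Here `elapsed` is on the order of 0.1 seconds rather than the full 5-second timeout, which is exactly the behavior a supervisor needs to react promptly to signals while idling.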
data/lib/solid_queue/version.rb CHANGED
@@ -1,3 +1,3 @@
 module SolidQueue
-  VERSION = "1.1.4"
+  VERSION = "1.1.5"
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: solid_queue
 version: !ruby/object:Gem::Version
-  version: 1.1.4
+  version: 1.1.5
 platform: ruby
 authors:
 - Rosa Gutierrez
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2025-03-17 00:00:00.000000000 Z
+date: 2025-04-20 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activerecord
@@ -295,7 +295,6 @@ files:
 - lib/solid_queue/processes/base.rb
 - lib/solid_queue/processes/callbacks.rb
 - lib/solid_queue/processes/interruptible.rb
-- lib/solid_queue/processes/og_interruptible.rb
 - lib/solid_queue/processes/poller.rb
 - lib/solid_queue/processes/process_exit_error.rb
 - lib/solid_queue/processes/process_missing_error.rb
data/lib/solid_queue/processes/og_interruptible.rb DELETED
@@ -1,39 +0,0 @@
-# frozen_string_literal: true
-
-module SolidQueue::Processes
-  # The original implementation of Interruptible that works
-  # with Ruby 3.1 and earlier
-  module OgInterruptible
-    def wake_up
-      interrupt
-    end
-
-    private
-      SELF_PIPE_BLOCK_SIZE = 11
-
-      def interrupt
-        self_pipe[:writer].write_nonblock(".")
-      rescue Errno::EAGAIN, Errno::EINTR
-        # Ignore writes that would block and retry
-        # if another signal arrived while writing
-        retry
-      end
-
-      def interruptible_sleep(time)
-        if time > 0 && self_pipe[:reader].wait_readable(time)
-          loop { self_pipe[:reader].read_nonblock(SELF_PIPE_BLOCK_SIZE) }
-        end
-      rescue Errno::EAGAIN, Errno::EINTR
-      end
-
-      # Self-pipe for signal-handling (http://cr.yp.to/docs/selfpipe.html)
-      def self_pipe
-        @self_pipe ||= create_self_pipe
-      end
-
-      def create_self_pipe
-        reader, writer = IO.pipe
-        { reader: reader, writer: writer }
-      end
-  end
-end