solid_queue 1.1.2 → 1.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 17a05d42c432fb64bc429ef13665309e61fa0221d0c7e7534e1ae688772a8ed4
- data.tar.gz: 7173644a027051dfcd4911b0611995bac6686c9a78cd104836c5b3fbccf6d9dc
+ metadata.gz: 86df23afff2a24d62b03768997b2544497806d78064f03fa40b34f830516eada
+ data.tar.gz: c62fc25c9715937f0326d5b23159e34222fe6d4055e18045c201739cc48861f0
  SHA512:
- metadata.gz: fb0400861a8946176b1ed8d63cc5504743696ec456c637b459c59489897fcce388e4defba72ecaf061e70362965792474037aef4bcdee48bb071d9f22002611e
- data.tar.gz: f7b4f080b7b20b716950ff80ce5eff7bb1cc68c153bb65e2ef797044222dfcf52a5717e9a640817a946a54713ca164fb20ed7acbbc90bfec1effe57015c3d872
+ metadata.gz: 396da409f95384a6aacd42533865de189e7cb44a98e16c18fabde73d0addfb4086630a6affe7e5a49bbb9bc8e1f632fed0a86026d622c722ed940b61cac90aba
+ data.tar.gz: a602f595cd387355c090c4b0826c18c754943150e26e58de7ff05da1c34298c860845411a1399ac7659224e83b445188a7debea6a77da7fdac47d29e7c38c6b1
data/README.md CHANGED
@@ -9,6 +9,7 @@ Solid Queue can be used with SQL databases such as MySQL, PostgreSQL or SQLite,
  ## Table of contents

  - [Installation](#installation)
+ - [Usage in development and other non-production environments](#usage-in-development-and-other-non-production-environments)
  - [Single database configuration](#single-database-configuration)
  - [Incremental adoption](#incremental-adoption)
  - [High performance requirements](#high-performance-requirements)
@@ -38,6 +39,8 @@ Solid Queue is configured by default in new Rails 8 applications. But if you're
  1. `bundle add solid_queue`
  2. `bin/rails solid_queue:install`

+ (Note: The minimum supported version of Rails is 7.1 and Ruby is 3.1.6.)
+
  This will configure Solid Queue as the production Active Job backend, create the configuration files `config/queue.yml` and `config/recurring.yml`, and create the `db/queue_schema.rb`. It'll also create a `bin/jobs` executable wrapper that you can use to start Solid Queue.

  Once you've done that, you will then have to add the configuration for the queue database in `config/database.yml`. If you're using SQLite, it'll look like this:
@@ -84,7 +87,7 @@ For example, if you're using SQLite in development, update `database.yml` as fol
 
  ```diff
  development:
- primary:
+ + primary:
  <<: *default
  database: storage/development.sqlite3
  + queue:
@@ -310,7 +313,7 @@ and then remove the paused ones. Pausing in general should be something rare, us
  Do this:

  ```yml
- queues: background, backend
+ queues: [ background, backend ]
  ```

  instead of this:
@@ -372,9 +375,11 @@ In Solid queue, you can hook into two different points in the supervisor's life:
  - `start`: after the supervisor has finished booting and right before it forks workers and dispatchers.
  - `stop`: after receiving a signal (`TERM`, `INT` or `QUIT`) and right before starting graceful or immediate shutdown.

- And into two different points in a worker's life:
- - `worker_start`: after the worker has finished booting and right before it starts the polling loop.
- - `worker_stop`: after receiving a signal (`TERM`, `INT` or `QUIT`) and right before starting graceful or immediate shutdown (which is just `exit!`).
+ And into two different points in the worker's, dispatcher's and scheduler's life:
+ - `(worker|dispatcher|scheduler)_start`: after the worker/dispatcher/scheduler has finished booting and right before it starts the polling loop or loading the recurring schedule.
+ - `(worker|dispatcher|scheduler)_stop`: after receiving a signal (`TERM`, `INT` or `QUIT`) and right before starting graceful or immediate shutdown (which is just `exit!`).
+
+ Each of these hooks has an instance of the supervisor/worker/dispatcher/scheduler yielded to the block so that you may read its configuration for logging or metrics reporting purposes.

  You can use the following methods with a block to do this:
  ```ruby
@@ -383,12 +388,30 @@ SolidQueue.on_stop

  SolidQueue.on_worker_start
  SolidQueue.on_worker_stop
+
+ SolidQueue.on_dispatcher_start
+ SolidQueue.on_dispatcher_stop
+
+ SolidQueue.on_scheduler_start
+ SolidQueue.on_scheduler_stop
  ```

  For example:
  ```ruby
- SolidQueue.on_start { start_metrics_server }
- SolidQueue.on_stop { stop_metrics_server }
+ SolidQueue.on_start do |supervisor|
+ MyMetricsReporter.process_name = supervisor.name
+
+ start_metrics_server
+ end
+
+ SolidQueue.on_stop do |_supervisor|
+ stop_metrics_server
+ end
+
+ SolidQueue.on_worker_start do |worker|
+ MyMetricsReporter.process_name = worker.name
+ MyMetricsReporter.queues = worker.queues.join(',')
+ end
  ```

  These can be called several times to add multiple hooks, but it needs to happen before Solid Queue is started. An initializer would be a good place to do this.
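The new dispatcher and scheduler hooks added above are registered the same way as the existing worker hooks. A minimal sketch of wiring them up in an initializer, assuming a standard Rails app (the initializer path and log messages are illustrative, not part of the diff):

```ruby
# config/initializers/solid_queue.rb -- illustrative only
SolidQueue.on_dispatcher_start do |dispatcher|
  # batch_size is exposed via the new attr_reader on Dispatcher (see the hunk below)
  Rails.logger.info("[solid_queue] dispatcher up (batch_size: #{dispatcher.batch_size})")
end

SolidQueue.on_scheduler_stop do |_scheduler|
  Rails.logger.info("[solid_queue] scheduler shutting down")
end
```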
@@ -417,7 +440,7 @@ class MyJob < ApplicationJob

  When a job includes these controls, we'll ensure that, at most, the number of jobs (indicated as `to`) that yield the same `key` will be performed concurrently, and this guarantee will last for `duration` for each job enqueued. Note that there's no guarantee about _the order of execution_, only about jobs being performed at the same time (overlapping).

- The concurrency limits use the concept of semaphores when enqueuing, and work as follows: when a job is enqueued, we check if it specifies concurrency controls. If it does, we check the semaphore for the computed concurrency key. If the semaphore is open, we claim it and we set the job as _ready_. Ready means it can be picked up by workers for execution. When the job finishes executing (be it successfully or unsuccessfully, resulting in a failed execution), we signal the semaphore and try to unblock the next job with the same key, if any. Unblocking the next job doesn't mean running that job right away, but moving it from _blocked_ to _ready_. Since something can happen that prevents the first job from releasing the semaphore and unblocking the next job (for example, someone pulling a plug in the machine where the worker is running), we have the `duration` as a failsafe. Jobs that have been blocked for more than duration are candidates to be released, but only as many of them as the concurrency rules allow, as each one would need to go through the semaphore dance check. This means that the `duration` is not really about the job that's enqueued or being run, it's about the jobs that are blocked waiting.
+ The concurrency limits use the concept of semaphores when enqueuing, and work as follows: when a job is enqueued, we check if it specifies concurrency controls. If it does, we check the semaphore for the computed concurrency key. If the semaphore is open, we claim it and we set the job as _ready_. Ready means it can be picked up by workers for execution. When the job finishes executing (be it successfully or unsuccessfully, resulting in a failed execution), we signal the semaphore and try to unblock the next job with the same key, if any. Unblocking the next job doesn't mean running that job right away, but moving it from _blocked_ to _ready_. Since something can happen that prevents the first job from releasing the semaphore and unblocking the next job (for example, someone pulling a plug in the machine where the worker is running), we have the `duration` as a failsafe. Jobs that have been blocked for more than duration are candidates to be released, but only as many of them as the concurrency rules allow, as each one would need to go through the semaphore dance check. This means that the `duration` is not really about the job that's enqueued or being run, it's about the jobs that are blocked waiting. It's important to note that after one or more candidate jobs are unblocked (either because a job finishes or because `duration` expires and a semaphore is released), the `duration` timer for the still blocked jobs is reset. This happens indirectly via the expiration time of the semaphore, which is updated.


  For example:
@@ -450,7 +473,7 @@ class Bundle::RebundlePostingsJob < ApplicationJob

  In this case, if we have a `Box::MovePostingsByContactToDesignatedBoxJob` job enqueued for a contact record with id `123` and another `Bundle::RebundlePostingsJob` job enqueued simultaneously for a bundle record that references contact `123`, only one of them will be allowed to proceed. The other one will stay blocked until the first one finishes (or 15 minutes pass, whatever happens first).

- Note that the `duration` setting depends indirectly on the value for `concurrency_maintenance_interval` that you set for your dispatcher(s), as that'd be the frequency with which blocked jobs are checked and unblocked. In general, you should set `duration` in a way that all your jobs would finish well under that duration and think of the concurrency maintenance task as a failsafe in case something goes wrong.
+ Note that the `duration` setting depends indirectly on the value for `concurrency_maintenance_interval` that you set for your dispatcher(s), as that'd be the frequency with which blocked jobs are checked and unblocked (at which point, only one job per concurrency key, at most, is unblocked). In general, you should set `duration` in a way that all your jobs would finish well under that duration and think of the concurrency maintenance task as a failsafe in case something goes wrong.

  Jobs are unblocked in order of priority but queue order is not taken into account for unblocking jobs. That means that if you have a group of jobs that share a concurrency group but are in different queues, or jobs of the same class that you enqueue in different queues, the queue order you set for a worker is not taken into account when unblocking blocked ones. The reason is that a job that runs unblocks the next one, and the job itself doesn't know about a particular worker's queue order (you could even have different workers with different queue orders), it can only know about priority. Once blocked jobs are unblocked and available for polling, they'll be picked up by a worker following its queue order.

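As context for the two README hunks above, these concurrency semantics are declared per job class with `limits_concurrency`. A minimal sketch under the documented semantics (the job class, key and values are illustrative):

```ruby
class DeliverWebhookJob < ApplicationJob
  # At most one job per account runs at a time; jobs blocked on the semaphore
  # become candidates for release after 15 minutes, even if it is never signaled.
  limits_concurrency to: 1, key: ->(account) { account.id }, duration: 15.minutes

  def perform(account)
    # ...
  end
end
```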
@@ -12,7 +12,7 @@ module SolidQueue
  class << self
  def unblock(limit)
  SolidQueue.instrument(:release_many_blocked, limit: limit) do |payload|
- expired.distinct.limit(limit).pluck(:concurrency_key).then do |concurrency_keys|
+ expired.order(:concurrency_key).distinct.limit(limit).pluck(:concurrency_key).then do |concurrency_keys|
  payload[:size] = release_many releasable(concurrency_keys)
  end
  end
@@ -39,7 +39,10 @@ class SolidQueue::ClaimedExecution < SolidQueue::Execution
  def fail_all_with(error)
  SolidQueue.instrument(:fail_many_claimed) do |payload|
  includes(:job).tap do |executions|
- executions.each { |execution| execution.failed_with(error) }
+ executions.each do |execution|
+ execution.failed_with(error)
+ execution.unblock_next_job
+ end

  payload[:process_ids] = executions.map(&:process_id).uniq
  payload[:job_ids] = executions.map(&:job_id).uniq
@@ -67,7 +70,7 @@ class SolidQueue::ClaimedExecution < SolidQueue::Execution
  raise result.error
  end
  ensure
- job.unblock_next_blocked_job
+ unblock_next_job
  end

  def release
@@ -90,9 +93,13 @@ class SolidQueue::ClaimedExecution < SolidQueue::Execution
  end
  end

+ def unblock_next_job
+ job.unblock_next_blocked_job
+ end
+
  private
  def execute
- ActiveJob::Base.execute(job.arguments)
+ ActiveJob::Base.execute(job.arguments.merge("provider_job_id" => job.id))
  Result.new(true, nil)
  rescue Exception => e
  Result.new(false, e)
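With the `provider_job_id` merged into the serialized payload above, the Active Job instance can see its Solid Queue job id while it runs. A minimal sketch of reading it (the job class and tagging are illustrative, not from the diff):

```ruby
class TraceableJob < ApplicationJob
  around_perform do |job, block|
    # provider_job_id now holds the SolidQueue::Job id during execution
    Rails.logger.tagged("solid_queue_job:#{job.provider_job_id}") { block.call }
  end

  def perform
    # ...
  end
end
```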
@@ -14,7 +14,7 @@ module SolidQueue
  def dispatch_next_batch(batch_size)
  transaction do
  job_ids = next_batch(batch_size).non_blocking_lock.pluck(:job_id)
- if job_ids.empty? then []
+ if job_ids.empty? then 0
  else
  SolidQueue.instrument(:dispatch_scheduled, batch_size: batch_size) do |payload|
  payload[:size] = dispatch_jobs(job_ids)
@@ -141,7 +141,7 @@ module SolidQueue

  def recurring_tasks
  @recurring_tasks ||= recurring_tasks_config.map do |id, options|
- RecurringTask.from_configuration(id, **options) if options.has_key?(:schedule)
+ RecurringTask.from_configuration(id, **options) if options&.has_key?(:schedule)
  end.compact
  end

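The safe navigation added above guards against recurring task entries whose value is nil. A minimal standalone sketch of the difference (the config hash and keys are illustrative):

```ruby
# A YAML entry like "my_task:" with nothing nested under it parses to nil.
config = { my_task: nil, cleanup: { schedule: "every hour", class: "CleanupJob" } }

p config.map { |id, options| id if options&.has_key?(:schedule) }.compact
# => [:cleanup]

begin
  config.map { |id, options| id if options.has_key?(:schedule) }.compact
rescue NoMethodError => e
  puts e.message # pre-1.1.5 behaviour: undefined method `has_key?' for nil
end
```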
@@ -153,7 +153,9 @@ module SolidQueue
  end

  def recurring_tasks_config
- @recurring_tasks_config ||= config_from options[:recurring_schedule_file]
+ @recurring_tasks_config ||= begin
+ config_from options[:recurring_schedule_file]
+ end
  end


@@ -2,10 +2,14 @@

  module SolidQueue
  class Dispatcher < Processes::Poller
- attr_accessor :batch_size, :concurrency_maintenance
+ include LifecycleHooks
+ attr_reader :batch_size

+ after_boot :run_start_hooks
  after_boot :start_concurrency_maintenance
  before_shutdown :stop_concurrency_maintenance
+ before_shutdown :run_stop_hooks
+ after_shutdown :run_exit_hooks

  def initialize(**options)
  options = options.dup.with_defaults(SolidQueue::Configuration::DISPATCHER_DEFAULTS)
@@ -22,10 +26,12 @@ module SolidQueue
  end

  private
+ attr_reader :concurrency_maintenance
+
  def poll
  batch = dispatch_next_batch

- batch.size.zero? ? polling_interval : 0.seconds
+ batch.zero? ? polling_interval : 0.seconds
  end

  def dispatch_next_batch
@@ -38,20 +44,12 @@ module SolidQueue
  concurrency_maintenance&.start
  end

- def schedule_recurring_tasks
- recurring_schedule.schedule_tasks
- end
-
  def stop_concurrency_maintenance
  concurrency_maintenance&.stop
  end

- def unschedule_recurring_tasks
- recurring_schedule.unschedule_tasks
- end
-
  def all_work_completed?
- SolidQueue::ScheduledExecution.none? && recurring_schedule.empty?
+ SolidQueue::ScheduledExecution.none?
  end

  def set_procline
@@ -5,7 +5,7 @@ module SolidQueue
  extend ActiveSupport::Concern

  included do
- mattr_reader :lifecycle_hooks, default: { start: [], stop: [] }
+ mattr_reader :lifecycle_hooks, default: { start: [], stop: [], exit: [] }
  end

  class_methods do
@@ -17,7 +17,12 @@ module SolidQueue
  self.lifecycle_hooks[:stop] << block
  end

+ def on_exit(&block)
+ self.lifecycle_hooks[:exit] << block
+ end
+
  def clear_hooks
+ self.lifecycle_hooks[:exit] = []
  self.lifecycle_hooks[:start] = []
  self.lifecycle_hooks[:stop] = []
  end
@@ -32,9 +37,13 @@ module SolidQueue
  run_hooks_for :stop
  end

+ def run_exit_hooks
+ run_hooks_for :exit
+ end
+
  def run_hooks_for(event)
  self.class.lifecycle_hooks.fetch(event, []).each do |block|
- block.call
+ block.call(self)
  rescue Exception => exception
  handle_thread_error(exception)
  end
@@ -18,20 +18,16 @@ module SolidQueue
  def post(execution)
  available_threads.decrement

- future = Concurrent::Future.new(args: [ execution ], executor: executor) do |thread_execution|
+ Concurrent::Promises.future_on(executor, execution) do |thread_execution|
  wrap_in_app_executor do
  thread_execution.perform
  ensure
  available_threads.increment
  mutex.synchronize { on_idle.try(:call) if idle? }
  end
+ end.on_rejection! do |e|
+ handle_thread_error(e)
  end
-
- future.add_observer do |_, _, error|
- handle_thread_error(error) if error
- end
-
- future.execute
  end

  def idle_threads
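The hunk above swaps `Concurrent::Future` plus an observer for `Concurrent::Promises.future_on` with an `on_rejection!` callback. A standalone sketch of that concurrent-ruby error-handling pattern, outside Solid Queue (the executor size and the raised error are illustrative):

```ruby
require "concurrent"

executor = Concurrent::ThreadPoolExecutor.new(max_threads: 2)

# The block runs on the pool; any exception rejects the future and is
# routed to the rejection callback instead of an observer.
future = Concurrent::Promises.future_on(executor, :payload) do |arg|
  raise "boom while processing #{arg}"
end.on_rejection! { |error| warn "handled: #{error.message}" }

future.wait
executor.shutdown
executor.wait_for_termination
```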
@@ -7,27 +7,31 @@ module SolidQueue::Processes
  end

  private
+ SELF_PIPE_BLOCK_SIZE = 11

  def interrupt
- queue << true
+ self_pipe[:writer].write_nonblock(".")
+ rescue Errno::EAGAIN, Errno::EINTR
+ # Ignore writes that would block and retry
+ # if another signal arrived while writing
+ retry
  end

- # Sleeps for 'time'. Can be interrupted asynchronously and return early via wake_up.
- # @param time [Numeric] the time to sleep. 0 returns immediately.
- # @return [true, nil]
- # * returns `true` if an interrupt was requested via #wake_up between the
- # last call to `interruptible_sleep` and now, resulting in an early return.
- # * returns `nil` if it slept the full `time` and was not interrupted.
  def interruptible_sleep(time)
- # Invoking this from the main thread may result in significant slowdown.
- # Utilizing asynchronous execution (Futures) addresses this performance issue.
- Concurrent::Promises.future(time) do |timeout|
- queue.pop(timeout:).tap { queue.clear }
- end.value
+ if time > 0 && self_pipe[:reader].wait_readable(time)
+ loop { self_pipe[:reader].read_nonblock(SELF_PIPE_BLOCK_SIZE) }
+ end
+ rescue Errno::EAGAIN, Errno::EINTR
  end

- def queue
- @queue ||= Queue.new
+ # Self-pipe for signal-handling (http://cr.yp.to/docs/selfpipe.html)
+ def self_pipe
+ @self_pipe ||= create_self_pipe
+ end
+
+ def create_self_pipe
+ reader, writer = IO.pipe
+ { reader: reader, writer: writer }
  end
  end
  end
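The rewrite above replaces the `Queue`-based sleep with the classic self-pipe trick: a signal handler writes a byte into a pipe to wake a sleeping reader. A standalone sketch of the pattern outside Solid Queue (the signal choice and timeouts are illustrative):

```ruby
require "io/wait"

reader, writer = IO.pipe

# Signal handler: write one byte to wake the sleeper; never block inside a trap context.
trap("USR1") do
  begin
    writer.write_nonblock(".")
  rescue Errno::EAGAIN, Errno::EINTR
    # A wake-up byte is already pending, so dropping this write is safe.
  end
end

puts "Sleeping up to 30s; interrupt with: kill -USR1 #{Process.pid}"
if reader.wait_readable(30)
  begin
    loop { reader.read_nonblock(16) } # drain all pending wake-up bytes
  rescue Errno::EAGAIN, Errno::EINTR
  end
  puts "Woken early by signal"
else
  puts "Slept the full 30 seconds"
end
```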
@@ -4,7 +4,7 @@ module SolidQueue
  module Processes
  class ProcessPrunedError < RuntimeError
  def initialize(last_heartbeat_at)
- super("Process was found dead and pruned (last heartbeat at: #{last_heartbeat_at}")
+ super("Process was found dead and pruned (last heartbeat at: #{last_heartbeat_at})")
  end
  end
  end
@@ -3,11 +3,15 @@
  module SolidQueue
  class Scheduler < Processes::Base
  include Processes::Runnable
+ include LifecycleHooks

- attr_accessor :recurring_schedule
+ attr_reader :recurring_schedule

+ after_boot :run_start_hooks
  after_boot :schedule_recurring_tasks
  before_shutdown :unschedule_recurring_tasks
+ before_shutdown :run_stop_hooks
+ after_shutdown :run_exit_hooks

  def initialize(recurring_tasks:, **options)
  @recurring_schedule = RecurringSchedule.new(recurring_tasks)
@@ -5,6 +5,8 @@ module SolidQueue
  include LifecycleHooks
  include Maintenance, Signals, Pidfiled

+ after_shutdown :run_exit_hooks
+
  class << self
  def start(**options)
  SolidQueue.supervisor = true
@@ -1,3 +1,3 @@
  module SolidQueue
- VERSION = "1.1.2"
+ VERSION = "1.1.5"
  end
@@ -6,14 +6,16 @@ module SolidQueue

  after_boot :run_start_hooks
  before_shutdown :run_stop_hooks
+ after_shutdown :run_exit_hooks

-
- attr_accessor :queues, :pool
+ attr_reader :queues, :pool

  def initialize(**options)
  options = options.dup.with_defaults(SolidQueue::Configuration::WORKER_DEFAULTS)

- @queues = Array(options[:queues])
+ # Ensure that the queues array is deep frozen to prevent accidental modification
+ @queues = Array(options[:queues]).map(&:freeze).freeze
+
  @pool = Pool.new(options[:threads], on_idle: -> { wake_up })

  super(**options)
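The deep freeze above makes accidental mutation of the configured queues fail loudly rather than silently changing worker behaviour. A minimal sketch of the effect (values illustrative):

```ruby
queues = ["background", "default"].map(&:freeze).freeze

begin
  queues << "reports"
rescue FrozenError => e
  puts e.message # => can't modify frozen Array: ["background", "default"]
end

begin
  queues.first << "_urgent"
rescue FrozenError => e
  puts e.message # => can't modify frozen String: "background"
end
```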
data/lib/solid_queue.rb CHANGED
@@ -41,14 +41,20 @@ module SolidQueue
  mattr_accessor :clear_finished_jobs_after, default: 1.day
  mattr_accessor :default_concurrency_control_period, default: 3.minutes

- delegate :on_start, :on_stop, to: Supervisor
+ delegate :on_start, :on_stop, :on_exit, to: Supervisor

- def on_worker_start(...)
- Worker.on_start(...)
- end
+ [ Dispatcher, Scheduler, Worker ].each do |process|
+ define_singleton_method(:"on_#{process.name.demodulize.downcase}_start") do |&block|
+ process.on_start(&block)
+ end
+
+ define_singleton_method(:"on_#{process.name.demodulize.downcase}_stop") do |&block|
+ process.on_stop(&block)
+ end

- def on_worker_stop(...)
- Worker.on_stop(...)
+ define_singleton_method(:"on_#{process.name.demodulize.downcase}_exit") do |&block|
+ process.on_exit(&block)
+ end
  end

  def supervisor?
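The loop above replaces the hand-written worker-only pair with generated `on_worker_*`, `on_dispatcher_*` and `on_scheduler_*` singleton methods for start, stop and the new exit event. A minimal usage sketch of the generated exit hooks (the log messages are illustrative):

```ruby
# Equivalent to calling Worker.on_exit / Dispatcher.on_exit under the hood.
SolidQueue.on_worker_exit do |worker|
  Rails.logger.info("[solid_queue] worker exited (queues: #{worker.queues.join(',')})")
end

SolidQueue.on_dispatcher_exit do |_dispatcher|
  Rails.logger.info("[solid_queue] dispatcher exited")
end
```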
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: solid_queue
  version: !ruby/object:Gem::Version
- version: 1.1.2
+ version: 1.1.5
  platform: ruby
  authors:
  - Rosa Gutierrez
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2024-12-27 00:00:00.000000000 Z
+ date: 2025-04-20 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: activerecord
@@ -220,6 +220,20 @@ dependencies:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
+ - !ruby/object:Gem::Dependency
+ name: zeitwerk
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - '='
+ - !ruby/object:Gem::Version
+ version: 2.6.0
+ type: :development
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - '='
+ - !ruby/object:Gem::Version
+ version: 2.6.0
  description: Database-backed Active Job backend.
  email:
  - rosa@37signals.com
@@ -307,15 +321,8 @@ metadata:
  homepage_uri: https://github.com/rails/solid_queue
  source_code_uri: https://github.com/rails/solid_queue
  post_install_message: |
- Upgrading to Solid Queue 0.9.0? There are some breaking changes about how recurring tasks are configured.
-
- Upgrading to Solid Queue 0.8.0 from < 0.6.0? You need to upgrade to 0.6.0 first.
-
- Upgrading to Solid Queue 0.4.x, 0.5.x, 0.6.x or 0.7.x? There are some breaking changes about how Solid Queue is started,
- configuration and new migrations.
-
- --> Check https://github.com/rails/solid_queue/blob/main/UPGRADING.md
- for upgrade instructions.
+ Upgrading from Solid Queue < 1.0? Check details on breaking changes and upgrade instructions
+ --> https://github.com/rails/solid_queue/blob/main/UPGRADING.md
  rdoc_options: []
  require_paths:
  - lib
@@ -323,7 +330,7 @@ required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: '0'
+ version: '3.1'
  required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="