solid_queue 1.1.5 → 1.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/README.md +53 -7
- data/app/models/solid_queue/job/concurrency_controls.rb +12 -0
- data/app/models/solid_queue/job/executable.rb +1 -1
- data/app/models/solid_queue/job.rb +2 -2
- data/lib/active_job/concurrency_controls.rb +4 -1
- data/lib/active_job/queue_adapters/solid_queue_adapter.rb +4 -1
- data/lib/generators/solid_queue/install/templates/config/recurring.yml +7 -2
- data/lib/solid_queue/cli.rb +2 -1
- data/lib/solid_queue/configuration.rb +1 -1
- data/lib/solid_queue/processes/base.rb +2 -1
- data/lib/solid_queue/processes/interruptible.rb +8 -5
- data/lib/solid_queue/processes/registrable.rb +1 -2
- data/lib/solid_queue/supervisor.rb +4 -1
- data/lib/solid_queue/version.rb +1 -1
- metadata +16 -2
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 1d0b2a7e3a5ad6577d52d3528f42ff8862c087f6f406d0f6fd96f12f36db6795
+  data.tar.gz: 4db8708635736bd5ccb32ad619577494c039574fd452a40c3fecc55d0f57c27b
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 2ba5bb04f334bcc56bb0504d0baac497287058c242aaf8fccd85c72e1c72c5107ad38eff211d9c16641a3cc7ad366673a316cf357daf70bdf97009d68b7c5646
+  data.tar.gz: bf7770477e8dd290bd9461bcc5202b5d0832639ed31e7df7e96d192562fe5c0574522f1f689504c6d98c44c14ba389597bd7627fe3ec76c1e76937c0f0eaaac7
data/README.md
CHANGED
@@ -11,6 +11,7 @@ Solid Queue can be used with SQL databases such as MySQL, PostgreSQL or SQLite,
 - [Installation](#installation)
 - [Usage in development and other non-production environments](#usage-in-development-and-other-non-production-environments)
 - [Single database configuration](#single-database-configuration)
+- [Dashboard UI Setup](#dashboard-ui-setup)
 - [Incremental adoption](#incremental-adoption)
 - [High performance requirements](#high-performance-requirements)
 - [Configuration](#configuration)
@@ -23,6 +24,7 @@ Solid Queue can be used with SQL databases such as MySQL, PostgreSQL or SQLite,
 - [Lifecycle hooks](#lifecycle-hooks)
 - [Errors when enqueuing](#errors-when-enqueuing)
 - [Concurrency controls](#concurrency-controls)
+- [Performance considerations](#performance-considerations)
 - [Failed jobs and retries](#failed-jobs-and-retries)
 - [Error reporting on jobs](#error-reporting-on-jobs)
 - [Puma plugin](#puma-plugin)
@@ -156,6 +158,10 @@ Running Solid Queue in a separate database is recommended, but it's also possibl
 
 You won't have multiple databases, so `database.yml` doesn't need to have primary and queue database.
 
+### Dashboard ui setup
+
+For viewing information about your jobs via a UI, we recommend taking a look at [mission_control-jobs](https://github.com/rails/mission_control-jobs), a dashboard where, among other things, you can examine and retry/discard failed jobs.
+
 ### Incremental adoption
 
 If you're planning to adopt Solid Queue incrementally by switching one job at the time, you can do so by leaving the `config.active_job.queue_adapter` set to your old backend, and then set the `queue_adapter` directly in the jobs you're moving:
@@ -192,6 +198,8 @@ By default, Solid Queue will try to find your configuration under `config/queue.
 bin/jobs -c config/calendar.yml
 ```
 
+You can also skip all recurring tasks by setting the environment variable `SOLID_QUEUE_SKIP_RECURRING=true`. This is useful for environments like staging, review apps, or development where you don't want any recurring jobs to run. This is equivalent to using the `--skip-recurring` option with `bin/jobs`.
+
 This is what this configuration looks like:
 
 ```yml
@@ -365,7 +373,7 @@ There are several settings that control how Solid Queue works that you can set a
 - `silence_polling`: whether to silence Active Record logs emitted when polling for both workers and dispatchers—defaults to `true`.
 - `supervisor_pidfile`: path to a pidfile that the supervisor will create when booting to prevent running more than one supervisor in the same host, or in case you want to use it for a health check. It's `nil` by default.
 - `preserve_finished_jobs`: whether to keep finished jobs in the `solid_queue_jobs` table—defaults to `true`.
-- `clear_finished_jobs_after`: period to keep finished jobs around, in case `preserve_finished_jobs` is true—defaults to 1 day.
+- `clear_finished_jobs_after`: period to keep finished jobs around, in case `preserve_finished_jobs` is true — defaults to 1 day. When installing Solid Queue, [a recurring job](#recurring-tasks) is automatically configured to clear finished jobs every hour on the 12th minute in batches. You can edit the `recurring.yml` configuration to change this as you see fit.
 - `default_concurrency_control_period`: the value to be used as the default for the `duration` parameter in [concurrency controls](#concurrency-controls). It defaults to 3 minutes.
@@ -425,11 +433,11 @@ In the case of recurring tasks, if such error is raised when enqueuing the job c
 
 ## Concurrency controls
 
-Solid Queue extends Active Job with concurrency controls, that allows you to limit how many jobs of a certain type or with certain arguments can run at the same time. When limited in this way, jobs will be blocked from running, and they'll stay blocked until another job finishes and unblocks them, or after the set expiry time (concurrency limit's _duration_) elapses.
+Solid Queue extends Active Job with concurrency controls, that allows you to limit how many jobs of a certain type or with certain arguments can run at the same time. When limited in this way, by default, jobs will be blocked from running, and they'll stay blocked until another job finishes and unblocks them, or after the set expiry time (concurrency limit's _duration_) elapses. Alternatively, jobs can be configured to be discarded instead of blocked. This means that if a job with certain arguments has already been enqueued, other jobs with the same characteristics (in the same concurrency _class_) won't be enqueued.
 
 ```ruby
 class MyJob < ApplicationJob
-  limits_concurrency to: max_concurrent_executions, key: ->(arg1, arg2, **) { ... }, duration: max_interval_to_guarantee_concurrency_limit, group: concurrency_group
+  limits_concurrency to: max_concurrent_executions, key: ->(arg1, arg2, **) { ... }, duration: max_interval_to_guarantee_concurrency_limit, group: concurrency_group, on_conflict: on_conflict_behaviour
 
   # ...
 ```
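To illustrate the `on_conflict` option added in this release, here is a minimal, self-contained sketch of the DSL's defaulting behaviour in plain Ruby, mirroring the gem's `on_conflict.presence_in(CONCURRENCY_ON_CONFLICT_BEHAVIOUR) || :block` fallback. `TinyJob`, `DiscardingJob` and `TypoJob` are hypothetical stand-ins, not SolidQueue classes, and no Rails is required:

```ruby
# Allowed behaviours, as in the gem's CONCURRENCY_ON_CONFLICT_BEHAVIOUR constant
ALLOWED_ON_CONFLICT = %i[ block discard ]

class TinyJob
  class << self
    attr_accessor :concurrency_limit, :concurrency_duration, :concurrency_on_conflict

    # Unknown values fall back to :block, like `on_conflict.presence_in(...) || :block`
    def limits_concurrency(key:, to: 1, duration: 180, on_conflict: :block)
      @concurrency_key = key
      self.concurrency_limit = to
      self.concurrency_duration = duration
      self.concurrency_on_conflict =
        ALLOWED_ON_CONFLICT.include?(on_conflict) ? on_conflict : :block
    end
  end
end

class DiscardingJob < TinyJob
  limits_concurrency key: ->(account) { account }, to: 1, on_conflict: :discard
end

class TypoJob < TinyJob
  limits_concurrency key: ->(account) { account }, on_conflict: :dsicard # typo falls back
end

puts DiscardingJob.concurrency_on_conflict # => discard
puts TypoJob.concurrency_on_conflict       # => block
```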
@@ -437,10 +445,19 @@ class MyJob < ApplicationJob
 - `to` is `1` by default.
 - `duration` is set to `SolidQueue.default_concurrency_control_period` by default, which itself defaults to `3 minutes`, but that you can configure as well.
 - `group` is used to control the concurrency of different job classes together. It defaults to the job class name.
+- `on_conflict` controls behaviour when enqueuing a job that conflicts with the concurrency limits configured. It can be set to one of the following:
+  - (default) `:block`: the job is blocked and is dispatched when another job completes and unblocks it, or when the duration expires.
+  - `:discard`: the job is discarded. When you choose this option, bear in mind that if a job runs and fails to remove the concurrency lock (or _semaphore_, read below to know more about this), all jobs conflicting with it will be discarded up to the interval defined by `duration` has elapsed.
 
 When a job includes these controls, we'll ensure that, at most, the number of jobs (indicated as `to`) that yield the same `key` will be performed concurrently, and this guarantee will last for `duration` for each job enqueued. Note that there's no guarantee about _the order of execution_, only about jobs being performed at the same time (overlapping).
 
-The concurrency limits use the concept of semaphores when enqueuing, and work as follows: when a job is enqueued, we check if it specifies concurrency controls. If it does, we check the semaphore for the computed concurrency key. If the semaphore is open, we claim it and we set the job as _ready_. Ready means it can be picked up by workers for execution. When the job finishes executing (be it successfully or unsuccessfully, resulting in a failed execution), we signal the semaphore and try to unblock the next job with the same key, if any. Unblocking the next job doesn't mean running that job right away, but moving it from _blocked_ to _ready_.
+The concurrency limits use the concept of semaphores when enqueuing, and work as follows: when a job is enqueued, we check if it specifies concurrency controls. If it does, we check the semaphore for the computed concurrency key. If the semaphore is open, we claim it and we set the job as _ready_. Ready means it can be picked up by workers for execution. When the job finishes executing (be it successfully or unsuccessfully, resulting in a failed execution), we signal the semaphore and try to unblock the next job with the same key, if any. Unblocking the next job doesn't mean running that job right away, but moving it from _blocked_ to _ready_. If you're using the `discard` behaviour for `on_conflict`, jobs enqueued while the semaphore is closed will be discarded.
+
+Since something can happen that prevents the first job from releasing the semaphore and unblocking the next job (for example, someone pulling a plug in the machine where the worker is running), we have the `duration` as a failsafe. Jobs that have been blocked for more than `duration` are candidates to be released, but only as many of them as the concurrency rules allow, as each one would need to go through the semaphore dance check. This means that the `duration` is not really about the job that's enqueued or being run, it's about the jobs that are blocked waiting, or about the jobs that would get discarded while the semaphore is closed.
+
+It's important to note that after one or more candidate jobs are unblocked (either because a job finishes or because `duration` expires and a semaphore is released), the `duration` timer for the still blocked jobs is reset. This happens indirectly via the expiration time of the semaphore, which is updated.
+
+When using `discard` as the behaviour to handle conflicts, you might have jobs discarded for up to the `duration` interval if something happens and a running job fails to release the semaphore.
 
 
 For example:
@@ -475,10 +492,38 @@ In this case, if we have a `Box::MovePostingsByContactToDesignatedBoxJob` job en
 
 Note that the `duration` setting depends indirectly on the value for `concurrency_maintenance_interval` that you set for your dispatcher(s), as that'd be the frequency with which blocked jobs are checked and unblocked (at which point, only one job per concurrency key, at most, is unblocked). In general, you should set `duration` in a way that all your jobs would finish well under that duration and think of the concurrency maintenance task as a failsafe in case something goes wrong.
 
-Jobs are unblocked in order of priority but queue order is not taken into account for unblocking jobs
+Jobs are unblocked in order of priority but **queue order is not taken into account for unblocking jobs**. That means that if you have a group of jobs that share a concurrency group but are in different queues, or jobs of the same class that you enqueue in different queues, the queue order you set for a worker is not taken into account when unblocking blocked ones. The reason is that a job that runs unblocks the next one, and the job itself doesn't know about a particular worker's queue order (you could even have different workers with different queue orders), it can only know about priority. Once blocked jobs are unblocked and available for polling, they'll be picked up by a worker following its queue order.
 
 Finally, failed jobs that are automatically or manually retried work in the same way as new jobs that get enqueued: they get in the queue for getting an open semaphore, and whenever they get it, they'll be run. It doesn't matter if they had already gotten an open semaphore in the past.
 
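The semaphore flow described above can be sketched as a toy, in-memory model: enqueuing claims the semaphore if it's open, otherwise the job is blocked (or discarded, with `on_conflict: :discard`), and a finishing job signals the semaphore and promotes the next blocked job to ready. This is an illustration only, with a hypothetical `ToySemaphore` class; SolidQueue's real semaphores are database records with expiry times:

```ruby
class ToySemaphore
  def initialize(limit)
    @limit = limit    # like the `to:` option
    @claimed = 0
    @blocked = []     # jobs waiting, like blocked executions
  end

  # Returns the resulting state for the enqueued job name
  def enqueue(job, on_conflict: :block)
    if @claimed < @limit
      @claimed += 1   # semaphore open: claim it, job is ready
      :ready
    elsif on_conflict == :discard
      :discarded      # semaphore closed and discarding on conflict
    else
      @blocked << job # semaphore closed: job waits
      :blocked
    end
  end

  # Called when a running job finishes: release, then unblock the next waiter.
  # Unblocking moves a job from blocked to ready; it doesn't run it right away.
  def signal
    @claimed -= 1
    if (next_job = @blocked.shift)
      @claimed += 1
      next_job
    end
  end
end

sem = ToySemaphore.new(1)
sem.enqueue("a")                        # => :ready
sem.enqueue("b")                        # => :blocked
sem.enqueue("c", on_conflict: :discard) # => :discarded
sem.signal                              # => "b" (now ready)
```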
+### Performance considerations
+
+Concurrency controls introduce significant overhead (blocked executions need to be created and promoted to ready, semaphores need to be created and updated) so you should consider carefully whether you need them. For throttling purposes, where you plan to have `limit` significantly larger than 1, I'd encourage relying on a limited number of workers per queue instead. For example:
+
+```ruby
+class ThrottledJob < ApplicationJob
+  queue_as :throttled
+```
+
+```yml
+production:
+  workers:
+    - queues: throttled
+      threads: 1
+      polling_interval: 1
+    - queues: default
+      threads: 5
+      polling_interval: 0.1
+      processes: 3
+```
+
+Or something similar to that depending on your setup. You can also assign a different queue to a job on the moment of enqueuing so you can decide whether to enqueue a job in the throttled queue or another queue depending on the arguments, or pass a block to `queue_as` as explained [here](https://guides.rubyonrails.org/active_job_basics.html#queues).
+
+
+In addition, mixing concurrency controls with **bulk enqueuing** (Active Job's `perform_all_later`) is not a good idea because concurrency controlled job needs to be enqueued one by one to ensure concurrency limits are respected, so you lose all the benefits of bulk enqueuing.
+
+When jobs that have concurrency controls and `on_conflict: :discard` are enqueued in bulk, the ones that fail to be enqueued and are discarded would have `successfully_enqueued` set to `false`. The total count of jobs enqueued returned by `perform_all_later` will exclude these jobs as expected.
+
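The bulk-enqueue behaviour above can be sketched as a toy model: concurrency-controlled jobs are enqueued one by one, discarded conflicts are flagged via `successfully_enqueued`, and the returned count excludes them. `perform_all_later_toy` is a hypothetical stand-in, not Active Job's `perform_all_later`:

```ruby
# Each toy job tracks the flag Active Job exposes as `successfully_enqueued`
Job = Struct.new(:key, :successfully_enqueued)

def perform_all_later_toy(jobs, limit: 1)
  claimed = Hash.new(0)
  jobs.each do |job|
    if claimed[job.key] < limit
      claimed[job.key] += 1
      job.successfully_enqueued = true
    else
      job.successfully_enqueued = false # conflicting job discarded
    end
  end
  jobs.count(&:successfully_enqueued)   # discarded jobs excluded from the total
end

jobs = [Job.new(:a), Job.new(:a), Job.new(:b)]
perform_all_later_toy(jobs) # => 2 (the second :a job was discarded)
```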
 ## Failed jobs and retries
 
 Solid Queue doesn't include any automatic retry mechanism, it [relies on Active Job for this](https://edgeguides.rubyonrails.org/active_job_basics.html#retrying-or-discarding-failed-jobs). Jobs that fail will be kept in the system, and a _failed execution_ (a record in the `solid_queue_failed_executions` table) will be created for these. The job will stay there until manually discarded or re-enqueued. You can do this in a console as:
@@ -490,8 +535,6 @@ failed_execution.retry # This will re-enqueue the job as if it was enqueued for
 failed_execution.discard # This will delete the job from the system
 ```
 
-However, we recommend taking a look at [mission_control-jobs](https://github.com/rails/mission_control-jobs), a dashboard where, among other things, you can examine and retry/discard failed jobs.
-
 ### Error reporting on jobs
 
 Some error tracking services that integrate with Rails, such as Sentry or Rollbar, hook into [Active Job](https://guides.rubyonrails.org/active_job_basics.html#exceptions) and automatically report not handled errors that happen during job execution. However, if your error tracking system doesn't, or if you need some custom reporting, you can hook into Active Job yourself. A possible way of doing this would be:
@@ -533,6 +576,7 @@ plugin :solid_queue if ENV["SOLID_QUEUE_IN_PUMA"]
 ```
 that you set in production only. This is what Rails 8's default Puma config looks like. Otherwise, if you're using Puma in development but not Solid Queue, starting Puma would start also Solid Queue supervisor and it'll most likely fail because it won't be properly configured.
 
+**Note**: phased restarts are not supported currently because the plugin requires [app preloading](https://github.com/puma/puma?tab=readme-ov-file#cluster-mode) to work.
 
 ## Jobs and transactional integrity
 :warning: Having your jobs in the same ACID-compliant database as your application data enables a powerful yet sharp tool: taking advantage of transactional integrity to ensure some action in your app is not committed unless your job is also committed and vice versa, and ensuring that your job won't be enqueued until the transaction within which you're enqueuing it is committed. This can be very powerful and useful, but it can also backfire if you base some of your logic on this behaviour, and in the future, you move to another active job backend, or if you simply move Solid Queue to its own database, and suddenly the behaviour changes under you. Because this can be quite tricky and many people shouldn't need to worry about it, by default Solid Queue is configured in a different database as the main app.
@@ -571,6 +615,8 @@ Solid Queue supports defining recurring tasks that run at specific times in the
 bin/jobs --recurring_schedule_file=config/schedule.yml
 ```
 
+You can completely disable recurring tasks by setting the environment variable `SOLID_QUEUE_SKIP_RECURRING=true` or by using the `--skip-recurring` option with `bin/jobs`.
+
 The configuration itself looks like this:
 
 ```yml
data/app/models/solid_queue/job/concurrency_controls.rb
CHANGED
@@ -34,6 +34,10 @@ module SolidQueue
       end
 
       private
+        def concurrency_on_conflict
+          job_class.concurrency_on_conflict.to_s.inquiry
+        end
+
         def acquire_concurrency_lock
           return true unless concurrency_limited?
 
@@ -46,6 +50,14 @@ module SolidQueue
           Semaphore.signal(self)
         end
 
+        def handle_concurrency_conflict
+          if concurrency_on_conflict.discard?
+            destroy
+          else
+            block
+          end
+        end
+
         def block
           BlockedExecution.create_or_find_by!(job_id: id)
         end
data/app/models/solid_queue/job.rb
CHANGED
@@ -29,7 +29,7 @@ module SolidQueue
       active_job.scheduled_at = scheduled_at
 
       create_from_active_job(active_job).tap do |enqueued_job|
-        active_job.provider_job_id = enqueued_job.id
+        active_job.provider_job_id = enqueued_job.id if enqueued_job.persisted?
       end
     end
 
@@ -49,7 +49,7 @@ module SolidQueue
     def create_all_from_active_jobs(active_jobs)
       job_rows = active_jobs.map { |job| attributes_from_active_job(job) }
       insert_all(job_rows)
-      where(active_job_id: active_jobs.map(&:job_id))
+      where(active_job_id: active_jobs.map(&:job_id)).order(id: :asc)
     end
 
     def attributes_from_active_job(active_job)
data/lib/active_job/concurrency_controls.rb
CHANGED
@@ -5,6 +5,7 @@ module ActiveJob
     extend ActiveSupport::Concern
 
     DEFAULT_CONCURRENCY_GROUP = ->(*) { self.class.name }
+    CONCURRENCY_ON_CONFLICT_BEHAVIOUR = %i[ block discard ]
 
     included do
       class_attribute :concurrency_key, instance_accessor: false
@@ -12,14 +13,16 @@ module ActiveJob
 
       class_attribute :concurrency_limit
       class_attribute :concurrency_duration, default: SolidQueue.default_concurrency_control_period
+      class_attribute :concurrency_on_conflict, default: :block
     end
 
     class_methods do
-      def limits_concurrency(key:, to: 1, group: DEFAULT_CONCURRENCY_GROUP, duration: SolidQueue.default_concurrency_control_period)
+      def limits_concurrency(key:, to: 1, group: DEFAULT_CONCURRENCY_GROUP, duration: SolidQueue.default_concurrency_control_period, on_conflict: :block)
         self.concurrency_key = key
         self.concurrency_limit = to
         self.concurrency_group = group
         self.concurrency_duration = duration
+        self.concurrency_on_conflict = on_conflict.presence_in(CONCURRENCY_ON_CONFLICT_BEHAVIOUR) || :block
       end
     end
 
data/lib/active_job/queue_adapters/solid_queue_adapter.rb
CHANGED
@@ -7,7 +7,10 @@ module ActiveJob
     # To use it set the queue_adapter config to +:solid_queue+.
     #
     #   Rails.application.config.active_job.queue_adapter = :solid_queue
-    class SolidQueueAdapter
+    class SolidQueueAdapter < (Rails::VERSION::MAJOR == 7 && Rails::VERSION::MINOR == 1 ? Object : AbstractAdapter)
+      class_attribute :stopping, default: false, instance_writer: false
+      SolidQueue.on_worker_stop { self.stopping = true }
+
       def enqueue_after_transaction_commit?
         true
       end
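The adapter now picks its superclass with a ternary evaluated once at class-definition time, because the abstract adapter superclass isn't available on Rails 7.1. The same pattern in isolation, with a hypothetical `FakeRails` module standing in for Rails' version constants:

```ruby
module FakeRails
  MAJOR = 7
  MINOR = 1 # pretend we're on Rails 7.1
end

class AbstractAdapter; end

# The superclass expression is plain Ruby, evaluated when the class is defined:
# Object on "7.1", AbstractAdapter on anything else.
Adapter = Class.new(FakeRails::MAJOR == 7 && FakeRails::MINOR == 1 ? Object : AbstractAdapter)

puts Adapter.superclass # => Object
```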
data/lib/generators/solid_queue/install/templates/config/recurring.yml
CHANGED
@@ -1,10 +1,15 @@
-#
+# examples:
 # periodic_cleanup:
 #   class: CleanSoftDeletedRecordsJob
 #   queue: background
 #   args: [ 1000, { batch_size: 500 } ]
 #   schedule: every hour
-#
+# periodic_cleanup_with_command:
 #   command: "SoftDeletedRecord.due.delete_all"
 #   priority: 2
 #   schedule: at 5am every day
+
+production:
+  clear_solid_queue_finished_jobs:
+    command: "SolidQueue::Job.clear_finished_in_batches(sleep_between_batches: 0.3)"
+    schedule: every hour at minute 12
data/lib/solid_queue/cli.rb
CHANGED
@@ -13,7 +13,8 @@ module SolidQueue
                                             banner: "SOLID_QUEUE_RECURRING_SCHEDULE"
 
     class_option :skip_recurring, type: :boolean, default: false,
-      desc: "Whether to skip recurring tasks scheduling"
+      desc: "Whether to skip recurring tasks scheduling",
+      banner: "SOLID_QUEUE_SKIP_RECURRING"
 
     def self.exit_on_failure?
       true
data/lib/solid_queue/configuration.rb
CHANGED
@@ -88,7 +88,7 @@ module SolidQueue
         recurring_schedule_file: Rails.root.join(ENV["SOLID_QUEUE_RECURRING_SCHEDULE"] || DEFAULT_RECURRING_SCHEDULE_FILE_PATH),
         only_work: false,
         only_dispatch: false,
-        skip_recurring:
+        skip_recurring: ActiveModel::Type::Boolean.new.cast(ENV["SOLID_QUEUE_SKIP_RECURRING"])
       }
     end
 
data/lib/solid_queue/processes/base.rb
CHANGED
@@ -4,7 +4,8 @@ module SolidQueue
   module Processes
     class Base
       include Callbacks # Defines callbacks needed by other concerns
-      include AppExecutor, Registrable,
+      include AppExecutor, Registrable, Procline
+      prepend Interruptible
 
       attr_reader :name
 
data/lib/solid_queue/processes/interruptible.rb
CHANGED
@@ -2,6 +2,11 @@
 
 module SolidQueue::Processes
   module Interruptible
+    def initialize(...)
+      super
+      @self_pipe = create_self_pipe
+    end
+
     def wake_up
       interrupt
     end
@@ -9,6 +14,8 @@ module SolidQueue::Processes
     private
       SELF_PIPE_BLOCK_SIZE = 11
 
+      attr_reader :self_pipe
+
       def interrupt
         self_pipe[:writer].write_nonblock(".")
       rescue Errno::EAGAIN, Errno::EINTR
@@ -21,14 +28,10 @@ module SolidQueue::Processes
         if time > 0 && self_pipe[:reader].wait_readable(time)
           loop { self_pipe[:reader].read_nonblock(SELF_PIPE_BLOCK_SIZE) }
         end
-      rescue Errno::EAGAIN, Errno::EINTR
+      rescue Errno::EAGAIN, Errno::EINTR, IO::EWOULDBLOCKWaitReadable
       end
 
       # Self-pipe for signal-handling (http://cr.yp.to/docs/selfpipe.html)
-      def self_pipe
-        @self_pipe ||= create_self_pipe
-      end
-
       def create_self_pipe
         reader, writer = IO.pipe
         { reader: reader, writer: writer }
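The self-pipe trick this module relies on can be shown in a minimal, self-contained form: a signal handler writes a byte to a pipe, and a sleeping loop wakes as soon as one arrives instead of waiting out its full timeout, then drains the pipe. This is a sketch of the pattern, not the module's actual code:

```ruby
require "io/wait"

reader, writer = IO.pipe

# Signal-safe wake-up: just write one byte to the pipe
interrupt = lambda do
  begin
    writer.write_nonblock(".")
  rescue IO::WaitWritable, Errno::EINTR
    # pipe full or interrupted: a wake-up is already pending, nothing to do
  end
end

# Sleep up to `time` seconds, but return early if a wake-up byte arrives
interruptible_sleep = lambda do |time|
  begin
    if time > 0 && reader.wait_readable(time)
      loop { reader.read_nonblock(11) } # drain, like SELF_PIPE_BLOCK_SIZE
    end
  rescue IO::WaitReadable, Errno::EINTR
    # pipe drained: read_nonblock raises once it's empty again
  end
end

t = Time.now
interrupt.call               # pretend a signal handler fired
interruptible_sleep.call(5)  # returns almost immediately instead of sleeping 5s
puts Time.now - t < 1        # => true
```

Draining in a loop (rather than a single read) matters because several interrupts may have queued up while the process was busy; one pass empties them all so the next sleep blocks again.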
data/lib/solid_queue/supervisor.rb
CHANGED
@@ -172,8 +172,11 @@ module SolidQueue
       end
     end
 
+    # When a supervised fork crashes or exits we need to mark all the
+    # executions it had claimed as failed so that they can be retried
+    # by some other worker.
     def handle_claimed_jobs_by(terminated_fork, status)
-      if registered_process =
+      if registered_process = SolidQueue::Process.find_by(name: terminated_fork.name)
         error = Processes::ProcessExitError.new(status)
         registered_process.fail_all_claimed_executions_with(error)
       end
data/lib/solid_queue/version.rb
CHANGED
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: solid_queue
 version: !ruby/object:Gem::Version
-  version: 1.
+  version: 1.2.0
 platform: ruby
 authors:
 - Rosa Gutierrez
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2025-
+date: 2025-07-11 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activerecord
@@ -94,6 +94,20 @@ dependencies:
   - - "~>"
     - !ruby/object:Gem::Version
       version: 1.3.1
+- !ruby/object:Gem::Dependency
+  name: appraisal
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
 - !ruby/object:Gem::Dependency
   name: debug
   requirement: !ruby/object:Gem::Requirement