solid_queue 1.0.0.beta → 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 553dc884dd38c24c16196da9cceb6e7b40d02bd3f5c005bc980065232b4b1682
+  data.tar.gz: dba6309954b326305526417c05c1a5878a0195d759e8cd6fb6c4453ee61cd6c0
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: dc6ad2d8abd513cd9f357a1190c2173c978b46dd800d1b3f4040364cd70c44b5271da7108901c67a2811586d746607eac4f999c8886a56a1942a228f0f3d1d11
+  data.tar.gz: 999a90c18c09350609670b29bdd3f5e2596815f0a5edb194b225634b9169eca9efaf3e8efda23cfe346e6947e9477b983bbcae1a8a1adf3c6fc2ae9affd4e2ff
data/README.md
CHANGED
@@ -226,7 +226,6 @@ There are several settings that control how Solid Queue works that you can set a
 
   **This is not used for errors raised within a job execution**. Errors happening in jobs are handled by Active Job's `retry_on` or `discard_on`, and ultimately will result in [failed jobs](#failed-jobs-and-retries). This is for errors happening within Solid Queue itself.
 
-- `connects_to`: a custom database configuration that will be used in the abstract `SolidQueue::Record` Active Record model. This is required to use a different database than the main app. For example:
 - `use_skip_locked`: whether to use `FOR UPDATE SKIP LOCKED` when performing locking reads. This will be automatically detected in the future, and for now, you'd only need to set this to `false` if your database doesn't support it. For MySQL, that'd be versions < 8, and for PostgreSQL, versions < 9.5. If you use SQLite, this has no effect, as writes are sequential.
 - `process_heartbeat_interval`: the heartbeat interval that all processes will follow—defaults to 60 seconds.
 - `process_alive_threshold`: how long to wait until a process is considered dead after its last heartbeat—defaults to 5 minutes.
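The options in this hunk are global Solid Queue settings. As an illustrative aside (not part of the diff), and assuming they are exposed under `config.solid_queue` in the Rails application or environment configuration as the README section referenced by the hunk header describes, they might be set roughly like this; the values are examples, not defaults you need to change:

```ruby
# Illustrative sketch only: option names come from the README excerpt above,
# the values are examples.
Rails.application.configure do
  # Set to false only if the database lacks FOR UPDATE SKIP LOCKED
  # (MySQL < 8, PostgreSQL < 9.5); it has no effect on SQLite.
  config.solid_queue.use_skip_locked = false

  # Heartbeat cadence followed by all Solid Queue processes (default: 60 seconds).
  config.solid_queue.process_heartbeat_interval = 60.seconds

  # How long after the last heartbeat a process is considered dead (default: 5 minutes).
  config.solid_queue.process_alive_threshold = 5.minutes
end
```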
@@ -381,13 +380,14 @@ bin/jobs --recurring_schedule_file=config/schedule.yml
 The configuration itself looks like this:
 
 ```yml
-
-
-
-
-
-
-
+production:
+  a_periodic_job:
+    class: MyJob
+    args: [ 42, { status: "custom_status" } ]
+    schedule: every second
+  a_cleanup_task:
+    command: "DeletedStuff.clear_all"
+    schedule: every day at 9am
 ```
 
 Tasks are specified as a hash/dictionary, where the key will be the task's key internally. Each task needs to either have a `class`, which will be the job class to enqueue, or a `command`, which will be eval'ed in the context of a job (`SolidQueue::RecurringJob`) that will be enqueued according to its schedule, in the `solid_queue_recurring` queue.
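As a rough sketch of the paragraph above (not part of the diff): a task with a `class` is enqueued like any Active Job with the configured `args`, while a task with a `command` is enqueued as a `SolidQueue::RecurringJob` that evals the string on the `solid_queue_recurring` queue. The call shapes below are hypothetical and only meant to convey the idea:

```ruby
# Hypothetical sketch based on the README text above; the gem's internals may differ.

# The `a_periodic_job` entry (class: MyJob) amounts to enqueuing the job
# with the configured arguments at each scheduled time:
MyJob.perform_later(42, { status: "custom_status" })

# The `a_cleanup_task` entry (command: "DeletedStuff.clear_all") is wrapped in
# SolidQueue::RecurringJob, which evals the command string when performed:
SolidQueue::RecurringJob.perform_later("DeletedStuff.clear_all")
```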
@@ -409,6 +409,8 @@ Tasks are enqueued at their corresponding times by the scheduler, and each task
 
 It's possible to run multiple schedulers with the same `recurring_tasks` configuration, for example, if you have multiple servers for redundancy, and you run the `scheduler` in more than one of them. To avoid enqueuing duplicate tasks at the same time, an entry in a new `solid_queue_recurring_executions` table is created in the same transaction as the job is enqueued. This table has a unique index on `task_key` and `run_at`, ensuring only one entry per task per time will be created. This only works if you have `preserve_finished_jobs` set to `true` (the default), and the guarantee applies as long as you keep the jobs around.
 
+**Note**: a single recurring schedule is supported, so you can have multiple schedulers using the same schedule, but not multiple schedulers using different configurations.
+
 Finally, it's possible to configure jobs that aren't handled by Solid Queue. That is, you can have a job like this in your app:
 ```ruby
 class MyResqueJob < ApplicationJob
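The deduplication guarantee described in this hunk rests on a database uniqueness constraint. A minimal sketch of the idea, using only the table and column names mentioned in the paragraph (`solid_queue_recurring_executions`, `task_key`, `run_at`); this is not the gem's actual migration:

```ruby
# Hypothetical migration sketch. With a unique index on (task_key, run_at),
# when several schedulers enqueue the same task for the same run time, only
# the first insert succeeds inside its transaction; the others hit the
# constraint and skip the duplicate enqueue.
class AddRecurringExecutionUniqueness < ActiveRecord::Migration[7.1]
  def change
    add_index :solid_queue_recurring_executions, [:task_key, :run_at], unique: true
  end
end
```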
@@ -1,9 +1,10 @@
-#
-#
-#
-#
-#
-#
-#
-#
-#
+# production:
+#   periodic_cleanup:
+#     class: CleanSoftDeletedRecordsJob
+#     queue: background
+#     args: [ 1000, { batch_size: 500 } ]
+#     schedule: every hour
+#   periodic_command:
+#     command: "SoftDeletedRecord.due.delete_all"
+#     priority: 2
+#     schedule: at 5am every day
data/lib/solid_queue/version.rb
CHANGED
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: solid_queue
 version: !ruby/object:Gem::Version
-  version: 1.0.0.beta
+  version: 1.0.0
 platform: ruby
 authors:
 - Rosa Gutierrez
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2024-09-
+date: 2024-09-26 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activerecord