que 1.0.0.beta4 → 1.4.0
- checksums.yaml +4 -4
- data/.github/workflows/{ruby.yml → tests.yml} +8 -4
- data/CHANGELOG.md +409 -293
- data/Dockerfile +20 -0
- data/README.md +129 -28
- data/auto/dev +21 -0
- data/auto/pre-push-hook +30 -0
- data/auto/psql +9 -0
- data/auto/test +5 -0
- data/auto/test-postgres-14 +17 -0
- data/bin/command_line_interface.rb +14 -6
- data/docker-compose.yml +46 -0
- data/docs/README.md +17 -18
- data/lib/que/active_record/model.rb +1 -1
- data/lib/que/connection.rb +4 -0
- data/lib/que/job.rb +35 -17
- data/lib/que/job_buffer.rb +28 -28
- data/lib/que/locker.rb +24 -12
- data/lib/que/migrations/4/up.sql +3 -1
- data/lib/que/migrations/5/down.sql +73 -0
- data/lib/que/migrations/5/up.sql +76 -0
- data/lib/que/migrations.rb +1 -2
- data/lib/que/poller.rb +10 -3
- data/lib/que/version.rb +5 -1
- data/lib/que/worker.rb +12 -4
- data/lib/que.rb +2 -1
- data/scripts/docker-entrypoint +14 -0
- data/scripts/test +5 -0
- metadata +20 -10
- data/CHANGELOG.1.0.beta.md +0 -137
data/Dockerfile
ADDED
@@ -0,0 +1,20 @@
+FROM ruby:2.7.5-slim-buster@sha256:4cbbe2fba099026b243200aa8663f56476950cc64ccd91d7aaccddca31e445b5 AS base
+
+# Install libpq-dev in our base layer, as it's needed in all environments
+RUN apt-get update \
+  && apt-get install -y libpq-dev \
+  && rm -rf /var/lib/apt/lists/*
+
+ENV RUBY_BUNDLER_VERSION 2.3.7
+RUN gem install bundler -v $RUBY_BUNDLER_VERSION
+
+ENV BUNDLE_PATH /usr/local/bundle
+
+ENV RUBYOPT=-W:deprecated
+
+FROM base AS dev-environment
+
+# Install build-essential and git, as we'd need them for building gems that have native code components
+RUN apt-get update \
+  && apt-get install -y build-essential git \
+  && rm -rf /var/lib/apt/lists/*
data/README.md
CHANGED
@@ -1,60 +1,65 @@
-# Que
+# Que ![tests](https://github.com/que-rb/que/workflows/tests/badge.svg)
 
-**This README and the rest of the docs on the master branch all refer to Que 1.0
+**This README and the rest of the docs on the master branch all refer to Que 1.0. If you're using version 0.x, please refer to the docs on [the 0.x branch](https://github.com/que-rb/que/tree/0.x).**
 
 *TL;DR: Que is a high-performance job queue that improves the reliability of your application by protecting your jobs with the same [ACID guarantees](https://en.wikipedia.org/wiki/ACID) as the rest of your data.*
 
 Que ("keɪ", or "kay") is a queue for Ruby and PostgreSQL that manages jobs using [advisory locks](http://www.postgresql.org/docs/current/static/explicit-locking.html#ADVISORY-LOCKS), which gives it several advantages over other RDBMS-backed queues:
 
-
-
-
+- **Concurrency** - Workers don't block each other when trying to lock jobs, as often occurs with "SELECT FOR UPDATE"-style locking. This allows for very high throughput with a large number of workers.
+- **Efficiency** - Locks are held in memory, so locking a job doesn't incur a disk write. These first two points are what limit performance with other queues. Under heavy load, Que's bottleneck is CPU, not I/O.
+- **Safety** - If a Ruby process dies, the jobs it's working won't be lost, or left in a locked or ambiguous state - they immediately become available for any other worker to pick up.
 
 Additionally, there are the general benefits of storing jobs in Postgres, alongside the rest of your data, rather than in Redis or a dedicated queue:
 
-
-
-
-
+- **Transactional Control** - Queue a job along with other changes to your database, and it'll commit or rollback with everything else. If you're using ActiveRecord or Sequel, Que can piggyback on their connections, so setup is simple and jobs are protected by the transactions you're already using.
+- **Atomic Backups** - Your jobs and data can be backed up together and restored as a snapshot. If your jobs relate to your data (and they usually do), there's no risk of jobs falling through the cracks during a recovery.
+- **Fewer Dependencies** - If you're already using Postgres (and you probably should be), a separate queue is another moving part that can break.
+- **Security** - Postgres' support for SSL connections keeps your data safe in transport, for added protection when you're running workers on cloud platforms that you can't completely control.
 
-Que's primary goal is reliability. You should be able to leave your application running indefinitely without worrying about jobs being lost due to a lack of transactional support, or left in limbo due to a crashing process. Que does everything it can to ensure that jobs you queue are performed exactly once (though the occasional repetition of a job can be impossible to avoid - see the docs on [how to write a reliable job](
+Que's primary goal is reliability. You should be able to leave your application running indefinitely without worrying about jobs being lost due to a lack of transactional support, or left in limbo due to a crashing process. Que does everything it can to ensure that jobs you queue are performed exactly once (though the occasional repetition of a job can be impossible to avoid - see the docs on [how to write a reliable job](/docs/README.md#writing-reliable-jobs)).
 
 Que's secondary goal is performance. The worker process is multithreaded, so that a single process can run many jobs simultaneously.
 
 Compatibility:
+
 - MRI Ruby 2.2+
 - PostgreSQL 9.5+
 - Rails 4.1+ (optional)
 
 **Please note** - Que's job table undergoes a lot of churn when it is under high load, and like any heavily-written table, is susceptible to bloat and slowness if Postgres isn't able to clean it up. The most common cause of this is long-running transactions, so it's recommended to try to keep all transactions against the database housing Que's job table as short as possible. This is good advice to remember for any high-activity database, but bears emphasizing when using tables that undergo a lot of writes.
 
-
 ## Installation
 
 Add this line to your application's Gemfile:
 
-
+```ruby
+gem 'que'
+```
 
 And then execute:
 
-
+```bash
+bundle
+```
 
 Or install it yourself as:
 
-
-
+```bash
+gem install que
+```
 
 ## Usage
 
 First, create the queue schema in a migration. For example:
 
-```
+```ruby
 class CreateQueSchema < ActiveRecord::Migration[5.0]
   def up
     # Whenever you use Que in a migration, always specify the version you're
     # migrating to. If you're unsure what the current version is, check the
     # changelog.
-    Que.migrate!(version:
+    Que.migrate!(version: 5)
   end
 
   def down
@@ -66,7 +71,7 @@ end
 
 Create a class for each type of job you want to run:
 
-```
+```ruby
 # app/jobs/charge_credit_card.rb
 class ChargeCreditCard < Que::Job
   # Default settings for this job. These are optional - without them, jobs
@@ -101,7 +106,7 @@ end
 
 Queue your job. Again, it's best to do this in a transaction with other changes you're making. Also note that any arguments you pass will be serialized to JSON and back again, so stick to simple types (strings, integers, floats, hashes, and arrays).
 
-```
+```ruby
 CreditCard.transaction do
   # Persist credit card information
   card = CreditCard.create(params[:credit_card])
@@ -111,11 +116,18 @@ end
 
 You can also add options to run the job after a specific time, or with a specific priority:
 
-```
+```ruby
 ChargeCreditCard.enqueue card.id, user_id: current_user.id, run_at: 1.day.from_now, priority: 5
 ```
+## Running the Que Worker
+In order to process jobs, you must start a separate worker process outside of your main server.
+
+```bash
+bundle exec que
+```
+
+Try running `que -h` to get a list of runtime options:
 
-Finally, you can work jobs using the included `que` CLI. Try running `que -h` to get a list of runtime options:
 ```
 $ que -h
 usage: que [options] [file/to/require] ...
@@ -130,9 +142,26 @@ You may need to pass que a file path to require so that it can load your app. Qu
 
 If you're using ActiveRecord to dump your database's schema, please [set your schema_format to :sql](http://guides.rubyonrails.org/migrations.html#types-of-schema-dumps) so that Que's table structure is managed correctly. This is a good idea regardless, as the `:ruby` schema format doesn't support many of PostgreSQL's advanced features.
 
-Pre-1.0, the default queue name needed to be configured in order for Que to work out of the box with Rails. In 1.0 the default queue name is now 'default', as Rails expects, but when Rails enqueues some types of jobs it may try to use another queue name that isn't worked by default
+Pre-1.0, the default queue name needed to be configured in order for Que to work out of the box with Rails. In 1.0 the default queue name is now 'default', as Rails expects, but when Rails enqueues some types of jobs it may try to use another queue name that isn't worked by default. You can either:
+
+- [Configure Rails](https://guides.rubyonrails.org/configuring.html) to send all internal job types to the 'default' queue by adding the following to `config/application.rb`:
+
+```ruby
+config.action_mailer.deliver_later_queue_name = :default
+config.action_mailbox.queues.incineration = :default
+config.action_mailbox.queues.routing = :default
+config.active_storage.queues.analysis = :default
+config.active_storage.queues.purge = :default
+```
+
+- [Tell que](/docs#multiple-queues) to work all of these queues (less efficient because it requires polling all of them):
+
+```bash
+que -q default -q mailers -q action_mailbox_incineration -q action_mailbox_routing -q active_storage_analysis -q active_storage_purge
+```
 
 Also, if you would like to integrate Que with Active Job, you can do it by setting the adapter in `config/application.rb` or in a specific environment by setting it in `config/environments/production.rb`, for example:
+
 ```ruby
 config.active_job.queue_adapter = :que
 ```
@@ -147,34 +176,106 @@ If you later decide to switch a job from Active Job to Que to have transactional
 
 There are a couple ways to do testing. You may want to set `Que::Job.run_synchronously = true`, which will cause JobClass.enqueue to simply execute the job's logic synchronously, as if you'd run JobClass.run(*your_args). Or, you may want to leave it disabled so you can assert on the job state once they are stored in the database.
 
+## Documentation
+
+**For full documentation, see [here](docs/README.md)**.
+
 ## Related Projects
 
 These projects are tested to be compatible with Que 1.x:
 
 - [que-web](https://github.com/statianzo/que-web) is a Sinatra-based UI for inspecting your job queue.
 - [que-scheduler](https://github.com/hlascelles/que-scheduler) lets you schedule tasks using a cron style config file.
+- [que-locks](https://github.com/airhorns/que-locks) lets you lock around job execution for so only one job runs at once for a set of arguments.
 
 If you have a project that uses or relates to Que, feel free to submit a PR adding it to the list!
 
 ## Community and Contributing
 
-
-
-
+- For bugs in the library, please feel free to [open an issue](https://github.com/que-rb/que/issues/new).
+- For general discussion and questions/concerns that don't relate to obvious bugs, join our [Discord Server](https://discord.gg/B3EW32H).
+- For contributions, pull requests submitted via Github are welcome.
 
 Regarding contributions, one of the project's priorities is to keep Que as simple, lightweight and dependency-free as possible, and pull requests that change too much or wouldn't be useful to the majority of Que's users have a good chance of being rejected. If you're thinking of submitting a pull request that adds a new feature, consider starting a discussion in [que-talk](https://groups.google.com/forum/#!forum/que-talk) first about what it would do and how it would be implemented. If it's a sufficiently large feature, or if most of Que's users wouldn't find it useful, it may be best implemented as a standalone gem, like some of the related projects above.
 
-
 ### Specs
 
 A note on running specs - Que's worker system is multithreaded and therefore prone to race conditions. As such, if you've touched that code, a single spec run passing isn't a guarantee that any changes you've made haven't introduced bugs. One thing I like to do before pushing changes is rerun the specs many times and watching for hangs. You can do this from the command line with something like:
 
-
+```bash
+for i in {1..1000}; do SEED=$i bundle exec rake; done
+```
 
 This will iterate the specs one thousand times, each with a different ordering. If the specs hang, note what the seed number was on that iteration. For example, if the previous specs finished with a "Randomized with seed 328", you know that there's a hang with seed 329, and you can narrow it down to a specific spec with:
 
-
+```bash
+for i in {1..1000}; do LOG_SPEC=true SEED=328 bundle exec rake; done
+```
 
 Note that we iterate because there's no guarantee that the hang would reappear with a single additional run, so we need to rerun the specs until it reappears. The LOG_SPEC parameter will output the name and file location of each spec before it is run, so you can easily tell which spec is hanging, and you can continue narrowing things down from there.
 
 Another helpful technique is to replace an `it` spec declaration with `hit` - this will run that particular spec 100 times during the run.
+
+#### With Docker
+
+We've provided a Dockerised environment to avoid the need to manually: install Ruby, install the gem bundle, set up Postgres, and connect to the database.
+
+To run the specs using this environment, run:
+
+```bash
+./auto/test
+```
+
+To get a shell in the environment, run:
+
+```bash
+./auto/dev
+```
+
+The [Docker Compose config](docker-compose.yml) provides a convenient way to inject your local shell aliases into the Docker container. Simply create a file containing your alias definitions (or which sources them from other files) at `~/.docker-rc.d/.docker-bashrc`, and they will be available inside the container.
+
+#### Without Docker
+
+You'll need to have Postgres running. Assuming you have it running on port 5697, with a `que-test` database, and a username & password of `que`, you can run:
+
+```bash
+DATABASE_URL=postgres://que:que@localhost:5697/que-test bundle exec rake
+```
+
+If you don't already have Postgres, you could use Docker Compose to run just the database:
+
+```bash
+docker compose up -d db
+```
+
+If you want to try a different version of Postgres, e.g. 12:
+
+```bash
+export POSTGRES_VERSION=12
+```
+
+### Git pre-push hook
+
+So we can avoid breaking the build, we've created Git pre-push hooks to verify everything is ok before pushing.
+
+To set up the pre-push hook locally, run:
+
+```bash
+echo -e "#\!/bin/bash\n\$(dirname \$0)/../../auto/pre-push-hook" > .git/hooks/pre-push
+chmod +x .git/hooks/pre-push
+```
+
+### Release process
+
+The process for releasing a new version of the gem is:
+
+- Merge PR(s)
+- Git pull locally
+- Update the version number, bundle install, and commit
+- Update `CHANGELOG.md`, and commit
+- Tag the commit with the version number, prefixed by `v`
+- Git push to master
+- Git push the tag
+- Publish the new version of the gem to RubyGems: `gem build -o que.gem && gem push que.gem`
+- Create a GitHub release - rather than describe anything there, link to the heading for the release in `CHANGELOG.md`
+- Post on the Que Discord in `#announcements`
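A side note on the testing guidance in the README hunk above: it describes `Que::Job.run_synchronously = true` but doesn't show it in use. Below is a minimal, hypothetical sketch of that approach; `SyncTestJob` and `RECORDED` are illustrative names, not part of this diff, and a configured test database is assumed as elsewhere in the README.

```ruby
require 'que'

RECORDED = []

class SyncTestJob < Que::Job
  def run(a, b)
    RECORDED << (a + b) # stand-in for real job logic
  end
end

Que::Job.run_synchronously = true  # enqueue now executes the job's logic inline
SyncTestJob.enqueue(1, 2)          # runs immediately, as if SyncTestJob.run(1, 2) had been called
puts RECORDED.inspect              #=> [3]
Que::Job.run_synchronously = false # back to normal: enqueue stores a job for a worker to pick up
```

Leaving the flag off instead lets a test enqueue normally and assert on the stored job state, which is the other approach the README mentions.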
data/auto/dev
ADDED
@@ -0,0 +1,21 @@
+#!/bin/bash
+#
+# Operate in development environment
+
+set -Eeuo pipefail
+
+cd "$(dirname "$0")/.."
+
+docker compose build dev
+
+# Delete containers and DB volume afterwards on CI
+if [[ "${CI-}" == "true" ]]; then
+  trap '{
+    echo "Stopping containers..."
+    docker compose down
+    docker volume rm -f que_db-data
+  }' EXIT
+fi
+
+set -x
+docker compose run --rm dev "${@-bash}"
data/auto/pre-push-hook
ADDED
@@ -0,0 +1,30 @@
+#!/bin/bash
+
+set -Eeuo pipefail
+
+cd "$(dirname "$0")/.."
+
+green='\e[32m'; blue='\e[36m'; red='\e[31m'; bold='\e[1m'; reset='\e[0m'
+coloured-arrow() { printf "$bold$1==> $2$reset\n"; }
+success() { coloured-arrow "$green" "$1"; }
+info() { coloured-arrow "$blue" "$1"; }
+err() { coloured-arrow "$red" "$1"; exit 1; }
+
+info 'Running pre-push hook...'
+
+on-exit() {
+  [[ -n "${succeeded-}" ]] || err 'Pre-push checks failed'
+}
+trap on-exit EXIT
+
+info 'Checking for uncommitted changes...'
+[[ -z $(git status -s) ]] || err 'ERROR: You have uncommited changes'
+
+info 'Checking bundle...'
+bundle check --dry-run || bundle install
+
+info 'Specs...'
+auto/test
+
+succeeded=true
+success 'All pre-push checks passed! =)'
data/auto/psql
ADDED
data/auto/test
ADDED
data/auto/test-postgres-14
ADDED
@@ -0,0 +1,17 @@
+#!/bin/bash
+
+set -Eeuo pipefail
+
+export POSTGRES_VERSION=14
+
+delete_db() {
+  docker compose down
+  docker volume rm -f que_db-data
+}
+
+trap delete_db EXIT
+
+# pre-test cleanup is necessary as the existing db container will be used if it's running (potentially with the wrong PG version)
+delete_db
+"$(dirname "$0")"/test "$@"
+delete_db
data/bin/command_line_interface.rb
CHANGED
@@ -138,10 +138,9 @@ module Que
     opts.on(
       '--minimum-buffer-size [SIZE]',
       Integer,
-      "
-      "process awaiting a worker (default: 2)",
+      "Unused (deprecated)",
     ) do |s|
-
+      warn "The --minimum-buffer-size SIZE option has been deprecated and will be removed in v2.0 (it's actually been unused since v1.0.0.beta4)"
     end
 
     opts.on(
@@ -183,8 +182,8 @@ OUTPUT
     args.each do |file|
       begin
         require file
-      rescue LoadError
-        output.puts "Could not load file '#{file}'"
+      rescue LoadError => e
+        output.puts "Could not load file '#{file}': #{e}"
         return 1
       end
     end
@@ -232,7 +231,16 @@ OUTPUT
     $stop_que_executable = false
     %w[INT TERM].each { |signal| trap(signal) { $stop_que_executable = true } }
 
-    output.puts
+    output.puts(
+      <<~STARTUP
+        Que #{Que::VERSION} started worker process with:
+        Worker threads: #{locker.workers.length} (priorities: #{locker.workers.map { |w| w.priority || 'any' }.join(', ')})
+        Buffer size: #{locker.job_buffer.maximum_size}
+        Queues:
+        #{locker.queues.map { |queue, interval| " - #{queue} (poll interval: #{interval}s)" }.join("\n")}
+        Que waiting for jobs...
+      STARTUP
+    )
 
     loop do
       sleep 0.01
data/docker-compose.yml
ADDED
@@ -0,0 +1,46 @@
+version: "3.7"
+
+services:
+
+  dev:
+    build:
+      context: .
+      target: dev-environment
+    depends_on:
+      - db
+    volumes:
+      - .:/work
+      - ruby-2.7.5-gem-cache:/usr/local/bundle
+      - ~/.docker-rc.d/:/.docker-rc.d/:ro
+    working_dir: /work
+    entrypoint: /work/scripts/docker-entrypoint
+    command: bash
+    environment:
+      DATABASE_URL: postgres://que:que@db/que-test
+
+  db:
+    image: "postgres:${POSTGRES_VERSION-13}"
+    volumes:
+      - db-data:/var/lib/postgresql/data
+    environment:
+      POSTGRES_USER: que
+      POSTGRES_PASSWORD: que
+      POSTGRES_DB: que-test
+    ports:
+      - 5697:5432
+
+  pg-dev:
+    image: "postgres:${POSTGRES_VERSION-13}"
+    depends_on:
+      - db
+    entrypoint: []
+    command: psql
+    environment:
+      PGHOST: db
+      PGUSER: que
+      PGPASSWORD: que
+      PGDATABASE: que-test
+
+volumes:
+  db-data: ~
+  ruby-2.7.5-gem-cache: ~
data/docs/README.md
CHANGED
@@ -4,7 +4,7 @@ Docs Index
 - [Command Line Interface](#command-line-interface)
   * [worker-priorities and worker-count](#worker-priorities-and-worker-count)
   * [poll-interval](#poll-interval)
-  * [
+  * [maximum-buffer-size](#maximum-buffer-size)
   * [connection-url](#connection-url)
   * [wait-period](#wait-period)
   * [log-internals](#log-internals)
@@ -37,9 +37,9 @@ Docs Index
   * [destroy](#destroy)
   * [finish](#finish)
   * [expire](#expire)
-  * [retry_in](#
-  * [error_count](#
-  * [default_resolve_action](#
+  * [retry_in](#retry_in)
+  * [error_count](#error_count)
+  * [default_resolve_action](#default_resolve_action)
 - [Writing Reliable Jobs](#writing-reliable-jobs)
   * [Timeouts](#timeouts)
 - [Middleware](#middleware)
@@ -62,7 +62,6 @@ usage: que [options] [file/to/require] ...
 --connection-url [URL] Set a custom database url to connect to for locking purposes.
 --log-internals Log verbosely about Que's internal state. Only recommended for debugging issues
 --maximum-buffer-size [SIZE] Set maximum number of jobs to be locked and held in this process awaiting a worker (default: 8)
---minimum-buffer-size [SIZE] Set minimum number of jobs to be locked and held in this process awaiting a worker (default: 2)
 --wait-period [PERIOD] Set maximum interval between checks of the in-memory job queue, in milliseconds (default: 50)
 ```
 
@@ -82,9 +81,9 @@ If you pass both worker-count and worker-priorities, the count will trim or pad
 
 This option sets the number of seconds the process will wait between polls of the job queue. Jobs that are ready to be worked immediately will be broadcast via the LISTEN/NOTIFY system, so polling is unnecessary for them - polling is only necessary for jobs that are scheduled in the future or which are being delayed due to errors. The default is 5 seconds.
 
-###
+### maximum-buffer-size
 
-
+This option sets the size of the internal buffer that Que uses to hold jobs until they're ready for workers. The default maximum is 8, meaning that the process won't buffer more than 8 jobs that aren't yet ready to be worked. If you don't want jobs to be buffered at all, you can set this value to zero.
 
 ### connection-url
 
@@ -122,18 +121,18 @@ ActiveRecord::Base.transaction do
 end
 ```
 
-There are other docs to read if you're using [Sequel](#using-sequel) or [plain Postgres connections](#using-plain-connections) (with no ORM at all) instead of ActiveRecord.
+There are other docs to read if you're using [Sequel](#using-sequel) or [plain Postgres connections](#using-plain-postgres-connections) (with no ORM at all) instead of ActiveRecord.
 
 ### Managing the Jobs Table
 
 After you've connected Que to the database, you can manage the jobs table. You'll want to migrate to a specific version in a migration file, to ensure that they work the same way even when you upgrade Que in the future:
 
 ```ruby
-# Update the schema to version #
-Que.migrate!
+# Update the schema to version #5.
+Que.migrate!(version: 5)
 
 # Remove Que's jobs table entirely.
-Que.migrate!
+Que.migrate!(version: 0)
 ```
 
 There's also a helper method to clear all jobs from the jobs table:
@@ -405,11 +404,11 @@ Some new releases of Que may require updates to the database schema. It's recomm
 ```ruby
 class UpdateQue < ActiveRecord::Migration[5.0]
   def self.up
-    Que.migrate!
+    Que.migrate!(version: 3)
   end
 
   def self.down
-    Que.migrate!
+    Que.migrate!(version: 2)
   end
 end
 ```
@@ -418,7 +417,7 @@ This will make sure that your database schema stays consistent with your codebas
 
 ```ruby
 # Change schema to version 3.
-Que.migrate!
+Que.migrate!(version: 3)
 
 # Check your current schema version.
 Que.db_version #=> 3
@@ -463,7 +462,7 @@ Que.enqueue current_user.id, job_class: 'ProcessCreditCard', queue: 'credit_card
 
 To ensure safe operation, Que needs to be very careful in how it shuts down. When a Ruby process ends normally, it calls Thread#kill on any threads that are still running - unfortunately, if a thread is in the middle of a transaction when this happens, there is a risk that it will be prematurely commited, resulting in data corruption. See [here](http://blog.headius.com/2008/02/ruby-threadraise-threadkill-timeoutrb.html) and [here](http://coderrr.wordpress.com/2011/05/03/beware-of-threadkill-or-your-activerecord-transactions-are-in-danger-of-being-partially-committed/) for more detail on this.
 
-To prevent this, Que will block the worker process from exiting until all jobs it is working have completed normally. Unfortunately, if you have long-running jobs, this may take a very long time (and if something goes wrong with a job's logic, it may never happen). The solution in this case is SIGKILL - luckily, Ruby processes that are killed via SIGKILL will end without using Thread#kill on its running threads. This is safer than exiting normally - when PostgreSQL loses the connection it will simply roll back the open transaction, if any, and unlock the job so it can be retried later by another worker. Be sure to read [Writing Reliable Jobs](#writing-reliable-jobs
+To prevent this, Que will block the worker process from exiting until all jobs it is working have completed normally. Unfortunately, if you have long-running jobs, this may take a very long time (and if something goes wrong with a job's logic, it may never happen). The solution in this case is SIGKILL - luckily, Ruby processes that are killed via SIGKILL will end without using Thread#kill on its running threads. This is safer than exiting normally - when PostgreSQL loses the connection it will simply roll back the open transaction, if any, and unlock the job so it can be retried later by another worker. Be sure to read [Writing Reliable Jobs](#writing-reliable-jobs) for information on how to design your jobs to fail safely.
 
 So, be prepared to use SIGKILL on your Ruby processes if they run for too long. For example, Heroku takes a good approach to this - when Heroku's platform is shutting down a process, it sends SIGTERM, waits ten seconds, then sends SIGKILL if the process still hasn't exited. This is a nice compromise - it will give each of your currently running jobs ten seconds to complete, and any jobs that haven't finished by then will be interrupted and retried later.
 
@@ -550,11 +549,11 @@ require 'que'
 Sequel.migration do
   up do
     Que.connection = self
-    Que.migrate!
+    Que.migrate!(version: 5)
   end
   down do
     Que.connection = self
-    Que.migrate!
+    Que.migrate!(version: 0)
   end
 end
 ```
@@ -593,7 +592,7 @@ Additionally, including `Que::ActiveJob::JobExtensions` lets you define a run()
 
 ## Job Helper Methods
 
-There are a number of instance methods on Que::Job that you can use in your jobs, preferably in transactions. See [Writing Reliable Jobs](
+There are a number of instance methods on Que::Job that you can use in your jobs, preferably in transactions. See [Writing Reliable Jobs](#writing-reliable-jobs) for more information on where to use these methods.
 
 ### destroy
 
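The helper-method index fixed above lists `destroy`, `finish`, `expire`, `retry_in`, `error_count` and `default_resolve_action`. A hedged sketch of the pattern the surrounding docs describe - resolving the job inside the same transaction as its work; `PurgeStaleSessions` and the commented-out query are hypothetical, not part of this diff.

```ruby
# Illustrative only; assumes ActiveRecord is connected as described in the docs.
class PurgeStaleSessions < Que::Job
  def run(cutoff)
    ActiveRecord::Base.transaction do
      # Stand-in for the job's real work, e.g.:
      # Session.where("updated_at < ?", cutoff).delete_all

      # `destroy` (one of the helpers listed above) deletes this job's row,
      # so the work and the job's resolution commit or roll back together.
      destroy
    end
  end
end
```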
data/lib/que/connection.rb
CHANGED