que 1.0.0 → 1.3.0
- checksums.yaml +4 -4
- data/CHANGELOG.md +46 -0
- data/Dockerfile +20 -0
- data/README.md +112 -35
- data/auto/dev +21 -0
- data/auto/pre-push-hook +30 -0
- data/auto/psql +9 -0
- data/auto/test +5 -0
- data/auto/test-postgres-14 +17 -0
- data/bin/command_line_interface.rb +10 -1
- data/docker-compose.yml +46 -0
- data/docs/README.md +8 -8
- data/lib/que/job.rb +35 -17
- data/lib/que/locker.rb +21 -7
- data/lib/que/migrations/4/up.sql +3 -1
- data/lib/que/migrations/5/down.sql +73 -0
- data/lib/que/migrations/5/up.sql +76 -0
- data/lib/que/migrations.rb +1 -2
- data/lib/que/poller.rb +2 -0
- data/lib/que/version.rb +5 -1
- data/lib/que/worker.rb +2 -1
- data/scripts/docker-entrypoint +14 -0
- data/scripts/test +5 -0
- metadata +14 -3
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 382e70b455239bf58eda4d83e5772400b55055f98857d2b83fdfc7f36f26d856
+  data.tar.gz: dfad699d5bb996c965e3216de224fc9248a3cb2be70cce04f8407ed9b5ee0651
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: d1c34fc4b2ff8ce02b9b70f8fe9e9d361c8e4e701e36987690464a5f62263905e29453c9dbcb65bce9bda19d511d1430b1e235f1ad8e86cbd1e136e189e71015
+  data.tar.gz: 9cb2182726f813fb53c3cb6f2a41dc4bb482810448673a8f8e704860db119e1ad44a64613f3d688df1085f0686ff51ea74a0d1b46bc7353d79d81945cf3555e8
data/CHANGELOG.md
CHANGED
@@ -1,4 +1,50 @@
+### 1.3.0 (2022-02-25)
+
+**ACTION REQUIRED**
+
+This release will allow you to safely upgrade to Que 2 when it comes out, without first needing to empty your `que_jobs` table.
+
+**You will need to first update to this version, apply the Que schema migration, and deploy, before you can safely begin the process of upgrading to Que 2.**
+
+Que 2 will bring Ruby 3 support, but to do that, the job arguments in the `que_jobs` table will need to be split into two columns - repurposing the existing one for positional arguments only (`args`), and adding a new one for keyword arguments (`kwargs`). This is so that Que running on Ruby 3, when reading job arguments stored in the database, can disambiguate between keyword arguments and a final positional-argument hash.
+
+The args split hasn't happened yet, but when it does, we will still need to be able to process all the existing queued jobs that still have their keyword arguments in the `args` column. Our solution is for you to run both Que 1 workers and Que 2 workers simultaneously during the upgrade, each processing only the jobs enqueued by that version. Once all the Que 1 jobs are processed, the Que 1 workers can be retired.
+
+To let the different worker versions tell which jobs belong to which, this version adds a new column to the `que_jobs` table, `job_schema_version`. Jobs enqueued with Que 1 will have a `1` here, and jobs from Que 2 will have a `2`. Que schema migration 5 will default the job schema version of all existing jobs to `1`.
+
+You will need to migrate Que to the latest Que schema version (5). For instance, with ActiveRecord on Rails 6:
+
+```ruby
+class UpdateQueTablesToVersion5 < ActiveRecord::Migration[6.0]
+  def up
+    Que.migrate!(version: 5)
+  end
+  def down
+    Que.migrate!(version: 4)
+  end
+end
+```
+
+You must apply the schema migration and deploy, so that all workers are upgraded.
+
+No further action is required from you at this stage. The Que 2 release changelog will provide full upgrade instructions for the process briefly described above of simultaneously running both Que 1 & 2 workers. Note that you won't be required to upgrade from Ruby 2.7 to Ruby 3 at the same time as upgrading to Que 2.
+
+If you use any Que plugins or custom code that interacts with the `que_jobs` table, you will need to make sure they are updated for compatibility with Que 2 before you can upgrade: they'll need to make use of the `kwargs` column, and, when inserting jobs, put the result of `Que.job_schema_version` into the `job_schema_version` column rather than continuing to rely on its default of `1`.
+
+### 1.2.0 (2022-02-23)
+
+- **Deprecated**
+  + Providing job options as top-level keyword arguments to `Job.enqueue` is now deprecated. Support will be dropped in `2.0`. Job options should be nested under the `job_options` keyword arg instead. See [#336](https://github.com/que-rb/que/pull/336)
+
+### 1.1.0 (2022-02-21)
+
+- **Features**:
+  + Add backtrace to errors, by [@trammel](https://github.com/trammel) in [#328](https://github.com/que-rb/que/pull/328)
+- **Internal**:
+  + Add Dockerised development environment, in [#324](https://github.com/que-rb/que/pull/324)
+
 ### 1.0.0 (2022-01-24)
+
 _This release does not add any changes on top of 1.0.0.beta5._
 
 ### 1.0.0.beta5 (2021-12-23)
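The changelog's note about plugins that insert into `que_jobs` directly can be sketched in a few lines. This is a hypothetical illustration, not Que's API: the `Que` module is stubbed below so the snippet runs standalone, and the SQL is a schematic insert, not Que's actual statement.

```ruby
# Hypothetical sketch only: `Que` is stubbed here so the snippet runs
# standalone; a real plugin would call the gem's own Que.job_schema_version.
module Que
  def self.job_schema_version
    1 # Que 1.x reports 1; Que 2 will report 2
  end
end

# A plugin inserting jobs directly should write this value into the new
# job_schema_version column instead of relying on its default of 1.
insert_sql = <<~SQL
  INSERT INTO que_jobs (job_class, args, job_schema_version)
  VALUES ($1, $2, #{Que.job_schema_version})
SQL

puts insert_sql
```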
data/Dockerfile
ADDED
@@ -0,0 +1,20 @@
+FROM ruby:2.7.5-slim-buster@sha256:4cbbe2fba099026b243200aa8663f56476950cc64ccd91d7aaccddca31e445b5 AS base
+
+# Install libpq-dev in our base layer, as it's needed in all environments
+RUN apt-get update \
+  && apt-get install -y libpq-dev \
+  && rm -rf /var/lib/apt/lists/*
+
+ENV RUBY_BUNDLER_VERSION 2.3.7
+RUN gem install bundler -v $RUBY_BUNDLER_VERSION
+
+ENV BUNDLE_PATH /usr/local/bundle
+
+ENV RUBYOPT=-W:deprecated
+
+FROM base AS dev-environment
+
+# Install build-essential and git, as we'd need them for building gems that have native code components
+RUN apt-get update \
+  && apt-get install -y build-essential git \
+  && rm -rf /var/lib/apt/lists/*
data/README.md
CHANGED
@@ -6,55 +6,60 @@
 
 Que ("keɪ", or "kay") is a queue for Ruby and PostgreSQL that manages jobs using [advisory locks](http://www.postgresql.org/docs/current/static/explicit-locking.html#ADVISORY-LOCKS), which gives it several advantages over other RDBMS-backed queues:
 
-
-
-
+- **Concurrency** - Workers don't block each other when trying to lock jobs, as often occurs with "SELECT FOR UPDATE"-style locking. This allows for very high throughput with a large number of workers.
+- **Efficiency** - Locks are held in memory, so locking a job doesn't incur a disk write. These first two points are what limit performance with other queues. Under heavy load, Que's bottleneck is CPU, not I/O.
+- **Safety** - If a Ruby process dies, the jobs it's working won't be lost, or left in a locked or ambiguous state - they immediately become available for any other worker to pick up.
 
 Additionally, there are the general benefits of storing jobs in Postgres, alongside the rest of your data, rather than in Redis or a dedicated queue:
 
-
-
-
-
+- **Transactional Control** - Queue a job along with other changes to your database, and it'll commit or rollback with everything else. If you're using ActiveRecord or Sequel, Que can piggyback on their connections, so setup is simple and jobs are protected by the transactions you're already using.
+- **Atomic Backups** - Your jobs and data can be backed up together and restored as a snapshot. If your jobs relate to your data (and they usually do), there's no risk of jobs falling through the cracks during a recovery.
+- **Fewer Dependencies** - If you're already using Postgres (and you probably should be), a separate queue is another moving part that can break.
+- **Security** - Postgres' support for SSL connections keeps your data safe in transport, for added protection when you're running workers on cloud platforms that you can't completely control.
 
 Que's primary goal is reliability. You should be able to leave your application running indefinitely without worrying about jobs being lost due to a lack of transactional support, or left in limbo due to a crashing process. Que does everything it can to ensure that jobs you queue are performed exactly once (though the occasional repetition of a job can be impossible to avoid - see the docs on [how to write a reliable job](/docs/README.md#writing-reliable-jobs)).
 
 Que's secondary goal is performance. The worker process is multithreaded, so that a single process can run many jobs simultaneously.
 
 Compatibility:
+
 - MRI Ruby 2.2+
 - PostgreSQL 9.5+
 - Rails 4.1+ (optional)
 
 **Please note** - Que's job table undergoes a lot of churn when it is under high load, and like any heavily-written table, is susceptible to bloat and slowness if Postgres isn't able to clean it up. The most common cause of this is long-running transactions, so it's recommended to try to keep all transactions against the database housing Que's job table as short as possible. This is good advice to remember for any high-activity database, but bears emphasizing when using tables that undergo a lot of writes.
 
-
 ## Installation
 
 Add this line to your application's Gemfile:
 
-
+```ruby
+gem 'que'
+```
 
 And then execute:
 
-
+```bash
+bundle
+```
 
 Or install it yourself as:
 
-
-
+```bash
+gem install que
+```
 
 ## Usage
 
 First, create the queue schema in a migration. For example:
 
-```
+```ruby
 class CreateQueSchema < ActiveRecord::Migration[5.0]
   def up
     # Whenever you use Que in a migration, always specify the version you're
     # migrating to. If you're unsure what the current version is, check the
     # changelog.
-    Que.migrate!(version:
+    Que.migrate!(version: 5)
   end
 
   def down
@@ -66,7 +71,7 @@ end
 
 Create a class for each type of job you want to run:
 
-```
+```ruby
 # app/jobs/charge_credit_card.rb
 class ChargeCreditCard < Que::Job
   # Default settings for this job. These are optional - without them, jobs
@@ -101,7 +106,7 @@ end
 
 Queue your job. Again, it's best to do this in a transaction with other changes you're making. Also note that any arguments you pass will be serialized to JSON and back again, so stick to simple types (strings, integers, floats, hashes, and arrays).
 
-```
+```ruby
 CreditCard.transaction do
   # Persist credit card information
   card = CreditCard.create(params[:credit_card])
@@ -111,17 +116,18 @@ end
 
 You can also add options to run the job after a specific time, or with a specific priority:
 
-```
+```ruby
 ChargeCreditCard.enqueue card.id, user_id: current_user.id, run_at: 1.day.from_now, priority: 5
 ```
 ## Running the Que Worker
 In order to process jobs, you must start a separate worker process outside of your main server.
 
-```
+```bash
 bundle exec que
 ```
 
 Try running `que -h` to get a list of runtime options:
+
 ```
 $ que -h
 usage: que [options] [file/to/require] ...
@@ -138,21 +144,24 @@ If you're using ActiveRecord to dump your database's schema, please [set your sc
 
 Pre-1.0, the default queue name needed to be configured in order for Que to work out of the box with Rails. In 1.0 the default queue name is now 'default', as Rails expects, but when Rails enqueues some types of jobs it may try to use another queue name that isn't worked by default. You can either:
 
-
-```ruby
-config.action_mailer.deliver_later_queue_name = :default
-config.action_mailbox.queues.incineration = :default
-config.action_mailbox.queues.routing = :default
-config.active_storage.queues.analysis = :default
-config.active_storage.queues.purge = :default
-```
+- [Configure Rails](https://guides.rubyonrails.org/configuring.html) to send all internal job types to the 'default' queue by adding the following to `config/application.rb`:
 
-
-
-
-
+  ```ruby
+  config.action_mailer.deliver_later_queue_name = :default
+  config.action_mailbox.queues.incineration = :default
+  config.action_mailbox.queues.routing = :default
+  config.active_storage.queues.analysis = :default
+  config.active_storage.queues.purge = :default
+  ```
+
+- [Tell que](/docs#multiple-queues) to work all of these queues (less efficient because it requires polling all of them):
+
+  ```bash
+  que -q default -q mailers -q action_mailbox_incineration -q action_mailbox_routing -q active_storage_analysis -q active_storage_purge
+  ```
 
 Also, if you would like to integrate Que with Active Job, you can do it by setting the adapter in `config/application.rb` or in a specific environment by setting it in `config/environments/production.rb`, for example:
+
 ```ruby
 config.active_job.queue_adapter = :que
 ```
@@ -183,9 +192,9 @@ If you have a project that uses or relates to Que, feel free to submit a PR addi
 
 ## Community and Contributing
 
-
-
-
+- For bugs in the library, please feel free to [open an issue](https://github.com/que-rb/que/issues/new).
+- For general discussion and questions/concerns that don't relate to obvious bugs, join our [Discord Server](https://discord.gg/B3EW32H).
+- For contributions, pull requests submitted via Github are welcome.
 
 Regarding contributions, one of the project's priorities is to keep Que as simple, lightweight and dependency-free as possible, and pull requests that change too much or wouldn't be useful to the majority of Que's users have a good chance of being rejected. If you're thinking of submitting a pull request that adds a new feature, consider starting a discussion in [que-talk](https://groups.google.com/forum/#!forum/que-talk) first about what it would do and how it would be implemented. If it's a sufficiently large feature, or if most of Que's users wouldn't find it useful, it may be best implemented as a standalone gem, like some of the related projects above.
 
@@ -193,12 +202,80 @@ Regarding contributions, one of the project's priorities is to keep Que as simpl
 
 A note on running specs - Que's worker system is multithreaded and therefore prone to race conditions. As such, if you've touched that code, a single spec run passing isn't a guarantee that any changes you've made haven't introduced bugs. One thing I like to do before pushing changes is to rerun the specs many times, watching for hangs. You can do this from the command line with something like:
 
-
+```bash
+for i in {1..1000}; do SEED=$i bundle exec rake; done
+```
 
 This will iterate the specs one thousand times, each with a different ordering. If the specs hang, note what the seed number was on that iteration. For example, if the previous specs finished with a "Randomized with seed 328", you know that there's a hang with seed 329, and you can narrow it down to a specific spec with:
 
-
+```bash
+for i in {1..1000}; do LOG_SPEC=true SEED=329 bundle exec rake; done
+```
 
 Note that we iterate because there's no guarantee that the hang would reappear with a single additional run, so we need to rerun the specs until it reappears. The LOG_SPEC parameter will output the name and file location of each spec before it is run, so you can easily tell which spec is hanging, and you can continue narrowing things down from there.
 
 Another helpful technique is to replace an `it` spec declaration with `hit` - this will run that particular spec 100 times during the run.
+
+#### With Docker
+
+We've provided a Dockerised environment to avoid the need to manually install Ruby, install the gem bundle, set up Postgres, and connect to the database.
+
+To run the specs using this environment, run:
+
+```bash
+./auto/test
+```
+
+To get a shell in the environment, run:
+
+```bash
+./auto/dev
+```
+
+The [Docker Compose config](docker-compose.yml) provides a convenient way to inject your local shell aliases into the Docker container. Simply create a file containing your alias definitions (or which sources them from other files) at `~/.docker-rc.d/.docker-bashrc`, and they will be available inside the container.
+
+#### Without Docker
+
+You'll need to have Postgres running. Assuming you have it running on port 5697, with a `que-test` database, and a username & password of `que`, you can run:
+
+```bash
+DATABASE_URL=postgres://que:que@localhost:5697/que-test bundle exec rake
+```
+
+If you don't already have Postgres, you could use Docker Compose to run just the database:
+
+```bash
+docker compose up -d db
+```
+
+If you want to try a different version of Postgres, e.g. 12:
+
+```bash
+export POSTGRES_VERSION=12
+```
+
+### Git pre-push hook
+
+To help avoid breaking the build, we've created a Git pre-push hook to verify everything is ok before pushing.
+
+To set up the pre-push hook locally, run:
+
+```bash
+echo -e "#\!/bin/bash\n\$(dirname \$0)/../../auto/pre-push-hook" > .git/hooks/pre-push
+chmod +x .git/hooks/pre-push
+```
+
+### Release process
+
+The process for releasing a new version of the gem is:
+
+- Merge PR(s)
+- Git pull locally
+- Update the version number, bundle install, and commit
+- Update `CHANGELOG.md`, and commit
+- Tag the commit with the version number, prefixed by `v`
+- Git push to master
+- Git push the tag
+- Publish the new version of the gem to RubyGems: `gem build -o que.gem && gem push que.gem`
+- Create a GitHub release - rather than describe anything there, link to the heading for the release in `CHANGELOG.md`
+- Post on the Que Discord in `#announcements`
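The `hit` helper mentioned in the README can be pictured as a thin wrapper over `it`. A minimal standalone sketch, assuming nothing about Que's real spec helper (the recording `it` below is a stand-in so the snippet runs by itself):

```ruby
# Stand-in for the spec framework's `it`, recording registrations so the
# sketch is self-contained; a real framework would schedule the block to run.
RUNS = []
def it(description, &block)
  RUNS << description
end

# A `hit` declaration registers the same spec 100 times, which is how you
# surface an intermittent hang or race within a single run.
def hit(description, &block)
  100.times { it(description, &block) }
end

hit("locks each job exactly once") { :noop }
puts RUNS.length
```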
data/auto/dev
ADDED
@@ -0,0 +1,21 @@
+#!/bin/bash
+#
+# Operate in development environment
+
+set -Eeuo pipefail
+
+cd "$(dirname "$0")/.."
+
+docker compose build dev
+
+# Delete containers and DB volume afterwards on CI
+if [[ "${CI-}" == "true" ]]; then
+  trap '{
+    echo "Stopping containers..."
+    docker compose down
+    docker volume rm -f que_db-data
+  }' EXIT
+fi
+
+set -x
+docker compose run --rm dev "${@-bash}"
data/auto/pre-push-hook
ADDED
@@ -0,0 +1,30 @@
+#!/bin/bash
+
+set -Eeuo pipefail
+
+cd "$(dirname "$0")/.."
+
+green='\e[32m'; blue='\e[36m'; red='\e[31m'; bold='\e[1m'; reset='\e[0m'
+coloured-arrow() { printf "$bold$1==> $2$reset\n"; }
+success() { coloured-arrow "$green" "$1"; }
+info() { coloured-arrow "$blue" "$1"; }
+err() { coloured-arrow "$red" "$1"; exit 1; }
+
+info 'Running pre-push hook...'
+
+on-exit() {
+  [[ -n "${succeeded-}" ]] || err 'Pre-push checks failed'
+}
+trap on-exit EXIT
+
+info 'Checking for uncommitted changes...'
+[[ -z $(git status -s) ]] || err 'ERROR: You have uncommitted changes'
+
+info 'Checking bundle...'
+bundle check --dry-run || bundle install
+
+info 'Specs...'
+auto/test
+
+succeeded=true
+success 'All pre-push checks passed! =)'
data/auto/psql
ADDED
data/auto/test
ADDED
data/auto/test-postgres-14
ADDED
@@ -0,0 +1,17 @@
+#!/bin/bash
+
+set -Eeuo pipefail
+
+export POSTGRES_VERSION=14
+
+delete_db() {
+  docker compose down
+  docker volume rm -f que_db-data
+}
+
+trap delete_db EXIT
+
+# pre-test cleanup is necessary as the existing db container will be used if it's running (potentially with the wrong PG version)
+delete_db
+"$(dirname "$0")"/test "$@"
+delete_db
data/bin/command_line_interface.rb
CHANGED
@@ -232,7 +232,16 @@ OUTPUT
 $stop_que_executable = false
 %w[INT TERM].each { |signal| trap(signal) { $stop_que_executable = true } }
 
-output.puts
+output.puts(
+  <<~STARTUP
+    Que #{Que::VERSION} started worker process with:
+      Worker threads: #{locker.workers.length} (priorities: #{locker.workers.map { |w| w.priority || 'any' }.join(', ')})
+      Buffer size: #{locker.job_buffer.minimum_size}-#{locker.job_buffer.maximum_size}
+      Queues:
+    #{locker.queues.map { |queue, interval| " - #{queue} (poll interval: #{interval}s)" }.join("\n")}
+    Que waiting for jobs...
+  STARTUP
+)
 
 loop do
   sleep 0.01
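The new startup banner interpolates locker state into a squiggly heredoc. A rough standalone rendering of the same idea, with the locker and its workers stubbed via `Struct` (the names `DemoLocker`, `min_buffer`, and `max_buffer` are stand-ins, not Que's internals):

```ruby
# Stand-ins so the heredoc can be rendered without a running locker; these
# Structs are hypothetical, mirroring only the attributes the banner reads.
Worker = Struct.new(:priority)
DemoLocker = Struct.new(:workers, :queues, :min_buffer, :max_buffer)

locker = DemoLocker.new(
  [Worker.new(10), Worker.new(nil)],   # one fixed-priority worker, one any-priority
  { "default" => 5, "mailers" => 5 },  # queue name => poll interval in seconds
  8, 16
)

banner = <<~STARTUP
  Que 1.3.0 started worker process with:
    Worker threads: #{locker.workers.length} (priorities: #{locker.workers.map { |w| w.priority || 'any' }.join(', ')})
    Buffer size: #{locker.min_buffer}-#{locker.max_buffer}
    Queues:
  #{locker.queues.map { |queue, interval| " - #{queue} (poll interval: #{interval}s)" }.join("\n")}
  Que waiting for jobs...
STARTUP

puts banner
```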
data/docker-compose.yml
ADDED
@@ -0,0 +1,46 @@
+version: "3.7"
+
+services:
+
+  dev:
+    build:
+      context: .
+      target: dev-environment
+    depends_on:
+      - db
+    volumes:
+      - .:/work
+      - ruby-2.7.5-gem-cache:/usr/local/bundle
+      - ~/.docker-rc.d/:/.docker-rc.d/:ro
+    working_dir: /work
+    entrypoint: /work/scripts/docker-entrypoint
+    command: bash
+    environment:
+      DATABASE_URL: postgres://que:que@db/que-test
+
+  db:
+    image: "postgres:${POSTGRES_VERSION-13}"
+    volumes:
+      - db-data:/var/lib/postgresql/data
+    environment:
+      POSTGRES_USER: que
+      POSTGRES_PASSWORD: que
+      POSTGRES_DB: que-test
+    ports:
+      - 5697:5432
+
+  pg-dev:
+    image: "postgres:${POSTGRES_VERSION-13}"
+    depends_on:
+      - db
+    entrypoint: []
+    command: psql
+    environment:
+      PGHOST: db
+      PGUSER: que
+      PGPASSWORD: que
+      PGDATABASE: que-test
+
+volumes:
+  db-data: ~
+  ruby-2.7.5-gem-cache: ~
data/docs/README.md
CHANGED
@@ -129,11 +129,11 @@ There are other docs to read if you're using [Sequel](#using-sequel) or [plain P
 After you've connected Que to the database, you can manage the jobs table. You'll want to migrate to a specific version in a migration file, to ensure that they work the same way even when you upgrade Que in the future:
 
 ```ruby
-# Update the schema to version #
-Que.migrate!
+# Update the schema to version #5.
+Que.migrate!(version: 5)
 
 # Remove Que's jobs table entirely.
-Que.migrate!
+Que.migrate!(version: 0)
 ```
 
 There's also a helper method to clear all jobs from the jobs table:
@@ -405,11 +405,11 @@ Some new releases of Que may require updates to the database schema. It's recomm
 ```ruby
 class UpdateQue < ActiveRecord::Migration[5.0]
   def self.up
-    Que.migrate!
+    Que.migrate!(version: 3)
   end
 
   def self.down
-    Que.migrate!
+    Que.migrate!(version: 2)
   end
 end
 ```
@@ -418,7 +418,7 @@ This will make sure that your database schema stays consistent with your codebas
 
 ```ruby
 # Change schema to version 3.
-Que.migrate!
+Que.migrate!(version: 3)
 
 # Check your current schema version.
 Que.db_version #=> 3
@@ -550,11 +550,11 @@ require 'que'
 Sequel.migration do
   up do
     Que.connection = self
-    Que.migrate!
+    Que.migrate!(version: 5)
   end
   down do
     Que.connection = self
-    Que.migrate!
+    Que.migrate!(version: 0)
   end
 end
 ```
data/lib/que/job.rb
CHANGED
@@ -12,7 +12,7 @@ module Que
 SQL[:insert_job] =
   %{
     INSERT INTO public.que_jobs
-    (queue, priority, run_at, job_class, args, data)
+    (queue, priority, run_at, job_class, args, data, job_schema_version)
     VALUES
     (
       coalesce($1, 'default')::text,
@@ -20,7 +20,8 @@ module Que
       coalesce($3, now())::timestamptz,
       $4::text,
       coalesce($5, '[]')::jsonb,
-      coalesce($6, '{}')::jsonb
+      coalesce($6, '{}')::jsonb,
+      #{Que.job_schema_version}
     )
     RETURNING *
   }
@@ -57,22 +58,18 @@ module Que
 
 def enqueue(
   *args,
-
-  priority: nil,
-  run_at: nil,
-  job_class: nil,
-  tags: nil,
+  job_options: {},
   **arg_opts
 )
-
+  arg_opts, job_options = _extract_job_options(arg_opts, job_options.dup)
   args << arg_opts if arg_opts.any?
 
-  if tags
-    if tags.length > MAXIMUM_TAGS_COUNT
-      raise Que::Error, "Can't enqueue a job with more than #{MAXIMUM_TAGS_COUNT} tags! (passed #{tags.length})"
+  if job_options[:tags]
+    if job_options[:tags].length > MAXIMUM_TAGS_COUNT
+      raise Que::Error, "Can't enqueue a job with more than #{MAXIMUM_TAGS_COUNT} tags! (passed #{job_options[:tags].length})"
     end
 
-    tags.each do |tag|
+    job_options[:tags].each do |tag|
       if tag.length > MAXIMUM_TAG_LENGTH
         raise Que::Error, "Can't enqueue a job with a tag longer than 100 characters! (\"#{tag}\")"
       end
@@ -80,13 +77,13 @@ module Que
   end
 
   attrs = {
-    queue:    queue || resolve_que_setting(:queue) || Que.default_queue,
-    priority: priority || resolve_que_setting(:priority),
-    run_at:   run_at || resolve_que_setting(:run_at),
+    queue:    job_options[:queue] || resolve_que_setting(:queue) || Que.default_queue,
+    priority: job_options[:priority] || resolve_que_setting(:priority),
+    run_at:   job_options[:run_at] || resolve_que_setting(:run_at),
     args:     Que.serialize_json(args),
-    data:     tags ? Que.serialize_json(tags: tags) : "{}",
+    data:     job_options[:tags] ? Que.serialize_json(tags: job_options[:tags]) : "{}",
     job_class: \
-      job_class || name ||
+      job_options[:job_class] || name ||
         raise(Error, "Can't enqueue an anonymous subclass of Que::Job"),
   }
 
@@ -139,6 +136,27 @@ module Que
       end
     end
   end
+
+  def _extract_job_options(arg_opts, job_options)
+    deprecated_job_option_names = []
+
+    %i[queue priority run_at job_class tags].each do |option_name|
+      next unless arg_opts.key?(option_name) && job_options[option_name].nil?
+
+      job_options[option_name] = arg_opts.delete(option_name)
+      deprecated_job_option_names << option_name
+    end
+
+    _log_job_options_deprecation(deprecated_job_option_names)
+
+    [arg_opts, job_options]
+  end
+
+  def _log_job_options_deprecation(deprecated_job_option_names)
+    return unless deprecated_job_option_names.any?
+
+    warn "Passing job options like (#{deprecated_job_option_names.join(', ')}) to `JobClass.enqueue` as top level keyword args has been deprecated and will be removed in version 2.0. Please wrap job options in an explicit `job_options` keyword arg instead."
+  end
 end
 
 # Set up some defaults.
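The extraction logic above is easy to exercise in isolation. A standalone sketch of the same idea (renamed `extract_job_options`, with the deprecation warning omitted and the moved names returned instead, so the snippet runs by itself):

```ruby
# Mirrors the diff's _extract_job_options: move recognised option names out
# of the keyword-args hash into job_options, tracking which names were
# passed in the deprecated top-level position.
def extract_job_options(arg_opts, job_options)
  deprecated = []

  %i[queue priority run_at job_class tags].each do |option_name|
    next unless arg_opts.key?(option_name) && job_options[option_name].nil?

    job_options[option_name] = arg_opts.delete(option_name)
    deprecated << option_name
  end

  [arg_opts, job_options, deprecated]
end

# `priority` is pulled out as a job option; `user_id` stays a job argument.
arg_opts, job_options, deprecated =
  extract_job_options({ priority: 5, user_id: 7 }, {})

p arg_opts
p job_options
p deprecated
```

Note that an explicit `job_options[:priority]` would win here: the `next unless ... nil?` guard means a top-level option never overwrites one already nested under `job_options`.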
data/lib/que/locker.rb
CHANGED
@@ -24,12 +24,12 @@ module Que
 
 SQL[:register_locker] =
   %{
-    INSERT INTO public.que_lockers (pid, worker_count, worker_priorities, ruby_pid, ruby_hostname, listening, queues)
-    VALUES (pg_backend_pid(), $1::integer, $2::integer[], $3::integer, $4::text, $5::boolean, $6::text[])
+    INSERT INTO public.que_lockers (pid, worker_count, worker_priorities, ruby_pid, ruby_hostname, listening, queues, job_schema_version)
+    VALUES (pg_backend_pid(), $1::integer, $2::integer[], $3::integer, $4::text, $5::boolean, $6::text[], $7::integer)
   }
 
 class Locker
-  attr_reader :thread, :workers, :job_buffer, :locks
+  attr_reader :thread, :workers, :job_buffer, :locks, :queues, :poll_interval
 
   MESSAGE_RESOLVERS = {}
   RESULT_RESOLVERS = {}
@@ -101,7 +101,20 @@ module Que
 # Local cache of which advisory locks are held by this connection.
 @locks = Set.new
 
-@
+@poll_interval = poll_interval
+
+if queues.is_a?(Hash)
+  @queue_names = queues.keys
+  @queues = queues.transform_values do |interval|
+    interval || poll_interval
+  end
+else
+  @queue_names = queues
+  @queues = queues.map do |queue_name|
+    [queue_name, poll_interval]
+  end.to_h
+end
+
 @wait_period = wait_period.to_f / 1000 # Milliseconds to seconds.
 
 @workers =
@@ -183,11 +196,11 @@ module Que
 
 @pollers =
   if poll
-    queues.map do |
+    @queues.map do |queue_name, interval|
       Poller.new(
         connection: @connection,
-        queue:
-        poll_interval: interval
+        queue: queue_name,
+        poll_interval: interval,
       )
     end
   end
@@ -266,6 +279,7 @@ module Que
   CURRENT_HOSTNAME,
   !!@listener,
   "{\"#{@queue_names.join('","')}\"}",
+  Que.job_schema_version,
 ]
 end
 
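The `queues` branching added to the Locker's initializer normalises either input form into a single `{name => interval}` hash. As a standalone sketch (the helper name `normalize_queues` is hypothetical; the body mirrors the diff):

```ruby
# Accept either an array of queue names or a hash of name => poll interval;
# nil intervals fall back to the locker-wide poll_interval, as in the diff.
def normalize_queues(queues, poll_interval)
  if queues.is_a?(Hash)
    queues.transform_values { |interval| interval || poll_interval }
  else
    queues.map { |queue_name| [queue_name, poll_interval] }.to_h
  end
end

p normalize_queues(["default", "mailers"], 5)
p normalize_queues({ "default" => 1, "mailers" => nil }, 5)
```

This is what lets the pollers loop treat both configuration styles uniformly as `queue_name, interval` pairs.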
data/lib/que/migrations/4/up.sql
CHANGED
@@ -146,7 +146,9 @@ CREATE FUNCTION que_job_notify() RETURNS trigger AS $$
 FROM (
   SELECT *
   FROM public.que_lockers ql, generate_series(1, ql.worker_count) AS id
-  WHERE
+  WHERE
+    listening AND
+    queues @> ARRAY[NEW.queue]
   ORDER BY md5(pid::text || id::text)
 ) t1
 ) t2
data/lib/que/migrations/5/down.sql
ADDED
@@ -0,0 +1,73 @@
+DROP TRIGGER que_job_notify ON que_jobs;
+DROP FUNCTION que_job_notify();
+
+DROP INDEX que_poll_idx_with_job_schema_version;
+
+ALTER TABLE que_jobs
+  DROP COLUMN job_schema_version;
+
+ALTER TABLE que_lockers
+  DROP COLUMN job_schema_version;
+
+CREATE FUNCTION que_job_notify() RETURNS trigger AS $$
+  DECLARE
+    locker_pid integer;
+    sort_key json;
+  BEGIN
+    -- Don't do anything if the job is scheduled for a future time.
+    IF NEW.run_at IS NOT NULL AND NEW.run_at > now() THEN
+      RETURN null;
+    END IF;
+
+    -- Pick a locker to notify of the job's insertion, weighted by their number
+    -- of workers. Should bounce pseudorandomly between lockers on each
+    -- invocation, hence the md5-ordering, but still touch each one equally,
+    -- hence the modulo using the job_id.
+    SELECT pid
+    INTO locker_pid
+    FROM (
+      SELECT *, last_value(row_number) OVER () + 1 AS count
+      FROM (
+        SELECT *, row_number() OVER () - 1 AS row_number
+        FROM (
+          SELECT *
+          FROM public.que_lockers ql, generate_series(1, ql.worker_count) AS id
+          WHERE
+            listening AND
+            queues @> ARRAY[NEW.queue]
+          ORDER BY md5(pid::text || id::text)
+        ) t1
+      ) t2
+    ) t3
+    WHERE NEW.id % count = row_number;
+
+    IF locker_pid IS NOT NULL THEN
+      -- There's a size limit to what can be broadcast via LISTEN/NOTIFY, so
+      -- rather than throw errors when someone enqueues a big job, just
+      -- broadcast the most pertinent information, and let the locker query for
+      -- the record after it's taken the lock. The worker will have to hit the
+      -- DB in order to make sure the job is still visible anyway.
+      SELECT row_to_json(t)
+      INTO sort_key
+      FROM (
+        SELECT
+          'job_available' AS message_type,
+          NEW.queue    AS queue,
+          NEW.priority AS priority,
+          NEW.id       AS id,
+          -- Make sure we output timestamps as UTC ISO 8601
+          to_char(NEW.run_at AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"') AS run_at
+
) t;
|
61
|
+
|
62
|
+
PERFORM pg_notify('que_listener_' || locker_pid::text, sort_key::text);
|
63
|
+
END IF;
|
64
|
+
|
65
|
+
RETURN null;
|
66
|
+
END
|
67
|
+
$$
|
68
|
+
LANGUAGE plpgsql;
|
69
|
+
|
70
|
+
CREATE TRIGGER que_job_notify
|
71
|
+
AFTER INSERT ON que_jobs
|
72
|
+
FOR EACH ROW
|
73
|
+
EXECUTE PROCEDURE public.que_job_notify();
|
data/lib/que/migrations/5/up.sql
ADDED
@@ -0,0 +1,76 @@
+DROP TRIGGER que_job_notify ON que_jobs;
+DROP FUNCTION que_job_notify();
+
+ALTER TABLE que_jobs
+  ADD COLUMN job_schema_version INTEGER DEFAULT 1;
+
+ALTER TABLE que_lockers
+  ADD COLUMN job_schema_version INTEGER DEFAULT 1;
+
+CREATE INDEX que_poll_idx_with_job_schema_version
+  ON que_jobs (job_schema_version, queue, priority, run_at, id)
+  WHERE (finished_at IS NULL AND expired_at IS NULL);
+
+CREATE FUNCTION que_job_notify() RETURNS trigger AS $$
+  DECLARE
+    locker_pid integer;
+    sort_key json;
+  BEGIN
+    -- Don't do anything if the job is scheduled for a future time.
+    IF NEW.run_at IS NOT NULL AND NEW.run_at > now() THEN
+      RETURN null;
+    END IF;
+
+    -- Pick a locker to notify of the job's insertion, weighted by their number
+    -- of workers. Should bounce pseudorandomly between lockers on each
+    -- invocation, hence the md5-ordering, but still touch each one equally,
+    -- hence the modulo using the job_id.
+    SELECT pid
+    INTO locker_pid
+    FROM (
+      SELECT *, last_value(row_number) OVER () + 1 AS count
+      FROM (
+        SELECT *, row_number() OVER () - 1 AS row_number
+        FROM (
+          SELECT *
+          FROM public.que_lockers ql, generate_series(1, ql.worker_count) AS id
+          WHERE
+            listening AND
+            queues @> ARRAY[NEW.queue] AND
+            ql.job_schema_version = NEW.job_schema_version
+          ORDER BY md5(pid::text || id::text)
+        ) t1
+      ) t2
+    ) t3
+    WHERE NEW.id % count = row_number;
+
+    IF locker_pid IS NOT NULL THEN
+      -- There's a size limit to what can be broadcast via LISTEN/NOTIFY, so
+      -- rather than throw errors when someone enqueues a big job, just
+      -- broadcast the most pertinent information, and let the locker query for
+      -- the record after it's taken the lock. The worker will have to hit the
+      -- DB in order to make sure the job is still visible anyway.
+      SELECT row_to_json(t)
+      INTO sort_key
+      FROM (
+        SELECT
+          'job_available' AS message_type,
+          NEW.queue AS queue,
+          NEW.priority AS priority,
+          NEW.id AS id,
+          -- Make sure we output timestamps as UTC ISO 8601
+          to_char(NEW.run_at AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"') AS run_at
+      ) t;
+
+      PERFORM pg_notify('que_listener_' || locker_pid::text, sort_key::text);
+    END IF;
+
+    RETURN null;
+  END
+$$
+LANGUAGE plpgsql;
+
+CREATE TRIGGER que_job_notify
+  AFTER INSERT ON que_jobs
+  FOR EACH ROW
+  EXECUTE PROCEDURE public.que_job_notify();
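The trigger SQL above picks one locker per insertion, weighted by worker count: each locker is expanded into one row per worker, the rows are shuffled deterministically by `md5(pid || worker_index)`, and the job's id selects a row by modulo. A hedged Ruby re-implementation of that selection for illustration only (the method name `pick_locker_pid` is hypothetical and not part of the gem):

```ruby
require "digest/md5"

# Hypothetical Ruby mirror of the trigger's locker selection: lockers is a
# Hash of pid => worker_count. Each locker contributes worker_count rows,
# rows are ordered by md5(pid + worker index), and the job id picks a row
# via modulo, so lockers with more workers are chosen proportionally more.
def pick_locker_pid(lockers, job_id)
  rows = lockers.flat_map do |pid, worker_count|
    (1..worker_count).map { |id| [Digest::MD5.hexdigest("#{pid}#{id}"), pid] }
  end
  return nil if rows.empty?                # no lockers listening

  sorted = rows.sort_by(&:first).map(&:last)
  sorted[job_id % sorted.length]
end
```

Over consecutive job ids, a locker with two workers is notified exactly twice as often as a locker with one, matching the `NEW.id % count = row_number` arithmetic in the SQL.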
data/lib/que/migrations.rb
CHANGED
@@ -4,7 +4,7 @@ module Que
   module Migrations
     # In order to ship a schema change, add the relevant up and down sql files
     # to the migrations directory, and bump the version here.
-    CURRENT_VERSION =
+    CURRENT_VERSION = 5

     class << self
       def migrate!(version:)
@@ -28,7 +28,6 @@ module Que
             step,
             direction,
           ].join('/') << '.sql'
-
           Que.execute(File.read(filename))
         end

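Bumping `CURRENT_VERSION` to 5 means `Que::Migrations.migrate!` walks the schema one step at a time, executing the matching `up`/`down` SQL file for each step. A sketch of that stepping logic under stated assumptions (the helper name `migration_steps` is hypothetical; the real method also reads the current version out of the database and runs inside a transaction):

```ruby
# Hypothetical sketch of the step enumeration behind Que::Migrations.migrate!:
# given the schema version currently in the database and the target version,
# return the ordered list of [step, direction] pairs whose SQL files
# (e.g. migrations/5/up.sql) should be executed.
def migration_steps(current, target)
  if target > current
    ((current + 1)..target).map { |step| [step, "up"] }
  else
    current.downto(target + 1).map { |step| [step, "down"] }
  end
end
```

So upgrading a version-4 database to version 5 runs only `5/up.sql`, while rolling back from 5 to 4 runs only `5/down.sql`.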
data/lib/que/poller.rb
CHANGED
@@ -68,6 +68,7 @@ module Que
           SELECT j
           FROM public.que_jobs AS j
           WHERE queue = $1::text
+            AND job_schema_version = #{Que.job_schema_version}
             AND NOT id = ANY($2::bigint[])
             AND priority <= pg_temp.que_highest_remaining_priority($3::jsonb)
             AND run_at <= now()
@@ -88,6 +89,7 @@ module Que
           SELECT j
           FROM public.que_jobs AS j
           WHERE queue = $1::text
+            AND job_schema_version = #{Que.job_schema_version}
             AND NOT id = ANY($2::bigint[])
             AND priority <= pg_temp.que_highest_remaining_priority(jobs.remaining_priorities)
             AND run_at <= now()
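Adding `job_schema_version` to the poll queries means a worker only ever locks jobs written in its own argument format, so version-1 and version-2 processes can safely share a `que_jobs` table during a rolling deploy. A small Ruby illustration of the effect of that `WHERE` clause on in-memory job rows (hypothetical data and method name, not the gem's API):

```ruby
# Hypothetical in-memory mirror of the poll filter above: a poller only sees
# jobs in its queue, at its own job_schema_version, not already locked, and
# whose run_at has passed.
def pollable_jobs(jobs, queue:, job_schema_version:, locked_ids:, now:)
  jobs.select do |job|
    job[:queue] == queue &&
      job[:job_schema_version] == job_schema_version &&
      !locked_ids.include?(job[:id]) &&
      job[:run_at] <= now
  end
end
```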
data/lib/que/version.rb
CHANGED
data/lib/que/worker.rb
CHANGED
@@ -137,6 +137,7 @@ module Que
           error: {
             class: error.class.to_s,
             message: error.message,
+            backtrace: (error.backtrace || []).join("\n").slice(0, 10000),
           },
         )

@@ -164,7 +165,7 @@ module Que
         Que.execute :set_error, [
           delay,
           "#{error.class}: #{error.message}".slice(0, 500),
-          error.backtrace.join("\n").slice(0, 10000),
+          (error.backtrace || []).join("\n").slice(0, 10000),
           job.fetch(:id),
         ]
       end
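The `(error.backtrace || [])` guard above matters because `Exception#backtrace` returns `nil` for an exception object that was never actually raised, so the old `error.backtrace.join` could itself blow up with a `NoMethodError` inside the error handler. A quick demonstration:

```ruby
# Exception#backtrace is nil until the exception has been raised, so
# error-handling code must guard before calling #join on it.
error = RuntimeError.new("boom")

# Without the guard, joining the backtrace raises NoMethodError on nil.
unsafe = begin
  error.backtrace.join("\n")
rescue NoMethodError
  :raised
end

# With the guard, we get an empty (and length-capped) string instead.
safe = (error.backtrace || []).join("\n").slice(0, 10_000)
```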
data/scripts/docker-entrypoint
ADDED
@@ -0,0 +1,14 @@
+#!/bin/bash
+
+set -Eeuo pipefail
+
+# For using your own dotfiles within the Docker container
+if [ -f /.docker-rc.d/.docker-bashrc ]; then
+  echo "source /.docker-rc.d/.docker-bashrc" >> ~/.bashrc
+fi
+
+gem list -i -e bundler -v "$RUBY_BUNDLER_VERSION" >/dev/null || gem install bundler -v "$RUBY_BUNDLER_VERSION"
+
+bundle check --dry-run || bundle install
+
+exec "${@-bash}"
data/scripts/test
ADDED
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: que
 version: !ruby/object:Gem::Version
-  version: 1.
+  version: 1.3.0
 platform: ruby
 authors:
 - Chris Hanks
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2022-
+date: 2022-02-25 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: bundler
@@ -35,11 +35,18 @@ files:
 - ".github/workflows/tests.yml"
 - ".gitignore"
 - CHANGELOG.md
+- Dockerfile
 - LICENSE.txt
 - README.md
 - Rakefile
+- auto/dev
+- auto/pre-push-hook
+- auto/psql
+- auto/test
+- auto/test-postgres-14
 - bin/command_line_interface.rb
 - bin/que
+- docker-compose.yml
 - docs/README.md
 - lib/que.rb
 - lib/que/active_job/extensions.rb
@@ -62,6 +69,8 @@ files:
 - lib/que/migrations/3/up.sql
 - lib/que/migrations/4/down.sql
 - lib/que/migrations/4/up.sql
+- lib/que/migrations/5/down.sql
+- lib/que/migrations/5/up.sql
 - lib/que/poller.rb
 - lib/que/rails/railtie.rb
 - lib/que/result_queue.rb
@@ -79,6 +88,8 @@ files:
 - lib/que/version.rb
 - lib/que/worker.rb
 - que.gemspec
+- scripts/docker-entrypoint
+- scripts/test
 homepage: https://github.com/que-rb/que
 licenses:
 - MIT
@@ -98,7 +109,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubygems_version: 3.
+rubygems_version: 3.3.6
 signing_key:
 specification_version: 4
 summary: A PostgreSQL-based Job Queue