que 1.3.0 → 2.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/Dockerfile CHANGED
@@ -1,4 +1,4 @@
- FROM ruby:2.7.5-slim-buster@sha256:4cbbe2fba099026b243200aa8663f56476950cc64ccd91d7aaccddca31e445b5 AS base
+ FROM ruby:3.1.1-slim-buster@sha256:2ada3e4fe7b1703c9333ad4eb9fc12c1d4d60bce0f981281b2151057e928d9ad AS base
 
  # Install libpq-dev in our base layer, as it's needed in all environments
  RUN apt-get update \
data/README.md CHANGED
@@ -1,6 +1,6 @@
  # Que ![tests](https://github.com/que-rb/que/workflows/tests/badge.svg)
 
- **This README and the rest of the docs on the master branch all refer to Que 1.0. If you're using version 0.x, please refer to the docs on [the 0.x branch](https://github.com/que-rb/que/tree/0.x).**
+ **This README and the rest of the docs on the master branch all refer to Que 2.x. For older versions, please refer to the docs on the respective branches: [1.x](https://github.com/que-rb/que/tree/1.x), or [0.x](https://github.com/que-rb/que/tree/0.x).**
 
  *TL;DR: Que is a high-performance job queue that improves the reliability of your application by protecting your jobs with the same [ACID guarantees](https://en.wikipedia.org/wiki/ACID) as the rest of your data.*
 
@@ -23,9 +23,9 @@ Que's secondary goal is performance. The worker process is multithreaded, so tha
 
  Compatibility:
 
- - MRI Ruby 2.2+
+ - MRI Ruby 2.7+ (for Ruby 3, Que 2+ is required)
  - PostgreSQL 9.5+
- - Rails 4.1+ (optional)
+ - Rails 6.0+ (optional)
 
  **Please note** - Que's job table undergoes a lot of churn when it is under high load, and like any heavily-written table, is susceptible to bloat and slowness if Postgres isn't able to clean it up. The most common cause of this is long-running transactions, so it's recommended to try to keep all transactions against the database housing Que's job table as short as possible. This is good advice to remember for any high-activity database, but bears emphasizing when using tables that undergo a lot of writes.
 
@@ -54,12 +54,12 @@ gem install que
  First, create the queue schema in a migration. For example:
 
  ```ruby
- class CreateQueSchema < ActiveRecord::Migration[5.0]
+ class CreateQueSchema < ActiveRecord::Migration[6.0]
    def up
      # Whenever you use Que in a migration, always specify the version you're
      # migrating to. If you're unsure what the current version is, check the
      # changelog.
-     Que.migrate!(version: 5)
+     Que.migrate!(version: 6)
    end
 
    def down
@@ -117,10 +117,14 @@ end
  You can also add options to run the job after a specific time, or with a specific priority:
 
  ```ruby
- ChargeCreditCard.enqueue card.id, user_id: current_user.id, run_at: 1.day.from_now, priority: 5
+ ChargeCreditCard.enqueue(card.id, user_id: current_user.id, job_options: { run_at: 1.day.from_now, priority: 5 })
  ```
+
+ [Learn more about job options](docs/README.md#job-options).
+
  ## Running the Que Worker
- In order to process jobs, you must start a separate worker process outside of your main server.
+
+ In order to process jobs, you must start a separate worker process outside of your main server.
 
  ```bash
  bundle exec que
@@ -142,7 +146,7 @@ You may need to pass que a file path to require so that it can load your app. Qu
 
  If you're using ActiveRecord to dump your database's schema, please [set your schema_format to :sql](http://guides.rubyonrails.org/migrations.html#types-of-schema-dumps) so that Que's table structure is managed correctly. This is a good idea regardless, as the `:ruby` schema format doesn't support many of PostgreSQL's advanced features.
 
- Pre-1.0, the default queue name needed to be configured in order for Que to work out of the box with Rails. In 1.0 the default queue name is now 'default', as Rails expects, but when Rails enqueues some types of jobs it may try to use another queue name that isn't worked by default. You can either:
+ Pre-1.0, the default queue name needed to be configured in order for Que to work out of the box with Rails. As of 1.0 the default queue name is now 'default', as Rails expects, but when Rails enqueues some types of jobs it may try to use another queue name that isn't worked by default. You can either:
 
  - [Configure Rails](https://guides.rubyonrails.org/configuring.html) to send all internal job types to the 'default' queue by adding the following to `config/application.rb`:
 
@@ -192,11 +196,11 @@ If you have a project that uses or relates to Que, feel free to submit a PR addi
 
  ## Community and Contributing
 
- - For bugs in the library, please feel free to [open an issue](https://github.com/que-rb/que/issues/new).
+ - For feature suggestions or bugs in the library, please feel free to [open an issue](https://github.com/que-rb/que/issues/new).
  - For general discussion and questions/concerns that don't relate to obvious bugs, join our [Discord Server](https://discord.gg/B3EW32H).
  - For contributions, pull requests submitted via Github are welcome.
 
- Regarding contributions, one of the project's priorities is to keep Que as simple, lightweight and dependency-free as possible, and pull requests that change too much or wouldn't be useful to the majority of Que's users have a good chance of being rejected. If you're thinking of submitting a pull request that adds a new feature, consider starting a discussion in [que-talk](https://groups.google.com/forum/#!forum/que-talk) first about what it would do and how it would be implemented. If it's a sufficiently large feature, or if most of Que's users wouldn't find it useful, it may be best implemented as a standalone gem, like some of the related projects above.
+ Regarding contributions, one of the project's priorities is to keep Que as simple, lightweight and dependency-free as possible, and pull requests that change too much or wouldn't be useful to the majority of Que's users have a good chance of being rejected. If you're thinking of submitting a pull request that adds a new feature, consider starting a discussion in an issue first about what it would do and how it would be implemented. If it's a sufficiently large feature, or if most of Que's users wouldn't find it useful, it may be best implemented as a standalone gem, like some of the related projects above.
 
  ### Specs
 
@@ -138,10 +138,9 @@ module Que
        opts.on(
          '--minimum-buffer-size [SIZE]',
          Integer,
-         "Set minimum number of jobs to be locked and held in this " \
-         "process awaiting a worker (default: 2)",
+         "Unused (deprecated)",
        ) do |s|
-         options[:minimum_buffer_size] = s
+         warn "The --minimum-buffer-size SIZE option has been deprecated and will be removed in v2.0 (it's actually been unused since v1.0.0.beta4)"
        end
 
        opts.on(
@@ -221,7 +220,7 @@ OUTPUT
 
      locker =
        begin
-         Que::Locker.new(options)
+         Que::Locker.new(**options)
        rescue => e
          output.puts(e.message)
          return 1
@@ -236,7 +235,7 @@ OUTPUT
      <<~STARTUP
        Que #{Que::VERSION} started worker process with:
          Worker threads: #{locker.workers.length} (priorities: #{locker.workers.map { |w| w.priority || 'any' }.join(', ')})
-         Buffer size: #{locker.job_buffer.minimum_size}-#{locker.job_buffer.maximum_size}
+         Buffer size: #{locker.job_buffer.maximum_size}
        Queues:
          #{locker.queues.map { |queue, interval| " - #{queue} (poll interval: #{interval}s)" }.join("\n")}
        Que waiting for jobs...
data/docker-compose.yml CHANGED
@@ -10,13 +10,14 @@ services:
        - db
      volumes:
        - .:/work
-       - ruby-2.7.5-gem-cache:/usr/local/bundle
+       - ruby-3.1.1-gem-cache:/usr/local/bundle
        - ~/.docker-rc.d/:/.docker-rc.d/:ro
      working_dir: /work
      entrypoint: /work/scripts/docker-entrypoint
      command: bash
      environment:
        DATABASE_URL: postgres://que:que@db/que-test
+       USE_RAILS: ~
 
    db:
      image: "postgres:${POSTGRES_VERSION-13}"
@@ -43,4 +44,4 @@ services:
 
  volumes:
    db-data: ~
-   ruby-2.7.5-gem-cache: ~
+   ruby-3.1.1-gem-cache: ~
data/docs/README.md CHANGED
@@ -1,52 +1,62 @@
- Docs Index
- ===============
+ # Que documentation
+
+ <!-- MarkdownTOC autolink=true -->
 
  - [Command Line Interface](#command-line-interface)
-   * [worker-priorities and worker-count](#worker-priorities-and-worker-count)
-   * [poll-interval](#poll-interval)
-   * [minimum-buffer-size and maximum-buffer-size](#minimum-buffer-size-and-maximum-buffer-size)
-   * [connection-url](#connection-url)
-   * [wait-period](#wait-period)
-   * [log-internals](#log-internals)
+   - [`worker-priorities` and `worker-count`](#worker-priorities-and-worker-count)
+   - [`poll-interval`](#poll-interval)
+   - [`maximum-buffer-size`](#maximum-buffer-size)
+   - [`connection-url`](#connection-url)
+   - [`wait-period`](#wait-period)
+   - [`log-internals`](#log-internals)
  - [Advanced Setup](#advanced-setup)
-   * [Using ActiveRecord Without Rails](#using-activerecord-without-rails)
-   * [Managing the Jobs Table](#managing-the-jobs-table)
-   * [Other Setup](#other-setup)
+   - [Using ActiveRecord Without Rails](#using-activerecord-without-rails)
+   - [Managing the Jobs Table](#managing-the-jobs-table)
+   - [Other Setup](#other-setup)
  - [Error Handling](#error-handling)
-   * [Error Notifications](#error-notifications)
-   * [Error-Specific Handling](#error-specific-handling)
+   - [Error Notifications](#error-notifications)
+   - [Error-Specific Handling](#error-specific-handling)
  - [Inspecting the Queue](#inspecting-the-queue)
-   * [Job Stats](#job-stats)
-   * [Custom Queries](#custom-queries)
-     + [ActiveRecord Example](#activerecord-example)
-     + [Sequel Example](#sequel-example)
+   - [Job Stats](#job-stats)
+   - [Custom Queries](#custom-queries)
+     - [ActiveRecord Example](#activerecord-example)
+     - [Sequel Example](#sequel-example)
  - [Managing Workers](#managing-workers)
-   * [Working Jobs Via Executable](#working-jobs-via-executable)
-   * [Thread-Unsafe Application Code](#thread-unsafe-application-code)
+   - [Working Jobs Via Executable](#working-jobs-via-executable)
+   - [Thread-Unsafe Application Code](#thread-unsafe-application-code)
  - [Logging](#logging)
-   * [Logging Job Completion](#logging-job-completion)
+   - [Logging Job Completion](#logging-job-completion)
  - [Migrating](#migrating)
  - [Multiple Queues](#multiple-queues)
  - [Shutting Down Safely](#shutting-down-safely)
  - [Using Plain Postgres Connections](#using-plain-postgres-connections)
-   * [Using ConnectionPool or Pond](#using-connectionpool-or-pond)
-   * [Using Any Other Connection Pool](#using-any-other-connection-pool)
+   - [Using ConnectionPool or Pond](#using-connectionpool-or-pond)
+   - [Using Any Other Connection Pool](#using-any-other-connection-pool)
  - [Using Sequel](#using-sequel)
  - [Using Que With ActiveJob](#using-que-with-activejob)
  - [Job Helper Methods](#job-helper-methods)
-   * [destroy](#destroy)
-   * [finish](#finish)
-   * [expire](#expire)
-   * [retry_in](#retry_in)
-   * [error_count](#error_count)
-   * [default_resolve_action](#default_resolve_action)
+   - [`destroy`](#destroy)
+   - [`finish`](#finish)
+   - [`expire`](#expire)
+   - [`retry_in`](#retry_in)
+   - [`error_count`](#error_count)
+   - [`default_resolve_action`](#default_resolve_action)
  - [Writing Reliable Jobs](#writing-reliable-jobs)
-   * [Timeouts](#timeouts)
+   - [Timeouts](#timeouts)
+ - [Job Options](#job-options)
+   - [`queue`](#queue)
+   - [`priority`](#priority)
+   - [`run_at`](#run_at)
+   - [`job_class`](#job_class)
+   - [`tags`](#tags)
  - [Middleware](#middleware)
-   * [Defining Middleware For Jobs](#defining-middleware-for-jobs)
-   * [Defining Middleware For SQL statements](#defining-middleware-for-sql-statements)
+   - [Defining Middleware For Jobs](#defining-middleware-for-jobs)
+   - [Defining Middleware For SQL statements](#defining-middleware-for-sql-statements)
  - [Vacuuming](#vacuuming)
+ - [Expired jobs](#expired-jobs)
+ - [Finished jobs](#finished-jobs)
 
+ <!-- /MarkdownTOC -->
 
  ## Command Line Interface
 
@@ -62,13 +72,12 @@ usage: que [options] [file/to/require] ...
  --connection-url [URL]        Set a custom database url to connect to for locking purposes.
  --log-internals               Log verbosely about Que's internal state. Only recommended for debugging issues
  --maximum-buffer-size [SIZE]  Set maximum number of jobs to be locked and held in this process awaiting a worker (default: 8)
- --minimum-buffer-size [SIZE]  Set minimum number of jobs to be locked and held in this process awaiting a worker (default: 2)
  --wait-period [PERIOD]        Set maximum interval between checks of the in-memory job queue, in milliseconds (default: 50)
  ```
 
  Some explanation of the more unusual options:
 
- ### worker-priorities and worker-count
+ ### `worker-priorities` and `worker-count`
 
  These options dictate the size and priority distribution of the worker pool. The default worker-priorities is `10,30,50,any,any,any`. This means that the default worker pool will reserve one worker to only work jobs with priorities under 10, one for priorities under 30, and one for priorities under 50. Three more workers will work any job.
 
@@ -78,23 +87,23 @@ Instead of passing worker-priorities, you can pass a `worker-count` - this is a
 
  If you pass both worker-count and worker-priorities, the count will trim or pad the priorities list with `any` workers. So, `--worker-priorities=20,30,40 --worker-count=6` would be the same as passing `--worker-priorities=20,30,40,any,any,any`.
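The trim-or-pad behavior described above can be sketched in a few lines of Ruby (illustrative only; the `resolve_priorities` name and the `nil`-for-`any` convention are assumptions, not Que's internals):

```ruby
# Illustrative sketch of how worker-count trims or pads worker-priorities.
# Here nil stands in for an "any"-priority worker.
def resolve_priorities(priorities, count)
  trimmed = priorities.first(count)
  trimmed + [nil] * (count - trimmed.length)
end

resolve_priorities([20, 30, 40], 6) # => [20, 30, 40, nil, nil, nil]
resolve_priorities([10, 20, 30], 2) # => [10, 20]
```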
 
- ### poll-interval
+ ### `poll-interval`
 
  This option sets the number of seconds the process will wait between polls of the job queue. Jobs that are ready to be worked immediately will be broadcast via the LISTEN/NOTIFY system, so polling is unnecessary for them - polling is only necessary for jobs that are scheduled in the future or which are being delayed due to errors. The default is 5 seconds.
 
- ### minimum-buffer-size and maximum-buffer-size
+ ### `maximum-buffer-size`
 
- These options set the size of the internal buffer that Que uses to hold jobs until they're ready for workers. The default minimum is 2 and the maximum is 8, meaning that the process won't buffer more than 8 jobs that aren't yet ready to be worked, and will only resort to polling if the buffer dips below 2. If you don't want jobs to be buffered at all, you can set both of these values to zero.
+ This option sets the size of the internal buffer that Que uses to hold jobs until they're ready for workers. The default maximum is 8, meaning that the process won't buffer more than 8 jobs that aren't yet ready to be worked. If you don't want jobs to be buffered at all, you can set this value to zero.
 
- ### connection-url
+ ### `connection-url`
 
  This option sets the URL to be used to open a connection to the database for locking purposes. By default, Que will simply use a connection from the connection pool for locking - this option is only useful if your application connections can't use advisory locks - for example, if they're passed through an external connection pool like PgBouncer. In that case, you'll need to use this option to specify your actual database URL so that Que can establish a direct connection.
 
- ### wait-period
+ ### `wait-period`
 
  This option specifies (in milliseconds) how often the locking thread wakes up to check whether the workers have finished jobs, whether it's time to poll, etc. You shouldn't generally need to tweak this, but it may come in handy for some workloads. The default is 50 milliseconds.
 
- ### log-internals
+ ### `log-internals`
 
  This option instructs Que to output a lot of information about its internal state to the logger. It should only be used if it becomes necessary to debug issues.
 
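The size-capped buffer described under `maximum-buffer-size` above can be pictured as a holding area that accepts jobs up to its cap and leaves the rest for a later poll. A minimal sketch (not Que's actual implementation; the class and method names are illustrative):

```ruby
# Minimal sketch of a size-capped job buffer: accepts jobs up to
# maximum_size and reports the overflow back to the caller.
class BoundedJobBuffer
  def initialize(maximum_size:)
    @maximum_size = maximum_size
    @jobs = []
    @mutex = Mutex.new
  end

  # Returns the jobs that didn't fit, so the poller can pick them up later.
  def push(*jobs)
    @mutex.synchronize do
      room = @maximum_size - @jobs.size
      @jobs.concat(jobs.first(room))
      jobs.drop(room)
    end
  end

  def shift
    @mutex.synchronize { @jobs.shift }
  end

  def size
    @mutex.synchronize { @jobs.size }
  end
end
```

With `maximum_size: 0`, `push` accepts nothing, which matches the "set this value to zero to disable buffering" behavior described above.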
@@ -129,8 +138,8 @@ There are other docs to read if you're using [Sequel](#using-sequel) or [plain P
  After you've connected Que to the database, you can manage the jobs table. You'll want to migrate to a specific version in a migration file, to ensure that they work the same way even when you upgrade Que in the future:
 
  ```ruby
- # Update the schema to version #5.
- Que.migrate!(version: 5)
+ # Update the schema to version #6.
+ Que.migrate!(version: 6)
 
  # Remove Que's jobs table entirely.
  Que.migrate!(version: 0)
@@ -148,7 +157,6 @@ Be sure to read the docs on [managing workers](#managing-workers) for more infor
 
  You'll also want to set up [logging](#logging) and an [error handler](#error-handling) to track errors raised by jobs.
 
-
  ## Error Handling
 
  If an error is raised and left uncaught by your job, Que will save the error message and backtrace to the database and schedule the job to be retried later.
@@ -426,7 +434,6 @@ Que.db_version #=> 3
 
  Note that you can remove Que from your database completely by migrating to version 0.
 
-
  ## Multiple Queues
 
  Que supports the use of multiple queues in a single job table. Please note that this feature is intended to support the case where multiple codebases are sharing the same job queue - if you want to support jobs of differing priorities, the numeric priority system offers better flexibility and performance.
@@ -442,7 +449,7 @@ que -q default -q credit_cards
  Then you can set jobs to be enqueued in that queue specifically:
 
  ```ruby
- ProcessCreditCard.enqueue current_user.id, queue: 'credit_cards'
+ ProcessCreditCard.enqueue(current_user.id, job_options: { queue: 'credit_cards' })
 
  # Or:
 
@@ -453,11 +460,7 @@ class ProcessCreditCard < Que::Job
  end
  ```
 
- In some cases, the ProcessCreditCard class may not be defined in the application that is enqueueing the job. In that case, you can specify the job class as a string:
-
- ```ruby
- Que.enqueue current_user.id, job_class: 'ProcessCreditCard', queue: 'credit_cards'
- ```
+ In some cases, the `ProcessCreditCard` class may not be defined in the application that is enqueueing the job. In that case, you can [specify the job class as a string](#job_class).
 
  ## Shutting Down Safely
 
@@ -467,7 +470,6 @@ To prevent this, Que will block the worker process from exiting until all jobs i
 
  So, be prepared to use SIGKILL on your Ruby processes if they run for too long. For example, Heroku takes a good approach to this - when Heroku's platform is shutting down a process, it sends SIGTERM, waits ten seconds, then sends SIGKILL if the process still hasn't exited. This is a nice compromise - it will give each of your currently running jobs ten seconds to complete, and any jobs that haven't finished by then will be interrupted and retried later.
 
-
  ## Using Plain Postgres Connections
 
  If you're not using an ORM like ActiveRecord or Sequel, you can use a distinct connection pool to manage your Postgres connections. Please be aware that if you **are** using ActiveRecord or Sequel, there's no reason for you to be using any of these methods - it's less efficient (unnecessary connections will waste memory on your database server) and you lose the reliability benefits of wrapping jobs in the same transactions as the rest of your data.
@@ -550,7 +552,7 @@ require 'que'
  Sequel.migration do
    up do
      Que.connection = self
-     Que.migrate!(version: 5)
+     Que.migrate!(version: 6)
    end
    down do
      Que.connection = self
@@ -586,7 +588,7 @@ Sequel automatically wraps model persistance actions (create, update, destroy) i
 
  ## Using Que With ActiveJob
 
- You can include `Que::ActiveJob::JobExtensions` into your `ApplicationJob` subclass to get support for all of Que's 
+ You can include `Que::ActiveJob::JobExtensions` into your `ApplicationJob` subclass to get support for all of Que's
  [helper methods](#job-helper-methods). These methods will become no-ops if you use a queue adapter that isn't Que, so if you like to use a different adapter in development they shouldn't interfere.
 
  Additionally, including `Que::ActiveJob::JobExtensions` lets you define a run() method that supports keyword arguments.
@@ -595,27 +597,29 @@ Additionally, including `Que::ActiveJob::JobExtensions` lets you define a run()
 
  There are a number of instance methods on Que::Job that you can use in your jobs, preferably in transactions. See [Writing Reliable Jobs](#writing-reliable-jobs) for more information on where to use these methods.
 
- ### destroy
+ ### `destroy`
 
  This method deletes the job from the queue table, ensuring that it won't be worked a second time.
 
- ### finish
+ ### `finish`
 
  This method marks the current job as finished, ensuring that it won't be worked a second time. This is like destroy, in that it finalizes a job, but this method leaves the job in the table, in case you want to query it later.
 
- ### expire
+ ### `expire`
 
  This method marks the current job as expired. It will be left in the table and won't be retried, but it will be easy to query for expired jobs. This method is called if the job exceeds its maximum_retry_count.
 
- ### retry_in
+ ### `retry_in`
 
  This method marks the current job to be retried later. You can pass a numeric to this method, in which case that is the number of seconds after which it can be retried (`retry_in(10)`, `retry_in(0.5)`), or, if you're using ActiveSupport, you can pass in a duration object (`retry_in(10.minutes)`). This automatically happens, with an exponentially-increasing interval, when the job encounters an error.
 
- ### error_count
+ Note that `retry_in` increments the job's `error_count`.
+
+ ### `error_count`
 
  This method returns the total number of times the job has errored, in case you want to modify the job's behavior after it has failed a given number of times.
 
- ### default_resolve_action
+ ### `default_resolve_action`
 
  If you don't perform a resolve action (destroy, finish, expire, retry_in) while the job is worked, Que will call this method for you. By default it simply calls `destroy`, but you can override it in your Job subclasses if you wish - for example, to call `finish`, or to invoke some more complicated logic.
 
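The exponentially-increasing interval mentioned under `retry_in` above is, by default, a function of the job's error count. Que's default is commonly given as `error_count ** 4 + 3` seconds; treat that exact formula as an assumption that may vary between versions:

```ruby
# Sketch of a default-style retry backoff: the wait grows with the fourth
# power of the error count, plus a small constant. The formula is an
# assumption (see lead-in), not guaranteed to match every Que version.
def retry_interval(error_count)
  error_count**4 + 3
end

(1..4).map { |n| retry_interval(n) } # => [4, 19, 84, 259]
```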
@@ -728,6 +732,51 @@ end
 
  Now, if the request takes more than five seconds, an error will be raised (probably - check your library's documentation) and Que will just retry the job later.
 
+ ## Job Options
+
+ When enqueueing a job, you can specify particular options for it in a `job_options` hash, e.g.:
+
+ ```ruby
+ ChargeCreditCard.enqueue(card.id, user_id: current_user.id, job_options: { run_at: 1.day.from_now, priority: 5 })
+ ```
+
+ ### `queue`
+
+ See [Multiple Queues](#multiple-queues).
+
+ ### `priority`
+
+ Provide an integer to customise the priority level of the job.
+
+ We use the Linux priority scale - a lower number is more important.
+
+ ### `run_at`
+
+ Provide a `Time` as the `run_at` to make a job run at a later time (well, at some point after it, depending on how busy the workers are).
+
+ It's best not to use `Time.now` here, as the current time in the Ruby process and the database won't be perfectly aligned. When the database considers the `run_at` to be in the past, the job will not be broadcast via the LISTEN/NOTIFY system, and it will need to wait for a poll. This introduces an unnecessary delay of probably a few seconds (depending on your configured [poll interval](#poll-interval)). So if you want the job to run ASAP, just omit the `run_at` option.
+
+ ### `job_class`
+
+ Specifying `job_class` allows you to enqueue a job using `Que.enqueue`:
+
+ ```ruby
+ Que.enqueue(current_user.id, job_options: { job_class: 'ProcessCreditCard' })
+ ```
+
+ Rather than needing to use the job class (nor even have it defined in the enqueueing process):
+
+ ```ruby
+ ProcessCreditCard.enqueue(current_user.id)
+ ```
+
+ ### `tags`
+
+ You can provide an array of strings to give a job some tags. These are not used by Que and are completely custom.
+
+ A job can have up to five tags, each one up to 100 characters long.
+
+ Note that unlike the other job options, tags are stored within the `que_jobs.data` column, rather than a correspondingly-named column.
 
  ## Middleware
 
@@ -786,3 +835,23 @@ class ManualVacuumJob < CronJob
    end
  end
  ```
+
+ ## Expired jobs
+
+ Expired jobs hang around in the `que_jobs` table. If necessary, you can get an expired job to run again by clearing the `error_count` and `expired_at` columns, e.g.:
+
+ ```sql
+ UPDATE que_jobs SET error_count = 0, expired_at = NULL WHERE id = 172340879;
+ ```
+
+ ## Finished jobs
+
+ If you prefer to leave finished jobs in the database for a while, to performantly remove them periodically, you can use something like:
+
+ ```sql
+ BEGIN;
+ ALTER TABLE que_jobs DISABLE TRIGGER que_state_notify;
+ DELETE FROM que_jobs WHERE finished_at < (select now() - interval '7 days');
+ ALTER TABLE que_jobs ENABLE TRIGGER que_state_notify;
+ COMMIT;
+ ```
@@ -12,8 +12,10 @@ module Que
      end
 
      def perform(*args)
+       args, kwargs = Que.split_out_ruby2_keywords(args)
+
        Que.internal_log(:active_job_perform, self) do
-         {args: args}
+         {args: args, kwargs: kwargs}
        end
 
        _run(
@@ -21,7 +23,12 @@ module Que
          que_filter_args(
            args.map { |a| a.is_a?(Hash) ? a.deep_symbolize_keys : a }
          )
-       )
+       ),
+       kwargs: Que.recursively_freeze(
+         que_filter_args(
+           kwargs.deep_symbolize_keys,
+         )
+       ),
      )
    end
 
@@ -53,37 +60,46 @@ module Que
    # A module that we mix into ActiveJob's wrapper for Que::Job, to maintain
    # backwards-compatibility with internal changes we make.
    module WrapperExtensions
-     # The Rails adapter (built against a pre-1.0 version of this gem)
-     # assumes that it can access a job's id via job.attrs["job_id"]. So,
-     # oblige it.
-     def attrs
-       {"job_id" => que_attrs[:id]}
+     module ClassMethods
+       # We've dropped support for job options supplied as top-level keywords, but ActiveJob's QueAdapter still uses them. So we have to move them into the job_options hash ourselves.
+       def enqueue(args, priority:, queue:, run_at: nil)
+         super(args, job_options: { priority: priority, queue: queue, run_at: run_at })
+       end
      end
 
-     def run(args)
-       # Our ActiveJob extensions expect to be able to operate on the actual
-       # job object, but there's no way to access it through ActiveJob. So,
-       # scope it to the current thread. It's a bit messy, but it's the best
-       # option under the circumstances (doesn't require hacking ActiveJob in
-       # any more extensive way).
+     module InstanceMethods
+       # The Rails adapter (built against a pre-1.0 version of this gem)
+       # assumes that it can access a job's id via job.attrs["job_id"]. So,
+       # oblige it.
+       def attrs
+         {"job_id" => que_attrs[:id]}
+       end
+
+       def run(args)
+         # Our ActiveJob extensions expect to be able to operate on the actual
+         # job object, but there's no way to access it through ActiveJob. So,
+         # scope it to the current thread. It's a bit messy, but it's the best
+         # option under the circumstances (doesn't require hacking ActiveJob in
+         # any more extensive way).
 
-       # There's no reason this logic should ever nest, because it wouldn't
-       # make sense to run a worker inside of a job, but even so, assert that
-       # nothing absurd is going on.
-       Que.assert NilClass, Thread.current[:que_current_job]
+         # There's no reason this logic should ever nest, because it wouldn't
+         # make sense to run a worker inside of a job, but even so, assert that
+         # nothing absurd is going on.
+         Que.assert NilClass, Thread.current[:que_current_job]
 
-       begin
-         Thread.current[:que_current_job] = self
+         begin
+           Thread.current[:que_current_job] = self
 
-         # We symbolize the args hash but ActiveJob doesn't like that :/
-         super(args.deep_stringify_keys)
-       ensure
-         # Also assert that the current job state was only removed now, but
-         # unset the job first so that an assertion failure doesn't mess up
-         # the state any more than it already has.
-         current = Thread.current[:que_current_job]
-         Thread.current[:que_current_job] = nil
-         Que.assert(self, current)
+           # We symbolize the args hash but ActiveJob doesn't like that :/
+           super(args.deep_stringify_keys)
+         ensure
+           # Also assert that the current job state was only removed now, but
+           # unset the job first so that an assertion failure doesn't mess up
+           # the state any more than it already has.
+           current = Thread.current[:que_current_job]
+           Thread.current[:que_current_job] = nil
+           Que.assert(self, current)
+         end
        end
      end
    end
@@ -92,6 +108,7 @@ end
 
  class ActiveJob::QueueAdapters::QueAdapter
    class JobWrapper < Que::Job
-     prepend Que::ActiveJob::WrapperExtensions
+     extend Que::ActiveJob::WrapperExtensions::ClassMethods
+     prepend Que::ActiveJob::WrapperExtensions::InstanceMethods
    end
  end
@@ -39,8 +39,8 @@ module Que
        where("que_jobs.data @> ?", JSON.dump(tags: [tag]))
      end
 
-     def by_args(*args)
-       where("que_jobs.args @> ?", JSON.dump(args))
+     def by_args(*args, **kwargs)
+       where("que_jobs.args @> ? AND que_jobs.kwargs @> ?", JSON.dump(args), JSON.dump(kwargs))
      end
    end
  end
@@ -62,7 +62,7 @@ module Que
      if params.empty?
        wrapped_connection.async_exec(sql)
      else
-       wrapped_connection.async_exec(sql, params)
+       wrapped_connection.async_exec_params(sql, params)
      end
    end