que 1.0.0.beta3 → 1.0.0.beta4

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: 832beb15feb06f24511a7adec121b635639a87449f1403dcf8763cf8517ab11c
4
- data.tar.gz: d39465285d79a8348753e4783a22c7432f369dc8ce0205c41872a87798cd5b48
3
+ metadata.gz: f2493b1dd8d4a7042ead34f78d19b83320c65f7eccd08a14336bbcfd8f349766
4
+ data.tar.gz: e3142695bd3ba1ad21aee57aeeeee47c260ed15ef86e93c6b942fe9e61745ed3
5
5
  SHA512:
6
- metadata.gz: bb927afb7f17dd55a2207686f2707bb994860a6c4a4fa2a30e79fb763a7d298b410e3dec36d4520b7617e996e7927b8b90c8fa1ab68a3e264d96b59df9c51a4d
7
- data.tar.gz: 421aaa5ce29afb83ac376966b2f859a4ba356087fad248738a57b3e9c77bb193ee8f08d58320ec10b6aa9bb3587c9c05bf6c9d7a22c6da62d908ef6876cdd173
6
+ metadata.gz: 8c2cadc33fc259cacc64cb0bdc7532451e9afbb344959b152d9477c75002ce1581858978ac69ac3a399968fd6f853ff93d41e3ac44edf09f67d8c547f97a92cb
7
+ data.tar.gz: 9319db034deccd32abdf97b69e5bcde5fe25d5b6cc424c9b07c26935b4b8c70e083314bf57ca3829f74bbecfdf895b2a92aec9386c17db81adf64127dfc3e4c7
@@ -0,0 +1,39 @@
1
+ name: Ruby
2
+
3
+ on: [pull_request]
4
+
5
+ jobs:
6
+ test:
7
+ runs-on: ubuntu-latest
8
+ strategy:
9
+ matrix:
10
+ ruby_version: [2.5.x, 2.6.x]
11
+ gemfile: ["4.2", "5.2", "6.0"]
12
+ postgres_version: [9, 10, 11]
13
+ services:
14
+ db:
15
+ image: postgres:${{ matrix.postgres_version }}
16
+ ports: ['5432:5432']
17
+ options: >-
18
+ --health-cmd pg_isready
19
+ --health-interval 10s
20
+ --health-timeout 5s
21
+ --health-retries 5
22
+ steps:
23
+ - uses: actions/checkout@v1
24
+ - name: Set up Ruby
25
+ uses: actions/setup-ruby@v1
26
+ with:
27
+ ruby-version: ${{ matrix.ruby_version }}
28
+ - name: Test with Rake
29
+ env:
30
+ PGHOST: 127.0.0.1
31
+ PGUSER: postgres
32
+ BUNDLE_GEMFILE: spec/gemfiles/Gemfile.${{ matrix.gemfile }}
33
+ run: |
34
+ sudo apt-get -yqq install libpq-dev postgresql-client
35
+ createdb que-test
36
+ gem install bundler
37
+ bundle install --jobs 4 --retry 3
38
+ USE_RAILS=true bundle exec rake test
39
+ bundle exec rake test
@@ -24,7 +24,7 @@
24
24
 
25
25
  ### 1.0.0.beta (2017-10-25)
26
26
 
27
- * **A schema upgrade to version 4 will be required for this release.** See [the migration doc](https://github.com/chanks/que/blob/master/docs/migrating.md) for information if you're upgrading from a previous release.
27
+ * **A schema upgrade to version 4 will be required for this release.** See [the migration doc](https://github.com/que-rb/que/blob/master/docs/migrating.md) for information if you're upgrading from a previous release.
28
28
 
29
29
  * Please note that this migration requires a rewrite of the jobs table, which makes it O(n) with the size of the table. If you have a very large backlog of jobs you may want to schedule downtime for this migration.
30
30
 
data/CHANGELOG.md CHANGED
@@ -1,6 +1,6 @@
1
1
  ### 1.0.0.beta2 (2018-04-13)
2
2
 
3
- * **A schema upgrade to version 4 will be required for this release.** See [the migration doc](https://github.com/chanks/que/blob/master/docs/migrating.md) for information if you're upgrading from a previous release.
3
+ * **A schema upgrade to version 4 will be required for this release.** See [the migration doc](https://github.com/que-rb/que/blob/master/docs/migrating.md) for information if you're upgrading from a previous release.
4
4
 
5
5
  * Please note that this migration requires a rewrite of the jobs table, which makes it O(n) with the size of the table. If you have a very large backlog of jobs you may want to schedule downtime for this migration.
6
6
 
@@ -272,7 +272,7 @@ For a detailed list of the changes between each beta release of 1.0.0, see [the
272
272
 
273
273
  ### 0.6.0 (2014-02-04)
274
274
 
275
- * **A schema upgrade to version 3 is required for this release.** See [the migration doc](https://github.com/chanks/que/blob/master/docs/migrating.md) for information if you're upgrading from a previous release.
275
+ * **A schema upgrade to version 3 is required for this release.** See [the migration doc](https://github.com/que-rb/que/blob/master/docs/migrating.md) for information if you're upgrading from a previous release.
276
276
 
277
277
  * You can now run a job's logic directly (without enqueueing it) like `MyJob.run(arg1, arg2, other_arg: arg3)`. This is useful when a job class encapsulates logic that you want to invoke without involving the entire queue.
278
278
 
@@ -284,11 +284,11 @@ For a detailed list of the changes between each beta release of 1.0.0, see [the
284
284
 
285
285
  * Log lines now include the machine's hostname, since a pid alone may not uniquely identify a process.
286
286
 
287
- * Multiple queues are now supported. See [the docs](https://github.com/chanks/que/blob/master/docs/multiple_queues.md) for details. (chanks, joevandyk)
287
+ * Multiple queues are now supported. See [the docs](https://github.com/que-rb/que/blob/master/docs/multiple_queues.md) for details. (chanks, joevandyk)
288
288
 
289
289
  * Rubinius 2.2 is now supported. (brixen)
290
290
 
291
- * Job classes may now define their own logic for determining the retry interval when a job raises an error. See [error handling](https://github.com/chanks/que/blob/master/docs/error_handling.md) for more information.
291
+ * Job classes may now define their own logic for determining the retry interval when a job raises an error. See [error handling](https://github.com/que-rb/que/blob/master/docs/error_handling.md) for more information.
292
292
 
293
293
  ### 0.5.0 (2014-01-14)
294
294
 
@@ -302,7 +302,7 @@ For a detailed list of the changes between each beta release of 1.0.0, see [the
302
302
 
303
303
  * Added a migration system to make it easier to change the schema when updating Que. You can now write, for example, `Que.migrate!(version: 2)` in your migrations. Migrations are run transactionally.
304
304
 
305
- * The logging format has changed to be more easily machine-readable. You can also now customize the logging format by assigning a callable to Que.log_formatter=. See the new doc on [logging](https://github.com/chanks/que/blob/master/docs/logging.md)) for details. The default logger level is INFO - for less critical information (such as when no jobs were found to be available or when a job-lock race condition has been detected and avoided) you can set the QUE_LOG_LEVEL environment variable to DEBUG.
305
+ * The logging format has changed to be more easily machine-readable. You can also now customize the logging format by assigning a callable to Que.log_formatter=. See the new doc on [logging](https://github.com/que-rb/que/blob/master/docs/logging.md)) for details. The default logger level is INFO - for less critical information (such as when no jobs were found to be available or when a job-lock race condition has been detected and avoided) you can set the QUE_LOG_LEVEL environment variable to DEBUG.
306
306
 
307
307
  * MultiJson is now a soft dependency. Que will use it if it is available, but it is not required.
308
308
 
data/README.md CHANGED
@@ -1,6 +1,6 @@
1
1
  # Que
2
2
 
3
- **This README and the rest of the docs on the master branch all refer to Que 1.0, which is currently in beta. If you're using version 0.x, please refer to the docs on [the 0.x branch](https://github.com/chanks/que/tree/0.x).**
3
+ **This README and the rest of the docs on the master branch all refer to Que 1.0, which is currently in beta. If you're using version 0.x, please refer to the docs on [the 0.x branch](https://github.com/que-rb/que/tree/0.x).**
4
4
 
5
5
  *TL;DR: Que is a high-performance job queue that improves the reliability of your application by protecting your jobs with the same [ACID guarantees](https://en.wikipedia.org/wiki/ACID) as the rest of your data.*
6
6
 
@@ -17,7 +17,7 @@ Additionally, there are the general benefits of storing jobs in Postgres, alongs
17
17
  * **Fewer Dependencies** - If you're already using Postgres (and you probably should be), a separate queue is another moving part that can break.
18
18
  * **Security** - Postgres' support for SSL connections keeps your data safe in transport, for added protection when you're running workers on cloud platforms that you can't completely control.
19
19
 
20
- Que's primary goal is reliability. You should be able to leave your application running indefinitely without worrying about jobs being lost due to a lack of transactional support, or left in limbo due to a crashing process. Que does everything it can to ensure that jobs you queue are performed exactly once (though the occasional repetition of a job can be impossible to avoid - see the docs on [how to write a reliable job](https://github.com/chanks/que/blob/master/docs/writing_reliable_jobs.md)).
20
+ Que's primary goal is reliability. You should be able to leave your application running indefinitely without worrying about jobs being lost due to a lack of transactional support, or left in limbo due to a crashing process. Que does everything it can to ensure that jobs you queue are performed exactly once (though the occasional repetition of a job can be impossible to avoid - see the docs on [how to write a reliable job](https://github.com/que-rb/que/blob/master/docs/writing_reliable_jobs.md)).
21
21
 
22
22
  Que's secondary goal is performance. The worker process is multithreaded, so that a single process can run many jobs simultaneously.
23
23
 
@@ -126,17 +126,40 @@ usage: que [options] [file/to/require] ...
126
126
 
127
127
  You may need to pass que a file path to require so that it can load your app. Que will automatically load `config/environment.rb` if it exists, so you shouldn't need an argument if you're using Rails.
128
128
 
129
+ ## Additional Rails-specific Setup
130
+
131
+ If you're using ActiveRecord to dump your database's schema, please [set your schema_format to :sql](http://guides.rubyonrails.org/migrations.html#types-of-schema-dumps) so that Que's table structure is managed correctly. This is a good idea regardless, as the `:ruby` schema format doesn't support many of PostgreSQL's advanced features.
132
+
133
+ Pre-1.0, the default queue name needed to be configured in order for Que to work out of the box with Rails. In 1.0 the default queue name is now 'default', as Rails expects, but when Rails enqueues some types of jobs it may try to use another queue name that isn't worked by default - in particular, ActionMailer uses a queue named 'mailers' by default, so in your app config you'll also need to set `config.action_mailer.deliver_later_queue_name = 'default'` if you're using ActionMailer.
134
+
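+ As a sketch, that setting might live in `config/application.rb` (here `YourApp` is a placeholder for your application's module):
+
+ ```ruby
+ module YourApp
+   class Application < Rails::Application
+     # Route deliver_later mail through the queue that Que workers handle by default.
+     config.action_mailer.deliver_later_queue_name = 'default'
+   end
+ end
+ ```
+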
135
+ Also, if you would like to integrate Que with Active Job, you can do so by setting the adapter in `config/application.rb`, or for a specific environment in `config/environments/production.rb`, for example:
136
+ ```ruby
137
+ config.active_job.queue_adapter = :que
138
+ ```
139
+
140
+ Que will automatically use the database configuration of your rails application, so there is no need to configure anything else.
141
+
142
+ You can then write your jobs as usual following the [Active Job documentation](https://guides.rubyonrails.org/active_job_basics.html). However, be aware that you'll lose the ability to finish the job in the same transaction as other database operations. That happens because Active Job is a generic background job framework that doesn't benefit from the database integration Que provides.
143
+
144
+ If you later decide to switch a job from Active Job to Que to have transactional integrity you can easily change the corresponding job class to inherit from `Que::Job` and follow the usage guidelines in the previous section.
145
+
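+ As a rough sketch (the class and its arguments are illustrative, not from Que's docs), that switch is mostly a matter of changing the parent class and renaming `perform` to `run`:
+
+ ```ruby
+ # Before: worked through Active Job's :que adapter.
+ class SendInvoiceJob < ApplicationJob
+   def perform(invoice_id)
+     # ...
+   end
+ end
+
+ # After: a plain Que job, so destroy/finish can share the job's transaction.
+ class SendInvoiceJob < Que::Job
+   def run(invoice_id)
+     # ...
+   end
+ end
+ ```
+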
129
146
  ## Testing
130
147
 
131
148
  There are a couple of ways to approach testing. You may want to set `Que::Job.run_synchronously = true`, which will cause JobClass.enqueue to simply execute the job's logic synchronously, as if you'd run JobClass.run(*your_args). Or, you may want to leave it disabled so you can assert on job state once jobs are stored in the database.
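
  For example, in a test helper (a minimal sketch; the file name depends on your test framework):

  ```ruby
  # spec/spec_helper.rb or test/test_helper.rb
  # Make JobClass.enqueue execute the job's logic inline instead of inserting a row:
  Que::Job.run_synchronously = true
  ```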
132
149
 
133
- **If you're using ActiveRecord to dump your database's schema, please [set your schema_format to :sql](http://guides.rubyonrails.org/migrations.html#types-of-schema-dumps) so that Que's table structure is managed correctly.** This is a good idea regardless, as the `:ruby` schema format doesn't support many of PostgreSQL's advanced features.
150
+ ## Related Projects
151
+
152
+ These projects are tested to be compatible with Que 1.x:
153
+
154
+ - [que-web](https://github.com/statianzo/que-web) is a Sinatra-based UI for inspecting your job queue.
155
+ - [que-scheduler](https://github.com/hlascelles/que-scheduler) lets you schedule tasks using a cron style config file.
134
156
 
157
+ If you have a project that uses or relates to Que, feel free to submit a PR adding it to the list!
135
158
 
136
159
  ## Community and Contributing
137
160
 
138
- * For bugs in the library, please feel free to [open an issue](https://github.com/chanks/que/issues/new).
139
- * For general discussion and questions/concerns that don't relate to obvious bugs, try posting on the [que-talk Google Group](https://groups.google.com/forum/#!forum/que-talk).
161
+ * For bugs in the library, please feel free to [open an issue](https://github.com/que-rb/que/issues/new).
162
+ * For general discussion and questions/concerns that don't relate to obvious bugs, join our [Discord Server](https://discord.gg/B3EW32H).
140
163
  * For contributions, pull requests submitted via Github are welcome.
141
164
 
142
165
  Regarding contributions, one of the project's priorities is to keep Que as simple, lightweight and dependency-free as possible, and pull requests that change too much or wouldn't be useful to the majority of Que's users have a good chance of being rejected. If you're thinking of submitting a pull request that adds a new feature, consider starting a discussion in [que-talk](https://groups.google.com/forum/#!forum/que-talk) first about what it would do and how it would be implemented. If it's a sufficiently large feature, or if most of Que's users wouldn't find it useful, it may be best implemented as a standalone gem, like some of the related projects above.
data/docs/README.md CHANGED
@@ -1,36 +1,788 @@
1
1
  Docs Index
2
2
  ===============
3
3
 
4
- TODO: Fix doc links.
5
-
6
- - [Advanced Setup](advanced_setup.md#advanced-setup)
7
- - [Using ActiveRecord Without Rails](advanced_setup.md#using-activerecord-without-rails)
8
- - [Forking Servers](advanced_setup.md#forking-servers)
9
- - [Managing the Jobs Table](advanced_setup.md#managing-the-jobs-table)
10
- - [Other Setup](advanced_setup.md#other-setup)
11
- - [Customizing Que](customizing_que.md#customizing-que)
12
- - [Recurring Jobs](customizing_que.md#recurring-jobs)
13
- - [DelayedJob-style Jobs](customizing_que.md#delayedjob-style-jobs)
14
- - [QueueClassic-style Jobs](customizing_que.md#queueclassic-style-jobs)
15
- - [Retaining Finished Jobs](customizing_que.md#retaining-finished-jobs)
16
- - [Not Retrying Certain Failed Jobs](customizing_que.md#not-retrying-certain-failed-jobs)
17
- - [Error Handling](error_handling.md#error-handling)
18
- - [Inspecting the Queue](inspecting_the_queue.md#inspecting-the-queue)
19
- - [Job Stats](inspecting_the_queue.md#job-stats)
20
- - [Worker States](inspecting_the_queue.md#worker-states)
21
- - [Custom Queries](inspecting_the_queue.md#custom-queries)
22
- - [Logging](logging.md#logging)
23
- - [Logging Job Completion](logging.md#logging-job-completion)
24
- - [Managing Workers](managing_workers.md#managing-workers)
25
- - [Working Jobs Via Executable](managing_workers.md#working-jobs-via-executable)
26
- - [Thread-Unsafe Application Code](managing_workers.md#thread-unsafe-application-code)
27
- - [The Wake Interval](managing_workers.md#the-wake-interval)
28
- - [Manually Waking Workers](managing_workers.md#manually-waking-workers)
29
- - [Connection Pool Size](managing_workers.md#connection-pool-size)
30
- - [Migrating](migrating.md#migrating)
31
- - [Multiple Queues](multiple_queues.md#multiple-queues)
32
- - [Shutting Down Safely](shutting_down_safely.md#shutting-down-safely)
33
- - [Using Plain Postgres Connections](using_plain_connections.md#using-plain-postgres-connections)
34
- - [Using Sequel](using_sequel.md#using-sequel)
35
- - [Writing Reliable Jobs](writing_reliable_jobs.md#writing-reliable-jobs)
36
- - [Timeouts](writing_reliable_jobs.md#timeouts)
4
+ - [Command Line Interface](#command-line-interface)
5
+ * [worker-priorities and worker-count](#worker-priorities-and-worker-count)
6
+ * [poll-interval](#poll-interval)
7
+ * [minimum-buffer-size and maximum-buffer-size](#minimum-buffer-size-and-maximum-buffer-size)
8
+ * [connection-url](#connection-url)
9
+ * [wait-period](#wait-period)
10
+ * [log-internals](#log-internals)
11
+ - [Advanced Setup](#advanced-setup)
12
+ * [Using ActiveRecord Without Rails](#using-activerecord-without-rails)
13
+ * [Managing the Jobs Table](#managing-the-jobs-table)
14
+ * [Other Setup](#other-setup)
15
+ - [Error Handling](#error-handling)
16
+ * [Error Notifications](#error-notifications)
17
+ * [Error-Specific Handling](#error-specific-handling)
18
+ - [Inspecting the Queue](#inspecting-the-queue)
19
+ * [Job Stats](#job-stats)
20
+ * [Custom Queries](#custom-queries)
21
+ + [ActiveRecord Example](#activerecord-example)
22
+ + [Sequel Example](#sequel-example)
23
+ - [Managing Workers](#managing-workers)
24
+ * [Working Jobs Via Executable](#working-jobs-via-executable)
25
+ * [Thread-Unsafe Application Code](#thread-unsafe-application-code)
26
+ - [Logging](#logging)
27
+ * [Logging Job Completion](#logging-job-completion)
28
+ - [Migrating](#migrating)
29
+ - [Multiple Queues](#multiple-queues)
30
+ - [Shutting Down Safely](#shutting-down-safely)
31
+ - [Using Plain Postgres Connections](#using-plain-postgres-connections)
32
+ * [Using ConnectionPool or Pond](#using-connectionpool-or-pond)
33
+ * [Using Any Other Connection Pool](#using-any-other-connection-pool)
34
+ - [Using Sequel](#using-sequel)
35
+ - [Using Que With ActiveJob](#using-que-with-activejob)
36
+ - [Job Helper Methods](#job-helper-methods)
37
+ * [destroy](#destroy)
38
+ * [finish](#finish)
39
+ * [expire](#expire)
40
+ * [retry_in](#retry-in)
41
+ * [error_count](#error-count)
42
+ * [default_resolve_action](#default-resolve-action)
43
+ - [Writing Reliable Jobs](#writing-reliable-jobs)
44
+ * [Timeouts](#timeouts)
45
+ - [Middleware](#middleware)
46
+ * [Defining Middleware For Jobs](#defining-middleware-for-jobs)
47
+ * [Defining Middleware For SQL statements](#defining-middleware-for-sql-statements)
48
+ - [Vacuuming](#vacuuming)
49
+
50
+
51
+ ## Command Line Interface
52
+
53
+ ```
54
+ usage: que [options] [file/to/require] ...
55
+ -h, --help Show this help text.
56
+ -i, --poll-interval [INTERVAL] Set maximum interval between polls for available jobs, in seconds (default: 5)
57
+ -l, --log-level [LEVEL] Set level at which to log to STDOUT (debug, info, warn, error, fatal) (default: info)
58
+ -p, --worker-priorities [LIST] List of priorities to assign to workers (default: 10,30,50,any,any,any)
59
+ -q, --queue-name [NAME] Set a queue name to work jobs from. Can be passed multiple times. (default: the default queue only)
60
+ -w, --worker-count [COUNT] Set number of workers in process (default: 6)
61
+ -v, --version Print Que version and exit.
62
+ --connection-url [URL] Set a custom database url to connect to for locking purposes.
63
+ --log-internals Log verbosely about Que's internal state. Only recommended for debugging issues
64
+ --maximum-buffer-size [SIZE] Set maximum number of jobs to be locked and held in this process awaiting a worker (default: 8)
65
+ --minimum-buffer-size [SIZE] Set minimum number of jobs to be locked and held in this process awaiting a worker (default: 2)
66
+ --wait-period [PERIOD] Set maximum interval between checks of the in-memory job queue, in milliseconds (default: 50)
67
+ ```
68
+
69
+ Some explanation of the more unusual options:
70
+
71
+ ### worker-priorities and worker-count
72
+
73
+ These options dictate the size and priority distribution of the worker pool. The default worker-priorities is `10,30,50,any,any,any`. This means that the default worker pool will reserve one worker to work only jobs with priorities under 10, one for priorities under 30, and one for priorities under 50. Three more workers will work any job.
74
+
75
+ For example, with these defaults, you could have a large backlog of jobs of priority 100. When a more important job (priority 40) comes in, there's guaranteed to be a free worker. If the process then becomes saturated with jobs of priority 40, and then a priority 20 job comes in, there's guaranteed to be a free worker for it, and so on. You can pass a priority more than once to have multiple workers at that level (for example: `--worker-priorities=100,100,any,any`). This gives you a lot of freedom to manage your worker capacity at different priority levels.
76
+
77
+ Instead of passing worker-priorities, you can pass a `worker-count` - this is a shorthand for creating the given number of workers at the `any` priority level. So, `--worker-count=3` is equivalent to passing `--worker-priorities=any,any,any`.
78
+
79
+ If you pass both worker-count and worker-priorities, the count will trim or pad the priorities list with `any` workers. So, `--worker-priorities=20,30,40 --worker-count=6` would be the same as passing `--worker-priorities=20,30,40,any,any,any`.
80
+
81
+ ### poll-interval
82
+
83
+ This option sets the number of seconds the process will wait between polls of the job queue. Jobs that are ready to be worked immediately will be broadcast via the LISTEN/NOTIFY system, so polling is unnecessary for them - polling is only necessary for jobs that are scheduled in the future or which are being delayed due to errors. The default is 5 seconds.
84
+
85
+ ### minimum-buffer-size and maximum-buffer-size
86
+
87
+ These options set the size of the internal buffer that Que uses to hold jobs until they're ready for workers. The default minimum is 2 and the maximum is 8, meaning that the process won't buffer more than 8 jobs that aren't yet ready to be worked, and will only resort to polling if the buffer dips below 2. If you don't want jobs to be buffered at all, you can set both of these values to zero.
88
+
89
+ ### connection-url
90
+
91
+ This option sets the URL to be used to open a connection to the database for locking purposes. By default, Que will simply use a connection from the connection pool for locking - this option is only useful if your application connections can't use advisory locks - for example, if they're passed through an external connection pool like PgBouncer. In that case, you'll need to use this option to specify your actual database URL so that Que can establish a direct connection.
92
+
93
+ ### wait-period
94
+
95
+ This option specifies (in milliseconds) how often the locking thread wakes up to check whether the workers have finished jobs, whether it's time to poll, etc. You shouldn't generally need to tweak this, but it may come in handy for some workloads. The default is 50 milliseconds.
96
+
97
+ ### log-internals
98
+
99
+ This option instructs Que to output a lot of information about its internal state to the logger. It should only be used if it becomes necessary to debug issues.
100
+
101
+ ## Advanced Setup
102
+
103
+ ### Using ActiveRecord Without Rails
104
+
105
+ If you're using both Rails and ActiveRecord, the README describes how to get started with Que (which is pretty straightforward, since it includes a Railtie that handles a lot of setup for you). Otherwise, you'll need to do some manual setup.
106
+
107
+ If you're using ActiveRecord outside of Rails, you'll need to tell Que to piggyback on its connection pool after you've connected to the database:
108
+
109
+ ```ruby
110
+ ActiveRecord::Base.establish_connection(ENV['DATABASE_URL'])
111
+
112
+ require 'que'
113
+ Que.connection = ActiveRecord
114
+ ```
115
+
116
+ Then you can queue jobs just as you would in Rails:
117
+
118
+ ```ruby
119
+ ActiveRecord::Base.transaction do
120
+ @user = User.create(params[:user])
121
+ SendRegistrationEmail.enqueue user_id: @user.id
122
+ end
123
+ ```
124
+
125
+ There are other docs to read if you're using [Sequel](#using-sequel) or [plain Postgres connections](#using-plain-postgres-connections) (with no ORM at all) instead of ActiveRecord.
126
+
127
+ ### Managing the Jobs Table
128
+
129
+ After you've connected Que to the database, you can manage the jobs table. You'll want to migrate to a specific version in a migration file, to ensure that your migrations work the same way even when you upgrade Que in the future:
130
+
131
+ ```ruby
132
+ # Update the schema to version #4.
133
+ Que.migrate! version: 4
134
+
135
+ # Remove Que's jobs table entirely.
136
+ Que.migrate! version: 0
137
+ ```
138
+
139
+ There's also a helper method to clear all jobs from the jobs table:
140
+
141
+ ```ruby
142
+ Que.clear!
143
+ ```
144
+
145
+ ### Other Setup
146
+
147
+ Be sure to read the docs on [managing workers](#managing-workers) for more information on using the worker pool.
148
+
149
+ You'll also want to set up [logging](#logging) and an [error handler](#error-handling) to track errors raised by jobs.
150
+
151
+
152
+ ## Error Handling
153
+
154
+ If an error is raised and left uncaught by your job, Que will save the error message and backtrace to the database and schedule the job to be retried later.
155
+
156
+ If a given job fails repeatedly, Que will retry it at exponentially-increasing intervals equal to (failure_count^4 + 3) seconds. This means that a job will be retried 4 seconds after its first failure, 19 seconds after its second, 84 seconds after its third, 259 seconds after its fourth, and so on until it succeeds. This pattern is very similar to DelayedJob's. Alternately, you can define your own retry logic by setting an interval to delay each time, or a callable that accepts the number of failures and returns an interval:
157
+
158
+ ```ruby
159
+ class MyJob < Que::Job
160
+ # Just retry a failed job every 5 seconds:
161
+ self.retry_interval = 5
162
+
163
+ # Always retry this job immediately (not recommended, or transient
164
+ # errors will spam your error reporting):
165
+ self.retry_interval = 0
166
+
167
+ # Increase the delay by 30 seconds every time this job fails:
168
+ self.retry_interval = proc { |count| count * 30 }
169
+ end
170
+ ```
171
+
172
+ There is a maximum_retry_count option for jobs. It defaults to 15 retries, which with the default retry interval means that a job will stop retrying after a little more than two days.
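+
+ For example (a sketch; the count here is arbitrary):
+
+ ```ruby
+ class MyJob < Que::Job
+   # Give up on this job (expire it) after three failed attempts instead of fifteen.
+   self.maximum_retry_count = 3
+ end
+ ```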
173
+
174
+ ### Error Notifications
175
+
176
+ If you're using an error notification system (highly recommended, of course), you can hook Que into it by setting a callable as the error notifier:
177
+
178
+ ```ruby
179
+ Que.error_notifier = proc do |error, job|
180
+ # Do whatever you want with the error object or job row here. Note that the
181
+ # job passed is not the actual job object, but the hash representing the job
182
+ # row in the database, which looks like:
183
+
184
+ # {
185
+ # :priority => 100,
186
+ # :run_at => "2017-09-15T20:18:52.018101Z",
187
+ # :id => 172340879,
188
+ # :job_class => "TestJob",
189
+ # :error_count => 0,
190
+ # :last_error_message => nil,
191
+ # :queue => "default",
192
+ # :last_error_backtrace => nil,
193
+ # :finished_at => nil,
194
+ # :expired_at => nil,
195
+ # :args => [],
196
+ # :data => {}
197
+ # }
198
+
199
+ # This is done because the job may not have been able to be deserialized
200
+ # properly, if the name of the job class was changed or the job class isn't
201
+ # loaded for some reason. The job argument may also be nil, if there was a
202
+ # connection failure or something similar.
203
+ end
204
+ ```
205
+
206
+ ### Error-Specific Handling
207
+
208
+ You can also define a handle_error method in your job, like so:
209
+
210
+ ```ruby
211
+ class MyJob < Que::Job
212
+ def run(*args)
213
+ # Your code goes here.
214
+ end
215
+
216
+ def handle_error(error)
217
+ case error
218
+ when TemporaryError then retry_in 10.seconds
219
+ when PermanentError then expire
220
+ else super # Default (exponential backoff) behavior.
221
+ end
222
+ end
223
+ end
224
+ ```
225
+
226
+ The return value of handle_error determines whether the error object is passed to the error notifier. The helper methods like expire and retry_in return true, so these errors will be notified. You can explicitly return false to skip notification.
227
+
228
+ ```ruby
229
+ class MyJob < Que::Job
230
+ def handle_error(error)
231
+ case error
232
+ when AnnoyingError
233
+ retry_in 10.seconds
234
+ false
235
+ when TransientError
236
+ super
237
+ error_count > 3
238
+ else
239
+ super # Default (exponential backoff) behavior.
240
+ end
241
+ end
242
+ end
243
+ ```
244
+
245
+ In this example, AnnoyingError will never be notified, while TransientError will only be notified once it has affected a given job more than three times.
246
+
247
+ ## Inspecting the Queue
248
+
249
+ In order to remain simple and compatible with any ORM (or no ORM at all), Que is really just a very thin wrapper around some raw SQL. There are two methods available that query the jobs table and Postgres' system catalogs to retrieve information on the current state of the queue:
250
+
251
+ ### Job Stats
252
+
253
+ You can call `Que.job_stats` to return some aggregate data on the types of jobs currently in the queue. Example output:
254
+
255
+ ```ruby
256
+ [
257
+ {
258
+ :job_class=>"ChargeCreditCard",
259
+ :count=>10,
260
+ :count_working=>4,
261
+ :count_errored=>2,
262
+ :highest_error_count=>5,
263
+ :oldest_run_at=>2017-09-08 16:13:18 -0400
264
+ },
265
+ {
266
+ :job_class=>"SendRegistrationEmail",
267
+ :count=>1,
268
+ :count_working=>0,
269
+ :count_errored=>0,
270
+ :highest_error_count=>0,
271
+ :oldest_run_at=>2017-09-08 17:13:18 -0400
272
+ }
273
+ ]
274
+ ```
275
+
276
+ This tells you that, for instance, there are ten ChargeCreditCard jobs in the queue, four of which are currently being worked, and two of which have experienced errors. One of them has started to process but experienced an error five times. The oldest_run_at is helpful for determining how long jobs have been sitting around, if you have a large backlog.
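+
+ Since the return value is just an array of hashes, it's easy to layer simple monitoring on top of it. A sketch (the threshold is arbitrary):
+
+ ```ruby
+ stats = Que.job_stats
+
+ # Total number of jobs currently in the queue, across all job classes:
+ backlog = stats.sum { |s| s[:count] }
+
+ # Complain about any job class that has accumulated a lot of errored jobs:
+ stats.each do |s|
+   warn "#{s[:job_class]}: #{s[:count_errored]} errored jobs" if s[:count_errored] > 10
+ end
+ ```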
277
+
278
+ ### Custom Queries
279
+
280
+ If you're using ActiveRecord or Sequel, Que ships with models that wrap the job queue so you can write your own logic to inspect it. They include some helpful scopes to write your queries - see the gem source for a complete accounting.
281
+
282
+ #### ActiveRecord Example
283
+
284
+ ``` ruby
285
+ # app/models/que_job.rb
286
+
287
+ require 'que/active_record/model'
288
+
289
+ class QueJob < Que::ActiveRecord::Model
290
+ end
291
+
292
+ QueJob.finished.to_sql # => "SELECT \"que_jobs\".* FROM \"que_jobs\" WHERE (\"que_jobs\".\"finished_at\" IS NOT NULL)"
293
+
294
+ # You could also name the model whatever you like, or just query from
295
+ # Que::ActiveRecord::Model directly if you don't need to write your own model
296
+ # logic.
297
+ ```
298
+
299
+ #### Sequel Example
300
+
301
+ ``` ruby
302
+ # app/models/que_job.rb
303
+
304
+ require 'que/sequel/model'
305
+
306
+ class QueJob < Que::Sequel::Model
307
+ end
308
+
309
+ QueJob.finished # => #<Sequel::Postgres::Dataset: "SELECT * FROM \"public\".\"que_jobs\" WHERE (\"public\".\"que_jobs\".\"finished_at\" IS NOT NULL)">
310
+ ```
311
+
312
+ ## Managing Workers
313
+
314
+ Que uses a multithreaded pool of workers to run jobs in parallel - this allows you to save memory by working many jobs simultaneously in the same process. The `que` executable starts up a pool of 6 workers by default. This is fine for most use cases, but the ideal number for your app will depend on your interpreter and what types of jobs you're running.
315
+
316
+ Ruby MRI has a global interpreter lock (GIL), which prevents it from using more than one CPU core at a time. Having multiple workers running makes sense if your jobs tend to spend a lot of time in I/O (waiting on complex database queries, sending emails, making HTTP requests, etc.), as most jobs do. However, if your jobs are doing a lot of work in Ruby, they'll be spending a lot of time blocking each other, and having too many workers running will cause you to lose efficiency to context-switching. So, you'll want to choose the appropriate number of workers for your use case.
317
+
318
+ ### Working Jobs Via Executable
319
+
320
+ ```shell
321
+ # Run a pool of 6 workers:
322
+ que
323
+
324
+ # Or configure the number of workers:
325
+ que --worker-count 10
326
+ ```
327
+
328
+ See `que -h` for a complete list of command-line options.
329
+
330
+ ### Thread-Unsafe Application Code
331
+
332
+ If your application code is not thread-safe, you won't want any workers to be processing jobs while anything else is happening in the Ruby process. So, you'll want to run a single worker at a time, like so:
333
+
334
+ ```shell
335
+ que --worker-count 1
336
+ ```
337
+
338
+ ## Logging
339
+
340
+ By default, Que logs important information in JSON to either Rails' logger (when running in a Rails web process) or STDOUT (when running via the `que` executable). So, your logs will look something like:
341
+
342
+ ```
343
+ I, [2017-08-12T05:07:31.094201 #4687] INFO -- : {"lib":"que","hostname":"lovelace","pid":21626,"thread":21471100,"event":"job_worked","job_id":6157665,"elapsed":0.531411}
344
+ ```
345
+
346
+ Of course you can have it log wherever you like:
347
+
348
+ ```ruby
349
+ Que.logger = Logger.new(...)
350
+ ```
351
+
352
+ If you don't like logging in JSON, you can also customize the format of the logging output by passing a callable object (such as a proc) to Que.log_formatter=. The proc should take a hash (the keys are symbols) and return a string. The keys and values are just as you would expect from the JSON output:
353
+
354
+ ```ruby
355
+ Que.log_formatter = proc do |data|
356
+ "Thread number #{data[:thread]} experienced a #{data[:event]}"
357
+ end
358
+ ```
359
+
360
+ If the log formatter returns nil or false, nothing will be logged at all. You could use this to narrow down what you want to emit, for example:
361
+
362
+ ```ruby
363
+ Que.log_formatter = proc do |data|
364
+ if [:job_worked, :job_unavailable].include?(data[:event])
365
+ JSON.dump(data)
366
+ end
367
+ end
368
+ ```
369
+
370
+ ### Logging Job Completion
371
+
372
+ Que logs a `job_worked` event whenever a job completes, though by default this event is logged at the `DEBUG` level. Since people often run their applications at the `INFO` level or above, this can make the logs too silent for some use cases. Similarly, you may want to log at a higher level if a time-sensitive job begins taking too long to run.
373
+
374
+ You can solve these problems by configuring the level at which a job is logged on a per-job basis. Simply define a `log_level` method in your job class - it will be called with a float representing the number of seconds it took for the job to run, and it should return a symbol indicating what level to log the job at:
375
+
376
+ ```ruby
377
+ class TimeSensitiveJob < Que::Job
378
+ def run(*args)
379
+ RemoteAPI.execute_important_request
380
+ end
381
+
382
+ def log_level(elapsed)
383
+ if elapsed > 60
384
+ # This job took over a minute! We should complain about it!
385
+ :warn
386
+ elsif elapsed > 30
387
+ # A little long, but no big deal!
388
+ :info
389
+ else
390
+ # This is fine, don't bother logging at all.
391
+ false
392
+ end
393
+ end
394
+ end
395
+ ```
396
+
397
+ This method should return a symbol that is a valid logging level (one of `[:debug, :info, :warn, :error, :fatal, :unknown]`). If the method returns anything other than one of these symbols, the job won't be logged.
398
+
399
+ If a job errors, a `job_errored` event will be emitted at the `ERROR` log level. This is not currently configurable.
400
+
401
+ ## Migrating
402
+
403
+ Some new releases of Que may require updates to the database schema. It's recommended that you integrate these updates alongside your other database migrations. For example, when Que released version 0.6.0, the schema version was updated from 2 to 3. If you're running ActiveRecord, you could make a migration to perform this upgrade like so:
404
+
405
+ ```ruby
406
+ class UpdateQue < ActiveRecord::Migration[5.0]
407
+ def self.up
408
+ Que.migrate! version: 3
409
+ end
410
+
411
+ def self.down
412
+ Que.migrate! version: 2
413
+ end
414
+ end
415
+ ```
416
+
417
+ This will make sure that your database schema stays consistent with your codebase. If you're looking for something quicker and dirtier, you can always manually migrate in a console session:
418
+
419
+ ```ruby
420
+ # Change schema to version 3.
421
+ Que.migrate! version: 3
422
+
423
+ # Check your current schema version.
424
+ Que.db_version #=> 3
425
+ ```
426
+
427
+ Note that you can remove Que from your database completely by migrating to version 0.
428
+
429
+
430
+ ## Multiple Queues
431
+
432
+ Que supports the use of multiple queues in a single job table. Please note that this feature is intended to support the case where multiple codebases are sharing the same job queue - if you want to support jobs of differing priorities, the numeric priority system offers better flexibility and performance.
433
+
434
+ For instance, you might have a separate Ruby application that handles only processing credit cards. In that case, you can run that application's workers against a specific queue:
435
+
436
+ ```shell
437
+ que --queue-name credit_cards
438
+ # The -q flag is equivalent, and either can be passed multiple times.
439
+ que -q default -q credit_cards
440
+ ```
441
+
442
+ Then you can set jobs to be enqueued in that queue specifically:
443
+
444
+ ```ruby
445
+ ProcessCreditCard.enqueue current_user.id, queue: 'credit_cards'
446
+
447
+ # Or:
448
+
449
+ class ProcessCreditCard < Que::Job
450
+ # Set a default queue for this job class; this can be overridden by
451
+ # passing the :queue parameter to enqueue like above.
452
+ self.queue = 'credit_cards'
453
+ end
454
+ ```
455
+
456
+ In some cases, the ProcessCreditCard class may not be defined in the application that is enqueueing the job. In that case, you can specify the job class as a string:
457
+
458
+ ```ruby
459
+ Que.enqueue current_user.id, job_class: 'ProcessCreditCard', queue: 'credit_cards'
460
+ ```
461
+
462
+ ## Shutting Down Safely
463
+
464
+ To ensure safe operation, Que needs to be very careful in how it shuts down. When a Ruby process ends normally, it calls Thread#kill on any threads that are still running - unfortunately, if a thread is in the middle of a transaction when this happens, there is a risk that it will be prematurely committed, resulting in data corruption. See [here](http://blog.headius.com/2008/02/ruby-threadraise-threadkill-timeoutrb.html) and [here](http://coderrr.wordpress.com/2011/05/03/beware-of-threadkill-or-your-activerecord-transactions-are-in-danger-of-being-partially-committed/) for more detail on this.
465
+
466
+ To prevent this, Que will block the worker process from exiting until all jobs it is working have completed normally. Unfortunately, if you have long-running jobs, this may take a very long time (and if something goes wrong with a job's logic, it may never happen). The solution in this case is SIGKILL - luckily, Ruby processes that are killed via SIGKILL will end without using Thread#kill on their running threads. This is safer than exiting normally - when PostgreSQL loses the connection it will simply roll back the open transaction, if any, and unlock the job so it can be retried later by another worker. Be sure to read [Writing Reliable Jobs](#writing-reliable-jobs) for information on how to design your jobs to fail safely.
467
+
468
+ So, be prepared to use SIGKILL on your Ruby processes if they run for too long. For example, Heroku takes a good approach to this - when Heroku's platform is shutting down a process, it sends SIGTERM, waits ten seconds, then sends SIGKILL if the process still hasn't exited. This is a nice compromise - it will give each of your currently running jobs ten seconds to complete, and any jobs that haven't finished by then will be interrupted and retried later.
469
+
470
+
471
+ ## Using Plain Postgres Connections
472
+
473
+ If you're not using an ORM like ActiveRecord or Sequel, you can use a distinct connection pool to manage your Postgres connections. Please be aware that if you **are** using ActiveRecord or Sequel, there's no reason for you to be using any of these methods - it's less efficient (unnecessary connections will waste memory on your database server) and you lose the reliability benefits of wrapping jobs in the same transactions as the rest of your data.
474
+
475
+ ### Using ConnectionPool or Pond
476
+
477
+ Support for two connection pool gems is included in Que. The first is the ConnectionPool gem (be sure to add `gem 'connection_pool'` to your Gemfile):
478
+
479
+ ```ruby
480
+ require 'uri'
481
+ require 'pg'
482
+ require 'connection_pool'
483
+
484
+ uri = URI.parse(ENV['DATABASE_URL'])
485
+
486
+ Que.connection = ConnectionPool.new(size: 10) do
487
+ PG::Connection.open(
488
+ host: uri.host,
489
+ user: uri.user,
490
+ password: uri.password,
491
+ port: uri.port || 5432,
492
+ dbname: uri.path[1..-1]
493
+ )
+ end
494
+ ```
495
+
496
+ Be sure to pick your pool size carefully - if you use 10 for the size, you'll incur the overhead of having 10 connections open to Postgres even if you never use more than a couple of them.
497
+
498
+ The Pond gem doesn't have this drawback - it is very similar to ConnectionPool, but establishes connections lazily (add `gem 'pond'` to your Gemfile):
499
+
500
+ ```ruby
501
+ require 'uri'
502
+ require 'pg'
503
+ require 'pond'
504
+
505
+ uri = URI.parse(ENV['DATABASE_URL'])
506
+
507
+ Que.connection = Pond.new(maximum_size: 10) do
508
+ PG::Connection.open(
509
+ host: uri.host,
510
+ user: uri.user,
511
+ password: uri.password,
512
+ port: uri.port || 5432,
513
+ dbname: uri.path[1..-1]
514
+ )
515
+ end
516
+ ```
517
+
518
+ ### Using Any Other Connection Pool
519
+
520
+ You can use any other in-process connection pool by defining access to it in a proc that's passed to `Que.connection_proc = proc`. The proc you pass should accept a block and call it with a connection object. For instance, Que's built-in interface to Sequel's connection pool is basically implemented like:
521
+
522
+ ```ruby
523
+ Que.connection_proc = proc do |&block|
524
+ DB.synchronize do |connection|
525
+ block.call(connection)
526
+ end
527
+ end
528
+ ```
529
+
530
+ This proc must meet a few requirements:
531
+ - The yielded object must be an instance of `PG::Connection`.
532
+ - It must be reentrant - if it is called with a block, and then called again inside that block, it must return the same object. For example, in `proc.call{|conn1| proc.call{|conn2| conn1.object_id == conn2.object_id}}` the innermost condition must be true.
533
+ - It must lock the connection object and prevent any other thread from accessing it for the duration of the block.
534
+
535
+ If any of these conditions aren't met, Que will raise an error.
536
+
537
+ ## Using Sequel
538
+
539
+ If you're using Sequel, with or without Rails, you'll need to give Que a specific database instance to use:
540
+
541
+ ```ruby
542
+ DB = Sequel.connect(ENV['DATABASE_URL'])
543
+ Que.connection = DB
544
+ ```
545
+
546
+ If you are using Sequel's migrator, your app initialization won't happen, so you may need to tweak your migrations to `require 'que'` and set its connection:
547
+
548
+ ```ruby
549
+ require 'que'
550
+ Sequel.migration do
551
+ up do
552
+ Que.connection = self
553
+ Que.migrate! :version => 3
554
+ end
555
+ down do
556
+ Que.connection = self
557
+ Que.migrate! :version => 0
558
+ end
559
+ end
560
+ ```
561
+
562
+ Then you can safely use the same database object to transactionally protect your jobs:
563
+
564
+ ```ruby
565
+ class MyJob < Que::Job
566
+ def run(user_id:)
567
+ # Do stuff.
568
+
569
+ DB.transaction do
570
+ # Make changes to the database.
571
+
572
+ # Destroying this job will be protected by the same transaction.
573
+ destroy
574
+ end
575
+ end
576
+ end
577
+
578
+ # Or, in your controller action:
579
+ DB.transaction do
580
+ @user = User.create(params[:user])
581
+ MyJob.enqueue user_id: @user.id
582
+ end
583
+ ```
584
+
585
+ Sequel automatically wraps model persistence actions (create, update, destroy) in transactions, so you can simply call #enqueue methods from your models' callbacks, if you wish.
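+
+ For example (a sketch; the model and job names are illustrative):
+
+ ```ruby
+ class User < Sequel::Model
+   def after_create
+     super
+     # Sequel runs the save (including this hook) inside a transaction, so the
+     # job row is only committed if the user row is.
+     SendRegistrationEmail.enqueue user_id: id
+   end
+ end
+ ```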
586
+
587
+ ## Using Que With ActiveJob
588
+
589
+ You can include `Que::ActiveJob::JobExtensions` into your `ApplicationJob` subclass to get support for all of Que's
590
+ [helper methods](#job-helper-methods). These methods will become no-ops if you use a queue adapter that isn't Que, so if you'd like to use a different adapter in development, they shouldn't interfere.
591
+
592
+ Additionally, including `Que::ActiveJob::JobExtensions` lets you define a run() method that supports keyword arguments.
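+
+ For example (a sketch):
+
+ ```ruby
+ class ApplicationJob < ActiveJob::Base
+   include Que::ActiveJob::JobExtensions
+ end
+
+ class ProcessCreditCard < ApplicationJob
+   def run(credit_card_id:)
+     # Helper methods like destroy, finish and retry_in are available here
+     # (and become no-ops under a non-Que adapter).
+   end
+ end
+ ```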
593
+
594
+ ## Job Helper Methods
595
+
596
+ There are a number of instance methods on Que::Job that you can use in your jobs, preferably in transactions. See [Writing Reliable Jobs](#writing-reliable-jobs) for more information on where to use these methods.
597
+
598
+ ### destroy
599
+
600
+ This method deletes the job from the queue table, ensuring that it won't be worked a second time.
601
+
602
+ ### finish
603
+
604
+ This method marks the current job as finished, ensuring that it won't be worked a second time. This is like destroy, in that it finalizes a job, but this method leaves the job in the table, in case you want to query it later.
605
+
606
+ ### expire
607
+
608
+ This method marks the current job as expired. It will be left in the table and won't be retried, but it will be easy to query for expired jobs. This method is called if the job exceeds its maximum_retry_count.
609
+
610
+ ### retry_in
611
+
612
+ This method marks the current job to be retried later. You can pass a numeric to this method, in which case that is the number of seconds after which it can be retried (`retry_in(10)`, `retry_in(0.5)`), or, if you're using ActiveSupport, you can pass in a duration object (`retry_in(10.minutes)`). This automatically happens, with an exponentially-increasing interval, when the job encounters an error.
613
+
614
+ ### error_count
615
+
616
+ This method returns the total number of times the job has errored, in case you want to modify the job's behavior after it has failed a given number of times.
617
+
618
+ ### default_resolve_action
619
+
620
+ If you don't perform a resolve action (destroy, finish, expire, retry_in) while the job is worked, Que will call this method for you. By default it simply calls `destroy`, but you can override it in your Job subclasses if you wish - for example, to call `finish`, or to invoke some more complicated logic.
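+
+ For example, if you'd rather keep finished jobs around for later inspection (a sketch; the class name is illustrative):
+
+ ```ruby
+ class AuditedJob < Que::Job
+   def run(*args)
+     # Your code goes here.
+   end
+
+   # Mark the job finished (keeping its row) instead of deleting it.
+   def default_resolve_action
+     finish
+   end
+ end
+ ```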
621
+
622
+ ## Writing Reliable Jobs
623
+
624
+ Que does everything it can to ensure that jobs are worked exactly once, but if something bad happens when a job is halfway completed, there's no way around it - the job will need to be repeated from the beginning, probably by a different worker. When you're writing jobs, you need to be prepared for this to happen.
625
+
626
+ The safest type of job is one that reads in data, either from the database or from external APIs, then does some number crunching and writes the results to the database. These jobs are easy to make safe - simply write the results to the database inside a transaction, and also destroy the job inside that transaction, like so:
627
+
628
+ ```ruby
629
+ class UpdateWidgetPrice < Que::Job
630
+ def run(widget_id)
631
+ widget = Widget[widget_id]
632
+ price = ExternalService.get_widget_price(widget_id)
633
+
634
+ ActiveRecord::Base.transaction do
635
+ # Make changes to the database.
636
+ widget.update price: price
637
+
638
+ # Mark the job as destroyed, so it doesn't run again.
639
+ destroy
640
+ end
641
+ end
642
+ end
643
+ ```
644
+
645
+ Here, you're taking advantage of the guarantees of an [ACID](https://en.wikipedia.org/wiki/ACID) database. The job is destroyed along with the other changes, so either the write will succeed and the job will be run only once, or it will fail and the database will be left untouched. But even if it fails, the job can simply be retried, and there are no lingering effects from the first attempt, so no big deal.
646
+
647
+ The more difficult type of job is one that makes changes that can't be controlled transactionally. For example, writing to an external service:
648
+
649
+ ```ruby
650
+ class ChargeCreditCard < Que::Job
651
+ def run(user_id, credit_card_id)
652
+ CreditCardService.charge(credit_card_id, amount: "$10.00")
653
+
654
+ ActiveRecord::Base.transaction do
655
+ User.where(id: user_id).update_all charged_at: Time.now
656
+ destroy
657
+ end
658
+ end
659
+ end
660
+ ```
661
+
662
+ What if the process abruptly dies after we tell the provider to charge the credit card, but before we finish the transaction? Que will retry the job, but there's no way to tell where (or even if) it failed the first time. The credit card will be charged a second time, and then you've got an angry customer. The ideal solution in this case is to make the job [idempotent](https://en.wikipedia.org/wiki/Idempotence), meaning that it will have the same effect no matter how many times it is run:
663
+
664
+ ```ruby
665
+ class ChargeCreditCard < Que::Job
666
+ def run(user_id, credit_card_id)
667
+ unless CreditCardService.check_for_previous_charge(credit_card_id)
668
+ CreditCardService.charge(credit_card_id, amount: "$10.00")
669
+ end
670
+
671
+ ActiveRecord::Base.transaction do
672
+ User.where(id: user_id).update_all charged_at: Time.now
673
+ destroy
674
+ end
675
+ end
676
+ end
677
+ ```
678
+
679
+ This makes the job slightly more complex, but reliable (or, at least, as reliable as your credit card service).
680
+
681
+ Finally, there are some jobs where you won't want to write to the database at all:
682
+
683
+ ```ruby
684
+ class SendVerificationEmail < Que::Job
685
+ def run(email_address)
686
+ Mailer.verification_email(email_address).deliver
687
+ end
688
+ end
689
+ ```
690
+
691
+ In this case, we don't have a way to prevent the occasional double-sending of an email. But, for ease of use, you can leave out the transaction and the `destroy` call entirely - Que will recognize that the job wasn't destroyed and will clean it up for you.
692
+
693
+ ### Timeouts
694
+
695
+ Long-running jobs aren't necessarily a problem for the database, since the overhead of an individual job is very small (just an advisory lock held in memory). But jobs that hang indefinitely can tie up a worker and [block the Ruby process from exiting gracefully](#shutting-down-safely), which is a pain.
696
+
697
+ If there's part of your job that is prone to hang (due to an API call or other HTTP request that never returns, for example), you can (and should) timeout those parts of your job. For example, consider a job that needs to make an HTTP request and then write to the database:
698
+
699
+ ```ruby
700
+ class ScrapeStuff < Que::Job
701
+ def run(url_to_scrape)
702
+ result = YourHTTPLibrary.get(url_to_scrape)
703
+
704
+ ActiveRecord::Base.transaction do
705
+ # Insert result...
706
+
707
+ destroy
708
+ end
709
+ end
710
+ end
711
+ ```
712
+
713
+ That request could take a very long time, or never return at all. Let's use the timeout feature that almost all HTTP libraries offer some version of:
714
+
715
+ ```ruby
716
+ class ScrapeStuff < Que::Job
717
+ def run(url_to_scrape)
718
+ result = YourHTTPLibrary.get(url_to_scrape, timeout: 5)
719
+
720
+ ActiveRecord::Base.transaction do
721
+ # Insert result...
722
+
723
+ destroy
724
+ end
725
+ end
726
+ end
727
+ ```
728
+
729
+ Now, if the request takes more than five seconds, an error will be raised (probably - check your library's documentation) and Que will just retry the job later.
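+
+ If you want more control over what happens when that timeout fires, you can combine it with the error handling hooks described earlier. A sketch (the timeout error class depends on your HTTP library):
+
+ ```ruby
+ class ScrapeStuff < Que::Job
+   def run(url_to_scrape)
+     result = YourHTTPLibrary.get(url_to_scrape, timeout: 5)
+     # Insert result...
+   end
+
+   def handle_error(error)
+     # Back off for a minute on timeouts; fall back to the default behavior otherwise.
+     error.is_a?(YourHTTPLibrary::TimeoutError) ? retry_in(60) : super
+   end
+ end
+ ```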
730
+
731
+
732
+ ## Middleware
733
+
734
+ A new feature in 1.0 is support for custom middleware around various actions.
735
+
736
+ This API is experimental for the 1.0 beta and may change.
737
+
738
+ ### Defining Middleware For Jobs
739
+
740
+ You can define middleware to wrap worked jobs. You can use this to add custom instrumentation around jobs, log how long they take to complete, etc.
741
+
742
+ ``` ruby
743
+ Que.job_middleware.push(
744
+ -> (job, &block) {
745
+ # Do stuff with the job object - report on it, count time elapsed, etc.
746
+ block.call
747
+ nil # Doesn't matter what's returned.
748
+ }
749
+ )
750
+ ```
751
+
752
+ ### Defining Middleware For SQL statements
753
+
754
+ SQL middleware wraps queries that Que executes, or which you might decide to execute via Que.execute(). You can hook this into NewRelic or a similar service to instrument how long SQL queries take, for example.
755
+
756
+ ``` ruby
757
+ Que.sql_middleware.push(
758
+ -> (sql, params, &block) {
759
+ Service.instrument(sql: sql, params: params) do
760
+ block.call
761
+ end
762
+ nil # Still doesn't matter what's returned.
763
+ }
764
+ )
765
+ ```
766
+
767
+ Please be careful with what you do inside an SQL middleware - this code will execute inside Que's locking thread, which runs in a fairly tight loop that is optimized for performance. If you do something inside this block that incurs blocking I/O (like synchronously touching an external service), you may find Que less able to pick up jobs quickly.
768
+
769
+ ## Vacuuming
770
+
771
+ Because the que_jobs table is "high churn" (lots of rows are being created and deleted), it needs to be vacuumed fairly frequently to keep the dead tuple count down; otherwise, [acquiring a job to work will start taking longer and longer](https://brandur.org/postgres-queues).
772
+
773
+ In many cases Postgres will vacuum these dead tuples automatically using autovacuum, so no intervention is required. However, if your database has a lot of other large tables that take hours for autovacuum to run on, it is possible that there won't be any autovacuum processes available within a reasonable amount of time. If that happens, the dead tuple count on the que_jobs table will reach a point where it starts taking so long to acquire a job to work that jobs are being added faster than they can be worked.
774
+
775
+ In order to avoid this situation, you can kick off a manual vacuum against the que_jobs table on a regular basis. This manual vacuum will be more aggressive than an autovacuum since by default it does not back off and sleep, so you will want to make sure your server has enough disk I/O available to handle the vacuum + any autovacuums + your workload + some overhead. However, by keeping the interval between vacuums small you will also be limiting the amount of work to be done, which will alleviate some of the aforementioned risk of I/O usage.
776
+
777
+ Here is an example recurring manual vacuum job that assumes you are using Sequel:
778
+
779
+ ```ruby
780
+ class ManualVacuumJob < CronJob
781
+ self.priority = 1 # set this to the highest priority since it keeps the table healthy for other jobs
782
+ INTERVAL = 300
783
+
784
+ def run(args)
785
+ DB.run "VACUUM VERBOSE ANALYZE que_jobs"
786
+ end
787
+ end
788
+ ```