backburner-allq 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +7 -0
- data/.gitignore +17 -0
- data/.travis.yml +29 -0
- data/CHANGELOG.md +133 -0
- data/CONTRIBUTING.md +37 -0
- data/Gemfile +4 -0
- data/HOOKS.md +99 -0
- data/LICENSE +22 -0
- data/README.md +658 -0
- data/Rakefile +17 -0
- data/TODO +4 -0
- data/backburner-allq.gemspec +26 -0
- data/bin/backburner +7 -0
- data/circle.yml +3 -0
- data/deploy.sh +3 -0
- data/examples/custom.rb +25 -0
- data/examples/demo.rb +60 -0
- data/examples/god.rb +46 -0
- data/examples/hooked.rb +87 -0
- data/examples/retried.rb +31 -0
- data/examples/simple.rb +43 -0
- data/examples/stress.rb +31 -0
- data/lib/backburner.rb +75 -0
- data/lib/backburner/allq_wrapper.rb +317 -0
- data/lib/backburner/async_proxy.rb +25 -0
- data/lib/backburner/cli.rb +53 -0
- data/lib/backburner/configuration.rb +48 -0
- data/lib/backburner/connection.rb +157 -0
- data/lib/backburner/helpers.rb +193 -0
- data/lib/backburner/hooks.rb +53 -0
- data/lib/backburner/job.rb +118 -0
- data/lib/backburner/logger.rb +53 -0
- data/lib/backburner/performable.rb +95 -0
- data/lib/backburner/queue.rb +145 -0
- data/lib/backburner/tasks.rb +54 -0
- data/lib/backburner/version.rb +3 -0
- data/lib/backburner/worker.rb +221 -0
- data/lib/backburner/workers/forking.rb +52 -0
- data/lib/backburner/workers/simple.rb +29 -0
- data/lib/backburner/workers/threading.rb +163 -0
- data/lib/backburner/workers/threads_on_fork.rb +263 -0
- data/test/async_proxy_test.rb +36 -0
- data/test/back_burner_test.rb +88 -0
- data/test/connection_test.rb +179 -0
- data/test/fixtures/hooked.rb +122 -0
- data/test/fixtures/test_fork_jobs.rb +72 -0
- data/test/fixtures/test_forking_jobs.rb +56 -0
- data/test/fixtures/test_jobs.rb +87 -0
- data/test/fixtures/test_queue_settings.rb +14 -0
- data/test/helpers/templogger.rb +22 -0
- data/test/helpers_test.rb +278 -0
- data/test/hooks_test.rb +112 -0
- data/test/job_test.rb +185 -0
- data/test/logger_test.rb +44 -0
- data/test/performable_test.rb +88 -0
- data/test/queue_test.rb +69 -0
- data/test/test_helper.rb +128 -0
- data/test/worker_test.rb +157 -0
- data/test/workers/forking_worker_test.rb +181 -0
- data/test/workers/simple_worker_test.rb +350 -0
- data/test/workers/threading_worker_test.rb +104 -0
- data/test/workers/threads_on_fork_worker_test.rb +484 -0
- metadata +217 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
---
SHA256:
  metadata.gz: b341ad669c86cbe42e3cb8b8c521fdf51eb62fedaf02c7f5fc35c2701b275995
  data.tar.gz: d5cd03e528baa91099f161e3ca98f9bcdc482058bbfa7183fdfb2dc779955b71
SHA512:
  metadata.gz: 37f3bb1322f1b8366c5e033dc2818bc2adecb2f2f8ae414d55c2c352826a615df31e371422c478aa41a8e690546caf1f71b0797aed3df7fa6d8691204470a51d
  data.tar.gz: 2e7029d2e04e6f37a6b634e7be2a68c59500dadfef25b4332a350bbd7d87000c593d9e88050935b7943196ce4fa91a51291f532136b120c5212fee2470bb37d4
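
These digests can be recomputed locally as a sanity check. A minimal Ruby sketch, assuming the `.gem` archive (itself a plain tar file containing `metadata.gz` and `data.tar.gz`) has already been extracted into the current directory:

```ruby
require 'digest'

# Recompute the SHA256/SHA512 digests published in checksums.yaml.
%w[metadata.gz data.tar.gz].each do |file|
  puts "#{file} SHA256: #{Digest::SHA256.file(file).hexdigest}"
  puts "#{file} SHA512: #{Digest::SHA512.file(file).hexdigest}"
end
```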
data/.gitignore
ADDED
data/.travis.yml
ADDED
@@ -0,0 +1,29 @@
# http://about.travis-ci.org/docs/user/build-configuration/
rvm:
  - 1.9.3
  - 2.0.0
  - 2.1
  - 2.2
  - 2.3
  - 2.4
  - 2.5
  - rbx-2
before_install:
  - curl -L https://github.com/kr/beanstalkd/archive/v1.9.tar.gz | tar xz -C /tmp
  - cd /tmp/beanstalkd-1.9/
  - make
  - ./beanstalkd &
  - cd $TRAVIS_BUILD_DIR
  - gem update --system
  - gem update bundler
matrix:
  allow_failures:
    - rvm: rbx-2
    - rvm: 2.0.0
script:
  - bundle exec rake test
gemfile: Gemfile
notifications:
  recipients:
    - nesquena@gmail.com
    - therealdave.myron@gmail.com
data/CHANGELOG.md
ADDED
@@ -0,0 +1,133 @@
# CHANGELOG

## Version 1.5.0 (September 10 2018)

* TBD

## Version 1.4.1 (June 10 2017)

* Fix warning for constant ::Fixnum is deprecated (@amatsuda)

## Version 1.4.0 (May 13 2017)

* Fix unit tests to be more consistent (@eltone)
* Ensure job supports body hash with symbol keys (@eltone)
* Add support for custom serialization formats (@eltone)
* Log the params when a job timeout occurs (@nathantsoi)

## Version 1.3.1 (April 21 2016)

* Addition of thread-pool-based concurrency (@contentfree)

## Version 1.3.0 (February 05 2016)

* Enqueue command now responds with beanstalk response details

## Version 1.2.0 (November 01 2015)

* FIX Made connections to beanstalkd more resilient (@contentfree)

## Version 1.2.0.pre (October 24 2015)

* FIX Replace static Beaneater connection with individual connections per worker instance/thread (@contentfree)
* FIX Beaneater connections try really hard to repair themselves if disconnected accidentally (@contentfree)
* NEW Event hook for workers: on_reconnect (@contentfree)

## Version 1.1.0 (September 14 2015)

* NEW Ability to configure namespace separator (@bfolkens)
* NEW Avoid timeouts altogether by setting queue_respond_timeout to 0 (@zacviandier)
* NEW Event hooks for on_retry and on_bury (@contentfree)
* NEW Support lambdas for queue names (@contentfree)
* NEW Allow for control of delay calculation (@contentfree)
* NEW Ability to specify environment when running the CLI (@contentfree)
* NEW Control default async behavior of methods (@contentfree)

## Version 1.0.0 (April 26 2015)

* NEW Updating to Beaneater 1.0 (@alup)

## Version 0.4.6 (October 26 2014)

* NEW Add job to on_error handler if the handler has a 4th argument (@Nitrodist)
* NEW Use a timeout when looking for a job to reserve (@EasyPost)
* NEW Support configuring settings on threads on fork class (@silentshade)
* FIX queue override by existing queues (@silentshade)
* FIX Use thread to log exit message (@silentshade)

## Version 0.4.5 (December 16 2013)

* FIX #47 Create a backburner connection per thread (Thanks @thcrock)

## Version 0.4.4 (October 27 2013)

* NEW #51 Added ability to set per-queue default ttr's (Thanks @ryanjohns)

## Version 0.4.3 (July 19 2013)

* FIX #44 Additional fix to issue introduced in 0.4.2
* FIX #45 More graceful shutdown using Kernel.exit and rescuing SystemExit. (Thanks @ryanjohns)

## Version 0.4.2 (July 3 2013)

* FIX #44 Properly retry to connect to beanstalkd when connection fails.

## Version 0.4.1 (June 28 2013)

* FIX #43 Properly support CLI options and smart load the app environment.

## Version 0.4.0 (June 28 2013)

NOTE: This is the start of working with @bradgessler to improve backburner and merge with quebert

* NEW #26 #27 Remove need for Queue mixin, allow plain ruby objects
* NEW Default all jobs to a single general queue rather than separate queues
* NEW Add support for named priorities, allowing shorthand names for priority values

## Version 0.3.4 (April 23 2013)

* FIX #22 Adds signal handlers for worker to manage proper shutdown (Thanks @tkiley)

## Version 0.3.3 (April 19 2013)

* Fix naming conflict rename 'config' to 'queue_config'

## Version 0.3.2 (Jan 23 2013)

* Bump version of beaneater to 0.3.0 (better socket handling)

## Version 0.3.1 (Dec 28 2012)

* Adds basic forking processing strategy and rake tasks (Thanks @danielfarrell)

## Version 0.3.0 (Nov 14 2012)

* Major update with support for a 'threads_on_fork' processing strategy (Thanks @ShadowBelmolve)
* Different workers have different rake tasks (Thanks @ShadowBelmolve)
* Added processing strategy specific examples i.e stress.rb and adds new unit tests. (Thanks @ShadowBelmolve)

## Version 0.2.6 (Nov 12 2012)

* Upgrade to beaneater 0.2.0

## Version 0.2.5 (Nov 9 2012)

* Add support for multiple worker processing strategies through subclassing.

## Version 0.2.0 (Nov 7 2012)

* Add new plugin hooks feature (see HOOKS.md)

## Version 0.1.2 (Nov 7 2012)

* Adds ability to specify a custom logger.
* Adds job retry configuration and worker support.

## Version 0.1.1 (Nov 6 2012)

* Fix issue with timed out reserves

## Version 0.1.0 (Nov 4 2012)

* Switch to beaneater as new ruby beanstalkd client
* Add support for array of connections in `beanstalk_url`
data/CONTRIBUTING.md
ADDED
@@ -0,0 +1,37 @@
We love pull requests. Here's a quick guide:

1. Fork the repo.

2. Run the tests. We only take pull requests with passing tests, and it's great
to know that you have a clean slate: `bundle && rake test`

3. Add a test for your change. Only refactoring and documentation changes
require no new tests. If you are adding functionality or fixing a bug, we need
a test!

4. Make the test pass.

5. Push to your fork and submit a pull request.

At this point you're waiting on us. We like to at least comment on, if not
accept, pull requests within three business days (and, typically, one business
day). We may suggest some changes or improvements or alternatives.

Some things that will increase the chance that your pull request is accepted:

* Use Rails idioms and helpers
* Include tests that fail without your code, and pass with it
* Update the documentation and README for anything affected by your contribution

Syntax:

* Two spaces, no tabs.
* No trailing whitespace. Blank lines should not have any space.
* Prefer &&/|| over and/or.
* MyClass.my_method(my_arg) not my_method( my_arg ) or my_method my_arg.
* a = b and not a=b.
* Follow the conventions you see used in the source already.

And in case we didn't emphasize it enough: we love tests!

NOTE: Adapted from https://raw.github.com/thoughtbot/factory_girl_rails/master/CONTRIBUTING.md
data/Gemfile
ADDED
data/HOOKS.md
ADDED
@@ -0,0 +1,99 @@
# Backburner Hooks

You can customize Backburner or write plugins using its hook API.
In many cases you can use a hook rather than mess around with Backburner's internals.

## Job Hooks

Hooks are transparently adapted from [Resque](https://github.com/resque/resque/blob/master/docs/HOOKS.md), so
if you are familiar with their hook API, now you can use nearly the same ones with beanstalkd and backburner!

There are a variety of hooks available that are triggered during the lifecycle of a job:

* `before_enqueue`: Called with the job args before a job is placed on the queue.
  If the hook returns `false`, the job will not be placed on the queue.

* `after_enqueue`: Called with the job args after a job is placed on the queue.
  Any exception raised propagates up to the code which queued the job.

* `before_perform`: Called with the job args before perform. If a hook returns false,
  the job is aborted. Other exceptions are treated like regular job exceptions.

* `after_perform`: Called with the job args after it performs. Uncaught
  exceptions will be treated like regular job exceptions.

* `around_perform`: Called with the job args. It is expected to yield in order
  to perform the job (but is not required to do so). It may handle exceptions
  thrown by perform, but uncaught exceptions will be treated like regular job exceptions.

* `on_retry`: Called with the retry count, the delay and the job args whenever a job is retried.

* `on_bury`: Called with the job args when the job is buried.

* `on_failure`: Called with the exception and job args if any exception occurs
  while performing the job (or hooks).

Hooks are just methods prefixed with the hook type. For example:

```ruby
class SomeJob
  def self.before_perform_log_job(*args)
    logger.info "About to perform #{self} with #{args.inspect}"
  end

  def self.on_failure_bury(e, *args)
    logger.info "Performing #{self} caused an exception (#{e})"
    self.bury
  end

  def self.perform(*args)
    # ...
  end

  def self.logger
    @_logger ||= Logger.new(STDOUT)
  end
end
```
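
The example above covers `before_perform` and `on_failure`. Following the same naming convention (hook type as the method prefix), a minimal sketch of an `around_perform` hook that times the job might look like this; the timing logic is just an illustration, not part of Backburner itself:

```ruby
class SomeJob
  def self.around_perform_benchmark(*args)
    started = Time.now
    yield # runs the job (and any remaining around_perform hooks)
    logger.info "#{self} finished in #{Time.now - started}s"
  end
end
```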

You can also set up modules to create composable and reusable hooks for your jobs. For example:

```ruby
module LoggedJob
  def before_perform_log_job(*args)
    Logger.info "About to perform #{self} with #{args.inspect}"
  end
end

module BuriedJob
  def on_failure_bury(e, *args)
    Logger.info "Performing #{self} caused an exception (#{e}). Retrying..."
    self.bury
  end
end

class MyJob
  extend LoggedJob
  extend BuriedJob

  def self.perform(*args)
    # ...
  end
end
```

## Worker Hooks

Currently, there is just one hook:

* `on_reconnect`: Called on the worker whose connection has been reset. The connection
  is given as the argument.

An example:

```ruby
class MyWorker < Backburner::Worker
  def on_reconnect(conn)
    prepare
  end
end
```
data/LICENSE
ADDED
@@ -0,0 +1,22 @@
Copyright (c) 2012 Nathan Esquenazi

MIT License

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.md
ADDED
@@ -0,0 +1,658 @@
# Backburner [Build Status](https://travis-ci.org/nesquena/backburner)

Backburner is a [beanstalkd](http://kr.github.com/beanstalkd/)-powered job queue that can handle a very high volume of jobs.
You create background jobs and place them on multiple work queues to be processed later.

Processing background jobs reliably has never been easier than with beanstalkd and Backburner. This gem works with any ruby-based
web framework, but is especially suited for use with [Sinatra](http://sinatrarb.com), [Padrino](http://padrinorb.com) and Rails.

If you want to use beanstalk for your job processing, consider using Backburner.
Backburner is heavily inspired by Resque and DelayedJob. Backburner stores all jobs as simple JSON message payloads.
Persistent queues are supported when beanstalkd persistence mode is enabled.

Backburner supports multiple queues, job priorities, delays, and timeouts. In addition,
Backburner has robust support for retrying failed jobs, handling error cases,
custom logging, and extensible plugin hooks.

## Why Backburner?

Backburner is well tested and has a familiar, no-nonsense approach to job processing, but that is of secondary importance.
Let's face it, there are a lot of options for background job processing. [DelayedJob](https://github.com/collectiveidea/delayed_job)
and [Resque](https://github.com/defunkt/resque) are the first that come to mind immediately. So, how do we make sense
of which one to use? And why use Backburner over other alternatives?

The key to understanding the differences lies in understanding the different projects and protocols that power these popular queue
libraries under the hood. Every job queue requires a queue store that jobs are put into and pulled out of.
In the case of Resque, jobs are processed through **Redis**, a persistent key-value store. In the case of DelayedJob, jobs are processed through
**ActiveRecord** and a database such as PostgreSQL.

The work queue underlying these gems tells you infinitely more about the differences than anything else.
Beanstalk is probably the best solution for job queues available today for many reasons.
The real question then is... "Why Beanstalk?".

## Why Beanstalk?

Illya has an excellent blog post
[Scalable Work Queues with Beanstalk](http://www.igvita.com/2010/05/20/scalable-work-queues-with-beanstalk/) and
Adam Wiggins posted [an excellent comparison](http://adam.herokuapp.com/past/2010/4/24/beanstalk_a_simple_and_fast_queueing_backend/).

You will quickly see that **beanstalkd** is an underrated but incredible project that is extremely well-suited as a job queue.
It is significantly better suited for this task than Redis or a database. Beanstalk is a simple and very fast work queue
service rolled into a single binary - it is the memcached of work queues.
Originally built to power the backend for the 'Causes' Facebook app, it is a mature and production ready open source project.
[PostRank](http://www.postrank.com) uses beanstalk to reliably process millions of jobs a day.

A single instance of Beanstalk is perfectly capable of handling thousands of jobs a second (or more, depending on your job size)
because it is an in-memory, event-driven system. Powered by libevent under the hood,
it requires zero setup (launch and forget, à la memcached), optional log based persistence, an easily parsed ASCII protocol,
and a rich set of tools for job management that go well beyond a simple FIFO work queue.

Beanstalkd supports the following features out of the box:

| Feature | Description |
| ------- | ------------------------------- |
| **Parallelized** | Supports multiple work queues created on demand. |
| **Reliable** | Beanstalk’s reserve, work, delete cycle ensures reliable processing. |
| **Scheduling** | Delay enqueuing jobs by a specified interval to schedule processing later. |
| **Fast** | Processes thousands of jobs per second without breaking a sweat. |
| **Priorities** | Specify priority so important jobs can be processed quickly. |
| **Persistence** | Jobs are stored in memory for speed, but logged to disk for safe keeping. |
| **Federation** | Horizontal scalability provided through federation by the client. |
| **Error Handling** | Bury any job which causes an error for later debugging and inspection. |

Keep in mind that these features are supported out of the box with beanstalk and require no special code within this gem.
In the end, **beanstalk is the ideal job queue** while also being ridiculously easy to install and set up.

## Installation

First, you probably want to [install beanstalkd](http://kr.github.com/beanstalkd/download.html), which powers the job queues.
Depending on your platform, this should be as simple as (for Ubuntu):

    $ sudo apt-get install beanstalkd

Add this line to your application's Gemfile:

    gem 'backburner'

And then execute:

    $ bundle

Or install it yourself as:

    $ gem install backburner

## Configuration ##

Backburner is extremely simple to set up. Just configure basic settings for backburner:

```ruby
Backburner.configure do |config|
  config.beanstalk_url = "beanstalk://127.0.0.1"
  config.tube_namespace = "some.app.production"
  config.namespace_separator = "."
  config.on_error = lambda { |e| puts e }
  config.max_job_retries = 3 # default 0 retries
  config.retry_delay = 2 # default 5 seconds
  config.retry_delay_proc = lambda { |min_retry_delay, num_retries| min_retry_delay + (num_retries ** 3) }
  config.default_priority = 65536
  config.respond_timeout = 120
  config.default_worker = Backburner::Workers::Simple
  config.logger = Logger.new(STDOUT)
  config.primary_queue = "backburner-jobs"
  config.priority_labels = { :custom => 50, :useless => 1000 }
  config.reserve_timeout = nil
  config.job_serializer_proc = lambda { |body| JSON.dump(body) }
  config.job_parser_proc = lambda { |body| JSON.parse(body) }
end
```

The key options available are:

| Option | Description |
| ----------------- | ------------------------------- |
| `beanstalk_url` | Address for beanstalkd connection, i.e. 'beanstalk://127.0.0.1'. |
| `tube_namespace` | Prefix used for all tubes related to this backburner queue. |
| `namespace_separator` | Separator used for namespace and queue name. |
| `on_error` | Lambda invoked with the error whenever any job in the system fails. |
| `max_job_retries` | Integer defines how many times to retry a job before burying. |
| `retry_delay` | Integer defines the base time to wait (in secs) between job retries. |
| `retry_delay_proc` | Lambda calculates the delay used, allowing for exponential back-off. |
| `default_priority` | Integer default priority for jobs. |
| `respond_timeout` | Integer defines how long a job has to complete its task. |
| `default_worker` | Worker class that will be used if no other worker is specified. |
| `logger` | Logger recorded to when backburner wants to report info or errors. |
| `primary_queue` | Primary queue used for a job when an alternate queue is not given. |
| `priority_labels` | Hash of named priority definitions for your app. |
| `reserve_timeout` | Duration to wait for work from a single server, or nil for forever. |
| `job_serializer_proc` | Lambda serializes a job body to a string to write to the task. |
| `job_parser_proc` | Lambda parses a task body string to a hash. |
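
As an illustration of the `job_serializer_proc`/`job_parser_proc` pair, here is a hypothetical configuration that swaps the default JSON format for YAML (a sketch; any format works as long as the two procs agree and all workers share the same configuration):

```ruby
require 'yaml'

Backburner.configure do |config|
  # Serialize the job body hash to a string before it is written to beanstalkd...
  config.job_serializer_proc = lambda { |body| YAML.dump(body) }
  # ...and parse it back into a hash when a worker reserves the job.
  config.job_parser_proc = lambda { |body| YAML.load(body) }
end
```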

## Breaking Changes

Before **v0.4.0**: Jobs were placed into default queues based on the name of the class creating the queue, i.e. NewsletterJob would
be put into a 'newsletter-job' queue. As of 0.4.0, all jobs are placed into a primary queue named "my.app.namespace.backburner-jobs"
unless otherwise specified.

## Usage

Backburner allows you to create jobs and place them onto any number of beanstalk tubes, and later pull those jobs off the tubes and
process them asynchronously with a worker.

### Enqueuing Jobs ###

At the core, Backburner is about jobs that can be processed asynchronously. Jobs are simple ruby objects which respond to `perform`.

Job objects are queued as JSON onto a tube to be later processed by a worker. Here's an example:

```ruby
class NewsletterJob
  # required
  def self.perform(email, body)
    NewsletterMailer.deliver_text_to_email(email, body)
  end

  # optional, defaults to 'backburner-jobs' tube
  def self.queue
    "newsletter-sender"
  end

  # optional, defaults to default_priority
  def self.queue_priority
    1000 # most urgent priority is 0
  end

  # optional, defaults to respond_timeout in config
  def self.queue_respond_timeout
    300 # number of seconds before job times out, 0 to avoid timeout. NB: A timeout of 1 second will likely lead to race conditions between Backburner and beanstalkd and should be avoided
  end

  # optional, defaults to retry_delay_proc in config
  def self.queue_retry_delay_proc
    lambda { |min_retry_delay, num_retries| min_retry_delay + (num_retries ** 5) }
  end

  # optional, defaults to retry_delay in config
  def self.queue_retry_delay
    5
  end

  # optional, defaults to max_job_retries in config
  def self.queue_max_job_retries
    5
  end
end
```

You can include the optional `Backburner::Queue` module so you can easily specify queue settings for this job:

```ruby
class NewsletterJob
  include Backburner::Queue
  queue "newsletter-sender"  # defaults to 'backburner-jobs' tube
  queue_priority 1000 # most urgent priority is 0
  queue_respond_timeout 300 # number of seconds before job times out, 0 to avoid timeout

  def self.perform(email, body)
    NewsletterMailer.deliver_text_to_email(email, body)
  end
end
```

Jobs can be enqueued with:

```ruby
Backburner.enqueue NewsletterJob, 'foo@admin.com', 'lorem ipsum...'
```

`Backburner.enqueue` accepts first a ruby object that supports `perform` and then a series of parameters
to that object's `perform` method. The queue name used by default is `{namespace}.backburner-jobs`
unless otherwise specified.

You may also pass a lambda as the queue name and it will be evaluated when enqueuing a
job (and passed the Job's class as an argument). This is especially useful when combined
with "Simple Async Jobs" (see below).

### Simple Async Jobs ###

In addition to defining custom jobs, a job can also be enqueued by invoking the `async` method on any object which
includes `Backburner::Performable`. Async enqueuing works for both instance and class methods on any _performable_ object.

```ruby
class User
  include Backburner::Performable
  queue "user-jobs"  # defaults to 'user'
  queue_priority 500 # most urgent priority is 0
  queue_respond_timeout 300 # number of seconds before job times out, 0 to avoid timeout

  def activate(device_id)
    @device = Device.find(device_id)
    # ...
  end

  def self.reset_password(user_id)
    # ...
  end
end

# Async works for instance methods on a persisted object with an `id`
@user = User.first
@user.async(:ttr => 100, :queue => "activate").activate(@device.id)
# ..and for class methods
User.async(:pri => 100, :delay => 10.seconds).reset_password(@user.id)
```

This automatically enqueues a job for that user record that will run `activate` with the specified argument.
Note that you can set the queue name and queue priority at the class level and
you are also able to pass `pri`, `ttr`, `delay` and `queue` directly as options into `async`.

The queue name used by default is `{namespace}.backburner-jobs` if not otherwise
specified.

If a lambda is given for `queue`, then it will be called and given the
_performable_ object's class as an argument:

```ruby
# Given the User class above
User.async(:queue => lambda { |user_klass| ["queue1","queue2"].sample(1).first }).do_hard_work # would add the job to either queue1 or queue2 randomly
```

### Using Async Asynchronously ###

It's often useful to be able to configure your app in production such that every invocation of a method is asynchronous by default, as seen in [delayed_job](https://github.com/collectiveidea/delayed_job#queuing-jobs). To accomplish this, the `Backburner::Performable` module exposes two convenience methods, `handle_asynchronously` and `handle_static_asynchronously`,
which accept the same options as the `async` method:

```ruby
class User
  include Backburner::Performable

  def send_welcome_email
    # ...
  end

  # ---> For instance methods
  handle_asynchronously :send_welcome_email, queue: 'send-mail', pri: 5000, ttr: 60

  def self.update_recent_visitors
    # ...
  end

  # ---> For class methods
  handle_static_asynchronously :update_recent_visitors, queue: 'long-tasks', ttr: 300
end
```

Now, all calls to `User.update_recent_visitors` or `User#send_welcome_email` will automatically be handled asynchronously when invoked. Similarly, you can call these methods directly on the `Backburner::Performable` module to apply async behavior outside the class:

```ruby
# Given the User class above
Backburner::Performable.handle_asynchronously(User, :activate, ttr: 100, queue: 'activate')
```

Now all calls to the `activate` method on a `User` instance will be async with the provided options.

#### A Note About Auto-Async

Because an async proxy is injected and used in place of the original method, you must not rely on the return value of the method. Using the example `User` class above, if `send_welcome_email` returned the status of an email submission and you relied on that to take some further action, you will be surprised after rewiring things with `handle_asynchronously`, because the async proxy actually returns the (boolean) result of `Backburner::Worker.enqueue`.
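
A short sketch of the behavior change described above (names taken from the `User` example):

```ruby
status = @user.send_welcome_email
# Without handle_asynchronously: `status` is whatever the mailer returned.
# With handle_asynchronously: the call only enqueues a job, so `status` is
# the result of Backburner::Worker.enqueue; the email is sent later by a
# worker, so don't branch on `status` to check delivery here.
```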

### Working Jobs

Backburner workers are processes that run forever handling jobs that are reserved from the queue. Starting a worker in ruby code is simple:

```ruby
Backburner.work
```

This will process jobs in all queues but you can also restrict processing to specific queues:

```ruby
Backburner.work('newsletter-sender', 'push-notifier')
```

The Backburner worker also exists as a rake task:

```ruby
require 'backburner/tasks'
```

so you can run:

```
$ QUEUE=newsletter-sender,push-notifier rake backburner:work
```

You can also run the backburner binary for a convenient worker:

```
bundle exec backburner -q newsletter-sender,push-notifier -d -P /var/run/backburner.pid -l /var/log/backburner.log
```

This will daemonize the worker and store the pid and logs automatically. For Rails and Padrino, the environment should
load automatically. For other cases, use the `-r` flag to specify a file to require.

### Delaying Jobs

In Backburner, jobs can be delayed by specifying the `delay` option whenever you enqueue a job. If you want to schedule a job for an hour from now, simply add that option while enqueuing the standard job:

```ruby
Backburner::Worker.enqueue(NewsletterJob, ['foo@admin.com', 'lorem ipsum...'], :delay => 1.hour)
```

or while you schedule an async method call:

```ruby
User.async(:delay => 1.hour).reset_password(@user.id)
```

Backburner will take care of the rest!

### Persistence

Jobs are persisted to queues as JSON objects. Let's take our `User`
example from above. We'll run the following code to create a job:

``` ruby
User.async.reset_password(@user.id)
```

The following JSON will be put on the `{namespace}.backburner-jobs` queue:

``` javascript
{
    'class': 'User',
    'args': [nil, 'reset_password', 123]
}
```

The first argument is the 'id' of the object in the case of an instance method being async'ed. For example:

```ruby
@device = Device.find(987)
@user = User.find(246)
@user.async.activate(@device.id)
```

would be stored as:

``` javascript
{
    'class': 'User',
    'args': [246, 'activate', 987]
}
```

Since all jobs are persisted in JSON, your jobs must only accept arguments that can be encoded into that format.
This is why our examples use object IDs instead of passing around objects.

### Named Priorities

As of v0.4.0, Backburner has support for named priorities. beanstalkd priorities are numerical but
backburner supports a mapping between a word and a numerical value. The following priorities are
available by default: `high` is 0, `medium` is 100, and `low` is 200.

Priorities can be customized with:

```ruby
Backburner.configure do |config|
  config.priority_labels = { :custom => 50, :useful => 5 }
  # or append to default priorities with
  # config.priority_labels = Backburner::Configuration::PRIORITY_LABELS.merge(:foo => 5)
end
```

and then these aliases can be used anywhere that a numerical value can:

```ruby
Backburner::Worker.enqueue NewsletterJob, ["foo", "bar"], :pri => :custom
User.async(:pri => :useful, :delay => 10.seconds).reset_password(@user.id)
```

Using named priorities can greatly simplify priority management.

### Processing Strategies

In Backburner, there are several different strategies for processing jobs
which are reflected by multiple worker subclasses.
Custom workers can be [defined fairly easily](https://github.com/nesquena/backburner/wiki/Defining-Workers).
By default, Backburner comes with the following workers built-in:

| Worker | Description |
| ------- | ------------------------------- |
| `Backburner::Workers::Simple` | Single threaded, no forking worker. Simplest option. |
| `Backburner::Workers::Forking` | Basic forking worker that manages crashes and memory bloat. |
| `Backburner::Workers::ThreadsOnFork` | Forking worker that utilizes threads for concurrent processing. |
| `Backburner::Workers::Threading` | Utilizes thread pools for concurrent processing. |

You can select the default worker for processing with:

```ruby
Backburner.configure do |config|
  config.default_worker = Backburner::Workers::Forking
end
```

or determine the worker on the fly when invoking `work`:

```ruby
Backburner.work('newsletter-sender', :worker => Backburner::Workers::ThreadsOnFork)
```

or through associated rake tasks with:

```
$ QUEUE=newsletter-sender,push-message THREADS=2 GARBAGE=1000 rake backburner:threads_on_fork:work
```

When running on MRI or another Ruby implementation with a Global Interpreter Lock (GIL), do not be surprised if you're unable to saturate multiple cores, even with the threads_on_fork worker. To utilize multiple cores, you must run multiple worker processes.

Additional concurrency strategies will hopefully be contributed in the future.
If you are interested in helping out, please let us know.

#### More info: Threads on Fork Worker

For more information on the threads_on_fork worker, check out the
[ThreadsOnFork Worker](https://github.com/nesquena/backburner/wiki/ThreadsOnFork-worker) documentation. Please note that the `ThreadsOnFork` worker does not work on Windows due to its lack of `fork`.

#### More info: Threading Worker (thread-pool-based)

Configuration options for the Threading worker are similar to the threads_on_fork worker, sans the garbage option. When running via the `backburner` CLI, it's simplest to provide the queue names and maximum number of threads in the format "{queue name}:{max threads in pool}[,{name}:{threads}]":

```
$ bundle exec backburner -q queue1:4,queue2:4 # and then other options, like environment, pidfile, app root, etc. See docs for the CLI
```

### Default Queues

Workers can be easily restricted to processing only a specific set of queues as shown above. However, if you want a worker to
process **all** queues instead, then you can leave the queue list blank.

When you execute a worker without any queues specified, queues for known job queue classes (those that `include Backburner::Queue`) will be processed.
To access the list of known queue classes, you can use:

```ruby
Backburner::Worker.known_queue_classes
# => [NewsletterJob, SomeOtherJob]
```

Dynamic queues created by passing queue options **will not be processed** by a default worker. For this reason, you may want to take control over the default list of
queues processed when none are specified. To do this, you can use the `default_queues` class method:

```ruby
Backburner.default_queues.concat(["foo", "bar"])
```

This will ensure that the _foo_ and _bar_ queues are processed by any default workers. You can also add job queue names with:

```ruby
Backburner.default_queues << NewsletterJob.queue
```

`default_queues` stores the specific list of queues that should be processed by default by a worker.

### Failures

When a job fails in backburner (usually because an exception was raised), the job will be released
and retried again until the `max_job_retries` configuration is reached.

```ruby
Backburner.configure do |config|
  config.max_job_retries = 3 # retry jobs 3 times
  config.retry_delay = 2 # wait 2 seconds in between retries
end
```

Note the default `max_job_retries` is 0, meaning that by default **jobs are not retried**.

As jobs are retried, a progressively-increasing delay is added to give time for transient
problems to resolve themselves. This may be configured using `retry_delay_proc`. It expects
an object that responds to `#call` and receives the value of `retry_delay` and the number
of times the job has been retried already. The default is a cubic back-off, e.g.:

```ruby
Backburner.configure do |config|
  config.retry_delay = 2 # The minimum number of seconds a retry will be delayed
  config.retry_delay_proc = lambda { |min_retry_delay, num_retries| min_retry_delay + (num_retries ** 3) }
end
```
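
As a worked example of the cubic back-off above (with `retry_delay = 2`; the exact retry number Backburner passes on the first retry is an internal detail, so the snippet simply evaluates the lambda for a few values):

```ruby
retry_delay_proc = lambda { |min_retry_delay, num_retries| min_retry_delay + (num_retries ** 3) }

(1..4).each do |n|
  puts "retry #{n}: #{retry_delay_proc.call(2, n)}s delay"
end
# retry 1: 3s delay
# retry 2: 10s delay
# retry 3: 29s delay
# retry 4: 66s delay
```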

If continued retry attempts fail, the job will be buried and can be 'kicked' later for inspection.

You can also set up a custom error handler for jobs using configure:

```ruby
Backburner.configure do |config|
  config.on_error = lambda { |ex| Airbrake.notify(ex) }
end
```

Now all backburner queue errors will appear on Airbrake for deeper inspection.

If you wish to retry a job without logging an error (for example when handling transient issues in a cloud or service oriented environment),
simply raise a `Backburner::Job::RetryJob` error.

### Logging

Logging in backburner is rather simple. When a job is run, the log records that. When a job
fails, the log records that. When any exceptions occur during processing, the log records that.

By default, the log will print to standard out. You can customize the log to output to any
standard logger by controlling the configuration option:

```ruby
Backburner.configure do |config|
  config.logger = Logger.new(STDOUT)
end
```

Be sure to check logs whenever things do not seem to be processing.

### Hooks

Backburner is highly extensible and can be tailored to your needs by using various hooks that
can be triggered across the job processing lifecycle.
Often using hooks is much easier than trying to monkey patch the externals.

Check out [HOOKS.md](https://github.com/nesquena/backburner/blob/master/HOOKS.md) for a detailed overview on using hooks.

### Workers in Production

Once you have Backburner set up in your application, starting workers is really easy. Once [beanstalkd](http://kr.github.com/beanstalkd/download.html)
is installed, your best bet is to use the built-in rake task that comes with Backburner. Simply add the task to your Rakefile:

```ruby
# Rakefile
require 'backburner/tasks'
```

and then you can start the rake task with:

```bash
$ rake backburner:work
$ QUEUE=newsletter-sender,push-notifier rake backburner:work
```

The best way to deploy these rake tasks is using a monitoring library. We suggest [God](https://github.com/mojombo/god/)
which watches processes and ensures their stability. A simple God recipe for Backburner can be found in
[examples/god](https://github.com/nesquena/backburner/blob/master/examples/god.rb).

#### Command-Line Interface

Instead of using the Rake tasks, you can use Backburner's command-line interface (CLI), powered by the [Dante gem](https://github.com/nesquena/dante), to launch daemonized workers. Several flags are available to control the process. Many of these are provided by Dante itself, such as flags for logging (`-l`), the process' PID (`-P`), whether to daemonize (`-d`) or kill a running process (`-k`). Backburner provides a few more:

##### Queues (`-q`)

Control which queues the worker will watch with the `-q` flag. Comma-separate multiple queue names and, if you're using the `ThreadsOnFork` worker, colon-separate the settings for thread limit, garbage limit and retries limit (e.g. `send_mail:4:10:3`). See its [wiki page](https://github.com/nesquena/backburner/wiki/ThreadsOnFork-worker) for some more details.

```bash
backburner -q send_mail,create_thumbnail # You may need to use `bundle exec`
```

##### Boot an app (`-r`)

Load an app with the `-r` flag. Backburner supports automatic loading for both Rails and Padrino apps when started from their root folder. However, you may point to a specific app's root using this flag, which is very useful when running workers from a service script.

```bash
path="/var/www/my-app/current"
backburner -r "$path"
```

##### Load an environment (`-e`)

Use the `-e` flag to control which environment your app should use:

```bash
environment="production"
backburner -e $environment
```

#### Reconnecting

In Backburner, if the beanstalkd connection is temporarily severed, several retries to establish the connection will be attempted.
After several retries, if the connection is still not able to be made, a `Beaneater::NotConnected` exception will be raised.
You can manually catch this exception, and attempt another manual retry using `Backburner::Worker.retry_connection!`.
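
For example, a minimal sketch of a worker loop that retries the connection manually when it is lost (the rescue/retry structure here is illustrative, not a prescribed pattern):

```ruby
begin
  Backburner.work('newsletter-sender')
rescue Beaneater::NotConnected
  # Backburner gave up reconnecting on its own; force another attempt
  # and go back to working the queue.
  Backburner::Worker.retry_connection!
  retry
end
```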

### Web Front-end

Be sure to check out the Sinatra-powered project [beanstalkd_view](https://github.com/denniskuczynski/beanstalkd_view)
by [denniskuczynski](http://github.com/denniskuczynski) which provides an excellent overview of the tubes and
jobs processed by your beanstalk workers. An excellent addition to your Backburner setup.

## Acknowledgements

* [Nathan Esquenazi](https://github.com/nesquena) - Project maintainer
* [Dave Myron](https://github.com/contentfree) - Multiple features and doc improvements
* Kristen Tucker - Coming up with the gem name
* [Tim Lee](https://github.com/timothy1ee), [Josh Hull](https://github.com/joshbuddy), [Nico Taing](https://github.com/Nico-Taing) - Helping me work through the idea
* [Miso](http://gomiso.com) - Open-source friendly place to work
* [Evgeniy Denisov](https://github.com/silentshade) - Multiple fixes and cleanups
* [Andy Bakun](https://github.com/thwarted) - Fixes to how multiple beanstalkd instances are processed
* [Renan T. Fernandes](https://github.com/ShadowBelmolve) - Added threads_on_fork worker
* [Daniel Farrell](https://github.com/danielfarrell) - Added forking worker

## Contributing

1. Fork it
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Added some feature'`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create new Pull Request

## References

The code in this project has been made in light of a few excellent projects:

* [DelayedJob](https://github.com/collectiveidea/delayed_job)
* [Resque](https://github.com/defunkt/resque)
* [Stalker](https://github.com/han/stalker)

Thanks to these projects for inspiration and certain design and implementation decisions.

## Links

* Code: `git clone git://github.com/nesquena/backburner.git`
* Home: <http://github.com/nesquena/backburner>
* Docs: <http://rdoc.info/github/nesquena/backburner/master/frames>
* Bugs: <http://github.com/nesquena/backburner/issues>
* Gems: <http://gemcutter.org/gems/backburner>