kbaum-resque-retry 0.0.5
- data/HISTORY.md +23 -0
- data/LICENSE +21 -0
- data/README.md +168 -0
- data/Rakefile +25 -0
- data/lib/resque-retry.rb +7 -0
- data/lib/resque-retry/server.rb +34 -0
- data/lib/resque-retry/server/views/retry.erb +44 -0
- data/lib/resque-retry/server/views/retry_timestamp.erb +42 -0
- data/lib/resque/plugins/exponential_backoff.rb +68 -0
- data/lib/resque/plugins/retry.rb +178 -0
- data/lib/resque/plugins/retry_failure_backend.rb +42 -0
- data/test/exponential_backoff_test.rb +59 -0
- data/test/redis-test.conf +132 -0
- data/test/resque_test.rb +18 -0
- data/test/retry_test.rb +145 -0
- data/test/test_helper.rb +63 -0
- data/test/test_jobs.rb +74 -0
- metadata +143 -0
data/HISTORY.md
ADDED
@@ -0,0 +1,23 @@
## 0.0.5 (2010-06-27)

* Handle our own dependencies.

## 0.0.4 (2010-06-16)

* Relax gemspec dependencies.

## 0.0.3 (2010-06-02)

* Bugfix: Make sure that `redis_retry_key` has no whitespace.

## 0.0.2 (2010-05-06)

* Bugfix: We were calling a non-existent method to delete the redis key.
* Delay no longer falls back to `sleep`; resque-scheduler is a required
  dependency.
* Redis key doesn't include an ending colon `:` if no args were passed
  to the job.

## 0.0.1 (2010-04-27)

* First release.
data/LICENSE
ADDED
@@ -0,0 +1,21 @@
Copyright (c) 2010 Luke Antins
Copyright (c) 2010 Ryan Carver

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.md
ADDED
@@ -0,0 +1,168 @@
resque-retry
============

A [Resque][rq] plugin. Requires Resque 1.8.0 & [resque-scheduler][rqs].

resque-retry provides retry, delay and exponential backoff support for
resque jobs.

### Features

- Redis backed retry count/limit.
- Retry on all or specific exceptions.
- Exponential backoff (varying the delay between retries).
- Small & extendable - plenty of places to override retry logic/settings.

Usage / Examples
----------------

Just extend your module/class with this module, and you're ready to retry!

Customisation is pretty easy; the examples below should give you
some ideas =), adapt them for your own usage and feel free to pick and mix!

### Retry

Retry the job **once** on failure, with zero delay.

    require 'resque-retry'

    class DeliverWebHook
      extend Resque::Plugins::Retry
      @queue = :web_hooks

      def self.perform(url, hook_id, hmac_key)
        heavy_lifting
      end
    end

When a job runs, the number of retry attempts is checked and incremented
in Redis. If your job fails, the number of retry attempts is used to
determine whether we can requeue the job for another go.
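
The counter itself is easy to inspect while debugging; a small sketch using
the `DeliverWebHook` job above (the arguments are hypothetical):

    # hypothetical arguments, purely for illustration.
    key = DeliverWebHook.redis_retry_key('http://example.com/hook', 42, 'secret')
    Resque.redis.get(key) # => nil before the first run, "0" after the first attempt, "1" after the first retry...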

### Custom Retry

    class DeliverWebHook
      extend Resque::Plugins::Retry
      @queue = :web_hooks

      @retry_limit = 10
      @retry_delay = 120

      def self.perform(url, hook_id, hmac_key)
        heavy_lifting
      end
    end

The above modification will allow your job to retry up to 10 times, with
a delay of 120 seconds (2 minutes) between retry attempts.

Alternatively you could override the `retry_delay` method to do something
more special.
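
For example, a sketch of a delay that depends on the time of day; only
`retry_delay` itself comes from the plugin, the schedule below is made up:

    class DeliverWebHook
      extend Resque::Plugins::Retry
      @queue = :web_hooks

      # back off harder during busy daytime hours (illustrative numbers).
      def self.retry_delay
        Time.now.hour.between?(9, 17) ? 300 : 60
      end

      def self.perform(url, hook_id, hmac_key)
        heavy_lifting
      end
    end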

### Exponential Backoff

Use this if you wish to vary the delay between retry attempts:

    class DeliverSMS
      extend Resque::Plugins::ExponentialBackoff
      @queue = :mt_messages

      def self.perform(mt_id, mobile_number, message)
        heavy_lifting
      end
    end

**Default Settings**

    key: m = minutes, h = hours

    no delay, 1m, 10m, 1h, 3h, 6h
    @backoff_strategy = [0, 60, 600, 3600, 10800, 21600]

The first delay will be 0 seconds, the 2nd will be 60 seconds, etc.
Again, tweak to your own needs.

The number of retries is equal to the size of the `backoff_strategy`
array, unless you set `retry_limit` yourself.
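
For example, a sketch that keeps retrying past the end of the strategy; once
the attempt number outruns the array, the plugin reuses the last delay (the
numbers here are illustrative, not defaults):

    class DeliverSMS
      extend Resque::Plugins::ExponentialBackoff
      @queue = :mt_messages

      @backoff_strategy = [0, 60, 600] # three delays...
      @retry_limit      = 5            # ...but five retries; the 4th and 5th reuse 600s.

      def self.perform(mt_id, mobile_number, message)
        heavy_lifting
      end
    end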

### Retry Specific Exceptions

The default will allow a retry for any type of exception. You may change
it so only specific exceptions are retried using `retry_exceptions`:

    class DeliverSMS
      extend Resque::Plugins::Retry
      @queue = :mt_messages

      @retry_exceptions = [NetworkError]

      def self.perform(mt_id, mobile_number, message)
        heavy_lifting
      end
    end

The above modification will **only** retry if a `NetworkError` (or subclass)
exception is thrown.
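
Since `retry_exceptions` is a plain array you can list more than one class;
anything matching one of them (or a subclass) is retried. A small sketch,
where both constants stand in for your own error classes:

    class DeliverSMS
      extend Resque::Plugins::Retry
      @queue = :mt_messages

      # NetworkError and TimeoutError are placeholders for your own exceptions.
      @retry_exceptions = [NetworkError, TimeoutError]

      def self.perform(mt_id, mobile_number, message)
        heavy_lifting
      end
    end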

Customise & Extend
------------------

Please take a look at the yardoc/code for more details on methods you may
wish to override.

Some things worth noting:

### Job Identifier/Key

The retry attempt is incremented and stored in a Redis key. The key is
built using the `identifier`. If you have a lot of arguments, or really long
ones, you should consider overriding `identifier` to define a more precise
or looser custom identifier.

The default identifier is just your job arguments joined with a dash `-`.

By default the key uses this format:
`resque-retry:<job class name>:<identifier>`.

Or you can define the entire key by overriding `redis_retry_key`
(see the second example below).

    class DeliverSMS
      extend Resque::Plugins::Retry
      @queue = :mt_messages

      def self.identifier(mt_id, mobile_number, message)
        "#{mobile_number}:#{mt_id}"
      end

      def self.perform(mt_id, mobile_number, message)
        heavy_lifting
      end
    end
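
And a sketch of overriding `redis_retry_key` itself; only the method name
comes from the plugin, the key layout below is made up:

    class DeliverSMS
      extend Resque::Plugins::Retry
      @queue = :mt_messages

      # one counter per mobile number, whatever the other arguments are.
      def self.redis_retry_key(*args)
        mobile_number = args[1]
        "resque-retry:deliver-sms:#{mobile_number}".gsub(/\s/, '') # keep the key whitespace-free, like the default
      end

      def self.perform(mt_id, mobile_number, message)
        heavy_lifting
      end
    end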

### Retry Arguments

You may override `args_for_retry`, which is passed the current
job arguments, to modify the arguments for the next retry attempt.

    class DeliverViaSMSC
      extend Resque::Plugins::Retry
      @queue = :mt_smsc_messages

      # retry using the emergency SMSC.
      def self.args_for_retry(smsc_id, mt_message)
        [999, mt_message]
      end

      def self.perform(smsc_id, mt_message)
        heavy_lifting
      end
    end

Install
-------

    $ gem install resque-retry

[rq]: http://github.com/defunkt/resque
[rqs]: http://github.com/bvandenbos/resque-scheduler
data/Rakefile
ADDED
@@ -0,0 +1,25 @@
$LOAD_PATH.unshift 'lib'

require 'rake/testtask'
require 'fileutils'
require 'yard'
require 'yard/rake/yardoc_task'

task :default => :test

##
# Test task.
Rake::TestTask.new(:test) do |task|
  task.test_files = FileList['test/*_test.rb']
  task.verbose = true
end

##
# docs task.
YARD::Rake::YardocTask.new :yardoc do |t|
  t.files   = ['lib/**/*.rb']
  t.options = ['--output-dir', "doc/",
               '--files', 'LICENSE',
               '--readme', 'README.md',
               '--title', 'resque-retry documentation']
end
data/lib/resque-retry/server.rb
ADDED
@@ -0,0 +1,34 @@
# Extend Resque::Server to add tabs
module ResqueRetry

  module Server

    def self.included(base)
      base.class_eval do

        get "/retry" do
          # Is there a better way to specify alternate template locations with sinatra?
          erb File.read(File.join(File.dirname(__FILE__), 'server/views/retry.erb'))
        end

        get "/retry/:timestamp" do
          # Is there a better way to specify alternate template locations with sinatra?
          erb File.read(File.join(File.dirname(__FILE__), 'server/views/retry_timestamp.erb'))
        end

      end
    end

  end

end

Resque::Server.tabs << 'Retry'

Resque::Server.class_eval do
  include ResqueRetry::Server
end
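
The tab only appears in resque-web once this file has been loaded. A minimal
sketch of one way to do that from the rackup file used to start resque-web
(the `config.ru` itself is an assumption, not part of the gem):

    # config.ru (illustrative)
    require 'resque/server'
    require 'resque-retry'
    require 'resque-retry/server'

    run Resque::Server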
data/lib/resque-retry/server/views/retry.erb
ADDED
@@ -0,0 +1,44 @@
<h1>Jobs Scheduled to Retry</h1>

<p class='intro'>
  The list below contains the timestamps for scheduled delayed jobs.
</p>

<p class='sub'>
  Showing <%= start = params[:start].to_i %> to <%= start + 20 %> of <b><%= size = resque.delayed_queue_schedule_size %></b> timestamps
</p>

<table>
  <tr>
    <th></th>
    <th>Timestamp</th>
    <th>Job count</th>
    <th>Class</th>
    <th>Args</th>
    <th>Retry Count</th>
  </tr>
  <% resque.delayed_queue_peek(start, start + 20).each do |timestamp| %>
    <tr>
      <td>
        <form action="<%= url "/delayed/queue_now" %>" method="post">
          <input type="hidden" name="timestamp" value="<%= timestamp.to_i %>">
          <input type="submit" value="Queue now">
        </form>
      </td>
      <td><a href="<%= url "retry/#{timestamp}" %>"><%= format_time(Time.at(timestamp)) %></a></td>
      <td><%= delayed_timestamp_size = resque.delayed_timestamp_size(timestamp) %></td>
      <% job = resque.delayed_timestamp_peek(timestamp, 0, 1).first %>
      <td>
        <% if job && delayed_timestamp_size == 1 %>
          <%= h(job['class']) %>
        <% else %>
          <a href="<%= url "delayed/#{timestamp}" %>">see details</a>
        <% end %>
      </td>
      <td><%= h(job['args'].inspect) if job && delayed_timestamp_size == 1 %></td>
      <td><%= Resque.redis.get(Resque.constantize(job['class']).redis_retry_key(job["args"])) %></td>
    </tr>
  <% end %>
</table>

<%= partial :next_more, :start => start, :size => size %>
data/lib/resque-retry/server/views/retry_timestamp.erb
ADDED
@@ -0,0 +1,42 @@
<% timestamp = params[:timestamp].to_i %>

<h1>Delayed jobs scheduled for <%= format_time(Time.at(timestamp)) %></h1>

<p class='sub'>Showing <%= start = params[:start].to_i %> to <%= start + 20 %> of
<b><%= size = resque.delayed_timestamp_size(timestamp) %></b> jobs</p>

<table class='jobs'>
  <tr>
    <th>Class</th>
    <th>Args</th>
    <th>Retry Count</th>
    <th>Exception</th>
    <th>Backtrace</th>
  </tr>
  <% jobs = resque.delayed_timestamp_peek(timestamp, start, 20) %>
  <% jobs.each do |job| %>
    <% retry_key = Resque.constantize(job['class']).redis_retry_key(job["args"]) %>
    <tr>
      <td class='class'><%= job['class'] %></td>
      <td class='args'><%= h job['args'].inspect %></td>
      <td><%= Resque.redis.get(retry_key) %></td>
      <% failure = Resque.decode(Resque.redis["failure_#{retry_key}"]) %>
      <td><code><%= failure['exception'] %></code></td>
      <td class='error'>
        <% if failure['backtrace'] %>
          <a href="#" class="backtrace"><%= h(failure['error']) %></a>
          <pre style='display:none'><%= h failure['backtrace'].join("\n") %></pre>
        <% else %>
          <%= h failure['error'] %>
        <% end %>
      </td>
    </tr>
  <% end %>
  <% if jobs.empty? %>
    <tr>
      <td class='no-data' colspan='2'>There are no pending jobs scheduled for this time.</td>
    </tr>
  <% end %>
</table>

<%= partial :next_more, :start => start, :size => size %>
data/lib/resque/plugins/exponential_backoff.rb
ADDED
@@ -0,0 +1,68 @@
module Resque
  module Plugins

    ##
    # If you want your job to retry on failure using a varying delay, simply
    # extend your module/class with this module:
    #
    #   class DeliverSMS
    #     extend Resque::Plugins::ExponentialBackoff
    #     @queue = :mt_messages
    #
    #     def self.perform(mt_id, mobile_number, message)
    #       heavy_lifting
    #     end
    #   end
    #
    # Easily do something custom:
    #
    #   class DeliverSMS
    #     extend Resque::Plugins::ExponentialBackoff
    #     @queue = :mt_messages
    #
    #     @retry_limit = 4
    #
    #     # retry delay in seconds; [0] => 1st retry, [1] => 2nd..4th retry.
    #     @backoff_strategy = [0, 60]
    #
    #     # used to build redis key, for counting job attempts.
    #     def self.identifier(mt_id, mobile_number, message)
    #       "#{mobile_number}:#{mt_id}"
    #     end
    #
    #     def self.perform(mt_id, mobile_number, message)
    #       heavy_lifting
    #     end
    #   end
    #
    module ExponentialBackoff
      include Resque::Plugins::Retry

      ##
      # Defaults to the number of delays in the backoff strategy.
      #
      # @return [Number] maximum number of retries
      def retry_limit
        @retry_limit ||= backoff_strategy.length
      end

      ##
      # Selects the delay from the backoff strategy.
      #
      # @return [Number] seconds to delay until the next retry.
      def retry_delay
        backoff_strategy[retry_attempt] || backoff_strategy.last
      end

      ##
      # @abstract
      # The backoff strategy is used to vary the delay between retry attempts.
      #
      # @return [Array] array of delays. index = retry attempt
      def backoff_strategy
        @backoff_strategy ||= [0, 60, 600, 3600, 10_800, 21_600]
      end
    end

  end
end
data/lib/resque/plugins/retry.rb
ADDED
@@ -0,0 +1,178 @@
module Resque
  module Plugins

    ##
    # If you want your job to retry on failure, simply extend your module/class
    # with this module:
    #
    #   class DeliverWebHook
    #     extend Resque::Plugins::Retry # allows 1 retry by default.
    #     @queue = :web_hooks
    #
    #     def self.perform(url, hook_id, hmac_key)
    #       heavy_lifting
    #     end
    #   end
    #
    # Easily do something custom:
    #
    #   class DeliverWebHook
    #     extend Resque::Plugins::Retry
    #     @queue = :web_hooks
    #
    #     @retry_limit = 8  # default: 1
    #     @retry_delay = 60 # default: 0
    #
    #     # used to build redis key, for counting job attempts.
    #     def self.identifier(url, hook_id, hmac_key)
    #       "#{url}-#{hook_id}"
    #     end
    #
    #     def self.perform(url, hook_id, hmac_key)
    #       heavy_lifting
    #     end
    #   end
    #
    module Retry
      ##
      # @abstract You may override to implement a custom identifier;
      #           you should consider doing this if your job arguments
      #           are many/long or may not cleanly convert to strings.
      #
      # Builds an identifier using the job arguments. This identifier
      # is used as part of the redis key.
      #
      # @param [Array] args job arguments
      # @return [String] job identifier
      def identifier(*args)
        args_string = args.join('-')
        args_string.empty? ? nil : args_string
      end

      ##
      # Builds the redis key to be used for keeping state of the job
      # attempts.
      #
      # @return [String] redis key
      def redis_retry_key(*args)
        ['resque-retry', name, identifier(*args)].compact.join(":").gsub(/\s/, '')
      end

      ##
      # Maximum number of retries we can attempt to successfully perform the job.
      # A retry limit of 0 or below will retry forever.
      #
      # @return [Fixnum]
      def retry_limit
        @retry_limit ||= 1
      end

      ##
      # Number of retry attempts used to try and perform the job.
      #
      # The real value is kept in Redis; it is accessed and incremented using
      # a before_perform hook.
      #
      # @return [Fixnum] number of attempts
      def retry_attempt
        @retry_attempt ||= 0
      end

      ##
      # @abstract
      # Number of seconds to delay until the job is retried.
      #
      # @return [Number] number of seconds to delay
      def retry_delay
        @retry_delay ||= 0
      end

      ##
      # @abstract
      # Modify the arguments used to retry the job. Use this to do something
      # other than try the exact same job again.
      #
      # @return [Array] new job arguments
      def args_for_retry(*args)
        args
      end

      ##
      # Convenience method to test whether you may retry on a given exception.
      #
      # @return [Boolean]
      def retry_exception?(exception)
        return true if retry_exceptions.nil?
        !! retry_exceptions.any? { |ex| ex >= exception }
      end

      ##
      # @abstract
      # Controls what exceptions may be retried.
      #
      # Default: `nil` - this will retry all exceptions.
      #
      # @return [Array, nil]
      def retry_exceptions
        @retry_exceptions ||= nil
      end

      ##
      # Test if the retry criteria is valid.
      #
      # @param [Exception] exception
      # @param [Array] args job arguments
      # @return [Boolean]
      def retry_criteria_valid?(exception, *args)
        # FIXME: let people extend retry criteria, give them a chance to say no.
        if retry_limit > 0
          return false if retry_attempt >= retry_limit
        end
        retry_exception?(exception.class)
      end

      ##
      # Will retry the job.
      def try_again(*args)
        if retry_delay <= 0
          # If the delay is 0, no point passing it through the scheduler.
          Resque.enqueue(self, *args_for_retry(*args))
        else
          Resque.enqueue_in(retry_delay, self, *args_for_retry(*args))
        end
      end

      ##
      # Resque before_perform hook.
      #
      # Increments and sets the `@retry_attempt` count.
      def before_perform_retry(*args)
        retry_key = redis_retry_key(*args)
        Resque.redis.setnx(retry_key, -1) # default to -1 if not set.
        @retry_attempt = Resque.redis.incr(retry_key) # increment by 1.
      end

      ##
      # Resque after_perform hook.
      #
      # Deletes retry attempt count from Redis.
      def after_perform_retry(*args)
        Resque.redis.del(redis_retry_key(*args))
      end

      ##
      # Resque on_failure hook.
      #
      # Checks if our retry criteria is valid; if it is, we try again.
      # Otherwise the retry attempt count is deleted from Redis.
      def on_failure_retry(exception, *args)
        if retry_criteria_valid?(exception, *args)
          try_again(*args)
        else
          Resque.redis.del(redis_retry_key(*args))
        end
      end
    end

  end
end
data/lib/resque/plugins/retry_failure_backend.rb
ADDED
@@ -0,0 +1,42 @@
require 'resque/failure/multiple'

class RetryFailureBackend < Resque::Failure::Multiple

  include Resque::Helpers

  def save
    unless retrying?
      super
    else
      data = {
        :failed_at => Time.now.strftime("%Y/%m/%d %H:%M:%S"),
        :payload   => payload,
        :exception => exception.class.to_s,
        :error     => exception.to_s,
        :backtrace => Array(exception.backtrace),
        :worker    => worker.to_s,
        :queue     => queue
      }
      data = Resque.encode(data)
      Resque.redis[failure_key] = data
    end
  end

  protected

  def retrying?
    Resque.redis.get(retry_key)
  end

  def failure_key
    "failure_#{retry_key}"
  end

  def retry_key
    klass.redis_retry_key(payload["args"])
  end

  def klass
    constantize(payload["class"])
  end
end
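
The backend only takes effect once it is registered with Resque's failure
system. A sketch of wiring it up so the standard Redis backend still records
non-retrying failures (this configuration is assumed, the gem does not do it
for you):

    require 'resque/failure/redis'

    Resque::Failure::Multiple.classes = [Resque::Failure::Redis]
    Resque::Failure.backend = RetryFailureBackend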
data/test/exponential_backoff_test.rb
ADDED
@@ -0,0 +1,59 @@
require File.dirname(__FILE__) + '/test_helper'

class ExponentialBackoffTest < Test::Unit::TestCase
  def setup
    Resque.redis.flushall
    @worker = Resque::Worker.new(:testing)
    @worker.register_worker
  end

  def test_resque_plugin_lint
    assert_nothing_raised do
      Resque::Plugin.lint(Resque::Plugins::ExponentialBackoff)
    end
  end

  def test_default_backoff_strategy
    now = Time.now
    Resque.enqueue(ExponentialBackoffJob)
    2.times do
      perform_next_job @worker
    end

    assert_equal 2, Resque.info[:processed], 'processed jobs'
    assert_equal 2, Resque.info[:failed], 'failed jobs'
    assert_equal 0, Resque.info[:pending], 'pending jobs'

    delayed = Resque.delayed_queue_peek(0, 1)
    assert_equal now.to_i + 60, delayed[0], '2nd delay' # the first had a zero delay.

    5.times do
      Resque.enqueue(ExponentialBackoffJob)
      perform_next_job @worker
    end

    delayed = Resque.delayed_queue_peek(0, 5)
    assert_equal now.to_i + 600, delayed[1], '3rd delay'
    assert_equal now.to_i + 3600, delayed[2], '4th delay'
    assert_equal now.to_i + 10_800, delayed[3], '5th delay'
    assert_equal now.to_i + 21_600, delayed[4], '6th delay'
  end

  def test_custom_backoff_strategy
    now = Time.now
    4.times do
      Resque.enqueue(CustomExponentialBackoffJob, 'http://lividpenguin.com', 1305, 'cd8079192d379dc612f17c660591a6cfb05f1dda')
      perform_next_job @worker
    end

    delayed = Resque.delayed_queue_peek(0, 3)
    assert_equal now.to_i + 10, delayed[0], '1st delay'
    assert_equal now.to_i + 20, delayed[1], '2nd delay'
    assert_equal now.to_i + 30, delayed[2], '3rd delay'
    assert_equal 2, Resque.delayed_timestamp_size(delayed[2]), '4th delay should share delay with 3rd'

    assert_equal 4, Resque.info[:processed], 'processed jobs'
    assert_equal 4, Resque.info[:failed], 'failed jobs'
    assert_equal 0, Resque.info[:pending], 'pending jobs'
  end
end
data/test/redis-test.conf
ADDED
@@ -0,0 +1,132 @@
# Redis configuration file example

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes

# When run as a daemon, Redis writes a pid file in /var/run/redis.pid by default.
# You can specify a custom pid file location here.
pidfile ./test/redis-test.pid

# Accept connections on the specified port, default is 6379
port 9736

# If you want you can bind a single interface, if the bind option is not
# specified all the interfaces will listen for connections.
#
# bind 127.0.0.1

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 300

# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
save 900 1
save 300 10
save 60 10000

# The filename where to dump the DB
dbfilename dump.rdb

# For default save/load DB in/from the working directory
# Note that you must specify a directory not a file name.
dir ./test/

# Set server verbosity to 'debug'
# it can be one of:
# debug (a lot of information, useful for development/testing)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel debug

# Specify the log file name. Also 'stdout' can be used to force
# the daemon to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile stdout

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. Note that the configuration is local to the slave
# so for example it is possible to configure the slave to save the DB with a
# different interval, or to listen to another port, and so on.

# slaveof <masterip> <masterport>

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).

# requirepass foobared

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default there
# is no limit, and it's up to the number of file descriptors the Redis process
# is able to open. The special value '0' means no limits.
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.

# maxclients 128

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys with an
# EXPIRE set. It will try to start freeing keys that are going to expire
# in little time and preserve keys with a longer time to live.
# Redis will also try to remove objects from free lists if possible.
#
# If all this fails, Redis will start to reply with errors to commands
# that will use more memory, like SET, LPUSH, and so on, and will continue
# to reply to most read-only commands like GET.
#
# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
# 'state' server or cache, not as a real DB. When Redis is used as a real
# database the memory usage will grow over the weeks, it will be obvious if
# it is going to use too much memory in the long run, and you'll have the time
# to upgrade. With maxmemory after the limit is reached you'll start to get
# errors for write operations, and this may even lead to DB inconsistency.

# maxmemory <bytes>

############################### ADVANCED CONFIG ###############################

# Glue small output buffers together in order to send small replies in a
# single TCP packet. Uses a bit more CPU but most of the times it is a win
# in terms of number of queries per second. Use 'yes' if unsure.
glueoutputbuf yes

# Use object sharing. Can save a lot of memory if you have many common
# strings in your dataset, but performs lookups against the shared objects
# pool so it uses more CPU and can be a bit slower. Usually it's a good
# idea.
#
# When object sharing is enabled (shareobjects yes) you can use
# shareobjectspoolsize to control the size of the pool used in order to try
# object sharing. A bigger pool size will lead to better sharing capabilities.
# In general you want this value to be at least the double of the number of
# very common strings you have in your dataset.
#
# WARNING: object sharing is experimental, don't enable this feature
# in production before of Redis 1.0-stable. Still please try this feature in
# your development environment so that we can test it better.
# shareobjects no
# shareobjectspoolsize 1024
data/test/resque_test.rb
ADDED
@@ -0,0 +1,18 @@
require File.dirname(__FILE__) + '/test_helper'

# make sure the world's not fallen from beneath us.
class ResqueTest < Test::Unit::TestCase
  def test_resque_version
    major, minor, patch = Resque::Version.split('.')
    assert_equal 1, major.to_i, 'major version does not match'
    assert_operator minor.to_i, :>=, 8, 'minor version is too low'
  end

  def test_good_job
    clean_perform_job(GoodJob, 1234, { :cats => :maiow }, [true, false, false])

    assert_equal 0, Resque.info[:failed], 'failed jobs'
    assert_equal 1, Resque.info[:processed], 'processed job'
    assert_equal 0, Resque.delayed_queue_schedule_size
  end
end
data/test/retry_test.rb
ADDED
@@ -0,0 +1,145 @@
require File.dirname(__FILE__) + '/test_helper'

class RetryTest < Test::Unit::TestCase
  def setup
    Resque.redis.flushall
    @worker = Resque::Worker.new(:testing)
    @worker.register_worker
  end

  def test_resque_plugin_lint
    assert_nothing_raised do
      Resque::Plugin.lint(Resque::Plugins::Retry)
    end
  end

  def test_default_settings
    assert_equal 1, RetryDefaultsJob.retry_limit, 'default retry limit'
    assert_equal 0, RetryDefaultsJob.retry_attempt, 'default number of retry attempts'
    assert_equal nil, RetryDefaultsJob.retry_exceptions, 'default retry exceptions; nil = any'
    assert_equal 0, RetryDefaultsJob.retry_delay, 'default seconds until retry'
  end

  def test_retry_once_by_default
    Resque.enqueue(RetryDefaultsJob)
    3.times do
      perform_next_job(@worker)
    end

    assert_equal 0, Resque.info[:pending], 'pending jobs'
    assert_equal 2, Resque.info[:failed], 'failed jobs'
    assert_equal 2, Resque.info[:processed], 'processed job'
  end

  def test_job_args_are_maintained
    test_args = ['maiow', 'cat', [42, 84]]

    Resque.enqueue(RetryDefaultsJob, *test_args)
    perform_next_job(@worker)

    assert job = Resque.pop(:testing)
    assert_equal test_args, job['args']
  end

  def test_job_args_may_be_modified
    Resque.enqueue(RetryWithModifiedArgsJob, 'foo', 'bar')
    perform_next_job(@worker)

    assert job = Resque.pop(:testing)
    assert_equal ['foobar', 'barbar'], job['args']
  end

  def test_retry_never_give_up
    Resque.enqueue(NeverGiveUpJob)
    10.times do
      perform_next_job(@worker)
    end

    assert_equal 1, Resque.info[:pending], 'pending jobs'
    assert_equal 10, Resque.info[:failed], 'failed jobs'
    assert_equal 10, Resque.info[:processed], 'processed job'
  end

  def test_fail_five_times_then_succeed
    Resque.enqueue(FailFiveTimesJob)
    7.times do
      perform_next_job(@worker)
    end

    assert_equal 5, Resque.info[:failed], 'failed jobs'
    assert_equal 6, Resque.info[:processed], 'processed job'
    assert_equal 0, Resque.info[:pending], 'pending jobs'
  end

  def test_can_determine_if_exception_may_be_retried
    assert_equal true, RetryDefaultsJob.retry_exception?(StandardError), 'StandardError may retry'
    assert_equal true, RetryDefaultsJob.retry_exception?(CustomException), 'CustomException may retry'
    assert_equal true, RetryDefaultsJob.retry_exception?(HierarchyCustomException), 'HierarchyCustomException may retry'

    assert_equal true, RetryCustomExceptionsJob.retry_exception?(CustomException), 'CustomException may retry'
    assert_equal true, RetryCustomExceptionsJob.retry_exception?(HierarchyCustomException), 'HierarchyCustomException may retry'
    assert_equal false, RetryCustomExceptionsJob.retry_exception?(AnotherCustomException), 'AnotherCustomException may not retry'
  end

  def test_retry_if_failed_and_exception_may_retry
    Resque.enqueue(RetryCustomExceptionsJob, CustomException)
    Resque.enqueue(RetryCustomExceptionsJob, HierarchyCustomException)
    4.times do
      perform_next_job(@worker)
    end

    assert_equal 4, Resque.info[:failed], 'failed jobs'
    assert_equal 4, Resque.info[:processed], 'processed job'
    assert_equal 2, Resque.info[:pending], 'pending jobs'
  end

  def test_do_not_retry_if_failed_and_exception_does_not_allow_retry
    Resque.enqueue(RetryCustomExceptionsJob, AnotherCustomException)
    Resque.enqueue(RetryCustomExceptionsJob, RuntimeError)
    4.times do
      perform_next_job(@worker)
    end

    assert_equal 2, Resque.info[:failed], 'failed jobs'
    assert_equal 2, Resque.info[:processed], 'processed job'
    assert_equal 0, Resque.info[:pending], 'pending jobs'
  end

  def test_delete_redis_key_when_job_is_successful
    Resque.enqueue(GoodJob, 'arg1')

    assert_equal nil, Resque.redis.get(GoodJob.redis_retry_key('arg1'))
    perform_next_job(@worker)
    assert_equal nil, Resque.redis.get(GoodJob.redis_retry_key('arg1'))
  end

  def test_delete_redis_key_after_final_failed_retry
    Resque.enqueue(FailFiveTimesJob, 'yarrrr')
    assert_equal nil, Resque.redis.get(FailFiveTimesJob.redis_retry_key('yarrrr'))

    perform_next_job(@worker)
    assert_equal '0', Resque.redis.get(FailFiveTimesJob.redis_retry_key('yarrrr'))

    perform_next_job(@worker)
    assert_equal '1', Resque.redis.get(FailFiveTimesJob.redis_retry_key('yarrrr'))

    5.times do
      perform_next_job(@worker)
    end
    assert_equal nil, Resque.redis.get(FailFiveTimesJob.redis_retry_key('yarrrr'))

    assert_equal 5, Resque.info[:failed], 'failed jobs'
    assert_equal 6, Resque.info[:processed], 'processed job'
    assert_equal 0, Resque.info[:pending], 'pending jobs'
  end

  def test_job_without_args_has_no_ending_colon_in_redis_key
    assert_equal 'resque-retry:GoodJob:yarrrr', GoodJob.redis_retry_key('yarrrr')
    assert_equal 'resque-retry:GoodJob:foo', GoodJob.redis_retry_key('foo')
    assert_equal 'resque-retry:GoodJob', GoodJob.redis_retry_key
  end

  def test_redis_retry_key_removes_whitespace
    assert_equal 'resque-retry:GoodJob:arg1-removespace', GoodJob.redis_retry_key('arg1', 'remove space')
  end
end
data/test/test_helper.rb
ADDED
@@ -0,0 +1,63 @@
dir = File.dirname(File.expand_path(__FILE__))
$LOAD_PATH.unshift dir + '/../lib'
$TESTING = true

require 'test/unit'
require 'rubygems'
require 'turn'

require 'resque-retry'
require dir + '/test_jobs'

##
# make sure we can run redis
if !system("which redis-server")
  puts '', "** can't find `redis-server` in your path"
  puts "** try running `sudo rake install`"
  abort ''
end

##
# start our own redis when the tests start,
# kill it when they end
at_exit do
  next if $!

  if defined?(MiniTest)
    exit_code = MiniTest::Unit.new.run(ARGV)
  else
    exit_code = Test::Unit::AutoRunner.run
  end

  pid = `ps -e -o pid,command | grep [r]edis-test`.split(" ")[0]
  puts "Killing test redis server..."
  `rm -f #{dir}/dump.rdb`
  Process.kill("KILL", pid.to_i)
  exit exit_code
end

puts "Starting redis for testing at localhost:9736..."
`redis-server #{dir}/redis-test.conf`
Resque.redis = '127.0.0.1:9736'

##
# Test helpers
class Test::Unit::TestCase
  def perform_next_job(worker, &block)
    return unless job = @worker.reserve
    @worker.perform(job, &block)
    @worker.done_working
  end

  def clean_perform_job(klass, *args)
    Resque.redis.flushall
    Resque.enqueue(klass, *args)

    worker = Resque::Worker.new(:testing)
    return false unless job = worker.reserve
    worker.perform(job)
    worker.done_working
  end
end
data/test/test_jobs.rb
ADDED
@@ -0,0 +1,74 @@
CustomException = Class.new(StandardError)
HierarchyCustomException = Class.new(CustomException)
AnotherCustomException = Class.new(StandardError)

class GoodJob
  extend Resque::Plugins::Retry
  @queue = :testing
  def self.perform(*args)
  end
end

class RetryDefaultsJob
  extend Resque::Plugins::Retry
  @queue = :testing

  def self.perform(*args)
    raise
  end
end

class RetryWithModifiedArgsJob < RetryDefaultsJob
  @queue = :testing

  def self.args_for_retry(*args)
    args.each { |arg| arg << 'bar' }
  end
end

class NeverGiveUpJob < RetryDefaultsJob
  @queue = :testing
  @retry_limit = 0
end

class FailFiveTimesJob < RetryDefaultsJob
  @queue = :testing
  @retry_limit = 6

  def self.perform(*args)
    raise if retry_attempt <= 4
  end
end

class ExponentialBackoffJob < RetryDefaultsJob
  extend Resque::Plugins::ExponentialBackoff
  @queue = :testing
end

class CustomExponentialBackoffJob
  extend Resque::Plugins::ExponentialBackoff
  @queue = :testing

  @retry_limit = 4
  @backoff_strategy = [10, 20, 30]

  def self.perform(url, hook_id, hmac_key)
    raise
  end
end

class RetryCustomExceptionsJob < RetryDefaultsJob
  @queue = :testing

  @retry_limit = 5
  @retry_exceptions = [CustomException, HierarchyCustomException]

  def self.perform(exception)
    case exception
    when 'CustomException' then raise CustomException
    when 'HierarchyCustomException' then raise HierarchyCustomException
    when 'AnotherCustomException' then raise AnotherCustomException
    else raise StandardError
    end
  end
end
metadata
ADDED
@@ -0,0 +1,143 @@
--- !ruby/object:Gem::Specification
name: kbaum-resque-retry
version: !ruby/object:Gem::Version
  hash: 21
  prerelease: false
  segments:
  - 0
  - 0
  - 5
  version: 0.0.5
platform: ruby
authors:
- Luke Antins
- Ryan Carver
autorequire:
bindir: bin
cert_chain: []

date: 2010-07-11 00:00:00 -04:00
default_executable:
dependencies:
- !ruby/object:Gem::Dependency
  name: resque
  prerelease: false
  requirement: &id001 !ruby/object:Gem::Requirement
    none: false
    requirements:
    - - ">="
      - !ruby/object:Gem::Version
        hash: 55
        segments:
        - 1
        - 8
        - 0
        version: 1.8.0
  type: :runtime
  version_requirements: *id001
- !ruby/object:Gem::Dependency
  name: resque-scheduler
  prerelease: false
  requirement: &id002 !ruby/object:Gem::Requirement
    none: false
    requirements:
    - - ">="
      - !ruby/object:Gem::Version
        hash: 55
        segments:
        - 1
        - 8
        - 0
        version: 1.8.0
  type: :runtime
  version_requirements: *id002
- !ruby/object:Gem::Dependency
  name: turn
  prerelease: false
  requirement: &id003 !ruby/object:Gem::Requirement
    none: false
    requirements:
    - - ">="
      - !ruby/object:Gem::Version
        hash: 3
        segments:
        - 0
        version: "0"
  type: :development
  version_requirements: *id003
- !ruby/object:Gem::Dependency
  name: yard
  prerelease: false
  requirement: &id004 !ruby/object:Gem::Requirement
    none: false
    requirements:
    - - ">="
      - !ruby/object:Gem::Version
        hash: 3
        segments:
        - 0
        version: "0"
  type: :development
  version_requirements: *id004
description: " resque-retry provides retry, delay and exponential backoff support for\n resque jobs.\n\n Features:\n\n * Redis backed retry count/limit.\n * Retry on all or specific exceptions.\n * Exponential backoff (varying the delay between retrys).\n * Small & Extendable - plenty of places to override retry logic/settings.\n"
email: luke@lividpenguin.com
executables: []

extensions: []

extra_rdoc_files: []

files:
- LICENSE
- Rakefile
- README.md
- HISTORY.md
- test/exponential_backoff_test.rb
- test/redis-test.conf
- test/resque_test.rb
- test/retry_test.rb
- test/test_helper.rb
- test/test_jobs.rb
- lib/resque/plugins/exponential_backoff.rb
- lib/resque/plugins/retry.rb
- lib/resque/plugins/retry_failure_backend.rb
- lib/resque-retry/server/views/retry.erb
- lib/resque-retry/server/views/retry_timestamp.erb
- lib/resque-retry/server.rb
- lib/resque-retry.rb
has_rdoc: true
homepage: http://github.com/lantins/resque-retry
licenses: []

post_install_message:
rdoc_options: []

require_paths:
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  none: false
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      hash: 3
      segments:
      - 0
      version: "0"
required_rubygems_version: !ruby/object:Gem::Requirement
  none: false
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      hash: 3
      segments:
      - 0
      version: "0"
requirements: []

rubyforge_project:
rubygems_version: 1.3.7
signing_key:
specification_version: 3
summary: A resque plugin; provides retry, delay and exponential backoff support for resque jobs.
test_files: []