resque-loner 0.1.1
- data/README.markdown +96 -0
- data/lib/resque-ext/job.rb +54 -0
- data/lib/resque-ext/resque.rb +24 -0
- data/lib/resque-loner.rb +6 -0
- data/lib/resque-loner/helpers.rb +56 -0
- data/lib/resque-loner/unique_job.rb +27 -0
- data/lib/resque-loner/version.rb +7 -0
- metadata +93 -0
data/README.markdown
ADDED
@@ -0,0 +1,96 @@

Resque-Loner
======

Resque-Loner is a plugin for defunkt/resque which adds unique jobs to resque. In each queue, there can be at most one UniqueJob with the same parameters.


Installation
-------------

First install the gem:

    $ gem install resque-loner

Then include it in your app:

    require 'resque-loner'

To make sure this plugin works on your installation, you should run the tests. resque-loner is tested with RSpec, but it also includes resque's original test suite. You can run all tests specific to resque-loner with `rake spec`. To make sure the plugin did not break resque, you can run `rake test` (the standard resque test suite).

Example
--------

Unique jobs can be useful in situations where running the same job multiple times yields the same result. Let's say you have a job called CacheSweeper that refreshes some cache. A user has edited some_article, so you put a job on the queue to refresh the cache for that article.

    >> Resque.enqueue CacheSweeper, some_article.id
    => "OK"

Your queue is really full, so the job does not get executed right away. But the user editing the article notices another error and updates the article again, and your app kindly queues another job to update that article's cache.

    >> Resque.enqueue CacheSweeper, some_article.id
    => "OK"

At this point you have two jobs in the queue, the second of which has no effect: you don't have to run it once the cache has been updated for the first time. This is where resque-loner's UniqueJobs come in. If you define CacheSweeper like this:

    class CacheSweeper < Resque::Plugins::Loner::UniqueJob
      @queue = :cache_sweeps

      def self.perform(article_id)
        # Cache Me If You Can...
      end
    end

Just like that you've ensured that on the :cache_sweeps queue, there can only be one CacheSweeper job for each article. Let's see what happens when you try to enqueue a couple of these jobs now:

    >> Resque.enqueue CacheSweeper, 1
    => "OK"
    >> Resque.enqueue CacheSweeper, 1
    => "EXISTED"
    >> Resque.enqueue CacheSweeper, 1
    => "EXISTED"
    >> Resque.size :cache_sweeps
    => 1

Since resque-loner keeps track of which jobs are queued in a way that allows for finding jobs very quickly, you can also query whether a job is currently in a queue:

    >> Resque.enqueue CacheSweeper, 1
    => "OK"
    >> Resque.enqueued? CacheSweeper, 1
    => true
    >> Resque.enqueued? CacheSweeper, 2
    => false

How it works
--------

### Keeping track of queued unique jobs

For each enqueued UniqueJob, resque-loner sets a redis key to 1. This key remains set until the job has either been fetched from the queue or destroyed through the Resque::Job.destroy method. As long as the key is set, the job is considered queued and subsequent enqueue attempts are rejected.

Here's how these keys are constructed:

    resque:loners:queue:cache_sweeps:job:5ac5a005253450606aa9bc3b3d52ea5b
    |      |            |                |
    |      |            |                `---- Job's ID (#redis_key method)
    |      |            `--------------------- Name of the queue
    |      `----------------------------------- Prefix for this plugin
    `------------------------------------------ Your redis namespace

The last part of this key is the job's ID, which is pretty much your queue item's payload. For our CacheSweeper job, the payload would be:

    { 'class': 'CacheSweeper', 'args': [1] }

The default method to create a job ID from these parameters is to do some normalization on the payload and then md5 it (defined in `Resque::Plugins::Loner::UniqueJob#redis_key`).

You could also use the whole payload or anything else as a redis key, as long as you make sure these requirements are met:

1. Two jobs of the same class with the same parameters/arguments/workload must produce the same redis key
2. Two jobs with either a different class or different parameters/arguments/workloads must not produce the same redis key
3. The key must not be binary, because this restriction applies to redis keys: *Keys are not binary safe strings in Redis, but just strings not containing a space or a newline character. For instance "foo" or "123456789" or "foo_bar" are valid keys, while "hello world" or "hello\n" are not.* (see http://code.google.com/p/redis/wiki/IntroductionToRedisDataTypes)

So when your job overwrites the #redis_key method, make sure these requirements are met, and all should be good.
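For illustration, a job could override `#redis_key` like this. This is a hypothetical sketch, not part of this gem: it builds a human-readable key from the normalized arguments instead of an MD5 digest, which still satisfies the requirements above for this job class.

    class CacheSweeper < Resque::Plugins::Loner::UniqueJob
      @queue = :cache_sweeps

      # Hypothetical override: same payload normalization as the default
      # implementation, but the key stays readable. Whitespace is stripped
      # so the result remains a valid redis key.
      def self.redis_key(payload)
        payload = decode(encode(payload))
        args    = payload[:args] || payload["args"]
        "cache_sweeper-#{args.join('-')}".gsub(/\s/, '')   # e.g. "cache_sweeper-1"
      end

      def self.perform(article_id)
        # Cache Me If You Can...
      end
    end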
### Resque integration

Unfortunately not everything could be done as a plugin, so I overwrote three methods of Resque::Job: create, reserve and destroy (there were no hooks for these events). All the logic is in `module Resque::Plugins::Loner` though, so it should be fairly easy to make this a *pure* plugin once the hooks are there.
data/lib/resque-ext/job.rb
ADDED
@@ -0,0 +1,54 @@
#
# Since there were not enough hooks to hook into, I have to overwrite
# 3 methods of Resque::Job - the rest of the implementation is in the
# proper Plugin namespace.
#
module Resque
  class Job

    #
    # Overwriting the original create method to mark an item as queued
    # after Resque::Job.create has called Resque.push
    #
    def self.create_with_loner(queue, klass, *args)
      item = { :class => klass.to_s, :args => args }
      return "EXISTED" if Resque::Plugins::Loner::Helpers.loner_queued?(queue, item)
      job = create_without_loner(queue, klass, *args)
      Resque::Plugins::Loner::Helpers.mark_loner_as_queued(queue, item)
      job
    end

    #
    # Overwriting the original reserve method to mark an item as unqueued
    #
    def self.reserve_with_loner(queue)
      item = reserve_without_loner(queue)
      Resque::Plugins::Loner::Helpers.mark_loner_as_unqueued(queue, item) if item
      item
    end

    #
    # Overwriting the original destroy method to mark all destroyed jobs as unqueued,
    # because the original method only returns the number of jobs destroyed, not the
    # jobs themselves. Hence Resque::Plugins::Loner::Helpers.job_destroy looks almost
    # like the original Resque::Job.destroy. Couldn't make it any DRYer.
    #
    def self.destroy_with_loner(queue, klass, *args)
      Resque::Plugins::Loner::Helpers.job_destroy(queue, klass, *args)
      destroy_without_loner(queue, klass, *args)
    end

    #
    # Chain..
    #
    class << self
      alias_method :create_without_loner,  :create
      alias_method :create,                :create_with_loner
      alias_method :reserve_without_loner, :reserve
      alias_method :reserve,               :reserve_with_loner
      alias_method :destroy_without_loner, :destroy
      alias_method :destroy,               :destroy_with_loner
    end
  end
end
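With the chain above in place, plain `Resque::Job.create` (and therefore `Resque.enqueue`) transparently runs through the loner check first. A quick console sketch of the expected behaviour, using the CacheSweeper example and the same return values shown in the README:

    >> Resque::Job.create(:cache_sweeps, CacheSweeper, 1)
    => "OK"         # pushed via create_without_loner, then marked as queued
    >> Resque::Job.create(:cache_sweeps, CacheSweeper, 1)
    => "EXISTED"    # loner_queued? was true, so nothing was pushed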
data/lib/resque-ext/resque.rb
ADDED
@@ -0,0 +1,24 @@
module Resque

  #
  # Why force one job type into one queue?
  #
  def self.enqueue_to(queue, klass, *args)
    Job.create(queue, klass, *args)
  end

  def self.dequeue_from(queue, klass, *args)
    Job.destroy(queue, klass, *args)
  end

  def self.enqueued?(klass, *args)
    enqueued_in?(queue_from_class(klass), klass, *args)
  end

  def self.enqueued_in?(queue, klass, *args)
    item = { :class => klass.to_s, :args => args }
    return nil unless Resque::Plugins::Loner::Helpers.item_is_a_unique_job?(item)
    Resque::Plugins::Loner::Helpers.loner_queued?(queue, item)
  end

end
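These extensions let you target a specific queue and ask whether a unique job is waiting in it. A brief usage sketch (the queue name is just an example):

    >> Resque.enqueue_to :urgent_sweeps, CacheSweeper, 1
    => "OK"
    >> Resque.enqueued_in? :urgent_sweeps, CacheSweeper, 1
    => true
    >> Resque.dequeue_from :urgent_sweeps, CacheSweeper, 1
    => 1            # Resque::Job.destroy returns the number of jobs removed
    >> Resque.enqueued_in? :urgent_sweeps, CacheSweeper, 1
    => false

Note that `enqueued?` and `enqueued_in?` return nil for regular (non-UniqueJob) classes, since only unique jobs are tracked.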
data/lib/resque-loner/helpers.rb
ADDED
@@ -0,0 +1,56 @@
module Resque
  module Plugins
    module Loner
      class Helpers
        extend Resque::Helpers

        def self.loner_queued?(queue, item)
          return false unless item_is_a_unique_job?(item)
          redis.get(unique_job_queue_key(queue, item)) == "1"
        end

        def self.mark_loner_as_queued(queue, item)
          return unless item_is_a_unique_job?(item)
          redis.set(unique_job_queue_key(queue, item), 1)
        end

        def self.mark_loner_as_unqueued(queue, job)
          item = job.is_a?(Resque::Job) ? job.payload : job
          return unless item_is_a_unique_job?(item)
          redis.del(unique_job_queue_key(queue, item))
        end

        def self.unique_job_queue_key(queue, item)
          job_key = constantize(item[:class] || item["class"]).redis_key(item)
          "loners:queue:#{queue}:job:#{job_key}"
        end

        def self.item_is_a_unique_job?(item)
          begin
            klass = constantize(item[:class] || item["class"])
            klass.ancestors.include?(::Resque::Plugins::Loner::UniqueJob)
          rescue
            false # Resque's test suite also submits strings as job classes while Resque.enqueue'ing,
          end     # so resque-loner should not start throwing up when that happens.
        end

        def self.job_destroy(queue, klass, *args)
          klass       = klass.to_s
          redis_queue = "queue:#{queue}"

          redis.lrange(redis_queue, 0, -1).each do |string|
            json = decode(string)

            match  = json['class'] == klass
            match &= json['args'] == args unless args.empty?

            if match
              Resque::Plugins::Loner::Helpers.mark_loner_as_unqueued(queue, json)
            end
          end
        end

      end
    end
  end
end
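Given the CacheSweeper item from the README, `unique_job_queue_key` produces the queue-and-job part of the key; the `resque:` prefix seen in the README is added by redis-namespace when the key hits redis. A small sketch (digest value illustrative):

    >> item = { 'class' => 'CacheSweeper', 'args' => [1] }
    >> Resque::Plugins::Loner::Helpers.unique_job_queue_key(:cache_sweeps, item)
    => "loners:queue:cache_sweeps:job:5ac5a005253450606aa9bc3b3d52ea5b"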
data/lib/resque-loner/unique_job.rb
ADDED
@@ -0,0 +1,27 @@
require 'digest/md5'

#
# If you want your job to be unique, subclass it from this class. If you wish,
# you can overwrite this implementation of redis_key to fit your needs.
#
module Resque
  module Plugins
    module Loner
      class UniqueJob
        extend Resque::Helpers

        #
        # Payload is what Resque stored for this job along with the job's class name.
        # On a Resque with no plugins installed, this is a hash containing :class and :args
        #
        def self.redis_key(payload)
          payload = decode(encode(payload)) # This is the cycle the data goes through when being enqueued/dequeued
          job     = payload[:class] || payload["class"]
          args    = payload[:args]  || payload["args"]
          digest  = Digest::MD5.hexdigest encode(:class => job, :args => args)
          digest
        end
      end
    end
  end
end
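Because the payload is round-tripped through encode/decode before hashing, symbol and string keys normalize to the same digest. A quick sketch:

    >> a = Resque::Plugins::Loner::UniqueJob.redis_key(:class => 'CacheSweeper', :args => [1])
    >> b = Resque::Plugins::Loner::UniqueJob.redis_key('class' => 'CacheSweeper', 'args' => [1])
    >> a == b
    => true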
metadata
ADDED
@@ -0,0 +1,93 @@
--- !ruby/object:Gem::Specification
name: resque-loner
version: !ruby/object:Gem::Version
  prerelease: false
  segments:
  - 0
  - 1
  - 1
  version: 0.1.1
platform: ruby
authors:
- Jannis Hermanns
autorequire:
bindir: bin
cert_chain: []

date: 2010-06-21 00:00:00 +02:00
default_executable:
dependencies:
- !ruby/object:Gem::Dependency
  name: rspec
  prerelease: false
  requirement: &id001 !ruby/object:Gem::Requirement
    requirements:
    - - ">="
      - !ruby/object:Gem::Version
        segments:
        - 0
        version: "0"
  type: :development
  version_requirements: *id001
description: |
  Makes sure that for special jobs, there can be only one job with the same workload in one queue.

  Example:
  class CacheSweeper < Resque::Plugins::Loner::UniqueJob
    @queue = :cache_sweeps

    def self.perform(article_id)
      # Cache Me If You Can...
    end
  end

email:
- jannis@moviepilot.com
executables: []

extensions: []

extra_rdoc_files: []

files:
- lib/resque-ext/job.rb
- lib/resque-ext/resque.rb
- lib/resque-loner/helpers.rb
- lib/resque-loner/unique_job.rb
- lib/resque-loner/version.rb
- lib/resque-loner.rb
- README.markdown
has_rdoc: true
homepage: http://github.com/jayniz/resque-loner
licenses: []

post_install_message:
rdoc_options: []

require_paths:
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      segments:
      - 0
      version: "0"
required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      segments:
      - 1
      - 3
      - 5
      version: 1.3.5
requirements: []

rubyforge_project: resque-loner
rubygems_version: 1.3.6
signing_key:
specification_version: 3
summary: Adds unique jobs to resque
test_files: []