infopark-politics 0.2.7
- data/History.rdoc +31 -0
- data/LICENSE +20 -0
- data/README.rdoc +49 -0
- data/examples/queue_worker_example.rb +37 -0
- data/examples/token_worker_example.rb +35 -0
- data/lib/init.rb +4 -0
- data/lib/politics/discoverable_node.rb +137 -0
- data/lib/politics/static_queue_worker.rb +306 -0
- data/lib/politics/token_worker.rb +174 -0
- data/lib/politics/version.rb +5 -0
- data/lib/politics.rb +15 -0
- data/spec/static_queue_worker_spec.rb +54 -0
- data/test/static_queue_worker_test.rb +42 -0
- data/test/test_helper.rb +19 -0
- data/test/token_worker_test.rb +78 -0
- metadata +98 -0
data/History.rdoc
ADDED
@@ -0,0 +1,31 @@
= Changelog

== 0.2.5 (2009-02-04)

* Gracefully handle MemCache::MemCacheErrors. Just sleep until memcached comes back.

== 0.2.4 (2009-01-28)

* Reduce leader token expiration time to discourage a get/set race condition. (Brian Dainton)

== 0.2.3 (2009-01-12)

* Fix invalid result check in previous change. (Brian Dainton)

== 0.2.2 (2009-01-07)

* Fix invalid leader? logic in TokenWorker which could allow
  two workers to become leader at the same time. (Brian Dainton)

== 0.2.1 (2008-11-04)

* Cleanup and prepare for public release for RubyConf 2008.
* Election Day. Politics. Get it? Hee hee.

== 0.2.0 (2008-10-24)

* Remove BucketWorker based on initial feedback. Add StaticQueueWorker as a more reliable replacement.

== 0.1.0 (2008-10-07)

* Add BucketWorker and TokenWorker mixins.
data/LICENSE
ADDED
@@ -0,0 +1,20 @@
Copyright (c) 2008 Mike Perham

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.rdoc
ADDED
@@ -0,0 +1,49 @@
= Politics

Politics is a Ruby library providing utilities and algorithms for solving common distributed
computing problems. Distributed Computing and Politics have a number of things in common:
1) they can be beautiful in theory but get really ugly in reality; 2) after working with
either for a few weeks/months/years (depending on your moral flexibility) you'll find yourself
intellectually devoid, a hollow shell of a man/woman/cybernetic killing machine.

So the name is to be taken tongue in cheek. Onto the real details.

== Common Problems in Distributed Computing

Ruby services are often deployed as a cloud of many processes across several machines,
for fault tolerance. This introduces the problem of coordination between those processes.
Specifically, how do you keep those processes from stepping on each other's electronic
toes? There are several answers:

1. Break the processing into N 'buckets'. Have an individual process fetch a bucket,
   work on it, and ask for another. This is a very scalable solution as it allows N workers
   to work on different parts of the same task concurrently. See the +StaticQueueWorker+ mixin.
1. Elect a leader for a short period of time. The leader is the process which performs the
   actual processing. After a length of time, a new leader is elected from the group. This
   is fault tolerant but not as scalable, as only one process is performing the task at a given
   point in time. See the +TokenWorker+ mixin.

== Installation

  sudo gem install mperham-politics -s http://gems.github.com

== Dependencies

StaticQueueWorker mixin
* memcached - the mechanism to elect a leader amongst a set of peers.
* DRb - the mechanism to communicate between peers.
* mDNS - the mechanism to discover peers.

TokenWorker mixin
* memcached - the mechanism to elect a leader amongst a set of peers.

= Author

Name:: Mike Perham
Email:: mailto:mperham@gmail.com
Twitter:: http://twitter.com/mperham
Homepage:: http://mikeperham.com/

This software is free for you to use as you'd like. If you find it useful, please consider giving
me a recommendation at {Working with Rails}[http://workingwithrails.com/person/10797-mike-perham].
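Both mixins described above rest on memcached's atomic add, which stores a key only if it is absent, so the first worker to nominate itself wins the token. A self-contained sketch of that election, with a hypothetical FakeCache standing in for a real MemCache client:

```ruby
# Hypothetical FakeCache mimicking memcache-client's `add` semantics:
# store only if the key is absent, and report whether the store happened.
class FakeCache
  def initialize
    @store = {}
  end

  # Mirrors the raw protocol result: "STORED" on success, "NOT_STORED"
  # when another worker already holds the key.
  def add(key, value, _ttl)
    if @store.key?(key)
      "NOT_STORED\r\n"
    else
      @store[key] = value
      "STORED\r\n"
    end
  end

  def get(key)
    @store[key]
  end
end

cache = FakeCache.new
workers = %w[host-a:100 host-b:200]
# Every worker nominates itself; only the first add succeeds.
winners = workers.map { |w| cache.add('example_token', w, 60) =~ /\ASTORED/ ? w : nil }
leader = cache.get('example_token')
```

With a real memcached server the key also carries an expiry, so a crashed leader's token lapses and a new election happens on the next iteration.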
data/examples/queue_worker_example.rb
ADDED
@@ -0,0 +1,37 @@
#gem 'mperham-politics'
require 'politics'
require 'politics/static_queue_worker'

# Test this example by starting memcached locally and then in two irb sessions, run this:
#
=begin
require 'queue_worker_example'
p = Politics::QueueWorkerExample.new
p.start
=end
#
# You can then watch as one of them is elected leader. You can kill the leader and verify
# the backup process is elected after approximately iteration_length seconds.
#
module Politics
  class QueueWorkerExample
    include Politics::StaticQueueWorker
    TOTAL_BUCKETS = 20

    def initialize
      register_worker 'queue-example', TOTAL_BUCKETS, :iteration_length => 60, :servers => memcached_servers
    end

    def start
      process_bucket do |bucket|
        puts "PID #{$$} processing bucket #{bucket}/#{TOTAL_BUCKETS} at #{Time.now}..."
        sleep 1.5
      end
    end

    def memcached_servers
      ['127.0.0.1:11211']
    end

  end
end
data/examples/token_worker_example.rb
ADDED
@@ -0,0 +1,35 @@
#gem 'mperham-politics'
require 'politics'
require 'politics/token_worker'

# Test this example by starting memcached locally and then in two irb sessions, run this:
#
=begin
require 'token_worker_example'
p = Politics::TokenWorkerExample.new
p.start
=end
#
# You can then watch as one of them is elected leader. You can kill the leader and verify
# the backup process is elected after approximately iteration_length seconds.
#
module Politics
  class TokenWorkerExample
    include Politics::TokenWorker

    def initialize
      register_worker 'token-example', :iteration_length => 10, :servers => memcached_servers
    end

    def start
      process do
        puts "PID #{$$} processing at #{Time.now}..."
      end
    end

    def memcached_servers
      ['localhost:11211']
    end

  end
end
data/lib/politics/discoverable_node.rb
ADDED
@@ -0,0 +1,137 @@
require 'socket'
require 'ipaddr'
require 'uri'
require 'drb'

require 'net/dns/mdns-sd'
require 'net/dns/resolv-mdns'
require 'net/dns/resolv-replace'

=begin
IRB setup:
require 'lib/politics'
require 'lib/politics/discoverable_node'
require 'lib/politics/convention'
Object.send(:include, Election::Candidate)
p = Object.new
p.register
=end

module Politics

  # A module to solve the Group Membership problem in distributed computing.
  # The "group" is the cloud of processes which are replicas and need to coordinate.
  # Handling group membership is the first step in solving distributed computing
  # problems. There are two issues:
  # 1) replica discovery
  # 2) controlling and maintaining a consistent group of replicas in each replica
  #
  # Peer discovery is implemented using Bonjour for local network auto-discovery.
  # Each process registers itself on the network as a process of a given type.
  # Each process then queries the network for other replicas of the same type.
  #
  # The replicas then run the Multi-Paxos algorithm to provide consensus on a given
  # replica set. The algorithm is robust in the face of crash failures, but not
  # Byzantine failures.
  module DiscoverableNode

    attr_accessor :group
    attr_accessor :coordinator

    def register(group='foo')
      self.group = group
      start_drb
      register_with_bonjour(group)
      Politics::log.info { "Registered #{self} in group #{group} with RID #{rid}" }
      sleep 0.5
      find_replicas(0)
    end

    def replicas
      @replicas ||= {}
    end

    def find_replicas(count)
      replicas.clear if count % 5 == 0
      return if count > 10 # Guaranteed to terminate, but not successfully :-(

      #puts "Finding replicas"
      peer_set = []
      bonjour_scan do |replica|
        (his_rid, his_peers) = replica.hello(rid)
        unless replicas.has_key?(his_rid)
          replicas[his_rid] = replica
        end
        his_peers.each do |peer|
          peer_set << peer unless peer_set.include? peer
        end
      end
      #p [peer_set.sort, replicas.keys.sort]
      if peer_set.sort != replicas.keys.sort
        # Recursively call ourselves until the network has settled down and all
        # peers have reached agreement on the peer group membership.
        sleep 0.2
        find_replicas(count + 1)
      end
      Politics::log.info { "Found #{replicas.size} peers: #{replicas.keys.sort.inspect}" } if count == 0
      replicas
    end

    # Called for one peer to introduce itself to another peer. The caller
    # sends his RID, the responder sends his RID and his list of current peer
    # RIDs.
    def hello(remote_rid)
      [rid, replicas.keys]
    end

    # A process's Replica ID is its PID plus a random 16-bit value. We don't
    # want to weigh solely based on PID or IP as that may unduly load one machine.
    def rid
      @rid ||= begin
        rand(65536) + $$
      end
    end

    private

    def register_with_bonjour(group)
      # Register our DRb server with Bonjour.
      handle = Net::DNS::MDNSSD.register("#{self.group}-#{local_ip}-#{$$}",
          "_#{self.group}._tcp", 'local', @port)

      ['INT', 'TERM'].each { |signal|
        trap(signal) { handle.stop }
      }
    end

    def start_drb
      server = DRb.start_service(nil, self)
      @port = URI.parse(DRb.uri).port
      ['INT', 'TERM'].each { |signal|
        trap(signal) { server.stop_service }
      }
    end

    def bonjour_scan
      Net::DNS::MDNSSD.browse("_#{@group}._tcp") do |b|
        Net::DNS::MDNSSD.resolve(b.name, b.type) do |r|
          drburl = "druby://#{r.target}:#{r.port}"
          replica = DRbObject.new(nil, drburl)
          yield replica
        end
      end
    end

    # http://coderrr.wordpress.com/2008/05/28/get-your-local-ip-address/
    def local_ip
      orig, Socket.do_not_reverse_lookup = Socket.do_not_reverse_lookup, true # turn off reverse DNS resolution temporarily

      UDPSocket.open do |s|
        s.connect '64.233.187.99', 1
        IPAddr.new(s.addr.last).to_i
      end
    ensure
      Socket.do_not_reverse_lookup = orig
    end
  end
end
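The Replica ID scheme in +rid+ above can be sketched on its own: a random 16-bit offset added to the process ID, so replica ordering is not driven by PID or IP alone. The +pid+ value below is an illustrative stand-in for <tt>$$</tt>:

```ruby
# Sketch of the #rid scheme: PID plus a random 16-bit value. Two processes
# on the same host (same IP, adjacent PIDs) still get well-spread IDs.
pid = 4321                  # illustrative stand-in for $$
rid = rand(65_536) + pid    # random offset in 0...65536
```

Because the value is memoized in the real mixin (<tt>@rid ||= ...</tt>), a node keeps one stable RID for its lifetime.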
data/lib/politics/static_queue_worker.rb
ADDED
@@ -0,0 +1,306 @@
require 'socket'
require 'ipaddr'
require 'uri'
require 'drb'

begin
  require 'net/dns/mdns-sd'
  require 'net/dns/resolv-mdns'
  require 'net/dns/resolv-replace'
rescue LoadError => e
  puts "Unable to load net-mdns, please run `sudo gem install net-mdns`: #{e.message}"
  exit(1)
end

begin
  require 'memcache'
rescue LoadError => e
  puts "Unable to load memcache client, please run `sudo gem install memcache-client`: #{e.message}"
  exit(1)
end

module Politics

  # The StaticQueueWorker mixin allows a processing daemon to "lease" or check out
  # a portion of a problem space to ensure no other process is processing that same
  # space at the same time. The processing space is cut into N "buckets", each of
  # which is placed in a queue. Processes then fetch entries from the queue
  # and process them. It is up to the application to map the bucket number onto its
  # specific problem space.
  #
  # Note that memcached is used for leader election. The leader owns the queue during
  # the iteration period and other peers fetch buckets from the current leader during the
  # iteration.
  #
  # The leader hands out buckets in order. Once all the buckets have been processed, the
  # leader returns nil to the processors which causes them to sleep until the end of the
  # iteration. Then everyone wakes up, a new leader is elected, and the processing starts
  # all over again.
  #
  # DRb and mDNS are used for peer discovery and communication.
  #
  # Example usage:
  #
  #  class Analyzer
  #    include Politics::StaticQueueWorker
  #    TOTAL_BUCKETS = 16
  #
  #    def start
  #      register_worker(self.class.name, TOTAL_BUCKETS)
  #      process_bucket do |bucket|
  #        puts "Analyzing bucket #{bucket} of #{TOTAL_BUCKETS}"
  #        sleep 5
  #      end
  #    end
  #  end
  #
  # Note: process_bucket never returns, i.e. this should be the main loop of your processing daemon.
  #
  module StaticQueueWorker

    def self.included(model) #:nodoc:
      model.class_eval do
        attr_accessor :group_name, :iteration_length
      end
    end

    # Register this process as able to work on buckets.
    def register_worker(name, bucket_count, config={})
      options = {
        :iteration_length => 60,
        :servers => ['127.0.0.1:11211']
      }
      options.merge!(config)

      self.group_name = name
      self.iteration_length = options[:iteration_length]
      @nominated_at = Time.now - self.iteration_length
      @memcache_client = client_for(Array(options[:servers]))
      # FIXME: Tests
      @domain = options[:domain]

      @buckets = []
      @bucket_count = bucket_count

      register_with_bonjour
      log.progname = @uri
      log.info { "Registered in group #{group_name} at port #{@port}" }
    end

    # Fetch a bucket out of the queue and pass it to the given block to be processed.
    #
    # +bucket+:: The bucket number to process, within the range 0...TOTAL_BUCKETS
    def process_bucket(&block)
      log.debug "start bucket processing"
      raise ArgumentError, "process_bucket requires a block!" unless block_given?
      raise ArgumentError, "You must call register_worker before processing!" unless @memcache_client

      begin
        begin
          nominate
          if leader?
            # DRb thread handles leader duties
            log.info { "has been elected leader" }
            initialize_buckets
            # Keep leader state as long as buckets are available by renominating
            # before the nomination times out.
            while !@buckets.empty? do
              log.debug { "relaxes half the time until next iteration" }
              relax(until_next_iteration / 2)
              log.debug { "renew nomination to keep the hat and finish the work" }
              @memcache_client.set(token, @uri, iteration_length)
              @nominated_at = Time.now
              log.error { "tried to stay leader but failed" } unless leader?
            end
            relax until_next_iteration
          else
            # Get a bucket from the leader and process it
            begin
              log.debug "getting bucket request from leader (#{leader_uri}) and processing it"
              bucket_process(*leader.bucket_request, &block)
            rescue DRb::DRbError => dre
              log.error { "Error talking to leader: #{dre.message}" }
              relax until_next_iteration
            end
          end
        rescue MemCache::MemCacheError => e
          log.error { "Unexpected MemCacheError: #{e.message}" }
          relax until_next_iteration
        end
      end while loop?
    end

    def bucket_request
      if leader?
        log.debug "delivering bucket request"
        [@buckets.pop, until_next_iteration]
      else
        log.debug "received request for bucket but am not leader - delivering :not_leader"
        [:not_leader, 0]
      end
    end

    def until_next_iteration
      left = iteration_length - (Time.now - @nominated_at)
      left > 0 ? left : 0
    end

    private

    def bucket_process(bucket, sleep_time)
      case bucket
      when nil
        # No more buckets to process this iteration
        log.info { "No more buckets in this iteration, sleeping for #{sleep_time} sec" }
        sleep sleep_time
      when :not_leader
        # Uh oh, race condition? Invalidate any local cache and check again.
        log.warn { "Recv'd NOT_LEADER from peer." }
        relax 1
        @leader_uri = nil
      else
        log.info { "processing #{bucket}" }
        yield bucket
      end
    end

    def log
      @logger ||= Logger.new(STDOUT)
    end

    def initialize_buckets
      @buckets.clear
      @bucket_count.times { |idx| @buckets << idx }
    end

    def replicas
      @replicas ||= []
    end

    def leader
      log.debug "trying to get the leader (#{leader_uri})"
      name = leader_uri
      repl = nil
      log.debug "replicas: #{replicas}"
      loops = 0
      while loops < 10 and (replicas.empty? or repl == nil)
        repl = replicas.detect { |replica| replica.__drburi == name }
        log.debug "repl: #{repl && repl.__drburi}"
        unless repl
          log.debug "scan bonjour for other nodes (replicas)"
          relax 1
          bonjour_scan do |replica|
            log.debug "found replica #{replica.__drburi}"
            replicas << replica
          end
        end
        loops += 1
      end
      repl || raise(DRb::DRbError.new("Could not contact leader #{leader_uri}"))
    end

    def loop?
      true
    end

    def token
      "#{group_name}_token"
    end

    def cleanup
      at_exit do
        @memcache_client.delete(token) if leader?
      end
    end

    def pause_until_expiry(elapsed)
      pause_time = (iteration_length - elapsed).to_f
      if pause_time > 0
        relax(pause_time)
      else
        raise ArgumentError, "Negative iteration time left. Assuming the worst and exiting... #{iteration_length}/#{elapsed}"
      end
    end

    def relax(time)
      sleep time
    end

    # Nominate ourself as leader by contacting the memcached server
    # and attempting to add the token with our name attached.
    def nominate
      log.debug("try to nominate")
      @memcache_client.add(token, @uri, iteration_length)
      @nominated_at = Time.now
      @leader_uri = nil
    end

    def leader_uri
      @leader_uri ||= @memcache_client.get(token)
    end

    # Check to see if we are leader by looking at the process name
    # associated with the token.
    def leader?
      until_next_iteration > 0 && @uri == leader_uri
    end

    # Easy to mock or monkey-patch if another MemCache client is preferred.
    def client_for(servers)
      MemCache.new(servers)
    end

    def time_for(&block)
      a = Time.now
      yield
      Time.now - a
    end

    def register_with_bonjour
      server = DRb.start_service(nil, self)
      @uri = DRb.uri
      @port = URI.parse(DRb.uri).port

      # Register our DRb server with Bonjour.
      name = "#{self.group_name}-#{local_ip}-#{$$}"
      type = "_#{group_name}._tcp"
      domain = "local"
      log.debug "register service #{name} of type #{type} within domain #{domain} at port #{@port}"
      handle = Net::DNS::MDNSSD.register(name, type, domain, @port) do |reply|
        log.debug "registered as #{reply.fullname}"
      end

      # ['INT', 'TERM'].each { |signal|
      #   trap(signal) do
      #     handle.stop
      #     server.stop_service
      #   end
      # }
    end

    def bonjour_scan
      Net::DNS::MDNSSD.browse("_#{group_name}._tcp") do |b|
        Net::DNS::MDNSSD.resolve(b.name, b.type, b.domain) do |r|
          yield DRbObject.new(nil, "druby://#{r.target}:#{r.port}")
          unless !@domain || r.target =~ /\.#{@domain}$/
            yield DRbObject.new(nil, "druby://#{r.target}.#{@domain}:#{r.port}")
          end
        end
      end
    end

    # http://coderrr.wordpress.com/2008/05/28/get-your-local-ip-address/
    def local_ip
      orig, Socket.do_not_reverse_lookup = Socket.do_not_reverse_lookup, true # turn off reverse DNS resolution temporarily

      UDPSocket.open do |s|
        s.connect '64.233.187.99', 1
        IPAddr.new(s.addr.last).to_i
      end
    ensure
      Socket.do_not_reverse_lookup = orig
    end

  end
end
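The queue mechanics of +initialize_buckets+ and +bucket_request+ can be seen in isolation with a plain array (a sketch only; real peers receive each popped bucket over DRb):

```ruby
# Standalone sketch of the bucket hand-out: the leader fills its queue with
# 0...bucket_count (as initialize_buckets does), peers pop one bucket at a
# time, and a nil pop signals "no more work this iteration, go to sleep".
bucket_count = 4
buckets = []
bucket_count.times { |idx| buckets << idx }

processed = []
loop do
  bucket = buckets.pop        # bucket_request delivers @buckets.pop
  break if bucket.nil?        # drained: peers relax until the next iteration
  processed << bucket
end
```

Since Array#pop takes from the tail, buckets are handed out highest-index first here; the mapping from bucket number to actual work is left to the application either way.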
data/lib/politics/token_worker.rb
ADDED
@@ -0,0 +1,174 @@
begin
  require 'memcache'
rescue LoadError => e
  puts "Unable to load memcache client, please run `sudo gem install memcache-client`: #{e.message}"
  exit(1)
end

module Politics

  # An algorithm to provide leader election between a set of identical processing daemons.
  #
  # Each TokenWorker is an instance which needs to perform some processing.
  # The worker instance must obtain the leader token before performing some task.
  # We use a memcached server as a central token authority to provide a shared,
  # network-wide view for all processors. This reliance on a single resource means
  # if your memcached server goes down, so do the processors. Oftentimes,
  # this is an acceptable trade-off since many high-traffic web sites would
  # not be usable without memcached running anyhow.
  #
  # Essentially each TokenWorker attempts to elect itself every +:iteration_length+
  # seconds by simply setting a key in memcached to its own name. Memcached tracks
  # which name got there first. The key expires after +:iteration_length+ seconds.
  #
  # Example usage:
  #  class Analyzer
  #    include Politics::TokenWorker
  #
  #    def initialize
  #      register_worker 'analyzer', :iteration_length => 120, :servers => ['localhost:11211']
  #    end
  #
  #    def start
  #      process do
  #        # do analysis here, will only be done when this process
  #        # is actually elected leader, otherwise it will sleep for
  #        # iteration_length seconds.
  #      end
  #    end
  #  end
  #
  # Notes:
  # * This will not work with multiple instances in the same Ruby process.
  #   The library is only designed to elect a leader from a set of processes, not instances within
  #   a single process.
  # * The algorithm makes no attempt to keep the same leader during the next iteration.
  #   This can oftentimes be quite beneficial (e.g. leveraging a warm cache from the last iteration)
  #   for performance but is left to the reader to implement.
  module TokenWorker

    def self.included(model) #:nodoc:
      model.class_eval do
        attr_accessor :memcache_client, :token, :iteration_length, :worker_name
        class << self
          attr_accessor :worker_instance #:nodoc:
        end
      end
    end

    # Register this instance as a worker.
    #
    # Options:
    # +:iteration_length+:: The length of a processing iteration, in seconds. The
    #                       leader's 'reign' lasts for this length of time.
    # +:servers+:: An array of memcached server strings
    def register_worker(name, config={})
      # Track the latest instance of this class; there's really only supposed to be
      # a single TokenWorker instance per process.
      self.class.worker_instance = self

      options = { :iteration_length => 60, :servers => ['localhost:11211'] }
      options.merge!(config)

      self.token = "#{name}_token"
      self.memcache_client = client_for(Array(options[:servers]))
      self.iteration_length = options[:iteration_length]
      self.worker_name = "#{Socket.gethostname}:#{$$}"

      cleanup
    end

    def process(*args, &block)
      verify_registration

      begin
        # Try to add our name as the worker with the master token.
        # If another process got there first, this is a noop.
        # We add an expiry so that the master token will constantly
        # need to be refreshed (in case the current leader dies).
        time = 0
        begin
          nominate

          if leader?
            Politics::log.info { "#{worker_name} elected leader at #{Time.now}" }
            # If we are the master worker, do the work.
            time = time_for do
              result = block.call(*args)
            end
          end
        rescue MemCache::MemCacheError => me
          Politics::log.error("Error from memcached, pausing until the next iteration...")
          Politics::log.error(me.message)
          Politics::log.error(me.backtrace.join("\n"))
          self.memcache_client.reset
        end

        pause_until_expiry(time)
        reset_state
      end while loop?
    end

    private

    def reset_state
      @leader = nil
    end

    def verify_registration
      unless self.class.worker_instance
        raise ArgumentError, "Cannot call process without first calling register_worker"
      end
      unless self.class.worker_instance == self
        raise SecurityError, "Only one instance of #{self.class} per process. Another instance was created after this one."
      end
    end

    def loop?
      true
    end

    def cleanup
      at_exit do
        memcache_client.delete(token) if leader?
      end
    end

    def pause_until_expiry(elapsed)
      pause_time = (iteration_length - elapsed).to_f
      if pause_time > 0
        relax(pause_time)
      else
        raise ArgumentError, "Negative iteration time left. Assuming the worst and exiting... #{iteration_length}/#{elapsed}"
      end
    end

    def relax(time)
      sleep time
    end

    # Nominate ourself as leader by contacting the memcached server
    # and attempting to add the token with our name attached.
    # The result will tell us if memcached stored our value and therefore
    # if we are now leader.
    def nominate
      result = memcache_client.add(token, worker_name, (iteration_length * 0.9).to_i)
      @leader = (result =~ /\ASTORED/)
    end

    def leader?
      @leader
    end

    # Easy to mock or monkey-patch if another MemCache client is preferred.
    def client_for(servers)
      MemCache.new(servers)
    end

    def time_for(&block)
      a = Time.now
      yield
      Time.now - a
    end
  end
end
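The iteration pacing in +pause_until_expiry+ and +nominate+ above reduces to simple arithmetic, sketched here with illustrative values:

```ruby
# Sketch of TokenWorker's pacing: after the leader works for `elapsed`
# seconds, every worker sleeps the remainder of the iteration so that all
# nominations line up again. Values below are illustrative, not defaults.
iteration_length = 60
elapsed = 12.5
pause_time = (iteration_length - elapsed).to_f   # what pause_until_expiry sleeps

# The token TTL is 90% of the iteration (iteration_length * 0.9), so the
# token expires slightly before the next round of nominations begins.
ttl = (iteration_length * 0.9).to_i
```

This is why work taking longer than the iteration raises: a negative remainder means the token already expired mid-task and another worker may have taken over.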
data/spec/static_queue_worker_spec.rb
ADDED
@@ -0,0 +1,54 @@
# FIXME: extract the upper part into spec_helper.rb
require 'rubygems'
$:.unshift(File.dirname(__FILE__) + '/../lib')
require File.dirname(__FILE__) + '/../lib/init'
Politics::log.level = Logger::FATAL

class Worker
  include Politics::StaticQueueWorker
  def initialize
    log.level = Logger::FATAL
    register_worker 'worker', 10, :iteration_length => 10
  end

  def start
    process_bucket do |bucket|
      sleep 1
    end
  end
end

describe Worker do
  before do
    @worker = Worker.new
  end

  it "should provide 'until_next_iteration' even if nominate was not completed" do
    @worker.until_next_iteration
  end

  it "should return zero for 'until_next_iteration' if nominate was not completed" do
    @worker.until_next_iteration.should == 0
  end

  describe Worker, "when processing bucket" do
    before do
      @worker.stub!(:until_next_iteration).and_return 666
      @worker.should_receive(:loop?).and_return true, true, true, false
    end

    it "should relax until next iteration on MemCache errors during nomination" do
      @worker.should_receive(:nominate).at_least(1).and_raise MemCache::MemCacheError.new("Buh!")
      @worker.should_receive(:relax).with(666).exactly(4).times

      @worker.start
    end

    it "should relax until next iteration on MemCache errors during request for leader" do
      @worker.should_receive(:leader_uri).at_least(1).and_raise(MemCache::MemCacheError.new("Buh!"))
      @worker.should_receive(:relax).with(666).exactly(4).times

      @worker.start
    end
  end
end
data/test/static_queue_worker_test.rb
ADDED
@@ -0,0 +1,42 @@
+require File.dirname(__FILE__) + '/test_helper'
+
+Thread.abort_on_exception = true
+
+class Worker
+  include Politics::StaticQueueWorker
+  def initialize
+    register_worker 'worker', 10, :iteration_length => 10
+  end
+
+  def start
+    process_bucket do |bucket|
+      sleep 1
+    end
+  end
+end
+
+class StaticQueueWorkerTest < Test::Unit::TestCase
+
+  context "nodes" do
+    setup do
+      @nodes = []
+      5.times do
+        @nodes << nil
+      end
+    end
+
+    should "start up" do
+      processes = @nodes.map do
+        fork do
+          ['INT', 'TERM'].each { |signal|
+            trap(signal) { exit(0) }
+          }
+          Worker.new.start
+        end
+      end
+      sleep 10
+      puts "Terminating"
+      Process.kill('INT', *processes)
+    end
+  end
+end
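The integration test above forks worker processes, lets them run, then signals them to exit. The same orchestration pattern, reduced to a runnable sketch with a plain sleep in place of `Worker.new.start`:

```ruby
# Fork a few children, each installing a clean-exit INT handler.
pids = 3.times.map do
  fork do
    trap('INT') { exit(0) }  # exit cleanly on signal, as the test expects
    sleep                    # stands in for Worker.new.start
  end
end

sleep 0.5                    # give the children time to install their traps
Process.kill('INT', *pids)

# Reap each child and confirm every one terminated with status 0.
statuses = pids.map { |pid| Process.wait2(pid).last }
puts statuses.all?(&:success?)
```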
data/test/test_helper.rb
ADDED
@@ -0,0 +1,19 @@
+require 'rubygems'
+require 'test/unit'
+
+begin
+  gem 'thoughtbot-shoulda', '>=2.0.2'
+  require 'shoulda'
+rescue LoadError => e
+  puts "Please install shoulda: `sudo gem install thoughtbot-shoulda -s http://gems.github.com`"
+end
+
+begin
+  require 'mocha'
+rescue LoadError => e
+  puts "Please install mocha: `sudo gem install mocha`"
+end
+
+$:.unshift(File.dirname(__FILE__) + '/../lib')
+require File.dirname(__FILE__) + '/../lib/init'
+Politics::log.level = Logger::WARN
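The helper's optional-dependency pattern, extracted into a standalone sketch (`try_require` is a hypothetical name, not part of the gem): attempt to load a library, and on failure print an install hint instead of aborting the test run.

```ruby
# Hypothetical helper mirroring the begin/require/rescue LoadError idiom.
def try_require(lib, install_hint)
  require lib
  true
rescue LoadError
  puts "Please install #{lib}: `#{install_hint}`"
  false
end

# A library that certainly does not exist exercises the hint path.
loaded = try_require('no_such_gem_xyz', 'gem install no_such_gem_xyz')
puts "loaded=#{loaded}"  # => loaded=false
```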
data/test/token_worker_test.rb
ADDED
@@ -0,0 +1,78 @@
+require 'test_helper'
+
+class TokenWorkerTest < Test::Unit::TestCase
+
+  context "token workers" do
+    setup do
+      @harness = Class.new
+      @harness.send(:include, Politics::TokenWorker)
+      @harness.any_instance.stubs(:cleanup)
+      @harness.any_instance.stubs(:loop?).returns(false)
+      @harness.any_instance.stubs(:pause_until_expiry)
+      @harness.any_instance.stubs(:relax)
+
+      @worker = @harness.new
+    end
+
+    should "test_instance_property_accessors" do
+      assert @worker.iteration_length = 20
+      assert_equal 20, @worker.iteration_length
+    end
+
+    should 'test_tracks_a_registered_singleton' do
+      assert_nil @worker.class.worker_instance
+      @worker.register_worker('testing')
+      assert_equal @worker.class.worker_instance, @worker
+    end
+
+    should 'not process if they are not leader' do
+      @worker.expects(:nominate)
+      @worker.expects(:leader?).returns(false)
+      @worker.register_worker('testing')
+      @worker.process do
+        assert false
+      end
+    end
+
+    should 'handle unexpected MemCache errors' do
+      @worker.expects(:nominate)
+      @worker.expects(:leader?).raises(MemCache::MemCacheError)
+      Politics::log.expects(:error).times(3)
+
+      @worker.register_worker('testing')
+      @worker.process do
+        assert false
+      end
+    end
+
+    should 'process if they are leader' do
+      @worker.expects(:nominate)
+      @worker.expects(:leader?).returns(true)
+      @worker.register_worker('testing')
+
+      worked = 0
+      @worker.process do
+        worked += 1
+      end
+
+      assert_equal 1, worked
+    end
+
+    should 'not allow processing without registration' do
+      assert_raises ArgumentError do
+        @worker.process
+      end
+    end
+
+    should 'not allow processing by old instances' do
+      @worker.register_worker('testing')
+
+      foo = @worker.class.new
+      foo.register_worker('testing')
+
+      assert_raises SecurityError do
+        @worker.process
+      end
+    end
+  end
+end
metadata
ADDED
@@ -0,0 +1,98 @@
+--- !ruby/object:Gem::Specification
+name: infopark-politics
+version: !ruby/object:Gem::Version
+  version: 0.2.7
+platform: ruby
+authors:
+- Mike Perham
+- "Tilo Pr\xC3\xBCtz"
+autorequire:
+bindir: bin
+cert_chain: []
+
+date: 2009-05-29 00:00:00 -07:00
+default_executable:
+dependencies:
+- !ruby/object:Gem::Dependency
+  name: memcache-client
+  type: :runtime
+  version_requirement:
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: 1.5.0
+    version:
+- !ruby/object:Gem::Dependency
+  name: starling-starling
+  type: :runtime
+  version_requirement:
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: 0.9.8
+    version:
+- !ruby/object:Gem::Dependency
+  name: net-mdns
+  type: :runtime
+  version_requirement:
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: "0.4"
+    version:
+description: Algorithms and Tools for Distributed Computing in Ruby.
+email: tilo@infopark.de
+executables: []
+
+extensions: []
+
+extra_rdoc_files:
+- History.rdoc
+- LICENSE
+- README.rdoc
+files:
+- examples/queue_worker_example.rb
+- examples/token_worker_example.rb
+- lib/init.rb
+- lib/politics.rb
+- lib/politics/discoverable_node.rb
+- lib/politics/static_queue_worker.rb
+- lib/politics/token_worker.rb
+- lib/politics/version.rb
+- History.rdoc
+- LICENSE
+- README.rdoc
+has_rdoc: true
+homepage: http://github.com/infopark/politics
+post_install_message:
+rdoc_options:
+- --charset=UTF-8
+require_paths:
+- lib
+required_ruby_version: !ruby/object:Gem::Requirement
+  requirements:
+  - - ">="
+    - !ruby/object:Gem::Version
+      version: "0"
+  version:
+required_rubygems_version: !ruby/object:Gem::Requirement
+  requirements:
+  - - ">="
+    - !ruby/object:Gem::Version
+      version: "0"
+  version:
+requirements: []
+
+rubyforge_project:
+rubygems_version: 1.2.0
+signing_key:
+specification_version: 2
+summary: Algorithms and Tools for Distributed Computing in Ruby.
+test_files:
+- spec/static_queue_worker_spec.rb
+- test/test_helper.rb
+- test/static_queue_worker_test.rb
+- test/token_worker_test.rb