archipelago 0.2.5 → 0.2.6

data/README CHANGED
@@ -4,7 +4,7 @@ It consists of several different parts, that can be used standalone or in conjun
 
 == Dependencies:
 Archipelago::Hashish::BerkeleyHashishProvider:: ruby bdb: http://moulon.inra.fr/ruby/bdb.html
-String:: ruby inline: http://www.zenspider.com/ZSS/Products/RubyInline/
+Archipelago::Client::Base:: rbtree: in this same repository is a patched version, original code is at http://www.geocities.co.jp/SiliconValley-PaloAlto/3388/rbtree/README.html
 
 == Sub packages:
 Archipelago::Disco:: A UDP multicast discovery service useful to find services in your network with a minimum of configuration.
data/TODO CHANGED
@@ -1,19 +1,4 @@
 
- * Create a failover/redundancy framework
- * For example: Create a new HashishProvider with built in redundancy,
-   for example using the Chord project: http://pdos.csail.mit.edu/chord/
- * Or: Create migration methods that move objects between Chests opon
-   startup and shutdown, and make them keep backups at each others
-   persistence backends.
- * Or: Create something that stores data the way Chord does (with erasure
-   codes) but doesnt use the same look up mechanism.
-   * Problem: We still have to implement the entire maintenance protocol
-     of Chord (continously checking if our data is safely replicated across
-     the network, continously checking that our data belong with us)
-
- * Replace Raider with some well known near-optimal erasure code, for example
-   Online Codes: http://en.wikipedia.org/wiki/Online_codes
-
  * Make Chest aware about whether transactions have affected it 'for real' ie
    check whether the instance before the call differs from the instance after
    the call. Preferably without incurring performance lossage.
@@ -22,11 +7,17 @@
  * This is now done. But it is not yet used to provide intelligence for the
    transaction mechanism. How should it compare dirty state before and after?
 
- * Test the transaction recovery mechanism of Chest.
-
  * Create a memcached-starter that publishes the address to the started memcached
    instance on the Disco network.
 
  * Create a memcached-client that uses Disco instance to find all memcached instances
    in the network and distribute requests among them.
 
+ * Create a file server service that handles really big files within transactions
+   and allows the POSTing and GETing of them via HTTP.
+
+ * Test the ability of Dubloons to reconnect to the proper chest.
+   * If the first chest is disconnected and doesnt reappear.
+   * If the first chest is disconnected and then reappears.
+   * If a new chest that takes responsibility for the Dubloon appears.
+
data/lib/archipelago.rb CHANGED
@@ -16,15 +16,3 @@
 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
 
 $: << File.dirname(File.expand_path(__FILE__))
-
-require 'archipelago/bitstring'
-require 'archipelago/oneliner'
-require 'archipelago/raider'
-require 'archipelago/disco'
-require 'archipelago/current'
-require 'archipelago/tranny'
-require 'archipelago/treasure'
-require 'archipelago/client'
-require 'archipelago/pirate'
-require 'archipelago/cove'
-require 'archipelago/exxon'
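
With the blanket requires gone, archipelago.rb now only extends the load path, so callers presumably pull in just the sub-packages they use. A minimal sketch of that assumption (this require list is illustrative, not taken from the diff):

  require 'archipelago'          # now only sets up the load path
  require 'archipelago/disco'    # discovery: Jockey, Publishable, ...
  require 'archipelago/client'   # Archipelago::Client::Base
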
@@ -16,39 +16,65 @@
 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
 
 require 'archipelago/disco'
+require 'rbtree'
 
 module Archipelago
 
   module Client
 
+    #
+    # The initial interval between calling update_services!
+    #
     INITIAL_SERVICE_UPDATE_INTERVAL = 1
+    #
+    # The initial interval will be doubled each time update_services! is called,
+    # but will never be greater than the maximum interval.
+    #
     MAXIMUM_SERVICE_UPDATE_INTERVAL = 60
+    #
+    # The timeout that will be used in the first lookup to ensure that we actually
+    # have any services.
+    #
+    INITIAL_LOOKUP_TIMEOUT = 5
 
     class Base
-      attr_reader :jockey
+      include Archipelago::Disco::Camel
+      attr_reader :jockey, :services, :service_descriptions
+
       #
-      # Initialize an instance using Archipelago::Disco::MC or <i>:jockey</i> if given,
+      # Initialize a Client using Archipelago::Disco::MC or <i>:jockey</i> if given,
       # or a new Archipelago::Disco::Jockey if none, that looks for new services
       # <i>:initial_service_update_interval</i> or INITIAL_SERVICE_UPDATE_INTERVAL,
       # when it starts and never slower than every <i>:maximum_service_update_interval</i>
       # or MAXIMUM_SERVICE_UPDATE_INTERVAL.
       #
-      def initialize(options = {})
-        setup(options)
+      def setup_client(options = {})
+        setup_jockey(options)
+
+        @initial_service_update_interval = options[:initial_service_update_interval] || INITIAL_SERVICE_UPDATE_INTERVAL
+        @maximum_service_update_interval = options[:maximum_service_update_interval] || MAXIMUM_SERVICE_UPDATE_INTERVAL
+        @service_descriptions = options[:service_descriptions]
+        @initial_lookup_timeout = options[:initial_lookup_timeout] || INITIAL_LOOKUP_TIMEOUT
+        @services = {}
+        @service_descriptions.each do |name, description|
+          t = RBTree.new
+          t.extend(Archipelago::Current::ThreadedCollection)
+          @services[name] = t
+        end
+
+        start_service_updater
+        start_subscriptions
       end
+
       #
-      # Sets up this instance with the given +options+.
+      # Finding our services dynamically.
       #
-      def setup(options = {})
-        @jockey.stop! if defined?(@jockey) && @jockey != Archipelago::Disco::MC
-        if defined?(Archipelago::Disco::MC)
-          @jockey = options[:jockey] || Archipelago::Disco::MC
+      def method_missing(meth, *args)
+        if @services.include?(meth)
+          return @services[meth]
         else
-          @jockey = options[:jockey] || Archipelago::Disco::Jockey.new
+          super
         end
-
-        @initial_service_update_interval = options[:initial_service_update_interval] || INITIAL_SERVICE_UPDATE_INTERVAL
-        @maximum_service_update_interval = options[:maximum_service_update_interval] || MAXIMUM_SERVICE_UPDATE_INTERVAL
       end
 
       #
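
A sketch of how a subclass might use the new setup_client plus the method_missing accessors. The ChestClient name and the :chests query are invented for illustration; :service_descriptions, the interval/timeout options and the per-name service collections are from the code above.

  require 'archipelago/client'

  class ChestClient < Archipelago::Client::Base
    def initialize(options = {})
      # One ordered collection per named description is created in setup_client,
      # then kept fresh by the updater thread and the :found/:lost subscriptions.
      descriptions = {:chests => {:class => 'Archipelago::Treasure::Chest'}}
      setup_client(options.merge(:service_descriptions => descriptions))
    end
  end

  client = ChestClient.new(:initial_lookup_timeout => 5)
  client.chests    # method_missing returns the collection of discovered chest records
  client.stop!     # kills the updater thread and drops the subscriptions
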
@@ -57,26 +83,137 @@ module Archipelago
       #
       def stop!
         @service_update_thread.kill if @service_update_thread
+        stop_subscriptions
         unless defined?(Archipelago::Disco::MC) && @jockey && @jockey == Archipelago::Disco::MC
           @jockey.stop!
         end
       end
 
       #
-      # Override this to do whatever you want to when finding services.
+      # Override this if you want to do something special before or
+      # after calling update_services!
+      #
+      def around_update_services(&block)
+        yield
+      end
+
+      #
+      # Make our @jockey lookup all our services.
       #
-      def update_services!
-        # /moo
+      def update_services!(options = {})
+        timeout = options[:timeout] || 0
+        validate = options[:validate] || false
+        around_update_services do
+          @service_descriptions.each do |name, description|
+            #
+            # This sure sounds inefficient, but hey, listen up:
+            #
+            # * RBTrees are nice and fast when it comes to looking up ordered stuff.
+            # * They are horribly slow in all other ways.
+            # * As an example: It takes (as of this writing) 10x the time to insert 10k elements in an RBTree
+            #   as it takes to sort 10k elements in an Array.
+            #
+            # This means that using them in Archipelago::Disco::ServiceLocker will be inefficient as hell, since they
+            # are merging and creating new maps all the time. But in here, I expect us to not renew our service lists
+            # more than on average once every MAXIMUM_SERVICE_UPDATE_INTERVAL, so that MAY make it worthwhile to do
+            # the RBTree song and dance in here. Hopefully.
+            #
+            @services[name] = @jockey.lookup(Archipelago::Disco::Query.new(description), timeout)
+            @services[name].convert_to_tree!
+            @services[name].validate! if validate
+          end
+        end
       end
 
       private
 
+      #
+      # Subscribe to our service changes.
+      #
+      def start_subscriptions
+        @service_descriptions.each do |name, description|
+          @jockey.subscribe(:found,
+                            Archipelago::Disco::Query.new(description),
+                            object_id) do |record|
+            @services[name][record[:service_id]] = record
+          end
+          @jockey.subscribe(:lost,
+                            Archipelago::Disco::Query.new(description),
+                            object_id) do |record|
+            @services[name].delete(record[:service_id])
+          end
+        end
+      end
+
+      #
+      # Stop our subscriptions to our service changes.
+      #
+      def stop_subscriptions
+        @service_descriptions.each do |name, description|
+          @jockey.unsubscribe(:found,
+                              Archipelago::Disco::Query.new(description),
+                              object_id)
+          @jockey.subscribe(:lost,
+                            Archipelago::Disco::Query.new(description),
+                            object_id)
+        end
+      end
+
+      #
+      # Gets the +n+ smallest keys from services denoted +service_name+ that
+      # are greater than +o+.
+      #
+      # Will loop to the beginning if the number of elements run out.
+      #
+      def get_least_greater_than(service_name, o, n)
+        rval = []
+        self.send(service_name).each(o) do |id, desc|
+          rval << desc if id > o
+          break if rval.size == n
+        end
+        return rval if rval.size == n
+        fill(self.send(service_name), rval, :each, n)
+      end
+
+      #
+      # Gets the +n+ values for the largest keys from +hash+ that
+      # are less than +o+.
+      #
+      # Will loop to the end if the number of elements run out.
+      #
+      def get_greatest_less_than(service_name, o, n)
+        rval = []
+        self.send(service_name).reverse_each(o) do |id, desc|
+          rval << desc if id < o
+          break if rval.size == n
+        end
+        return rval if rval.size == n
+        fill(self.send(service_name), rval, :reverse_each, n)
+      end
+
+      #
+      # Will fill +receiver+ up with the return values of +collection+.send(+meth+)
+      # until +receiver+ is of size +n+.
+      #
+      def fill(collection, receiver, meth, n)
+        unless collection.empty?
+          while receiver.size < n
+            collection.send(meth) do |id, desc|
+              receiver << desc
+              break if receiver.size == n
+            end
+          end
+        end
+        return receiver
+      end
+
       #
       # Start a thread looking up existing chests between every
       # +initial+ and +maximum+ seconds.
       #
       def start_service_updater
-        update_services!
+        update_services!(:timeout => @initial_lookup_timeout, :validate => true)
+        @service_update_thread.kill if defined?(@service_update_thread)
         @service_update_thread = Thread.start do
           standoff = @initial_service_update_interval
           loop do
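
The neighbour helpers above walk the sorted service tree from a given key and wrap around when they run out of entries. A plain-Ruby illustration of that wrap-around idea (using a sorted Array rather than the patched RBTree, so nothing here is the gem's API):

  # Illustration only: n smallest ids greater than o, wrapping to the start,
  # roughly what get_least_greater_than does over the service tree.
  def least_greater_than(sorted_ids, o, n)
    picked = sorted_ids.select { |id| id > o }.first(n)
    picked += sorted_ids.first(n - picked.size) if picked.size < n
    picked
  end

  least_greater_than(%w(03 1a 5f 9c e0), '9c', 2)   #=> ["e0", "03"]
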
@@ -213,7 +213,7 @@ module Archipelago
         sync_initialize
       end
     end
-
+
     #
     # Just a convenience empty class with locking functionality.
     #
@@ -20,9 +20,11 @@ require 'thread'
 require 'ipaddr'
 require 'pp'
 require 'archipelago/current'
+require 'archipelago/hashish'
 require 'drb'
 require 'set'
 require 'digest/sha1'
+require 'forwardable'
 
 module Archipelago
 
@@ -53,7 +55,7 @@ module Archipelago
     # Default pause between trying to validate all services we
     # know about.
     #
-    VALIDATION_INTERVAL = 60
+    VALIDATION_INTERVAL = 30
     #
     # Only save stuff that we KNOW we want.
     #
@@ -71,9 +73,48 @@ module Archipelago
     # The host we are running on.
     #
     HOST = "#{Socket::gethostbyname(Socket::gethostname)[0]}" rescue "localhost"
+
+    #
+    # Anything that has a @jockey that is an Archipelago::Disco::Jockey
+    # can include this for simplicity.
+    #
+    module Camel
+      private
+      #
+      # Setup our @jockey as a Archipelago::Disco::Jockey with given options.
+      #
+      # It will first stop any @jockey we currently have that is NOT the global Archipelago::Disco::MC.
+      #
+      # If +jockey_options+ or +jockey+ are given it will always use a new or given Archipelago::Disco::Jockey,
+      # otherwise it will try to use the global Archipelago::Disco::Jockey instead.
+      #
+      def setup_jockey(options = {})
+        @jockey.stop! if defined?(@jockey) && (!defined?(Archipelago::Disco::MC) || @jockey != Archipelago::Disco::MC)
+
+        @jockey_options ||= {}
+        jockey_options = @jockey_options.merge(options[:jockey_options] || {})
+
+        if options[:jockey]
+          @jockey = options[:jockey]
+          unless jockey_options.empty?
+            @jockey.setup(jockey_options)
+          end
+        else
+          if jockey_options.empty?
+            if defined?(Archipelago::Disco::MC)
+              @jockey = Archipelago::Disco::MC
+            else
+              @jockey = Archipelago::Disco::Jockey.new
+            end
+          else
+            @jockey = Archipelago::Disco::Jockey.new(jockey_options)
+          end
+        end
+      end
+    end
 
     #
-    # A module to simplify publishing services.
+    # A module to simplify publishing of services.
     #
     # If you include it you can use the publish! method
     # at your convenience.
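
The Camel module added above centralises the "reuse the global MC jockey unless told otherwise" logic so that anything holding a @jockey can share it. A sketch of a hypothetical includer (the Scout class is invented; setup_jockey and its option keys are from the module above):

  class Scout
    include Archipelago::Disco::Camel

    attr_reader :jockey

    def initialize(options = {})
      # Uses Archipelago::Disco::MC when no :jockey or :jockey_options are given,
      # otherwise builds (or reconfigures) a dedicated Jockey.
      setup_jockey(options)
    end
  end

  Scout.new.jockey                                                  # the shared Archipelago::Disco::MC
  Scout.new(:jockey_options => {:validation_interval => 10}).jockey # gets its own Jockey
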
@@ -88,7 +129,7 @@ module Archipelago
     # define <b>@persistence_provider</b> before you call <b>initialize_publishable</b>.
     #
     module Publishable
-
+      include Camel
       #
       # Also add the ClassMethods to +base+.
       #
@@ -113,21 +154,6 @@ module Archipelago
         DRbObject.new(self)._dump(dummy_param)
       end
 
-      #
-      # Will initialize this instance with @service_description and @jockey_options
-      # and merge these with the optionally given <i>:service_description</i> and
-      # <i>:jockey_options</i>.
-      #
-      def initialize_publishable(options = {})
-        @service_description = {
-          :service_id => service_id,
-          :validator => self,
-          :service => self,
-          :class => self.class.name
-        }.merge(options[:service_description] || {})
-        @jockey_options = options[:jockey_options] || {}
-      end
-
       #
       # Create an Archipelago::Disco::Jockey for this instance using @jockey_options
       # or optionally given <i>:jockey_options</i>.
@@ -136,8 +162,10 @@ module Archipelago
       # <i>:service_description</i>.
       #
       def publish!(options = {})
-        @jockey ||= defined?(Archipelago::Disco::MC) ? Archipelago::Disco::MC : Archipelago::Disco::Jockey.new(@jockey_options.merge(options[:jockey_options] || {}))
-        @jockey.publish(Archipelago::Disco::Record.new(@service_description.merge(options[:service_description] || {})))
+        setup_jockey(options)
+        around_publish do
+          @jockey.publish(Archipelago::Disco::Record.new(@service_description.merge(options[:service_description] || {})))
+        end
       end
 
       #
@@ -154,30 +182,104 @@ module Archipelago
       #
       # Stops the publishing of this Publishable.
       #
-      def stop!
-        if valid?
-          @valid = false
-          if defined?(Archipelago::Disco::MC) && @jockey == Archipelago::Disco::MC
-            @jockey.unpublish(self.service_id)
-          else
-            @jockey.stop!
+      def unpublish!
+        if defined?(@jockey)
+          if valid?
+            around_unpublish do
+              @valid = false
+              if defined?(Archipelago::Disco::MC) && @jockey == Archipelago::Disco::MC
+                @jockey.unpublish(self.service_id)
+              else
+                @jockey.stop!
+              end
+            end
           end
         end
       end
-
+
+      #
+      # Closes the persistence backend of this Publishable.
+      #
+      def close!
+        unpublish!
+        around_close do
+          @persistence_provider.close!
+        end
+      end
+
       #
       # Returns our semi-unique id so that we can be found again.
       #
       def service_id
+        return @service_id ||= @metadata["service_id"]
+      end
+
+      private
+
+      #
+      # Will initialize this instance with @service_description and @jockey_options
+      # and merge these with the optionally given <i>:service_description</i> and
+      # <i>:jockey_options</i>.
+      #
+      def initialize_publishable(options = {})
         #
         # The provider of happy magic persistent hashes of different kinds.
         #
-        @persistence_provider ||= Archipelago::Hashish::BerkeleyHashishProvider.new(Pathname.new(File.expand_path(__FILE__)).parent.join(self.class.name + ".db"))
+        @persistence_provider ||= Archipelago::Hashish::BerkeleyHashishProvider.new(options[:persistence_directory] || Pathname.new(File.expand_path(__FILE__)).parent.join(self.class.name + ".db"))
         #
         # Stuff that didnt fit in any of the other databases.
         #
         @metadata ||= @persistence_provider.get_hashish("metadata")
-        return @metadata["service_id"] ||= Digest::SHA1.hexdigest("#{HOST}:#{Time.new.to_f}:#{self.object_id}:#{rand(1 << 32)}").to_s
+        #
+        # Our service_description that is supposed to define and describe
+        # us in the discovery network.
+        #
+        @service_description = {
+          :service_id => service_id || Digest::SHA1.hexdigest("#{HOST}:#{Time.new.to_f}:#{self.object_id}:#{rand(1 << 32)}").to_s,
+          :validator => self,
+          :service => self,
+          :class => self.class.name
+        }.merge(options[:service_description] || {})
+        #
+        # Our service_id that is supposed to be unique and persistent.
+        #
+        @metadata["service_id"] = @service_description[:service_id]
+        #
+        # Setup our Archipelago::Disco::Jockey.
+        #
+        @jockey_options = options[:jockey_options] || {}
+      end
+
+      #
+      # Override this if you want to do something magical before or after you
+      # get published.
+      #
+      def around_publish(&block)
+        yield
+      end
+
+      #
+      # Override this if you want to do something magical before or after you
+      # get unpublished.
+      #
+      def around_stop(&block)
+        yield
+      end
+
+      #
+      # Override this if you want to do something magical before or after you
+      # get closed.
+      #
+      def around_close(&block)
+        yield
+      end
+
+      #
+      # Override this if you want to do something magical before or after you
+      # get unpublished.
+      #
+      def around_unpublish(&block)
+        yield
       end
 
     end
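
A sketch of a service using the reworked Publishable lifecycle. The EchoService name and the persistence path are invented; initialize_publishable, publish!, unpublish!, close! and the around_* hooks are the ones added above, and the persistence default still falls back to a BerkeleyDB database next to the source file unless :persistence_directory is given.

  class EchoService
    include Archipelago::Disco::Publishable

    def initialize(options = {})
      # Sets up @persistence_provider, @metadata, @service_description and @jockey_options.
      initialize_publishable(options)
    end

    # Optional hook: wraps the actual @jockey.publish call.
    def around_publish
      puts "publishing #{service_id}"
      yield
    end
  end

  service = EchoService.new(:persistence_directory => '/tmp/echo.db')
  service.publish!    # announce the service on the Disco network
  service.close!      # unpublish! and then close the persistence backend
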
@@ -196,8 +298,10 @@ module Archipelago
     # A Hash-like description of a service.
     #
     class ServiceDescription
+      extend Forwardable
       IGNORABLE_ATTRIBUTES = Set[:unicast_reply]
       attr_reader :attributes
+      def_delegators :@attributes, :[], :[]=, :each
       #
       # Initialize this service description with a hash
       # that describes its attributes.
@@ -206,14 +310,10 @@ module Archipelago
        @attributes = hash
      end
      #
-      # Forwards as much as possible to our Hash.
+      # Returns whether our @attributes are equal to that of +o+.
      #
-      def method_missing(meth, *args, &block)
-        if @attributes.respond_to?(meth)
-          @attributes.send(meth, *args, &block)
-        else
-          super(*args)
-        end
+      def eql?(o)
+        ServiceDescription === o && @attributes == o.attributes
      end
      #
      # Returns whether this ServiceDescription matches the given +match+.
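
With method_missing gone from ServiceDescription, only the explicitly delegated calls and eql? remain; a quick sketch of what that changes in practice (assuming the class lives under Archipelago::Disco, next to Record and Query):

  a = Archipelago::Disco::ServiceDescription.new(:class => 'Chest', :host => 'node1')
  b = Archipelago::Disco::ServiceDescription.new(:class => 'Chest', :host => 'node1')

  a[:class]    #=> "Chest"   (delegated to @attributes)
  a.eql?(b)    #=> true      (same attributes)
  a.keys       # NoMethodError now that the Hash is no longer reached via method_missing
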
@@ -272,40 +372,70 @@ module Archipelago
     # A container of services.
     #
     class ServiceLocker
+      extend Forwardable
       attr_reader :hash
-      include Archipelago::Current::Synchronized
       include Archipelago::Current::ThreadedCollection
-      def initialize(hash = nil)
-        super
-        @hash = hash || {}
+      def_delegators :@hash, :[], :size, :empty?, :values, :keys, :include?
+      def initialize(options = {})
+        @hash = options[:hash] || {}
+        @hash.extend(Archipelago::Current::ThreadedCollection)
+        @jockey = options[:jockey]
+      end
+      def each(*args, &block)
+        clone = @hash.clone
+        clone.extend(Archipelago::Current::ThreadedCollection)
+        clone.each(*args, &block)
+      end
+      def reverse_each(*args, &block)
+        clone = @hash.clone
+        clone.extend(Archipelago::Current::ThreadedCollection)
+        clone.reverse_each(*args, &block)
      end
      #
-      # Merge this locker with another.
+      # Set +key+ to +value+.
      #
-      def merge(sd)
-        rval = @hash.clone
-        rval.merge!(sd.hash)
-        ServiceLocker.new(rval)
+      def []=(key, value)
+        existed_before = @hash.include?(key)
+        @hash[key] = value
+        # Notifying AFTER the fact to avoid loops.
+        if @jockey && !existed_before
+          @jockey.instance_eval do notify_subscribers(:found, value) end
+        end
      end
      #
-      # Forwards as much as possible to our Hash.
+      # Delete +key+.
      #
-      def method_missing(meth, *args, &block)
-        if @hash.respond_to?(meth)
-          synchronize do
-            @hash.send(meth, *args, &block)
-          end
-        else
-          super(meth, *args, &block)
+      def delete(key)
+        value = @hash.delete(key)
+        # Notifying AFTER the fact to avoid loops.
+        @jockey.instance_eval do notify_subscribers(:lost, value) end if @jockey && value
+      end
+      #
+      # Will make this ServiceLocker convert its Hash into an RBTree.
+      #
+      def convert_to_tree!
+        t = RBTree.new
+        @hash.each do |k,v|
+          t[k] = v
        end
+        @hash = t
+      end
+      #
+      # Merge this locker with another.
+      #
+      def merge(sd)
+        rval = @hash.merge(sd.hash)
+        ServiceLocker.new(:hash => rval)
      end
      #
      # Find all containing services matching +match+.
      #
      def get_services(match)
        rval = ServiceLocker.new
-        self.each do |service_id, service_data|
-          rval[service_id] = service_data if service_data.matches?(match) && service_data.valid?
+        self.t_each do |service_id, service_data|
+          if service_data.matches?(match)
+            rval[service_id] = service_data
+          end
        end
        return rval
      end
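
A sketch of the reworked ServiceLocker: the notifying jockey here is a stand-in (anything whose notify_subscribers can be reached through instance_eval will do); []=, delete and the :found/:lost events are from the code above.

  class RecordingJockey
    attr_reader :events
    def initialize; @events = []; end
    private
    def notify_subscribers(event_type, record)
      @events << [event_type, record]
    end
  end

  jockey = RecordingJockey.new
  locker = Archipelago::Disco::ServiceLocker.new(:jockey => jockey)

  locker['id-1'] = {:class => 'Chest'}   # new key, fires :found
  locker['id-1'] = {:class => 'Chest'}   # key already present, no event
  locker.delete('id-1')                  # fires :lost
  jockey.events                          #=> [[:found, {...}], [:lost, {...}]]
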
@@ -313,9 +443,12 @@ module Archipelago
      # Remove all non-valid services.
      #
      def validate!
-        self.clone.each do |service_id, service_data|
-          self.delete(service_id) unless service_data.valid?
+        self.t_each do |service_id, service_data|
+          unless service_data.valid?
+            self.delete(service_id)
+          end
        end
+        return self
      end
    end
 
@@ -348,22 +481,38 @@ module Archipelago
      #
      def initialize(options = {})
        @valid = true
-        @remote_services = ServiceLocker.new
-        @local_services = ServiceLocker.new
+        @remote_services = ServiceLocker.new(:jockey => self)
+        @local_services = ServiceLocker.new(:jockey => self)
        @subscribed_services = Set.new
 
        @incoming = Queue.new
        @outgoing = Queue.new
 
        @new_service_semaphore = MonitorMixin::ConditionVariable.new(Archipelago::Current::Lock.new)
+        @service_change_subscribers_by_event_type = {:found => {}, :lost => {}}
+
+        @validation_interval = options[:validation_interval] || VALIDATION_INTERVAL
 
        setup(options)
+
+        start!
+      end
 
-        start_listener
-        start_unilistener
-        start_shouter
-        start_picker
-        start_validator(options[:validation_interval] || VALIDATION_INTERVAL)
+      #
+      # Will listen for +event_type+s matching the Query +match+
+      # and do +block+.call with the matching Record.
+      #
+      # Recognized +event_types+: :found, :lost
+      #
+      def subscribe(event_type, match, identity, &block)
+        @service_change_subscribers_by_event_type[event_type][[match, identity]] = block
+      end
+
+      #
+      # Will stop listening for +event_type+ and +match+.
+      #
+      def unsubscribe(event_type, match, identity)
+        @service_change_subscribers_by_event_type[event_type].delete([match, identity])
      end
 
      #
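
A sketch of the new subscription API on the Jockey; the query attributes and the :chest_watcher identity are examples, while subscribe/unsubscribe and the :found/:lost event types are as defined above.

  jockey = Archipelago::Disco::Jockey.new
  query = Archipelago::Disco::Query.new(:class => 'Archipelago::Treasure::Chest')

  # Callbacks run in their own thread whenever a matching record appears or disappears.
  jockey.subscribe(:found, query, :chest_watcher) { |record| puts "found #{record[:service_id]}" }
  jockey.subscribe(:lost,  query, :chest_watcher) { |record| puts "lost #{record[:service_id]}" }

  # Drop them again with the same query/identity pair.
  jockey.unsubscribe(:found, query, :chest_watcher)
  jockey.unsubscribe(:lost,  query, :chest_watcher)

  jockey.stop!
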
@@ -421,12 +570,12 @@ module Archipelago
      # Clears our local and remote services.
      #
      def clear!
-        @local_services = ServiceLocker.new
-        @remote_services = ServiceLocker.new
+        @local_services = ServiceLocker.new(:jockey => self)
+        @remote_services = ServiceLocker.new(:jockey => self)
      end
 
      #
-      # Stops all the threads in this instance.
+      # Stops all the threads and close all sockets in this instance.
      #
      def stop!
        if @valid
@@ -434,14 +583,24 @@ module Archipelago
          @local_services.each do |service_id, service_description|
            self.unpublish(service_id)
          end
+
          @listener_thread.kill
          @unilistener_thread.kill
-          @validator_thread.kill
+          until @incoming.empty?
+            sleep(0.01)
+          end
+          @listener.close
+          @unilistener.close
          @picker_thread.kill
+
          until @outgoing.empty?
            sleep(0.01)
          end
          @shouter_thread.kill
+          @sender.close
+          @unisender.close
+
+          @validator_thread.kill
        end
      end
 
@@ -459,12 +618,12 @@ module Archipelago
 
        @outgoing << [nil, match]
        known_services = @remote_services.get_services(match).merge(@local_services.get_services(match))
-        return known_services unless known_services.empty?
+        return known_services if timeout == 0 || !known_services.empty?
 
-        @new_service_semaphore.wait(standoff)
+        t = Time.new
+        @new_service_semaphore.wait([standoff, timeout].min)
        standoff *= 2
 
-        t = Time.new
        while Time.new < t + timeout
          known_services = @remote_services.get_services(match).merge(@local_services.get_services(match))
          return known_services unless known_services.empty?
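
The lookup change above makes a zero timeout return whatever is already cached instead of blocking; a small sketch, assuming the Jockey#lookup(query, timeout) signature used elsewhere in this diff:

  jockey = Archipelago::Disco::Jockey.new
  query = Archipelago::Disco::Query.new(:class => 'Archipelago::Treasure::Chest')

  cached = jockey.lookup(query, 0)   # returns the currently known ServiceLocker immediately, even if empty
  fresh  = jockey.lookup(query, 5)   # shouts on the network and waits up to roughly five seconds for answers
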
@@ -503,8 +662,45 @@ module Archipelago
        end
      end
 
+      #
+      # Validate all our known services.
+      #
+      def validate!
+        @local_services.validate!
+        @remote_services.validate!
+      end
+
      private
 
+      #
+      # Start all our threads.
+      #
+      def start!(options = {})
+        start_listener
+        start_unilistener
+        start_shouter
+        start_picker
+        start_validator(options[:validation_interval] || @validation_interval)
+      end
+
+      #
+      # Will notify all subscribers to +event_type+ looking for +record+.
+      #
+      def notify_subscribers(event_type, record)
+        @service_change_subscribers_by_event_type[event_type].clone.each do |query_and_identity, proc|
+          query = query_and_identity.first
+          Thread.new do
+            begin
+              proc.call(record)
+            rescue Exception => e
+              @service_change_subscribers_by_event_type[event_type].delete(query_and_identity)
+              puts e
+              pp e.backtrace
+            end
+          end if record.matches?(query)
+        end
+      end
+
      #
      # Start the validating thread.
      #
@@ -512,8 +708,7 @@ module Archipelago
        @validator_thread = Thread.new do
          loop do
            begin
-              @local_services.validate!
-              @remote_services.validate!
+              validate!
              sleep(validation_interval)
            rescue Exception => e
              puts e