improved-queue 1.0

Files changed (2)
  1. data/lib/improved-queue.rb +469 -0
  2. metadata +81 -0
@@ -0,0 +1,469 @@
+ require 'thread'
+
+ #
+ # A simple elaboration on Ruby's native SizedQueue which allows using the queue
+ # object to re-awaken a blocked thread and cause it to abandon its blocking
+ # enqueue/dequeue operation. Useful for simplifying program logic, reducing the
+ # need for external flags/Muteces (yes, I said Muteces), and for cleanly
+ # resolving queues on program termination without risk of data loss or deadlock.
+ #
+ # Why use this queue? There are two reasons. For one thing, under several
+ # circumstances it is _considerably_ faster than Ruby's native SizedQueue. I
+ # admit I'm not entirely sure why, but I have tested this on multiple platforms
+ # and it seems to hold true as a generality. You can feel free to confirm or
+ # dispel that this advantage holds for your use case at your own leisure.
+ #
+ # The second reason is the aforementioned simplification of program logic.
+ # In the case that all data passing through the queues must be preserved on
+ # program termination, SizedQueue can require some elaborate trickery to ensure
+ # that even the most remote possibility of deadlock is removed.
+ # ImprovedSizedQueue solves this problem by making it possible to use the queue
+ # to pass control messages between threads, irrespective of the queue's actual
+ # content.
+ #
+ # Version:: 1.0
+ # Author:: Lincoln McCormick (mailto:iamtheiconoclast@gmail.com)
+ # Copyright:: Copyright (c) 2013 Lincoln McCormick
+ # License:: GNU General Public License version 3
+ # Requires:: Ruby 1.9
+ #--
+ # This program is free software: you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation, either version 3 of the License, or
+ # (at your option) any later version.
+ #
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+ #
+ # You should have received a copy of the GNU General Public License
+ # along with this program. If not, see <http://www.gnu.org/licenses/>.
+ #++
+ #
+ #:title:ImprovedQueue
+ #
+ module ImprovedQueue
+
+   #
+   # The queue will raise this class of error within each waiting thread you wish
+   # to unblock, upon calling the #unblock_deq, #unblock_enq or #unblock_waiting
+   # methods. It inherits from StopIteration, so you can take advantage of the
+   # implicit rescue clause in Kernel.loop if you wish, like so:
+   #
+   #   queue = ImprovedQueue.new 10
+   #
+   #   thread1 = Thread.new do
+   #     loop do
+   #       queue.pop
+   #     end
+   #   end
+   #
+   #   thread2 = Thread.new do
+   #     queue.unblock_deq thread1
+   #   end
+   #
+   class UnblockError < StopIteration; end
+
+   #
+   # In most respects this class can be used identically to SizedQueue. It
+   # implements all of the methods which are usually available, but bears the
+   # important distinction that any thread blocking on a dequeue or enqueue
+   # operation can be "woken up" by calling the unblock methods and passing a
+   # thread object or an array of thread objects.
+   #
+   # This can be done independently of the status of the queue (full, empty,
+   # etc.), so it is superior to makeshift resolution mechanisms such as passing
+   # a custom message or dummy item into the queue, because there is no need to
+   # maintain flags or Mutex objects in your program to ensure that such an
+   # operation is not at risk of causing deadlock.
+   #
+   # When a blocking thread receives the unblock signal, it will abandon its
+   # planned operation and raise UnblockError. As UnblockError inherits from
+   # StopIteration, it can be rescued in a variety of ways, including implicitly
+   # through Kernel.loop, allowing you to further simplify your program logic:
+   #
+   #   queue = ImprovedQueue.new 50
+   #   threads = []
+   #
+   #   # a consumer thread
+   #   threads << Thread.new do
+   #     loop do
+   #       val = queue.pop
+   #       do_something(val)
+   #     end
+   #   end
+   #
+   #   # another consumer
+   #   threads << Thread.new do
+   #     loop do
+   #       val = queue.pop
+   #       do_something(val)
+   #     end
+   #   end
+   #
+   #   # a producer
+   #   threads << Thread.new do
+   #     while not_finished_yet == true
+   #       queue << some_stuff
+   #       ...
+   #     end
+   #
+   #     queue.unblock_deq [threads[0], threads[1]]
+   #   end
+   #
+   #   threads.each {|thr| thr.join }
+   #
+   # One important thing to note: calling #unblock_deq is *always* safe (will
+   # never lose data), but when calling #unblock_enq, it is your responsibility
+   # to decide what is to be done with an item which failed to enqueue:
+   #
+   #   queue = ImprovedQueue.new 10
+   #
+   #   1.upto(11) do |x|
+   #     begin
+   #       queue << x
+   #     rescue ImprovedQueue::UnblockError
+   #       dont_lose_things(x)
+   #     end
+   #   end
+   #
+   class ImprovedSizedQueue
+
+     #
+     # Returns a new instance of ImprovedQueue::ImprovedSizedQueue. Argument to
+     # ImprovedSizedQueue.new must be an integer value greater than 1.
+     #
+     def initialize(size)
+       raise TypeError unless size.is_a? Integer and size > 1
+       @que = []
+       @max = size
+       @mutex = Mutex.new
+       @kill_enq = []
+       @kill_deq = []
+       @deq_waiting = []
+       @enq_waiting = []
+       @unblock = UnblockError.new
+     end
+
+     #
+     # Returns the current maximum size of the queue.
+     #
+     attr_reader :max
+
+     #
+     # Removes and returns the next item from the queue. Will block if the queue
+     # is empty, until an item becomes available to dequeue, or until
+     # #unblock_deq is called, in which case it will abandon the dequeue and
+     # raise UnblockError.
+     #
+     def deq
+       @mutex.lock
+       if @que.size > 0
+         val = @que.shift
+         if next_waiting = @enq_waiting.shift
+           @que << next_waiting[:value]
+           next_waiting[:pipe] << true
+         end
+         @mutex.unlock
+       else
+         if @kill_deq == true or @kill_deq.include? Thread.current
+           @mutex.unlock
+           raise UnblockError
+         end
+         pipe = Queue.new
+         @deq_waiting << {thread: Thread.current, pipe: pipe}
+         @mutex.unlock
+         val = pipe.pop
+         raise UnblockError if @unblock.equal? val
+       end
+       val
+     end
+
+     alias :pop :deq
+     alias :shift :deq
+
+     #
+     # Adds an item to the queue. Will block if the queue is full, until an item
+     # is removed from the queue, or until #unblock_enq is called, in which case
+     # it will abandon the enqueue and raise an UnblockError. *Important:* If
+     # data integrity is important, you will need to include any code to deal
+     # with the orphaned item in your rescue clause.
+     #
+     def enq(val)
+       @mutex.lock
+       if @que.size < @max
+         if next_waiting = @deq_waiting.shift
+           next_waiting[:pipe] << val
+         else
+           @que << val
+         end
+         @mutex.unlock
+       else
+         if @kill_enq == true or @kill_enq.include? Thread.current
+           @mutex.unlock
+           raise UnblockError
+         end
+         pipe = Queue.new
+         @enq_waiting << {thread: Thread.current, pipe: pipe, value: val}
+         @mutex.unlock
+         answer = pipe.pop
+         raise UnblockError if @unblock.equal? answer
+       end
+       nil
+     end
+
+     alias :<< :enq
+     alias :push :enq
+
+     #
+     # Call this method to notify blocking threads that they should no longer
+     # wait to enqueue:
+     #
+     #   queue.unblock_enq([thread1, thread2, ...])
+     #   queue.unblock_enq thread3
+     #   queue.unblock_enq   # unblock all threads
+     #
+     # Note that the effect is permanent; once a thread has been unblocked, it is
+     # blacklisted and can no longer wait on this queue. To clear the enqueue
+     # blacklist, see the #clear_blacklists and #clear_enq methods.
+     #
+     def unblock_enq(thrs=nil)
+       @mutex.synchronize { enq_shutdown thrs }
+     end
+
+     #
+     # Call this method to notify blocking threads that they should no longer
+     # wait to dequeue:
+     #
+     #   queue.unblock_deq([thread1, thread2, ...])
+     #   queue.unblock_deq thread3
+     #   queue.unblock_deq   # unblock all threads
+     #
+     # Note that the effect is permanent; once a thread has been unblocked, it is
+     # blacklisted and can no longer wait on this queue. To clear the dequeue
+     # blacklist, see the #clear_blacklists and #clear_deq methods.
+     #
+     def unblock_deq(thrs=nil)
+       @mutex.synchronize { deq_shutdown thrs }
+     end
+
+     #
+     # Stops a thread from blocking on either enqueue or dequeue. Equivalent to
+     # calling:
+     #
+     #   queue.unblock_deq thread1
+     #   queue.unblock_enq thread1
+     #
+     def unblock_waiting(thrs=nil)
+       @mutex.synchronize do
+         enq_shutdown thrs
+         deq_shutdown thrs
+       end
+     end
+
+     #
+     # Clear the enqueue blacklist or remove specified threads to allow them to
+     # block on enqueue once again:
+     #
+     #   queue.clear_enq([thread1, thread2, ...])
+     #   queue.clear_enq thread3
+     #   queue.clear_enq   # allow all threads to block on enqueue
+     #
+     # *Note:* raises a TypeError if you attempt to remove specific
+     # threads from the blacklist after unblocking all. i.e. this is okay:
+     #
+     #   queue.unblock_enq [thread1, thread2]
+     #   queue.clear_enq thread2
+     #
+     # ... but this is not:
+     #
+     #   queue.unblock_enq
+     #   queue.clear_enq thread2
+     #
+     def clear_enq(thrs=nil)
+       @mutex.synchronize { enq_allow thrs }
+     end
+
+     #
+     # Clear the dequeue blacklist or remove specified threads to allow them to
+     # block on dequeue once again:
+     #
+     #   queue.clear_deq([thread1, thread2, ...])
+     #   queue.clear_deq thread3
+     #   queue.clear_deq   # allow all threads to block on dequeue
+     #
+     # *Note:* raises a TypeError if you attempt to remove specific
+     # threads from the blacklist after unblocking all. i.e. this is okay:
+     #
+     #   queue.unblock_deq [thread1, thread2]
+     #   queue.clear_deq thread2
+     #
+     # ... but this is not:
+     #
+     #   queue.unblock_deq
+     #   queue.clear_deq thread2
+     #
+     def clear_deq(thrs=nil)
+       @mutex.synchronize { deq_allow thrs }
+     end
+
+     #
+     # Removes a thread or threads from both blacklists and allows them to block
+     # on either enqueue or dequeue. Equivalent to calling:
+     #
+     #   queue.clear_deq thread1
+     #   queue.clear_enq thread1
+     #
+     def clear_blacklists(thrs=nil)
+       @mutex.synchronize do
+         enq_allow thrs
+         deq_allow thrs
+       end
+     end
+
+     #
+     # Returns the total number of items currently in the queue.
+     #
+     def length
+       @mutex.synchronize { @que.size }
+     end
+
+     alias :size :length
+
+     #
+     # Returns true if the queue is empty; false otherwise.
+     #
+     def empty?
+       @mutex.synchronize { @que.size == 0 }
+     end
+
+     #
+     # Returns the number of threads currently waiting on the queue.
+     #
+     def num_waiting
+       @mutex.synchronize { @enq_waiting.size + @deq_waiting.size }
+     end
+
+     #
+     # Deletes all items from the queue. Threads waiting to dequeue will continue
+     # waiting; threads waiting to enqueue will be allowed to proceed (until the
+     # queue is full again).
+     #
+     def clear
+       @mutex.synchronize do
+         @que.clear
+         enqueue_multiple_waiting
+       end
+     end
+
+     #
+     # Completely resets the queue: deletes all queue contents, raises
+     # UnblockError in any waiting threads, and clears all blacklists, such that
+     # all threads can once again block on enqueue or dequeue.
+     #
+     def reset
+       @mutex.synchronize do
+         @que.clear
+         @kill_enq = []
+         @kill_deq = []
+         @enq_waiting.each {|waiting| waiting[:pipe] << @unblock }.clear
+         @deq_waiting.each {|waiting| waiting[:pipe] << @unblock }.clear
+       end
+     end
+
+     #
+     # Sets a new maximum size for the queue. If the new size is larger than the
+     # existing size, any threads blocking on enqueue will be allowed to proceed
+     # (until the queue is full again). *Important:* if the new size is smaller
+     # than the existing size, the queue will _not_ be truncated. Rather, it will
+     # remain oversized until dequeue operations have normalized it to the new,
+     # reduced maximum, and will not be allowed to grow beyond the new size
+     # again. This may result in a brief window where #size reports a value
+     # greater than #max.
+     #
+     def max=(new_max)
+       @mutex.synchronize do
+         raise TypeError unless new_max.is_a? Integer and new_max > 1
+         old_max, @max = @max, new_max
+         enqueue_multiple_waiting if new_max > old_max
+       end
+       new_max
+     end
+
+     private
+
+     def enq_shutdown(thrs)
+       if thrs
+         thrs = [thrs] unless thrs.respond_to? :each
+         thrs.each do |thr|
+           @kill_enq << thr
+           @enq_waiting.delete_if do |waiting|
+             if waiting[:thread] == thr
+               waiting[:pipe] << @unblock
+             end
+           end
+         end
+       else
+         @kill_enq = true
+         @enq_waiting.each {|waiting| waiting[:pipe] << @unblock }.clear
+       end
+     end
+
+     def deq_shutdown(thrs)
+       if thrs
+         thrs = [thrs] unless thrs.respond_to? :each
+         thrs.each do |thr|
+           @kill_deq << thr
+           @deq_waiting.delete_if do |waiting|
+             if waiting[:thread] == thr
+               waiting[:pipe] << @unblock
+             end
+           end
+         end
+       else
+         @kill_deq = true
+         @deq_waiting.each {|waiting| waiting[:pipe] << @unblock }.clear
+       end
+     end
+
+     def enq_allow(thrs)
+       if thrs
+         raise TypeError unless @kill_enq.respond_to? :each
+         thrs = [thrs] unless thrs.respond_to? :each
+         thrs.each {|thr| @kill_enq.delete thr }
+       else
+         @kill_enq = []
+       end
+     end
+
+     def deq_allow(thrs)
+       if thrs
+         raise TypeError unless @kill_deq.respond_to? :each
+         thrs = [thrs] unless thrs.respond_to? :each
+         thrs.each {|thr| @kill_deq.delete thr }
+       else
+         @kill_deq = []
+       end
+     end
+
+     def enqueue_multiple_waiting
+       while @que.size < @max and waiting = @enq_waiting.shift
+         @que << waiting[:value]
+         waiting[:pipe] << true
+       end
+     end
+
+   end # class ImprovedSizedQueue
+
+   module_function
+
+   #
+   # Returns a new instance of ImprovedQueue::ImprovedSizedQueue. *Note*:
+   # size must be a positive integer >= 2
+   #
+   def new(size)
+     ImprovedSizedQueue.new size
+   end
+
+ end # module ImprovedQueue
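The UnblockError design above leans on a stock Ruby behavior: Kernel#loop implicitly rescues StopIteration and any of its subclasses, exiting the loop quietly instead of propagating the exception. That is why a consumer written as a bare `loop do ... end` needs no explicit rescue clause. A minimal sketch of just that mechanism, independent of the gem (the `Halt` class here is a hypothetical stand-in for ImprovedQueue::UnblockError):

```ruby
# Kernel#loop rescues StopIteration and its subclasses, ending the
# loop cleanly. Halt stands in for ImprovedQueue::UnblockError.
class Halt < StopIteration; end

# Case 1: loop ends when the enumerator raises StopIteration itself.
consumed = []
source = [1, 2, 3].each
loop do
  consumed << source.next  # raises StopIteration when exhausted
end

# Case 2: raising a StopIteration subclass ends the loop the same way.
count = 0
loop do
  count += 1
  raise Halt if count == 5  # rescued implicitly by Kernel#loop
end

puts consumed.inspect  # => [1, 2, 3]
puts count             # => 5
```

This is the entire trick: because UnblockError inherits from StopIteration, an unblocked consumer thread running inside `loop` simply falls out of the loop and terminates normally.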
metadata ADDED
@@ -0,0 +1,81 @@
+ --- !ruby/object:Gem::Specification
+ name: improved-queue
+ version: !ruby/object:Gem::Version
+   version: '1.0'
+ prerelease:
+ platform: ruby
+ authors:
+ - Lincoln McCormick
+ autorequire:
+ bindir: bin
+ cert_chain: []
+ date: 2013-10-30 00:00:00.000000000 Z
+ dependencies: []
+ description: ! 'A simple elaboration on Ruby''s native SizedQueue which allows using
+   the queue
+
+   object to re-awaken a blocked thread and cause it to abandon its blocking
+
+   enqueue/dequeue operation. Useful for simplifying program logic, reducing the
+
+   need for external flags/Muteces (yes, I said Muteces), and for cleanly
+
+   resolving queues on program termination without risk of data loss or deadlock.
+
+
+   Why use this queue? There are two reasons. For one thing, under several
+
+   circumstances it is _considerably_ faster than Ruby''s native SizedQueue. I
+
+   admit I''m not entirely sure why, but I have tested this on multiple platforms
+
+   and it seems to hold true as a generality. You can feel free to confirm or
+
+   dispel that this advantage holds for your use case at your own leisure.
+
+
+   The second reason is the aforementioned simplification of program logic.
+
+   In the case that all data passing through the queues must be preserved on
+
+   program termination, SizedQueue can require some elaborate trickery to ensure
+
+   that even the most remote possibility of deadlock is removed.
+
+   ImprovedSizedQueue solves this problem by making it possible to use the queue
+
+   to pass control messages between threads, irrespective of the queue''s actual
+
+   content.'
+ email: iamtheiconoclast@gmail.com
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - lib/improved-queue.rb
+ homepage: http://rubygems.org/gems/improved-queue
+ licenses:
+ - GPL v3
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   none: false
+   requirements:
+   - - ! '>='
+     - !ruby/object:Gem::Version
+       version: '0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   none: false
+   requirements:
+   - - ! '>='
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubyforge_project:
+ rubygems_version: 1.8.23
+ signing_key:
+ specification_version: 3
+ summary: Improved Sized Queue
+ test_files: []
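For context on the "elaborate trickery" the library's header comment refers to: the conventional way to shut down consumers of Ruby's stdlib SizedQueue without losing data is to enqueue one sentinel value per consumer after all real work. This sketch uses only the standard library; the `DONE` sentinel and thread counts are illustrative assumptions, not part of this gem's API:

```ruby
require 'thread'

# Conventional shutdown for Ruby's stdlib SizedQueue: push one sentinel
# per consumer after the real data. This is the bookkeeping that
# ImprovedQueue's unblock_deq is meant to make unnecessary.
DONE = :__done__  # sentinel; assumed never to occur as real data

queue    = SizedQueue.new(10)
consumed = Queue.new  # thread-safe collector for results

consumers = 2.times.map do
  Thread.new do
    loop do
      item = queue.pop
      break if item == DONE  # sentinel reached: stop consuming
      consumed << item
    end
  end
end

# Producer: enqueue all work, then exactly one sentinel per consumer.
# FIFO order guarantees every data item is popped before any sentinel.
(1..20).each {|x| queue << x }
consumers.size.times { queue << DONE }

consumers.each(&:join)
puts consumed.size  # => 20
```

The fragility is visible: the sentinel count must match the consumer count exactly, and the sentinel value must never collide with real data. With ImprovedSizedQueue the producer would instead call `queue.unblock_deq` on the consumer threads, regardless of queue state.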