thread_storm 0.4.0 → 0.5.0

data/CHANGELOG CHANGED
@@ -1,3 +1,8 @@
+ 0.5.0
+ - Refactored to use Monitor.
+ - Implemented ThreadStorm#clear_executions.
+ - Implemented the :execute_blocks option.
+
  0.4.0
  - Renamed to thread_storm... ugh.
  - Simplified the shutdown process by using my own thread safe queue.
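
A quick sketch (not taken from the gem's docs) of the 0.5.0 additions named above, using only options and methods that appear later in this diff:

    require "thread_storm"

    # :execute_blocks makes #execute wait for a free worker instead of queueing without bound.
    storm = ThreadStorm.new :size => 2, :execute_blocks => true
    10.times{ |i| storm.execute{ sleep(0.01); i } }
    storm.join

    # #clear_executions removes and returns executions matching a predicate.
    finished = storm.clear_executions(:finished?)
    finished.length         # 10
    storm.executions.length # 0

    storm.shutdown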
data/README.rdoc CHANGED
@@ -11,22 +11,22 @@ Simple thread pool with a few advanced features.

  == Example

-   pool = ThreadStorm.new :size => 2
-   pool.execute{ sleep(0.01); "a" }
-   pool.execute{ sleep(0.01); "b" }
-   pool.execute{ sleep(0.01); "c" }
-   pool.join # Should return in about 0.02 seconds... ;)
-   pool.values # ["a", "b", "c"]
+   storm = ThreadStorm.new :size => 2
+   storm.execute{ sleep(0.01); "a" }
+   storm.execute{ sleep(0.01); "b" }
+   storm.execute{ sleep(0.01); "c" }
+   storm.join # Should return in about 0.02 seconds... ;)
+   storm.values # ["a", "b", "c"]

  == Execution state

  You can query the state of an execution.

-   pool = ThreadStorm.new :size => 2
-   execution = pool.execute{ sleep(0.01); "a" }
-   pool.execute{ sleep(0.01); "b" }
-   pool.execute{ sleep(0.01); "c" }
-   pool.join
+   storm = ThreadStorm.new :size => 2
+   execution = storm.execute{ sleep(0.01); "a" }
+   storm.execute{ sleep(0.01); "b" }
+   storm.execute{ sleep(0.01); "c" }
+   storm.join
    execution.started? # true
    execution.finished? # true
    execution.timed_out? # false
@@ -37,24 +37,24 @@ You can query the state of an execution.

  You can restrict how long executions are allowed to run for.

-   pool = ThreadStorm.new :size => 2, :timeout => 0.02, :default_value => "failed"
-   pool.execute{ sleep(0.01); "a" }
-   pool.execute{ sleep(0.03); "b" }
-   pool.execute{ sleep(0.01); "c" }
-   pool.join
-   pool.executions[1].started? # true
-   pool.executions[1].finished? # true
-   pool.executions[1].timed_out? # true
-   pool.executions[1].duration # ~0.02
-   pool.executions[1].value # "failed"
+   storm = ThreadStorm.new :size => 2, :timeout => 0.02, :default_value => "failed"
+   storm.execute{ sleep(0.01); "a" }
+   storm.execute{ sleep(0.03); "b" }
+   storm.execute{ sleep(0.01); "c" }
+   storm.join
+   storm.executions[1].started? # true
+   storm.executions[1].finished? # true
+   storm.executions[1].timed_out? # true
+   storm.executions[1].duration # ~0.02
+   storm.executions[1].value # "failed"

  == Error handling

  If an execution causes an exception, it will be reraised when ThreadStorm#join (or any other method that calls ThreadStorm#join) is called, unless you pass <tt>:reraise => false</tt> to ThreadStorm#new. The exception is stored in ThreadStorm::Execution#exception.

-   pool = ThreadStorm.new :size => 2, :reraise => false, :default_value => "failure"
-   execution = pool.execute{ raise("busted"); "a" }
-   pool.join
+   storm = ThreadStorm.new :size => 2, :reraise => false, :default_value => "failure"
+   execution = storm.execute{ raise("busted"); "a" }
+   storm.join
    execution.value # "failure"
    execution.exception # RuntimeError: busted

@@ -64,27 +64,27 @@ ThreadStorm#join blocks until all pending executions are done running. It does

  Sometimes it can be a pain to remember to call #shutdown, so as a convenience, you can pass a block to ThreadStorm#new and #join and #shutdown will be called for you.

-   party = ThreadStorm.new do |p|
-     p.execute{ "a" }
-     p.execute{ "b" }
-     p.execute{ "c" }
+   storm = ThreadStorm.new do |s|
+     s.execute{ "a" }
+     s.execute{ "b" }
+     s.execute{ "c" }
    end
    # At this point, #join and #shutdown have been called.
-   party.values # ["a", "b", "c"]
+   storm.values # ["a", "b", "c"]

  == Configurable timeout method

  <tt>Timeout.timeout</tt> is unreliable in MRI 1.8.x. To address this, you can have ThreadStorm use an alternative implementation.

    require "system_timer"
-   party = ThreadStorm.new :timeout_method => SystemTimer.method(:timeout) do
+   storm = ThreadStorm.new :timeout_method => SystemTimer.method(:timeout) do
      ...
    end

  The <tt>:timeout_method</tt> option takes any callable object (i.e. <tt>responds_to?(:call)</tt>) that implements something similar to <tt>Timeout.timeout</tt> (i.e. takes the same arguments and raises <tt>Timeout::Error</tt>).

    require "system_timer"
-   party = ThreadStorm.new :timeout_method => Proc.new{ |seconds, &block| SystemTimer.timeout(seconds, &block) }
+   storm = ThreadStorm.new :timeout_method => Proc.new{ |seconds, &block| SystemTimer.timeout(seconds, &block) }
      ...
    end

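
A hedged sketch, not part of the README: because :timeout_method only needs something that responds to #call with Timeout.timeout's signature, the stdlib default (which lib/thread_storm.rb below sets to Timeout.method(:timeout)) can be passed explicitly, or swapped for any compatible callable:

    require "timeout"
    require "thread_storm"

    # Explicitly passing the default; any object with a compatible #call works here.
    storm = ThreadStorm.new :size => 2, :timeout => 0.02,
                            :timeout_method => Timeout.method(:timeout)
    storm.execute{ sleep(0.01); "a" }
    storm.join
    storm.shutdown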
data/VERSION CHANGED
@@ -1 +1 @@
- 0.4.0
+ 0.5.0
data/lib/thread_storm/active_support.rb CHANGED
@@ -1,5 +1,22 @@
  # Things I miss from active_support.

+ class Array #:nodoc:
+
+   def separate
+     selected = []
+     rejected = []
+     each do |item|
+       if yield(item)
+         selected << item
+       else
+         rejected << item
+       end
+     end
+     [selected, rejected]
+   end unless method_defined?(:separate)
+
+ end
+
  class Hash #:nodoc:

    def symbolize_keys
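
A usage sketch for the Array#separate helper added above; it behaves like Enumerable#partition, returning the selected and rejected items as a pair:

    require "thread_storm" # Loads thread_storm/active_support.

    evens, odds = [1, 2, 3, 4, 5].separate{ |n| n % 2 == 0 }
    evens # [2, 4]
    odds  # [1, 3, 5]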
data/lib/thread_storm/execution.rb CHANGED
@@ -1,20 +1,22 @@
+ require "monitor"
+
  class ThreadStorm
    # Encapsulates a unit of work to be sent to the thread pool.
    class Execution
      attr_writer :value, :exception #:nodoc:
      attr_reader :args, :block, :thread #:nodoc:

-     def initialize(args, &block) #:nodoc:
+     def initialize(args, default_value, &block) #:nodoc:
        @args = args
+       @value = default_value
        @block = block
        @start_time = nil
        @finish_time = nil
-       @value = nil
        @exception = nil
        @timed_out = false
        @thread = nil
-       @mutex = Mutex.new
-       @join = ConditionVariable.new
+       @lock = Monitor.new
+       @cond = @lock.new_cond
      end

      def start! #:nodoc:
@@ -33,9 +35,9 @@ class ThreadStorm
      end

      def finish! #:nodoc:
-       @mutex.synchronize do
+       @lock.synchronize do
          @finish_time = Time.now
-         @join.signal
+         @cond.signal
        end
      end

@@ -70,8 +72,8 @@ class ThreadStorm

      # Block until this execution has finished running.
      def join
-       @mutex.synchronize do
-         @join.wait(@mutex) unless finished?
+       @lock.synchronize do
+         @cond.wait_until{ finished? }
        end
      end

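
The Execution refactor above replaces a Mutex plus ConditionVariable with the stdlib monitor library; a minimal standalone sketch of that wait_until/signal pattern (independent of ThreadStorm):

    require "monitor"

    lock = Monitor.new
    cond = lock.new_cond
    done = false

    waiter = Thread.new do
      lock.synchronize{ cond.wait_until{ done } } # Sleeps until signaled and the predicate is true.
    end

    lock.synchronize do
      done = true
      cond.signal # Wake the waiter; wait_until rechecks the predicate under the lock.
    end
    waiter.join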
data/lib/thread_storm/sentinel.rb ADDED
@@ -0,0 +1,18 @@
+ require "monitor"
+
+ class ThreadStorm
+   class Sentinel
+     attr_reader :e_cond, :p_cond
+
+     def initialize
+       @lock = Monitor.new
+       @e_cond = @lock.new_cond # execute condition
+       @p_cond = @lock.new_cond # pop condition
+     end
+
+     def synchronize
+       @lock.synchronize{ yield(@e_cond, @p_cond) }
+     end
+
+   end
+ end
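
A hedged sketch of how Sentinel#synchronize is meant to be used by the pool and its workers (the real call sites are in the worker and thread_storm diffs below); both condition variables share one monitor, so producer and consumer hand work off under the same lock:

    require "thread_storm"

    sentinel = ThreadStorm::Sentinel.new
    queue    = []

    # Producer side, roughly what ThreadStorm#execute does:
    sentinel.synchronize do |e_cond, p_cond|
      queue << :work
      p_cond.signal                     # Wake a worker waiting to pop.
    end

    # Consumer side, roughly what Worker#pop_and_process_execution does:
    sentinel.synchronize do |e_cond, p_cond|
      e_cond.signal                     # Announce that this worker is idle.
      p_cond.wait_while{ queue.empty? } # Sleep until there is work.
      queue.pop
    end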
data/lib/thread_storm/worker.rb CHANGED
@@ -1,12 +1,14 @@
  class ThreadStorm
    class Worker #:nodoc:
-     attr_reader :thread
+     attr_reader :thread, :execution

      # Takes the threadsafe queue and options from the thread pool.
-     def initialize(queue, options)
-       @queue = queue
-       @options = options
-       @thread = Thread.new(self){ |me| me.run }
+     def initialize(queue, sentinel, options)
+       @queue     = queue
+       @sentinel  = sentinel
+       @options   = options
+       @execution = nil # Current execution we're working on.
+       @thread    = Thread.new(self){ |me| me.run }
      end

      def timeout
@@ -24,17 +26,29 @@ class ThreadStorm

      # Pop an execution off the queue and process it, or pass off control to a different thread.
      def pop_and_process_execution
-       execution = @queue.pop and process_execution_with_timeout(execution)
+       @sentinel.synchronize do |e_cond, p_cond|
+         # Become idle and signal that we're idle.
+         @execution = nil
+         e_cond.signal
+
+         # Give up the lock and wait until there is work to do.
+         p_cond.wait_while{ @queue.empty? }
+
+         # Get the work to do (implicitly becoming busy).
+         @execution = @queue.pop
+       end
+
+       process_execution_with_timeout unless die?
      end

      # Process the execution, handling timeouts and exceptions.
-     def process_execution_with_timeout(execution)
+     def process_execution_with_timeout
        execution.start!
        begin
          if timeout
-           timeout_method.call(timeout){ process_execution }
+           timeout_method.call(timeout){ process_execution }
          else
-           process_execution(execution)
+           process_execution
          end
        rescue Timeout::Error => e
          execution.timed_out!
@@ -46,18 +60,17 @@ class ThreadStorm
      end

      # Seriously, process the execution.
-     def process_execution(execution)
+     def process_execution
        execution.value = execution.block.call(*execution.args)
      end

-     # So the thread pool can signal this worker's thread to end.
-     def die!
-       @die = true
+     def busy?
+       !!@execution and not die?
      end

      # True if this worker's thread should die.
      def die?
-       !!@die
+       @execution == :die
      end

    end
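
Worker no longer carries a @die flag: ThreadStorm#shutdown (below) loads the queue with one :die marker per worker, and Worker#die? just checks whether that marker was popped. A generic sketch of the same poison-pill idiom, with hypothetical names and Ruby's stdlib Queue standing in for the pool's internals:

    require "thread"

    queue   = Queue.new # Stdlib thread-safe queue, used only for this sketch.
    workers = (1..3).collect do
      Thread.new do
        loop do
          job = queue.pop
          break if job == :die # Poison-pill shutdown marker.
          job.call
        end
      end
    end

    5.times{ |i| queue.push(lambda{ puts "job #{i}" }) }
    3.times{ queue.push(:die) } # One marker per worker thread.
    workers.each{ |t| t.join }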
data/lib/thread_storm.rb CHANGED
@@ -1,13 +1,13 @@
  require "thread"
  require "timeout"
  require "thread_storm/active_support"
- require "thread_storm/queue"
+ require "thread_storm/sentinel"
  require "thread_storm/execution"
  require "thread_storm/worker"

  class ThreadStorm

-   # Array of executions in order as defined by calls to ThreadStorm#execute.
+   # Array of executions in order as they are defined by calls to ThreadStorm#execute.
    attr_reader :executions

    # Valid options are
@@ -16,16 +16,18 @@ class ThreadStorm
    # :timeout_method => An object that implements something like Timeout.timeout via #call. Default is Timeout.method(:timeout).
    # :default_value => Value of an execution if it times out or errors. Default is nil.
    # :reraise => True if you want exceptions reraised when ThreadStorm#join is called. Default is true.
+   # :execute_blocks => True if you want #execute to block until there is an available thread. Default is false.
    def initialize(options = {})
      @options = options.option_merge :size => 2,
                                      :timeout => nil,
                                      :timeout_method => Timeout.method(:timeout),
                                      :default_value => nil,
-                                     :reraise => true
-     @queue = Queue.new # This is threadsafe.
+                                     :reraise => true,
+                                     :execute_blocks => false
+     @sentinel = Sentinel.new
+     @queue = []
      @executions = []
-     @workers = (1..@options[:size]).collect{ Worker.new(@queue, @options) }
-     @start_time = Time.now
+     @workers = (1..@options[:size]).collect{ Worker.new(@queue, @sentinel, @options) }
      if block_given?
        yield(self)
        join
@@ -33,25 +35,20 @@ class ThreadStorm
      end
    end

-   def size
+   def size #:nodoc:
      @options[:size]
    end

-   def default_value
-     @options[:default_value]
-   end
-
-   def reraise?
-     @options[:reraise]
-   end
-
-   # Create and execution and schedules it to be run by the thread pool.
+   # Creates an execution and schedules it to be run by the thread pool.
    # Return value is a ThreadStorm::Execution.
    def execute(*args, &block)
-     Execution.new(args, &block).tap do |execution|
-       execution.value = default_value
-       @executions << execution
-       @queue.push(execution)
+     Execution.new(args, default_value, &block).tap do |execution|
+       @sentinel.synchronize do |e_cond, p_cond|
+         e_cond.wait_while{ all_workers_busy? } if execute_blocks?
+         @queue << execution
+         @executions << execution
+         p_cond.signal
+       end
      end
    end

@@ -66,22 +63,65 @@ class ThreadStorm

    # Calls ThreadStorm#join, then collects the values of each execution.
    def values
-     join
-     @executions.collect{ |execution| execution.value }
+     join and @executions.collect{ |execution| execution.value }
    end

    # Signals the worker threads to terminate immediately (ignoring any pending
    # executions) and blocks until they do.
    def shutdown
-     @workers.each{ |worker| worker.die! }
-     @queue.die!
+     @sentinel.synchronize do |e_cond, p_cond|
+       @queue.replace([:die] * size)
+       p_cond.broadcast
+     end
      @workers.each{ |worker| worker.thread.join }
      true
    end

-   # Returns an array of threads in the pool.
+   # Returns workers that are currently running executions.
+   def busy_workers
+     @workers.select{ |worker| worker.busy? }
+   end
+
+   # Returns an array of Ruby threads in the pool.
    def threads
      @workers.collect{ |worker| worker.thread }
    end

+   # Removes executions stored at ThreadStorm#executions. You can selectively remove
+   # them by passing in a block or a symbol. The following two lines are equivalent.
+   #   storm.clear_executions(:finished?)
+   #   storm.clear_executions{ |e| e.finished? }
+   # Because of the nature of threading, the following code could happen:
+   #   storm.clear_executions(:finished?)
+   #   storm.executions.any?{ |e| e.finished? }
+   # Some executions could have finished between the two calls.
+   def clear_executions(method_name = nil, &block)
+     cleared, @executions = @executions.separate do |execution|
+       if block_given?
+         yield(execution)
+       else
+         execution.send(method_name)
+       end
+     end
+     cleared
+   end
+
+   private
+
+   def default_value #:nodoc:
+     @options[:default_value]
+   end
+
+   def reraise? #:nodoc:
+     @options[:reraise]
+   end
+
+   def execute_blocks? #:nodoc:
+     @options[:execute_blocks]
+   end
+
+   def all_workers_busy? #:nodoc:
+     @workers.all?{ |worker| worker.busy? }
+   end
+
  end
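
With :execute_blocks => true, #execute becomes a back-pressure point: it waits on the execute condition until a worker goes idle (see e_cond.wait_while above). A hedged usage sketch built only from methods shown in this file:

    require "thread_storm"

    storm = ThreadStorm.new :size => 2, :execute_blocks => true
    5.times do |i|
      storm.execute{ sleep(0.01); i } # Blocks here whenever both workers are busy.
      storm.busy_workers.length       # Never more than 2, the pool size.
    end
    storm.join
    storm.values # [0, 1, 2, 3, 4]
    storm.shutdown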
data/test/test_thread_storm.rb CHANGED
@@ -3,40 +3,40 @@ require 'helper'
  class TestThreadStorm < Test::Unit::TestCase

    def test_no_concurrency
-     pool = ThreadStorm.new :size => 1
-     pool.execute{ sleep(0.01); "one" }
-     pool.execute{ sleep(0.01); "two" }
-     pool.execute{ sleep(0.01); "three" }
-     assert_equal %w[one two three], pool.values
-     assert_all_threads_worked(pool)
+     storm = ThreadStorm.new :size => 1
+     storm.execute{ sleep(0.01); "one" }
+     storm.execute{ sleep(0.01); "two" }
+     storm.execute{ sleep(0.01); "three" }
+     assert_equal %w[one two three], storm.values
+     assert_all_threads_worked(storm)
    end

    def test_partial_concurrency
-     pool = ThreadStorm.new :size => 2
-     pool.execute{ sleep(0.01); "one" }
-     pool.execute{ sleep(0.01); "two" }
-     pool.execute{ sleep(0.01); "three" }
-     assert_equal %w[one two three], pool.values
-     assert_all_threads_worked(pool)
+     storm = ThreadStorm.new :size => 2
+     storm.execute{ sleep(0.01); "one" }
+     storm.execute{ sleep(0.01); "two" }
+     storm.execute{ sleep(0.01); "three" }
+     assert_equal %w[one two three], storm.values
+     assert_all_threads_worked(storm)
    end

    def test_full_concurrency
-     pool = ThreadStorm.new :size => 3
-     pool.execute{ sleep(0.01); "one" }
-     pool.execute{ sleep(0.01); "two" }
-     pool.execute{ sleep(0.01); "three" }
-     assert_equal %w[one two three], pool.values
-     assert_all_threads_worked(pool)
+     storm = ThreadStorm.new :size => 3
+     storm.execute{ sleep(0.01); "one" }
+     storm.execute{ sleep(0.01); "two" }
+     storm.execute{ sleep(0.01); "three" }
+     assert_equal %w[one two three], storm.values
+     assert_all_threads_worked(storm)
    end

    def test_timeout_no_concurrency
-     pool = ThreadStorm.new :size => 1, :timeout => 0.015
-     pool.execute{ sleep(0.01); "one" }
-     pool.execute{ sleep(0.02); "two" }
-     pool.execute{ sleep(0.01); "three" }
-     assert_equal ["one", nil, "three"], pool.values
-     assert pool.executions[1].timed_out?
-     assert_all_threads_worked(pool)
+     storm = ThreadStorm.new :size => 1, :timeout => 0.015
+     storm.execute{ sleep(0.01); "one" }
+     storm.execute{ sleep(0.02); "two" }
+     storm.execute{ sleep(0.01); "three" }
+     assert_equal ["one", nil, "three"], storm.values
+     assert storm.executions[1].timed_out?
+     assert_all_threads_worked(storm)
    end

    # Tricky...
@@ -44,73 +44,119 @@ class TestThreadStorm < Test::Unit::TestCase
    # 2 0.015s ------
    # 3 0.01s ----
    def test_timeout_partial_concurrency
-     pool = ThreadStorm.new :size => 2, :timeout => 0.015
-     pool.execute{ sleep(0.01); "one" }
-     pool.execute{ sleep(0.02); "two" }
-     pool.execute{ sleep(0.01); "three" }
-     assert_equal ["one", nil, "three"], pool.values
-     assert pool.executions[1].timed_out?
-     assert_all_threads_worked(pool)
+     storm = ThreadStorm.new :size => 2, :timeout => 0.015
+     storm.execute{ sleep(0.01); "one" }
+     storm.execute{ sleep(0.02); "two" }
+     storm.execute{ sleep(0.01); "three" }
+     assert_equal ["one", nil, "three"], storm.values
+     assert storm.executions[1].timed_out?
+     assert_all_threads_worked(storm)
    end

    def test_timeout_full_concurrency
-     pool = ThreadStorm.new :size => 3, :timeout => 0.015
-     pool.execute{ sleep(0.01); "one" }
-     pool.execute{ sleep(0.02); "two" }
-     pool.execute{ sleep(0.01); "three" }
-     assert_equal ["one", nil, "three"], pool.values
-     assert pool.executions[1].timed_out?
-     assert_all_threads_worked(pool)
+     storm = ThreadStorm.new :size => 3, :timeout => 0.015
+     storm.execute{ sleep(0.01); "one" }
+     storm.execute{ sleep(0.02); "two" }
+     storm.execute{ sleep(0.01); "three" }
+     assert_equal ["one", nil, "three"], storm.values
+     assert storm.executions[1].timed_out?
+     assert_all_threads_worked(storm)
    end

    def test_timeout_with_default_value
-     pool = ThreadStorm.new :size => 1, :timeout => 0.015, :default_value => "timed out"
-     pool.execute{ sleep(0.01); "one" }
-     pool.execute{ sleep(0.02); "two" }
-     pool.execute{ sleep(0.01); "three" }
-     assert_equal ["one", "timed out", "three"], pool.values
-     assert pool.executions[1].timed_out?
-     assert_all_threads_worked(pool)
+     storm = ThreadStorm.new :size => 1, :timeout => 0.015, :default_value => "timed out"
+     storm.execute{ sleep(0.01); "one" }
+     storm.execute{ sleep(0.02); "two" }
+     storm.execute{ sleep(0.01); "three" }
+     assert_equal ["one", "timed out", "three"], storm.values
+     assert storm.executions[1].timed_out?
+     assert_all_threads_worked(storm)
    end

    def test_shutdown
      original_thread_count = Thread.list.length

-     pool = ThreadStorm.new :size => 3
-     pool.execute{ sleep(0.01); "one" }
-     pool.execute{ sleep(0.01); "two" }
-     pool.execute{ sleep(0.01); "three" }
-     pool.join
+     storm = ThreadStorm.new :size => 3
+     storm.execute{ sleep(0.01); "one" }
+     storm.execute{ sleep(0.01); "two" }
+     storm.execute{ sleep(0.01); "three" }
+     storm.join

      assert_equal original_thread_count + 3, Thread.list.length
-     pool.shutdown
+     storm.shutdown
      assert_equal original_thread_count, Thread.list.length
    end

    def test_shutdown_before_pop
-     pool = ThreadStorm.new :size => 3
-     pool.shutdown
+     storm = ThreadStorm.new :size => 3
+     storm.shutdown
    end

    def test_args
-     pool = ThreadStorm.new :size => 2
+     storm = ThreadStorm.new :size => 2
      %w[one two three four five].each do |word|
-       pool.execute(word){ |w| sleep(0.01); w }
+       storm.execute(word){ |w| sleep(0.01); w }
      end
-     pool.join
-     assert_equal %w[one two three four five], pool.values
+     storm.join
+     assert_equal %w[one two three four five], storm.values
    end

    def test_new_with_block
      thread_count = Thread.list.length
-     pool = ThreadStorm.new :size => 1 do |party|
-       party.execute{ sleep(0.01); "one" }
-       party.execute{ sleep(0.01); "two" }
-       party.execute{ sleep(0.01); "three" }
+     storm = ThreadStorm.new :size => 1 do |storm|
+       storm.execute{ sleep(0.01); "one" }
+       storm.execute{ sleep(0.01); "two" }
+       storm.execute{ sleep(0.01); "three" }
      end
      assert_equal thread_count, Thread.list.length
-     assert_equal %w[one two three], pool.values
-     assert_all_threads_worked(pool)
+     assert_equal %w[one two three], storm.values
+     assert_all_threads_worked(storm)
+   end
+
+   def test_execute_blocks
+     t1 = Thread.new do
+       storm = ThreadStorm.new :size => 1, :execute_blocks => true
+       storm.execute{ sleep }
+       storm.execute{ nil }
+     end
+     t2 = Thread.new do
+       storm = ThreadStorm.new :size => 1, :execute_blocks => false
+       storm.execute{ sleep }
+       storm.execute{ nil }
+     end
+     sleep(0.1) # How else??
+     assert_equal "sleep", t1.status
+     assert_equal false, t2.status
+   end
+
+   def test_clear_executions
+     storm = ThreadStorm.new :size => 3
+     storm.execute{ sleep }
+     storm.execute{ sleep(0.1) }
+     storm.execute{ sleep(0.1) }
+     sleep(0.2) # Ugh another test based on sleeping.
+     finished = storm.clear_executions(:finished?)
+     assert_equal 2, finished.length
+     assert_equal 1, storm.executions.length
+   end
+
+   def test_execution_blocks_again
+     storm = ThreadStorm.new :size => 10, :execute_blocks => true
+     20.times{ storm.execute{ sleep(rand) } }
+     storm.join
+     storm.shutdown
+   end
+
+   def test_for_deadlocks
+     ThreadStorm.new :size => 10, :execute_blocks => true do |storm|
+       20.times do
+         storm.execute do
+           ThreadStorm.new :size => 10, :timeout => 0.5 do |storm2|
+             20.times{ storm2.execute{ sleep(rand) } }
+           end
+         end
+       end
+     end
    end

  end
data/thread_storm.gemspec CHANGED
@@ -5,11 +5,11 @@

  Gem::Specification.new do |s|
    s.name = %q{thread_storm}
-   s.version = "0.4.0"
+   s.version = "0.5.0"

    s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
    s.authors = ["Christopher J. Bottaro"]
-   s.date = %q{2010-06-07}
+   s.date = %q{2010-06-21}
    s.description = %q{Simple thread pool with timeouts, default values, error handling, state tracking and unit tests.}
    s.email = %q{cjbottaro@alumni.cs.utexas.edu}
    s.extra_rdoc_files = [
@@ -27,7 +27,7 @@ Gem::Specification.new do |s|
      "lib/thread_storm.rb",
      "lib/thread_storm/active_support.rb",
      "lib/thread_storm/execution.rb",
-     "lib/thread_storm/queue.rb",
+     "lib/thread_storm/sentinel.rb",
      "lib/thread_storm/worker.rb",
      "test/helper.rb",
      "test/test_thread_storm.rb",
metadata CHANGED
@@ -1,13 +1,13 @@
  --- !ruby/object:Gem::Specification
  name: thread_storm
  version: !ruby/object:Gem::Version
-   hash: 15
+   hash: 11
    prerelease: false
    segments:
    - 0
-   - 4
+   - 5
    - 0
-   version: 0.4.0
+   version: 0.5.0
  platform: ruby
  authors:
  - Christopher J. Bottaro
@@ -15,7 +15,7 @@ autorequire:
  bindir: bin
  cert_chain: []

- date: 2010-06-07 00:00:00 -05:00
+ date: 2010-06-21 00:00:00 -05:00
  default_executable:
  dependencies: []

@@ -39,7 +39,7 @@ files:
  - lib/thread_storm.rb
  - lib/thread_storm/active_support.rb
  - lib/thread_storm/execution.rb
- - lib/thread_storm/queue.rb
+ - lib/thread_storm/sentinel.rb
  - lib/thread_storm/worker.rb
  - test/helper.rb
  - test/test_thread_storm.rb
data/lib/thread_storm/queue.rb DELETED
@@ -1,45 +0,0 @@
- require "thread"
-
- class ThreadStorm
-   class Queue #:nodoc:
-
-     def initialize
-       @lock = Mutex.new
-       @cond = ConditionVariable.new
-       @die = false
-       @queue = []
-     end
-
-     # Pushes a value on the queue and wakes up the next thread waiting on #pop.
-     def push(value)
-       @lock.synchronize do
-         @queue.push(value)
-         @cond.signal # Wake up next thread waiting on #pop.
-       end
-     end
-
-     # Pops a value of the queue. Blocks if the queue is empty.
-     def pop
-       @lock.synchronize do
-         @cond.wait(@lock) if @queue.empty? and not die?
-         @queue.pop
-       end
-     end
-
-     # Clears the queue. Any calls to #pop will immediately return with nil.
-     def die!
-       @lock.synchronize do
-         @die = true
-         @queue.clear
-         @cond.broadcast # Wake up any threads waiting on #pop.
-       end
-     end
-
-     private
-
-     def die?
-       !!@die
-     end
-
-   end
- end