concurrent-ruby 0.4.1 → 0.5.0.pre.1
- checksums.yaml +4 -4
- data/README.md +31 -33
- data/lib/concurrent.rb +11 -3
- data/lib/concurrent/actor.rb +29 -29
- data/lib/concurrent/agent.rb +98 -16
- data/lib/concurrent/atomic.rb +125 -0
- data/lib/concurrent/channel.rb +36 -1
- data/lib/concurrent/condition.rb +67 -0
- data/lib/concurrent/copy_on_notify_observer_set.rb +80 -0
- data/lib/concurrent/copy_on_write_observer_set.rb +94 -0
- data/lib/concurrent/count_down_latch.rb +60 -0
- data/lib/concurrent/dataflow.rb +85 -0
- data/lib/concurrent/dereferenceable.rb +69 -31
- data/lib/concurrent/event.rb +27 -21
- data/lib/concurrent/future.rb +103 -43
- data/lib/concurrent/ivar.rb +78 -0
- data/lib/concurrent/mvar.rb +154 -0
- data/lib/concurrent/obligation.rb +94 -9
- data/lib/concurrent/postable.rb +11 -9
- data/lib/concurrent/promise.rb +101 -127
- data/lib/concurrent/safe_task_executor.rb +28 -0
- data/lib/concurrent/scheduled_task.rb +60 -54
- data/lib/concurrent/stoppable.rb +2 -2
- data/lib/concurrent/supervisor.rb +36 -29
- data/lib/concurrent/thread_local_var.rb +117 -0
- data/lib/concurrent/timer_task.rb +28 -30
- data/lib/concurrent/utilities.rb +1 -1
- data/lib/concurrent/version.rb +1 -1
- data/spec/concurrent/agent_spec.rb +121 -230
- data/spec/concurrent/atomic_spec.rb +201 -0
- data/spec/concurrent/condition_spec.rb +171 -0
- data/spec/concurrent/copy_on_notify_observer_set_spec.rb +10 -0
- data/spec/concurrent/copy_on_write_observer_set_spec.rb +10 -0
- data/spec/concurrent/count_down_latch_spec.rb +125 -0
- data/spec/concurrent/dataflow_spec.rb +160 -0
- data/spec/concurrent/dereferenceable_shared.rb +145 -0
- data/spec/concurrent/event_spec.rb +44 -9
- data/spec/concurrent/fixed_thread_pool_spec.rb +0 -1
- data/spec/concurrent/future_spec.rb +184 -69
- data/spec/concurrent/ivar_spec.rb +192 -0
- data/spec/concurrent/mvar_spec.rb +380 -0
- data/spec/concurrent/obligation_spec.rb +193 -0
- data/spec/concurrent/observer_set_shared.rb +233 -0
- data/spec/concurrent/postable_shared.rb +3 -7
- data/spec/concurrent/promise_spec.rb +270 -192
- data/spec/concurrent/safe_task_executor_spec.rb +58 -0
- data/spec/concurrent/scheduled_task_spec.rb +142 -38
- data/spec/concurrent/thread_local_var_spec.rb +113 -0
- data/spec/concurrent/thread_pool_shared.rb +2 -3
- data/spec/concurrent/timer_task_spec.rb +31 -1
- data/spec/spec_helper.rb +2 -3
- data/spec/support/functions.rb +4 -0
- data/spec/support/less_than_or_equal_to_matcher.rb +5 -0
- metadata +50 -30
- data/lib/concurrent/contract.rb +0 -21
- data/lib/concurrent/event_machine_defer_proxy.rb +0 -22
- data/md/actor.md +0 -404
- data/md/agent.md +0 -142
- data/md/channel.md +0 -40
- data/md/dereferenceable.md +0 -49
- data/md/future.md +0 -125
- data/md/obligation.md +0 -32
- data/md/promise.md +0 -217
- data/md/scheduled_task.md +0 -156
- data/md/supervisor.md +0 -246
- data/md/thread_pool.md +0 -225
- data/md/timer_task.md +0 -191
- data/spec/concurrent/contract_spec.rb +0 -34
- data/spec/concurrent/event_machine_defer_proxy_spec.rb +0 -240
data/md/scheduled_task.md
DELETED
@@ -1,156 +0,0 @@
# I'm late! For a very important date!

`ScheduledTask` is a close relative of `Concurrent::Future` but with one important difference. A `Future` is set to execute as soon as possible whereas a `ScheduledTask` is set to execute at a specific time. This implementation is loosely based on Java's [ScheduledExecutorService](http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ScheduledExecutorService.html).

## Scheduling

The scheduled time of task execution is set on object construction. The first parameter to `#new` is the task execution time. The time can be a numeric (floating point or integer) representing a number of seconds in the future, or it can be a `Time` object representing the approximate time of execution. Any other value, a numeric less than or equal to zero, or a time in the past will result in an `ArgumentError` being raised.

The constructor can also be given zero or more processing options. Currently the only supported options are those recognized by the [Dereferenceable](https://github.com/jdantonio/concurrent-ruby/blob/master/md/dereferenceable.md) module.

The final constructor argument is a block representing the task to be performed at the scheduled time. If no block is given an `ArgumentError` will be raised.
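The constructor's argument rules (a positive numeric number of seconds, or a future `Time`, with anything else rejected) can be sketched as a small normalization helper. This is an illustrative sketch of the documented contract, not the library's internal code:

```ruby
# Sketch of the ScheduledTask argument rules described above.
# Illustration only -- not the gem's implementation.
def normalize_schedule_time(intended_time, now = Time.now)
  case intended_time
  when Numeric
    raise ArgumentError, 'seconds must be greater than zero' if intended_time <= 0
    now + intended_time
  when Time
    raise ArgumentError, 'schedule time must be in the future' if intended_time <= now
    intended_time
  else
    raise ArgumentError, 'invalid schedule time'
  end
end

t0 = Time.now
normalize_schedule_time(2, t0)      # a Time 2 seconds after t0
normalize_schedule_time(t0 + 5, t0) # the given Time, unchanged
begin
  normalize_schedule_time(-1)       # raises ArgumentError
rescue ArgumentError => e
  puts e.message
end
```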
### States

`ScheduledTask` mixes in the [Obligation](https://github.com/jdantonio/concurrent-ruby/blob/master/md/obligation.md) module, thus giving it "future" behavior. This includes the expected lifecycle states. `ScheduledTask` has one additional state, however. While the task (block) is being executed the state of the object will be `:in_progress`. This additional state is necessary because it has implications for task cancellation.

### Cancellation

A `:pending` task can be cancelled using the `#cancel` method. A task in any other state, including `:in_progress`, cannot be cancelled. The `#cancel` method returns a boolean indicating the success of the cancellation attempt. A cancelled `ScheduledTask` cannot be restarted. It is immutable.
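The cancel-only-while-`:pending` rule can be sketched as a tiny state machine. The state names follow the document, but the class itself is a hypothetical illustration, not the gem's code:

```ruby
# Sketch of cancel-only-while-:pending semantics.
class TinyTaskState
  attr_reader :state

  def initialize
    @state = :pending
    @mutex = Mutex.new
  end

  # Returns true only if the task was still :pending.
  def cancel
    @mutex.synchronize do
      return false unless @state == :pending
      @state = :cancelled
      true
    end
  end

  # The scheduler would call this when the scheduled time arrives.
  def start
    @mutex.synchronize do
      return false unless @state == :pending
      @state = :in_progress
      true
    end
  end
end

task = TinyTaskState.new
task.cancel #=> true  (was :pending)
task.cancel #=> false (already :cancelled; the task is immutable)
```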
## Obligation and Observation

The result of a `ScheduledTask` can be obtained either synchronously or asynchronously. `ScheduledTask` mixes in both the [Obligation](https://github.com/jdantonio/concurrent-ruby/blob/master/md/obligation.md) module and the [Observable](http://ruby-doc.org/stdlib-2.0/libdoc/observer/rdoc/Observable.html) module from the Ruby standard library. With one exception `ScheduledTask` behaves identically to [Concurrent::Future](https://github.com/jdantonio/concurrent-ruby/blob/master/md/future.md) with regard to these modules.

Unlike `Future`, however, an observer added to a `ScheduledTask` *after* the task operation has completed will *not* receive notification. The reason for this is the subtle but important difference in intent between the two abstractions. With a `Future` there is no way to know when the operation will complete. Therefore the *expected* behavior of an observer is to be notified. With a `ScheduledTask`, however, the approximate time of execution is known. It is often explicitly set as a constructor argument, and it is always available via the `#schedule_time` attribute reader. Therefore it is always possible for calling code to know whether the observer is being added prior to task execution. It is also easy to add an observer long before task execution begins (since there is never a reason to create a scheduled task that starts immediately). Consequently, the *expectation* is that the caller of `#add_observer` makes the call within an appropriate time.
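The late-observer difference between the two abstractions can be sketched with a minimal notifier. This is an illustration of the contrast described above, not either class's actual implementation:

```ruby
# Sketch: a Future-style notifier informs late observers of the cached
# result; a ScheduledTask-style notifier silently ignores them.
class Notifier
  def initialize(notify_late_observers:)
    @notify_late = notify_late_observers
    @observers = []
    @completed = false
    @result = nil
  end

  def add_observer(&block)
    if @completed
      block.call(@result) if @notify_late # Future-style behavior
    else
      @observers << block
    end
  end

  def complete(result)
    @completed = true
    @result = result
    @observers.each { |o| o.call(result) }
  end
end

future_style = Notifier.new(notify_late_observers: true)
future_style.complete(42)
future_style.add_observer { |v| puts "late observer saw #{v}" } # notified

task_style = Notifier.new(notify_late_observers: false)
task_style.complete(42)
task_style.add_observer { |v| puts "never printed" }            # ignored
```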
## Examples

Successful task execution using seconds for scheduling:

```ruby
require 'concurrent'

task = Concurrent::ScheduledTask.new(2){ 'What does the fox say?' }
task.pending?      #=> true
task.schedule_time #=> 2013-11-07 12:20:07 -0500

# wait for it...
sleep(3)

task.pending?   #=> false
task.fulfilled? #=> true
task.rejected?  #=> false
task.value      #=> 'What does the fox say?'
```

Failed task execution using a `Time` object for scheduling:

```ruby
require 'concurrent'

t = Time.now + 2
task = Concurrent::ScheduledTask.new(t){ raise StandardError.new('Call me maybe?') }
task.pending?      #=> true
task.schedule_time #=> 2013-11-07 12:22:01 -0500

# wait for it...
sleep(3)

task.pending?   #=> false
task.fulfilled? #=> false
task.rejected?  #=> true
task.value      #=> nil
task.reason     #=> #<StandardError: Call me maybe?>
```

Task execution with observation:

```ruby
require 'concurrent'

observer = Class.new{
  def update(time, value, reason)
    puts "The task completed at #{time} with value '#{value}'"
  end
}.new

task = Concurrent::ScheduledTask.new(2){ 'What does the fox say?' }
task.add_observer(observer)
task.pending?      #=> true
task.schedule_time #=> 2013-11-07 12:20:07 -0500

# wait for it...
sleep(3)

#>> The task completed at 2013-11-07 12:26:09 -0500 with value 'What does the fox say?'
```

## Copyright

*Concurrent Ruby* is Copyright © 2013 [Jerry D'Antonio](https://twitter.com/jerrydantonio). It is free software and may be redistributed under the terms specified in the LICENSE file.

## License

Released under the MIT license.

http://www.opensource.org/licenses/mit-license.php

> Permission is hereby granted, free of charge, to any person obtaining a copy
> of this software and associated documentation files (the "Software"), to deal
> in the Software without restriction, including without limitation the rights
> to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> copies of the Software, and to permit persons to whom the Software is
> furnished to do so, subject to the following conditions:
>
> The above copyright notice and this permission notice shall be included in
> all copies or substantial portions of the Software.
>
> THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
> OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
> THE SOFTWARE.
data/md/supervisor.md
DELETED
@@ -1,246 +0,0 @@
# You don't need to get no supervisor! You the supervisor today!

One of Erlang's claims to fame is its fault tolerance. Erlang systems have been known to exhibit near-mythical levels of uptime. One of the main reasons is the pervasive design philosophy of "let it fail." When errors occur, most Erlang systems simply let the failing component fail completely. The system then restarts the failed component. This "let it fail" resilience isn't an intrinsic capability of either the language or the virtual machine. It's a deliberate design philosophy. One of the key enablers of this philosophy is the [Supervisor](http://www.erlang.org/doc/man/supervisor.html) of the OTP (standard library).

The Supervisor module answers the question "Who watches the watchmen?" A single Supervisor can manage any number of workers (children). The Supervisor assumes responsibility for starting the children, stopping them, and restarting them if they fail. Several classes in this library, including `Actor` and `TimerTask`, are designed to work with `Supervisor`. Additionally, `Supervisor`s can supervise other `Supervisor`s (see *Supervision Trees* below).

The `Concurrent::Supervisor` class is a faithful and nearly complete implementation of Erlang's Supervisor module.

## Basic Supervisor Behavior

At its core a `Supervisor` instance is a very simple object. Simply create a `Supervisor`, add at least one worker using the `#add_worker` method, and start the `Supervisor` using either `#run` (blocking) or `#run!` (non-blocking). The `Supervisor` will spawn a new thread for each child and start the child on its thread. The `Supervisor` will then continuously monitor all its child threads. If any of the children crash, the `Supervisor` will restart them in accordance with its *restart strategy* (see below). Later, stop the `Supervisor` with its `#stop` method and it will gracefully stop all its children.

A `Supervisor` will also track the number of times it must restart children within a defined, sliding window of time. If the configured thresholds are exceeded (see *Intervals* below) then the `Supervisor` will assume there is a catastrophic failure (possibly within the `Supervisor` itself) and it will shut itself down. If the `Supervisor` is part of a *supervision tree* (see below) then its parent `Supervisor` will likely restart it.
```ruby
task = Concurrent::TimerTask.new{ print "[#{Time.now}] Hello world!\n" }

supervisor = Concurrent::Supervisor.new
supervisor.add_worker(task)

supervisor.run! # the #run method blocks, #run! does not
```

## Workers

Any object can be managed by a `Supervisor` so long as the class to be supervised supports the required API. A supervised object needs only support three methods:

* `#run` is a blocking call that starts the child then blocks until the child is stopped
* `#running?` is a predicate method indicating whether or not the child is running
* `#stop` gracefully stops the child if it is running

### Runnable

To facilitate the creation of supervisable classes, the `Runnable` module is provided. Simply include `Runnable` in the class and the required API methods will be provided. `Runnable` also provides several lifecycle methods that may be overridden by the including class. At a minimum the `#on_task` method *must* be overridden. `Runnable` provides an infinite loop that starts when either the `#run` or `#run!` method is called. The subclass `#on_task` method is called once in every iteration. The overridden method should provide some sort of blocking behavior, otherwise the run loop may monopolize the processor and spike processor utilization.

The following optional lifecycle methods are also provided:

* `#on_run` is called once when the object is started via the `#run` or `#run!` method, but before the `#on_task` method is first called
* `#on_stop` is called once when the `#stop` method is called, after the last call to `#on_task`
```ruby
class Echo
  include Concurrent::Runnable

  def initialize
    @queue = Queue.new
  end

  def post(message)
    @queue.push(message)
  end

  protected

  def on_task
    message = @queue.pop
    print "#{message}\n"
  end
end

echo = Echo.new
supervisor = Concurrent::Supervisor.new
supervisor.add_worker(echo)
supervisor.run!
```
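The run loop that `Runnable` supplies can be sketched in plain Ruby. This illustrates the documented lifecycle (optional `#on_run` once, `#on_task` every iteration, optional `#on_stop` once), and is not the module's actual source:

```ruby
# Sketch of a Runnable-style lifecycle.
module TinyRunnable
  def run
    @running = true
    on_run if respond_to?(:on_run, true)
    on_task while @running
    on_stop if respond_to?(:on_stop, true)
  end

  def stop
    @running = false
  end

  def running?
    !!@running
  end
end

class Countdown
  include TinyRunnable

  def initialize(n)
    @n = n
  end

  private

  def on_task
    @n -= 1
    stop if @n <= 0 # a real worker would block (e.g. on a queue) instead
  end
end

c = Countdown.new(3)
c.run            # blocks until the countdown stops itself
c.running? #=> false
```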
## Supervisor Configuration

A newly created `Supervisor` will be configured with a reasonable set of options that should suffice for most purposes. In many cases no additional configuration will be required. When more granular control is required, a `Supervisor` may be given several configuration options during initialization. Additionally, a few per-worker configuration options may be passed during the call to `#add_worker`. Once a `Supervisor` is created and the workers are added, no additional configuration is possible.
### Intervals

A `Supervisor` monitors its children and conducts triage operations based on several configurable intervals:

* `:monitor_interval` specifies the number of seconds between health checks of the workers. The higher the interval, the longer a particular worker may be dead before being restarted. The default is 1 second.
* `:max_restart` specifies the number of times (in total) the `Supervisor` may restart children before it assumes there is a catastrophic failure and shuts itself down. The default is 5 restarts.
* `:max_time` is the time interval over which `:max_restart` is tracked. Since `Supervisor` is intended to be used in applications that may run forever, the `:max_restart` count must be timeboxed to prevent erroneous `Supervisor` shutdown. The default is 60 seconds.
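The `:max_restart`/`:max_time` sliding window can be sketched as a small counter. The option names follow the document; the class itself is a hypothetical illustration:

```ruby
# Sketch: count restarts inside a sliding window of max_time seconds;
# exceeding max_restart signals catastrophic failure.
class RestartWindow
  def initialize(max_restart: 5, max_time: 60)
    @max_restart = max_restart
    @max_time = max_time
    @restarts = [] # timestamps of recent restarts
  end

  # Record a restart at time `now`; returns false when the threshold
  # has been exceeded and the supervisor should shut itself down.
  def record_restart(now = Process.clock_gettime(Process::CLOCK_MONOTONIC))
    @restarts << now
    @restarts.reject! { |t| t < now - @max_time } # drop aged-out entries
    @restarts.size <= @max_restart
  end
end

window = RestartWindow.new(max_restart: 2, max_time: 60)
window.record_restart(0)  #=> true
window.record_restart(10) #=> true
window.record_restart(20) #=> false (3 restarts within 60 seconds)
window.record_restart(90) #=> true  (the older restarts aged out)
```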
### Restart Strategy

When a child thread dies, the `Supervisor` will restart it, and possibly other children, with the expectation that the workers are capable of cleaning themselves up and running again. The `Supervisor` will call each targeted worker's `#stop` method, kill the worker's thread, spawn a new thread, and call the worker's `#run` method.

* `:one_for_one` When this restart strategy is set, the `Supervisor` will only restart the worker thread that has died. It will not restart any of the other children. This is the default restart strategy.
* `:one_for_all` When this restart strategy is set, the `Supervisor` will restart all children when any one child dies. All workers will be stopped in the order they were originally added to the `Supervisor`. Once all children have been stopped they will all be started again in the same order.
* `:rest_for_one` This restart strategy assumes that the order the workers were added to the `Supervisor` is meaningful. When one child dies, all the downstream children (children added to the `Supervisor` after the dead worker) will be restarted. The `Supervisor` will begin by calling the `#stop` method on the dead worker and all downstream workers. The `Supervisor` will then iterate over all stopped workers and restart each by creating a new thread then calling the worker's `#run` method.

When a restart is initiated under any strategy other than `:one_for_one`, the `:max_restart` value will only be incremented by one, regardless of how many children are restarted.
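Which workers each strategy touches can be sketched as a pure function over the ordered worker list. A hypothetical illustration of the rules above:

```ruby
# Sketch: given the ordered worker list and the index of the dead
# worker, return the indices each strategy restarts.
def restart_targets(strategy, workers, dead_index)
  case strategy
  when :one_for_one  then [dead_index]
  when :one_for_all  then (0...workers.size).to_a
  when :rest_for_one then (dead_index...workers.size).to_a
  else raise ArgumentError, "unknown strategy: #{strategy}"
  end
end

workers = %i[a b c d]
restart_targets(:one_for_one, workers, 1)  #=> [1]
restart_targets(:one_for_all, workers, 1)  #=> [0, 1, 2, 3]
restart_targets(:rest_for_one, workers, 1) #=> [1, 2, 3]
```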
### Worker Restart Option

When a worker dies, the default behavior of the `Supervisor` is to restart one or more workers according to the restart strategy defined when the `Supervisor` is created (see above). This behavior can be modified on a per-worker basis using the `:restart` option when calling `#add_worker`. Three worker `:restart` options are supported:

* `:permanent` means the worker is intended to run forever and will always be restarted (this is the default)
* `:temporary` workers are expected to stop on their own as a normal part of their operation and will only be restarted on an abnormal exit
* `:transient` workers will never be restarted
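The per-worker restart decision, following the definitions above exactly as this document states them, can be sketched as a small predicate (illustrative only):

```ruby
# Sketch: should a worker be restarted, per the :restart option above?
def restart_worker?(restart_option, abnormal_exit)
  case restart_option
  when :permanent then true
  when :temporary then abnormal_exit # only on an abnormal exit
  when :transient then false
  else raise ArgumentError, "unknown restart option: #{restart_option}"
  end
end

restart_worker?(:permanent, false) #=> true
restart_worker?(:temporary, false) #=> false
restart_worker?(:temporary, true)  #=> true
restart_worker?(:transient, true)  #=> false
```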
### Worker Type

Every worker added to a `Supervisor` is of either type `:worker` or `:supervisor`. The default value is `:worker`. Currently this type makes no functional difference. It is purely informational.

## Supervision Trees

One of the most powerful aspects of Erlang's supervisor module is its ability to supervise other supervisors. This allows for the creation of deep, robust *supervision trees*. Workers can be grouped under multiple bottom-level `Supervisor`s. Each of these `Supervisor`s can be configured according to the needs of its workers. These multiple `Supervisor`s can be added as children to another `Supervisor`. The root `Supervisor` can then start the entire tree via trickle-down (start its children, which start their children, and so on). The root `Supervisor` then monitors its child `Supervisor`s, and so on.

Supervision trees are the main reason that a `Supervisor` will shut itself down if its `:max_restart`/`:max_time` threshold is exceeded. An isolated `Supervisor` will simply shut down forever. A `Supervisor` that is part of a supervision tree will shut itself down and let its parent `Supervisor` manage the restart.
## Examples

```ruby
QUERIES = %w[YAHOO Microsoft google]

class FinanceActor < Concurrent::Actor
  def act(query)
    finance = Finance.new(query)
    print "[#{Time.now}] RECEIVED '#{query}' to #{self} returned #{finance.update.suggested_symbols}\n\n"
  end
end

financial, pool = FinanceActor.pool(5)

timer_proc = proc do
  query = QUERIES[rand(QUERIES.length)]
  financial.post(query)
  print "[#{Time.now}] SENT '#{query}' from #{self} to worker pool\n\n"
end

t1 = Concurrent::TimerTask.new(execution_interval: rand(5)+1, &timer_proc)
t2 = Concurrent::TimerTask.new(execution_interval: rand(5)+1, &timer_proc)

overlord = Concurrent::Supervisor.new

overlord.add_worker(t1)
overlord.add_worker(t2)
pool.each{|actor| overlord.add_worker(actor)}

overlord.run!
```

## Additional Reading

* [Supervisor Module](http://www.erlang.org/doc/man/supervisor.html)
* [Supervisor Behaviour](http://www.erlang.org/doc/design_principles/sup_princ.html)
* [Who Supervises The Supervisors?](http://learnyousomeerlang.com/supervisors)
* [OTP Design Principles](http://www.erlang.org/doc/design_principles/des_princ.html)
data/md/thread_pool.md
DELETED
@@ -1,225 +0,0 @@
# We're Going to Need a Bigger Boat

Thread pools are neither a new idea nor an implementation of the actor pattern. Nevertheless, thread pools are still an extremely relevant concurrency tool. Every time a thread is created and subsequently destroyed there is overhead. Creating a pool of reusable worker threads and repeatedly dipping into the pool can have huge performance benefits for a long-running application such as a service. Ruby's blocks provide an excellent mechanism for passing a generic work request to a thread, making Ruby an excellent candidate language for thread pools.

The inspiration for thread pools in this library is Java's [`java.util.concurrent`](http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/package-summary.html) implementation of thread pools. The `java.util.concurrent` library is a well-designed, stable, scalable, and battle-tested concurrency library. It provides three different implementations of thread pools. One of those implementations is simply a special case of the first and doesn't offer much advantage in Ruby, so only the first two (`FixedThreadPool` and `CachedThreadPool`) are implemented here.

Thread pools share common behavior defined by `:thread_pool`. The most important method is `post` (aliased with the left-shift operator `<<`). The `post` method sends a block to the pool for future processing.

A running thread pool can be shut down in an orderly or disruptive manner. Once a thread pool has been shut down it cannot be started again. The `shutdown` method can be used to initiate an orderly shutdown of the thread pool. All new `post` calls will reject the given block and immediately return `false`. Threads in the pool will continue to process all in-progress work and will process all tasks still in the queue. The `kill` method can be used to immediately shut down the pool. All new `post` calls will reject the given block and immediately return `false`. Ruby's `Thread.kill` will be called on all threads in the pool, aborting all in-progress work. Tasks in the queue will be discarded.

A client thread can choose to block and wait for pool shutdown to complete. This is useful when shutting down an application and ensuring the app doesn't exit before pool processing is complete. The method `wait_for_termination` will block for a maximum of the given number of seconds, then return `true` if shutdown completed successfully or `false` otherwise. When the timeout value is `nil` the call will block indefinitely. Calling `wait_for_termination` on a stopped thread pool will immediately return `true`.

Predicate methods are provided to describe the current state of the thread pool. The provided methods are `running?`, `shutdown?`, and `killed?`. The `shutdown?` method will return `true` regardless of whether the pool was shut down with `shutdown` or `kill`.
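The behavior described above (posting blocks through a shared queue, and an orderly shutdown that drains remaining work) can be sketched with a minimal pool built from Ruby's core `Queue`. This is an illustration of the idea, not the gem's implementation:

```ruby
# Minimal fixed-size pool sketch: workers pull blocks off a shared
# queue; shutdown drains the queue, then stops the workers.
class MiniFixedPool
  def initialize(size)
    @queue = Queue.new
    @running = true
    @threads = Array.new(size) do
      Thread.new do
        while (job = @queue.pop) # a nil job tells the worker to exit
          job.call
        end
      end
    end
  end

  # Reject work after shutdown, mirroring the documented contract.
  def post(&block)
    return false unless @running
    @queue << block
    true
  end
  alias_method :<<, :post

  def shutdown
    @running = false
    @threads.size.times { @queue << nil } # one poison pill per worker
  end

  def wait_for_termination
    @threads.each(&:join)
    true
  end
end

results = Queue.new
pool = MiniFixedPool.new(3)
10.times { |i| pool.post { results << i * i } }
pool.shutdown
pool.wait_for_termination
pool.post { } #=> false (pool is shut down)
```

Queued work posted before `shutdown` still runs because the poison pills sit behind it in the FIFO queue, which is the same draining behavior the documentation describes for `shutdown`.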
## FixedThreadPool

From the docs:

> Creates a thread pool that reuses a fixed number of threads operating off a shared unbounded queue. At any point, at most `nThreads` threads will be active processing tasks. If additional tasks are submitted when all threads are active, they will wait in the queue until a thread is available. If any thread terminates due to a failure during execution prior to shutdown, a new one will take its place if needed to execute subsequent tasks. The threads in the pool will exist until it is explicitly `shutdown`.

### Examples

```ruby
require 'concurrent'

pool = Concurrent::FixedThreadPool.new(5)

pool.size      #=> 5
pool.running?  #=> true
pool.status    #=> ["sleep", "sleep", "sleep", "sleep", "sleep"]

pool.post(1,2,3){|*args| sleep(10) }
pool << proc{ sleep(10) }
pool.size      #=> 5

sleep(11)
pool.status    #=> ["sleep", "sleep", "sleep", "sleep", "sleep"]

pool.shutdown  #=> :shuttingdown
pool.status    #=> []
pool.wait_for_termination

pool.size      #=> 0
pool.status    #=> []
pool.shutdown? #=> true
```
## CachedThreadPool

From the docs:

> Creates a thread pool that creates new threads as needed, but will reuse previously constructed threads when they are available. These pools will typically improve the performance of programs that execute many short-lived asynchronous tasks. Calls to [`post`] will reuse previously constructed threads if available. If no existing thread is available, a new thread will be created and added to the pool. Threads that have not been used for sixty seconds are terminated and removed from the cache. Thus, a pool that remains idle for long enough will not consume any resources. Note that pools with similar properties but different details (for example, timeout parameters) may be created using [`CachedThreadPool`] constructors.

### Examples

```ruby
require 'concurrent'

pool = Concurrent::CachedThreadPool.new

pool.size      #=> 0
pool.running?  #=> true
pool.status    #=> []

pool.post(1,2,3){|*args| sleep(10) }
pool << proc{ sleep(10) }
pool.size      #=> 2
pool.status    #=> [[:working, nil, "sleep"], [:working, nil, "sleep"]]

sleep(11)
pool.status    #=> [[:idle, 23, "sleep"], [:idle, 23, "sleep"]]

sleep(60)
pool.size      #=> 0
pool.status    #=> []

pool.shutdown  #=> :shuttingdown
pool.status    #=> []
pool.wait_for_termination

pool.size      #=> 0
pool.status    #=> []
pool.shutdown? #=> true
```
## Global Thread Pool
|
120
|
-
|
121
|
-
For efficiency, of the aforementioned concurrency methods (agents, futures, promises, and
|
122
|
-
goroutines) run against a global thread pool. This pool can be directly accessed through the
|
123
|
-
`$GLOBAL_THREAD_POOL` global variable. Generally, this pool should not be directly accessed.
|
124
|
-
Use the other concurrency features instead.
|
125
|
-
|
126
|
-
By default the global thread pool is a `NullThreadPool`. This isn't a real thread pool at all.
|
127
|
-
It's simply a proxy for creating new threads on every post to the pool. I couldn't decide which
|
128
|
-
of the other threads pools and what configuration would be the most universally appropriate so
|
129
|
-
I punted. If you understand thread pools then you know enough to make your own choice. That's
|
130
|
-
why the global thread pool can be changed.
|
131
|
-
|
132
|
-
### Changing the Global Thread Pool
|
133
|
-
|
134
|
-
It is possible to change the global thread pool. Simply assign a new pool to the `$GLOBAL_THREAD_POOL`
|
135
|
-
variable:
|
136
|
-
|
137
|
-
```ruby
|
138
|
-
$GLOBAL_THREAD_POOL = Concurrent::FixedThreadPool.new(10)
|
139
|
-
```

Ideally this should be done at application startup, before any concurrency functions are called.
If circumstances warrant, the global thread pool can also be changed at runtime. Just make sure to
shut down the old global thread pool so that no tasks are lost:

```ruby
$GLOBAL_THREAD_POOL = Concurrent::FixedThreadPool.new(10)

# do stuff...

old_global_pool = $GLOBAL_THREAD_POOL
$GLOBAL_THREAD_POOL = Concurrent::FixedThreadPool.new(10)
old_global_pool.shutdown
```
### NullThreadPool

If for some reason an application would be better served by *not* having a global thread pool, the
`NullThreadPool` is provided. The `NullThreadPool` is compatible with the global thread pool but
it is not an actual thread pool. Instead it spawns a new thread on every call to the `post` method.
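
The proxy idea can be sketched in a few lines. `NullPoolSketch` below is a hypothetical illustration of the technique, not the library's implementation:

```ruby
# Hypothetical sketch of the NullThreadPool idea: a "pool" that simply
# spawns a brand-new thread for every posted task; nothing is pooled.
class NullPoolSketch
  def post(*args, &block)
    raise ArgumentError, 'no block given' unless block_given?
    Thread.new { block.call(*args) }
    true
  end

  # Support the << form used elsewhere in this README.
  def <<(block)
    post(&block)
    self
  end
end

pool = NullPoolSketch.new
results = Queue.new
pool.post(1, 2) { |a, b| results << a + b }
pool << proc { results << :done }
vals = 2.times.map { results.pop } # blocks until both tasks finish
```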
### EventMachine

The [EventMachine](http://rubyeventmachine.com/) library (source [online](https://github.com/eventmachine/eventmachine))
is an awesome library for creating evented applications. EventMachine provides its own thread pool
and the authors recommend using their pool rather than spawning threads directly with Ruby's `Thread`. No sweat,
`concurrent-ruby` is fully compatible with EventMachine. Simply require `eventmachine`
*before* requiring `concurrent-ruby`, then replace the global thread pool with an instance
of `EventMachineDeferProxy`:

```ruby
require 'eventmachine' # do this FIRST
require 'concurrent'

$GLOBAL_THREAD_POOL = EventMachineDeferProxy.new
```
## Per-class Thread Pools

Many of the classes in this library use the global thread pool rather than creating new threads.
Classes such as `Agent`, `Defer`, and others follow this pattern. There may be cases where a
program would be better served if one or more of these classes used a different thread pool.
All classes that use the global thread pool support a class-level `thread_pool` attribute accessor.
This property defaults to the global thread pool but can be changed at any time. Once changed, all
new instances of that class will use the new thread pool.

```ruby
Concurrent::Agent.thread_pool == $GLOBAL_THREAD_POOL #=> true

$GLOBAL_THREAD_POOL = Concurrent::FixedThreadPool.new(10) #=> #<Concurrent::FixedThreadPool:0x007fe31130f1f0 ...

Concurrent::Agent.thread_pool == $GLOBAL_THREAD_POOL #=> false

Concurrent::Defer.thread_pool = Concurrent::CachedThreadPool.new #=> #<Concurrent::CachedThreadPool:0x007fef1c6b6b48 ...
Concurrent::Defer.thread_pool == Concurrent::Agent.thread_pool #=> false
Concurrent::Defer.thread_pool == $GLOBAL_THREAD_POOL #=> false
```
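
The class-level accessor pattern described above can be sketched as follows. `Worker` and `ImmediatePool` are hypothetical illustrations, not classes from this library:

```ruby
# Hypothetical pool that runs each task inline on the calling thread,
# used here only to keep the sketch self-contained.
class ImmediatePool
  def post(*args, &block)
    block.call(*args)
  end
end

$GLOBAL_THREAD_POOL = ImmediatePool.new

# Hypothetical class following the pattern: a class-level thread_pool
# accessor that defaults to the global pool but can be reassigned.
class Worker
  class << self
    attr_writer :thread_pool

    def thread_pool
      @thread_pool ||= $GLOBAL_THREAD_POOL
    end
  end

  def run(&block)
    self.class.thread_pool.post(&block)
  end
end

Worker.thread_pool.equal?($GLOBAL_THREAD_POOL) #=> true
Worker.thread_pool = ImmediatePool.new
Worker.thread_pool.equal?($GLOBAL_THREAD_POOL) #=> false
```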
## Copyright

*Concurrent Ruby* is Copyright © 2013 [Jerry D'Antonio](https://twitter.com/jerrydantonio).
It is free software and may be redistributed under the terms specified in the LICENSE file.
## License

Released under the MIT license.

http://www.opensource.org/licenses/mit-license.php

> Permission is hereby granted, free of charge, to any person obtaining a copy
> of this software and associated documentation files (the "Software"), to deal
> in the Software without restriction, including without limitation the rights
> to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> copies of the Software, and to permit persons to whom the Software is
> furnished to do so, subject to the following conditions:
>
> The above copyright notice and this permission notice shall be included in
> all copies or substantial portions of the Software.
>
> THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
> OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
> THE SOFTWARE.