functional-ruby 0.6.0 → 0.7.0
- checksums.yaml +4 -4
- data/README.md +14 -126
- data/lib/functional.rb +4 -1
- data/lib/functional/utilities.rb +46 -0
- data/lib/functional/version.rb +1 -1
- data/lib/functional_ruby.rb +1 -1
- data/md/utilities.md +2 -0
- data/spec/functional/behavior_spec.rb +2 -2
- data/spec/functional/pattern_matching_spec.rb +2 -2
- data/spec/functional/utilities_spec.rb +131 -43
- data/spec/spec_helper.rb +1 -3
- metadata +3 -40
- data/lib/functional/agent.rb +0 -130
- data/lib/functional/all.rb +0 -13
- data/lib/functional/cached_thread_pool.rb +0 -122
- data/lib/functional/concurrency.rb +0 -35
- data/lib/functional/core.rb +0 -2
- data/lib/functional/event.rb +0 -53
- data/lib/functional/event_machine_defer_proxy.rb +0 -23
- data/lib/functional/fixed_thread_pool.rb +0 -89
- data/lib/functional/future.rb +0 -42
- data/lib/functional/global_thread_pool.rb +0 -3
- data/lib/functional/obligation.rb +0 -121
- data/lib/functional/promise.rb +0 -194
- data/lib/functional/thread_pool.rb +0 -61
- data/md/concurrency.md +0 -465
- data/md/future.md +0 -32
- data/md/obligation.md +0 -32
- data/md/promise.md +0 -220
- data/spec/functional/agent_spec.rb +0 -405
- data/spec/functional/cached_thread_pool_spec.rb +0 -112
- data/spec/functional/concurrency_spec.rb +0 -55
- data/spec/functional/event_machine_defer_proxy_spec.rb +0 -246
- data/spec/functional/event_spec.rb +0 -114
- data/spec/functional/fixed_thread_pool_spec.rb +0 -84
- data/spec/functional/future_spec.rb +0 -115
- data/spec/functional/obligation_shared.rb +0 -121
- data/spec/functional/promise_spec.rb +0 -310
- data/spec/functional/thread_pool_shared.rb +0 -209
data/lib/functional/thread_pool.rb
DELETED
@@ -1,61 +0,0 @@
-require 'functional/behavior'
-require 'functional/event'
-
-behavior_info(:thread_pool,
-              running?: 0,
-              shutdown?: 0,
-              killed?: 0,
-              shutdown: 0,
-              kill: 0,
-              size: 0,
-              wait_for_termination: -1,
-              post: -1,
-              :<< => 1,
-              status: 0)
-
-behavior_info(:global_thread_pool,
-              post: -1,
-              :<< => 1)
-
-module Functional
-
-  class ThreadPool
-
-    def initialize
-      @status = :running
-      @queue = Queue.new
-      @termination = Event.new
-      @pool = []
-    end
-
-    def running?
-      return @status == :running
-    end
-
-    def shutdown?
-      return ! running?
-    end
-
-    def killed?
-      return @status == :killed
-    end
-
-    def shutdown
-      @pool.size.times{ @queue << :stop }
-      @status = :shuttingdown
-    end
-
-    def wait_for_termination(timeout = nil)
-      if shutdown? || killed?
-        return true
-      else
-        return @termination.wait(timeout)
-      end
-    end
-
-    def <<(block)
-      self.post(&block)
-      return self
-    end
-  end
-end
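The `behavior_info` calls in the removed file above declare method contracts as `method => arity` pairs, with `-1` meaning variable arity. For illustration only, here is a minimal sketch of how such a contract can be consumed, assuming the `behavior`/`behaves_as?` API from `functional/behavior` (which remains in this release); the `PerJobPool` class is hypothetical and not part of the gem:

```ruby
require 'functional/behavior'

# Contract from the removed file: a global pool must respond to
# post (any arity, hence -1) and << (exactly one argument).
behavior_info(:global_thread_pool,
              post: -1,
              :<< => 1)

# Hypothetical pool that runs each posted block on its own Thread.
class PerJobPool
  behavior(:global_thread_pool)

  def post(*args, &block)
    Thread.new{ block.call(*args) }
    true
  end

  def <<(block)
    post(&block)
    self
  end
end

PerJobPool.new.behaves_as?(:global_thread_pool) #=> true
```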
data/md/concurrency.md
DELETED
@@ -1,465 +0,0 @@
-# Go, Clojure, and JavaScript-inspired Concurrency
-
-The old-school "lock and synchronize" approach to concurrency is dead. The future of concurrency
-is asynchronous. Send out a bunch of independent [actors](http://en.wikipedia.org/wiki/Actor_model)
-to do your bidding and process the results when you are ready. Although the idea of the concurrent
-actor originated in the early 1970's it has only recently started catching on. Although there is
-no one "true" actor implementation (what *exactly* is "object oriented," what *exactly* is
-"functional programming"), many modern programming languages implement variations on the actor
-theme. This library implements a few of the most interesting and useful of those variations.
-
-Remember, *there is no silver bullet in concurrent programming.* Concurrency is hard. Very hard.
-These tools will help ease the burden, but at the end of the day it is essential that you
-*know what you are doing.*
-
-## Agent
-
-Agents are inspired by [Clojure's](http://clojure.org/) [agent](http://clojure.org/agents) keyword.
-An agent is a single atomic value that represents an identity. The current value
-of the agent can be requested at any time (`deref`). Each agent has a work queue and operates on
-the global thread pool (see below). Consumers can `post` code blocks to the
-agent. The code block (function) will receive the current value of the agent as its sole
-parameter. The return value of the block will become the new value of the agent. Agents support
-two error handling modes: fail and continue. A good example of an agent is a shared incrementing
-counter, such as the score in a video game.
-
-An agent must be initialized with an initial value. This value is always accessible via the `value`
-(or `deref`) methods. Code blocks sent to the agent will be processed in the order received. As
-each block is processed the current value is updated with the result from the block. This update
-is an atomic operation so a `deref` will never block and will always return the current value.
-
-When an agent is created it may be given an optional `validate` block and zero or more `rescue`
-blocks. When a new value is calculated the value will be checked against the validator, if present.
-If the validator returns `true` the new value will be accepted. If it returns `false` it will be
-rejected. If a block raises an exception during execution the list of `rescue` blocks will be
-searched in order until one matching the current exception is found. That `rescue` block will
-then be called and passed the exception object. If no matching `rescue` block is found, or none
-were configured, then the exception will be suppressed.
-
-### Examples
-
-A simple example:
-
-```ruby
-require 'functional/agent'
-# or
-require 'functional/concurrency'
-
-score = Functional::Agent.new(10)
-score.value #=> 10
-
-score << proc{|current| current + 100 }
-sleep(0.1)
-score.value #=> 110
-
-score << proc{|current| current * 2 }
-sleep(0.1)
-deref score #=> 220
-
-score << proc{|current| current - 50 }
-sleep(0.1)
-score.value #=> 170
-```
-
-With validation and error handling:
-
-```ruby
-score = agent(0).validate{|value| value <= 1024 }.
-                 rescue(NoMethodError){|ex| puts "Bam!" }.
-                 rescue(ArgumentError){|ex| puts "Pow!" }.
-                 rescue{|ex| puts "Boom!" }
-score.value #=> 0
-
-score << proc{|current| current + 2048 }
-sleep(0.1)
-score.value #=> 0
-
-score << proc{|current| raise ArgumentError }
-sleep(0.1)
-#=> puts "Pow!"
-score.value #=> 0
-
-score << proc{|current| current + 100 }
-sleep(0.1)
-score.value #=> 100
-```
-
-## Future
-
-Futures are inspired by [Clojure's](http://clojure.org/) [future](http://clojuredocs.org/clojure_core/clojure.core/future) keyword.
-A future represents a promise to complete an action at some time in the future. The action is atomic and permanent.
-The idea behind a future is to send an action off for asynchronous operation, do other stuff, then return and
-retrieve the result of the async operation at a later time. Futures run on the global thread pool (see below).
-
-Futures have three possible states: *pending*, *rejected*, and *fulfilled*. When a future is created it is set
-to *pending* and will remain in that state until processing is complete. A completed future is either *rejected*,
-indicating that an exception was thrown during processing, or *fulfilled*, indicating success. If a future is
-*fulfilled* its `value` will be updated to reflect the result of the operation. If *rejected* the `reason` will
-be updated with a reference to the thrown exception. The predicate methods `pending?`, `rejected?`, and `fulfilled?`
-can be called at any time to obtain the state of the future, as can the `state` method, which returns a symbol.
-
-Retrieving the value of a future is done through the `value` (alias: `deref`) method. Obtaining the value of
-a future is a potentially blocking operation. When a future is *rejected* a call to `value` will return `nil`
-immediately. When a future is *fulfilled* a call to `value` will immediately return the current value.
-When a future is *pending* a call to `value` will block until the future is either *rejected* or *fulfilled*.
-A *timeout* value can be passed to `value` to limit how long the call will block. If `nil` the call will
-block indefinitely. If `0` the call will not block. Any other integer or float value will indicate the
-maximum number of seconds to block.
-
-### Examples
-
-A fulfilled example:
-
-```ruby
-require 'functional/future'
-# or
-require 'functional/concurrency'
-
-count = Functional::Future.new{ sleep(10); 10 }
-count.state #=> :pending
-count.pending? #=> true
-
-# do stuff...
-
-count.value(0) #=> nil (does not block)
-
-count.value #=> 10 (after blocking)
-count.state #=> :fulfilled
-count.fulfilled? #=> true
-deref count #=> 10
-```
-
-A rejected example:
-
-```ruby
-count = future{ sleep(10); raise StandardError.new("Boom!") }
-count.state #=> :pending
-pending?(count) #=> true
-
-deref(count) #=> nil (after blocking)
-rejected?(count) #=> true
-count.reason #=> #<StandardError: Boom!>
-```
-
-## Promise
-
-A promise is the most powerful and versatile of the concurrency objects in this library.
-Promises are inspired by the JavaScript [Promises/A](http://wiki.commonjs.org/wiki/Promises/A)
-and [Promises/A+](http://promises-aplus.github.io/promises-spec/) specifications.
-
-> A promise represents the eventual value returned from the single completion of an operation.
-
-Promises are similar to futures and share many of the same behaviours. Promises are far more robust,
-however. Promises can be chained in a tree structure where each promise may have zero or more children.
-Promises are chained using the `then` method. The result of a call to `then` is always another promise.
-Promises are resolved asynchronously in the order they are added to the tree. Parents are guaranteed
-to be resolved before their children. The result of each promise is passed to each of its children
-upon resolution. When a promise is rejected all its children will be summarily rejected.
-
-Promises have three possible states: *pending*, *rejected*, and *fulfilled*. When a promise is created it is set
-to *pending* and will remain in that state until processing is complete. A completed promise is either *rejected*,
-indicating that an exception was thrown during processing, or *fulfilled*, indicating success. If a promise is
-*fulfilled* its `value` will be updated to reflect the result of the operation. If *rejected* the `reason` will
-be updated with a reference to the thrown exception. The predicate methods `pending?`, `rejected?`, and `fulfilled?`
-can be called at any time to obtain the state of the promise, as can the `state` method, which returns a symbol.
-
-Retrieving the value of a promise is done through the `value` (alias: `deref`) method. Obtaining the value of
-a promise is a potentially blocking operation. When a promise is *rejected* a call to `value` will return `nil`
-immediately. When a promise is *fulfilled* a call to `value` will immediately return the current value.
-When a promise is *pending* a call to `value` will block until the promise is either *rejected* or *fulfilled*.
-A *timeout* value can be passed to `value` to limit how long the call will block. If `nil` the call will
-block indefinitely. If `0` the call will not block. Any other integer or float value will indicate the
-maximum number of seconds to block.
-
-### Examples
-
-A simple example:
-
-```ruby
-require 'functional/promise'
-# or
-require 'functional/concurrency'
-
-p = Functional::Promise.new{ sleep(1); "Hello world!" }
-p.value(0) #=> nil (does not block)
-p.value #=> "Hello world!" (after blocking)
-p.state #=> :fulfilled
-```
-
-An example with chaining:
-
-```ruby
-p = promise("Jerry", "D'Antonio"){|a, b| "#{a} #{b}" }.
-    then{|result| sleep(1); result}.
-    then{|result| "Hello #{result}." }.
-    then{|result| "#{result} Would you like to play a game?"}
-
-p.pending? #=> true
-p.value(0) #=> nil (does not block)
-
-p.value #=> "Hello Jerry D'Antonio. Would you like to play a game?"
-```
-
-An example with error handling:
-
-```ruby
-@expected = nil
-p = promise{ raise ArgumentError }.
-    rescue(LoadError){|ex| @expected = 1 }.
-    rescue(ArgumentError){|ex| @expected = 2 }.
-    rescue(Exception){|ex| @expected = 3 }
-
-sleep(0.1)
-
-@expected #=> 2
-pending?(p) #=> false
-fulfilled?(p) #=> false
-rejected?(p) #=> true
-
-deref(p) #=> nil
-p.reason #=> #<ArgumentError: ArgumentError>
-```
-
-A complex example with chaining and error handling:
-
-```ruby
-p = promise("Jerry", "D'Antonio"){|a, b| "#{a} #{b}" }.
-    then{|result| sleep(0.5); result}.
-    rescue(ArgumentError){|ex| puts "Pow!" }.
-    then{|result| "Hello #{result}." }.
-    rescue(NoMethodError){|ex| puts "Bam!" }.
-    rescue(ArgumentError){|ex| puts "Zap!" }.
-    then{|result| raise StandardError.new("Boom!") }.
-    rescue{|ex| puts ex.message }.
-    then{|result| "#{result} Would you like to play a game?"}
-
-sleep(1)
-
-p.value #=> nil
-p.state #=> :rejected
-p.reason #=> #<StandardError: Boom!>
-```
-
-## Goroutine
-
-A goroutine is the simplest of the concurrency utilities in this library. It is inspired by
-[Go's](http://golang.org/) [goroutines](https://gobyexample.com/goroutines) and
-[Erlang's](http://www.erlang.org/) [spawn](http://erlangexamples.com/tag/spawn/) keyword. The
-`go` function is nothing more than a simple way to send a block to the global thread pool (see below)
-for processing.
-
-### Examples
-
-```ruby
-require 'functional/concurrency'
-
-@expected = nil
-
-go(1, 2, 3){|a, b, c| sleep(1); @expected = [c, b, a] }
-
-sleep(0.1)
-@expected #=> nil
-
-sleep(2)
-@expected #=> [3, 2, 1]
-```
-
-## Thread Pools
-
-Thread pools are neither a new idea nor an implementation of the actor pattern. Nevertheless, thread
-pools are still an extremely relevant concurrency tool. Every time a thread is created then
-subsequently destroyed there is overhead. Creating a pool of reusable worker threads then repeatedly
-dipping into the pool can have huge performance benefits for a long-running application like a service.
-Ruby's blocks provide an excellent mechanism for passing a generic work request to a thread, making
-Ruby an excellent candidate language for thread pools.
-
-The inspiration for thread pools in this library is Java's `java.util.concurrent` implementation of
-thread pools. The `java.util.concurrent` library is a well-designed, stable,
-scalable, and battle-tested concurrency library. It provides three different implementations of thread
-pools. One of those implementations is simply a special case of the first and doesn't offer much
-advantage in Ruby, so only the first two (`FixedThreadPool` and `CachedThreadPool`) are implemented here.
-
-Thread pools share common `behavior` defined by `:thread_pool`. The most important method is `post`
-(aliased with the left-shift operator `<<`). The `post` method sends a block to the pool for future
-processing.
-
-A running thread pool can be shut down in an orderly or disruptive manner. Once a thread pool has been
-shut down it cannot be started again. The `shutdown` method can be used to initiate an orderly shutdown
-of the thread pool. All new `post` calls will reject the given block and immediately return `false`.
-Threads in the pool will continue to process all in-progress work and will process all tasks still in
-the queue. The `kill` method can be used to immediately shut down the pool. All new `post` calls will
-reject the given block and immediately return `false`. Ruby's `Thread.kill` will be called on all threads
-in the pool, aborting all in-progress work. Tasks in the queue will be discarded.
-
-A client thread can choose to block and wait for pool shutdown to complete. This is useful when shutting
-down an application and ensuring the app doesn't exit before pool processing is complete. The method
-`wait_for_termination` will block for a maximum of the given number of seconds then return `true` if
-shutdown completed successfully or `false` if not. When the timeout value is `nil` the call will block
-indefinitely. Calling `wait_for_termination` on a stopped thread pool will immediately return `true`.
-
-Predicate methods are provided to describe the current state of the thread pool. Provided methods are
-`running?`, `shutdown?`, and `killed?`. The `shutdown?` method will return `true` regardless of whether
-the pool was shut down with `shutdown` or `kill`.
-
-### FixedThreadPool
-
-From the docs:
-
-> Creates a thread pool that reuses a fixed number of threads operating off a shared unbounded queue.
-> At any point, at most `nThreads` threads will be active processing tasks. If additional tasks are submitted
-> when all threads are active, they will wait in the queue until a thread is available. If any thread terminates
-> due to a failure during execution prior to shutdown, a new one will take its place if needed to execute
-> subsequent tasks. The threads in the pool will exist until it is explicitly `shutdown`.
-
-#### Examples
-
-```ruby
-require 'functional/fixed_thread_pool'
-# or
-require 'functional/concurrency'
-
-pool = Functional::FixedThreadPool.new(5)
-
-pool.size #=> 5
-pool.running? #=> true
-pool.status #=> ["sleep", "sleep", "sleep", "sleep", "sleep"]
-
-pool.post(1,2,3){|*args| sleep(10) }
-pool << proc{ sleep(10) }
-pool.size #=> 5
-
-sleep(11)
-pool.status #=> ["sleep", "sleep", "sleep", "sleep", "sleep"]
-
-pool.shutdown #=> :shuttingdown
-pool.status #=> []
-pool.wait_for_termination
-
-pool.size #=> 0
-pool.status #=> []
-pool.shutdown? #=> true
-```
-
-### CachedThreadPool
-
-From the docs:
-
-> Creates a thread pool that creates new threads as needed, but will reuse previously constructed threads when
-> they are available. These pools will typically improve the performance of programs that execute many short-lived
-> asynchronous tasks. Calls to [`post`] will reuse previously constructed threads if available. If no existing
-> thread is available, a new thread will be created and added to the pool. Threads that have not been used for
-> sixty seconds are terminated and removed from the cache. Thus, a pool that remains idle for long enough will
-> not consume any resources. Note that pools with similar properties but different details (for example,
-> timeout parameters) may be created using [`CachedThreadPool`] constructors.
-
-#### Examples
-
-```ruby
-require 'functional/cached_thread_pool'
-# or
-require 'functional/concurrency'
-
-pool = Functional::CachedThreadPool.new
-
-pool.size #=> 0
-pool.running? #=> true
-pool.status #=> []
-
-pool.post(1,2,3){|*args| sleep(10) }
-pool << proc{ sleep(10) }
-pool.size #=> 2
-pool.status #=> [[:working, nil, "sleep"], [:working, nil, "sleep"]]
-
-sleep(11)
-pool.status #=> [[:idle, 23, "sleep"], [:idle, 23, "sleep"]]
-
-sleep(60)
-pool.size #=> 0
-pool.status #=> []
-
-pool.shutdown #=> :shuttingdown
-pool.status #=> []
-pool.wait_for_termination
-
-pool.size #=> 0
-pool.status #=> []
-pool.shutdown? #=> true
-```
-
-## Global Thread Pool
-
-For efficiency, all of the aforementioned concurrency methods (agents, futures, promises, and
-goroutines) run against a global thread pool. This pool can be directly accessed through the
-`$GLOBAL_THREAD_POOL` global variable. Generally, this pool should not be directly accessed.
-Use the other concurrency features instead.
-
-By default the global thread pool is a `CachedThreadPool`. This means it consumes no resources
-unless concurrency functions are called. Most of the time this pool can simply be left alone.
-
-### Changing the Global Thread Pool
-
-It is possible to change the global thread pool. Simply assign a new pool to the `$GLOBAL_THREAD_POOL`
-variable:
-
-```ruby
-$GLOBAL_THREAD_POOL = Functional::FixedThreadPool.new(10)
-```
-
-Ideally this should be done at application startup, before any concurrency functions are called.
-If the circumstances warrant, the global thread pool can be changed at runtime. Just make sure to
-shut down the old global thread pool so that no tasks are lost:
-
-```ruby
-$GLOBAL_THREAD_POOL = Functional::FixedThreadPool.new(10)
-
-# do stuff...
-
-old_global_pool = $GLOBAL_THREAD_POOL
-$GLOBAL_THREAD_POOL = Functional::FixedThreadPool.new(10)
-old_global_pool.shutdown
-```
-
-### EventMachine
-
-The [EventMachine](http://rubyeventmachine.com/) library (source [online](https://github.com/eventmachine/eventmachine))
-is an awesome library for creating evented applications. EventMachine provides its own thread pool
-and the authors recommend using their pool rather than using Ruby's `Thread`. No sweat,
-`functional-ruby` is fully compatible with EventMachine. Simply require `eventmachine`
-*before* requiring `functional-ruby`, then replace the global thread pool with an instance
-of `EventMachineDeferProxy`:
-
-```ruby
-require 'eventmachine' # do this FIRST
-require 'functional/concurrency'
-
-$GLOBAL_THREAD_POOL = EventMachineDeferProxy.new
-```
-
-## Copyright
-
-*Functional Ruby* is Copyright © 2013 [Jerry D'Antonio](https://twitter.com/jerrydantonio).
-It is free software and may be redistributed under the terms specified in the LICENSE file.
-
-## License
-
-Released under the MIT license.
-
-http://www.opensource.org/licenses/mit-license.php
-
-> Permission is hereby granted, free of charge, to any person obtaining a copy
-> of this software and associated documentation files (the "Software"), to deal
-> in the Software without restriction, including without limitation the rights
-> to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-> copies of the Software, and to permit persons to whom the Software is
-> furnished to do so, subject to the following conditions:
->
-> The above copyright notice and this permission notice shall be included in
-> all copies or substantial portions of the Software.
->
-> THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-> IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-> FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-> AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-> LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-> OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
-> THE SOFTWARE.
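The EventMachine section of the removed document swaps `$GLOBAL_THREAD_POOL` for an `EventMachineDeferProxy`, whose source (`event_machine_defer_proxy.rb`) is also deleted in this release. For context, a proxy of that shape only has to satisfy the `:global_thread_pool` contract (`post` and `<<`) by delegating to `EventMachine.defer`. This is a hypothetical sketch, not the gem's actual class:

```ruby
require 'eventmachine'

# Hypothetical proxy in the spirit of the removed EventMachineDeferProxy:
# satisfies the :global_thread_pool contract (post / <<) by handing each
# block to EventMachine's own thread pool. Intended for use inside EM.run.
class DeferProxySketch
  def post(*args, &block)
    raise ArgumentError.new('no block given') unless block_given?
    EventMachine.defer(proc{ block.call(*args) })
    true
  end

  def <<(block)
    post(&block)
    self
  end
end

# EM.run do
#   $GLOBAL_THREAD_POOL = DeferProxySketch.new
#   # ... use agents, futures, promises, and go as documented above ...
# end
```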