xpool 0.9.0.2 → 0.10.0
- data/.travis.yml +1 -1
- data/.yardopts +5 -0
- data/ChangeLog.txt +47 -1
- data/Gemfile +1 -0
- data/README.md +36 -23
- data/docs/unhandled_exceptions.md +43 -0
- data/lib/xpool.rb +94 -15
- data/lib/xpool/process.rb +1 -1
- data/lib/xpool/version.rb +1 -1
- data/test/setup.rb +1 -0
- data/test/xpool_test.rb +47 -9
- metadata +6 -4
data/.travis.yml
CHANGED
data/.yardopts
ADDED
data/ChangeLog.txt
CHANGED
@@ -1,6 +1,52 @@
+== v0.10.0
+  - default to two subprocesses if CPU core count cannot be guessed.
+    Which is a change from the old default of five.
+
+  - add XPool#resize
+    Resize the pool at runtime.
+    If subprocesses are removed from the pool a graceful shutdown is performed on
+    each subprocess that is removed (unlike resize!, which forces a shutdown).
+
+  - add XPool#expand
+    Expand the pool with X subprocesses, where X is the argument to expand(…).
+
+  - add XPool#shrink
+    Shrink the pool by X subprocesses, where X is the argument to shrink(…).
+    A graceful shutdown is performed on any subprocesses removed from the pool.
+
+  - add XPool#shrink!
+    Shrink the pool by X subprocesses, where X is the argument to shrink!(…).
+    A forceful shutdown is performed on any subprocesses removed from the pool.
+
+  - XPool#resize! now accepts a Fixnum, not a Range.
+    If you want to resize the pool to two subprocesses, the API is now:
+    pool.resize!(2)
+
+  - XPool#failed_processes persists after a shutdown
+    If a process in the pool has failed and the pool is subsequently shut down,
+    the failed process will remain in XPool#failed_processes. When a failed
+    process is restarted the pool is repopulated with an active subprocess and
+    can be used to schedule work again.
+
+  - optimize XPool#resize!
+    A few optimizations:
+
+    * when new_size == current_size, there's no resize. Nothing happens; it's
+      a no-op.
+
+    * when new_size > current_size, only X new subprocesses are spawned, where X
+      is the difference "new_size - current_size".
+
+    * when new_size < current_size, all subprocesses indexed after new_size are
+      shut down and removed from the pool. No new subprocesses are spawned.
+
+    This new behavior reuses subprocesses that are already active in the pool
+    when it can, whereas before we always spawned new subprocesses in every
+    case.
+
 == v0.9.0.1,v0.9.0.2
 - doc improvements
-
+  Revised & improved the README & API documentation.
 
 == v0.9.0
 - upgrade to ichannel v5.1.1
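Taken together, the sizing methods introduced in v0.10.0 compose like this. A minimal sketch of the calls named in the changelog; the starting size of four is arbitrary:

```ruby
require "xpool"

pool = XPool.new 4   # start with four subprocesses
pool.expand 2        # spawn two more: size is now 6
pool.shrink 3        # gracefully shut down three subprocesses: size is now 3
pool.shrink! 1       # forcefully shut down one more: size is now 2
pool.resize!(2)      # a no-op here; resize! now takes a Fixnum, not a Range
pool.shutdown
```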
data/Gemfile
CHANGED
data/README.md
CHANGED
@@ -10,37 +10,50 @@ __OVERVIEW__
 
 __DESCRIPTION__
 
-xpool is a lightweight process pool.
-that are used when
-…
+xpool is a lightweight process pool. A pool manages a group of subprocesses
+that are used when it is asked to dispatch a 'unit of work'. A 'unit of work'
+is defined as any object that implements the `run` method.
+
+In order to send a 'unit of work' between processes each subprocess has its own
+'message queue' that the pool writes to when it has been asked to schedule a
+unit of work. A unit of work is serialized (on write to the queue) and
+deserialized (on read from the queue). The serializer used under the hood is
+called [Marshal](http://rubydoc.info/stdlib/core/Marshal) and might be familiar
+to you already.
+
+The logic for scheduling a unit of work is straightforward. A pool asks each
+and every subprocess under its control how frequently its message queue has
+been written to. The subprocess with the queue that has the fewest writes is
+told to schedule the next unit of work. In practical terms this means if you
+have a pool with five subprocesses and schedule a unit of work five times, each
+subprocess in the pool would have executed the unit of work once.
+
+A pool can become "dry" whenever all its subprocesses are busy. If you schedule
+a unit of work on a dry pool the same scheduling logic applies, but instead of
+the unit of work executing right away it will be executed whenever the
+assigned subprocess is no longer busy. It is possible to ask the pool whether
+it is dry, and you can also ask an individual subprocess whether it is busy.
 
 By default xpool will create a pool with X subprocesses, where X is the number
 of cores on your CPU. This seems like a reasonable default, but if you should
 decide to choose otherwise you can set the size of the pool when it is
-initialized. The pool can also be resized at runtime if you decide you need
-scale up or down.
+initialized. The pool can also be resized at runtime if you decide you need
+to scale up or down.
+
+A unit of work may fail whenever an exception is left unhandled. When this
+happens xpool rescues the exception, marks the process as "failed", and
+re-raises the exception so that the failure can be seen. Finally, the process
+running the unit of work exits, and the pool is down one process. A failed
+process can be restarted and interacted with, though, so it is possible to
+recover.
 
 __EXAMPLES__
 
 The examples don't demonstrate everything that xpool can do. The
-[API docs](http://rubydoc.info/github/robgleeson/xpool)
-…
+[API docs](http://rubydoc.info/github/robgleeson/xpool)
+and
+[docs/](https://github.com/robgleeson/xpool/tree/master/docs)
+directory cover the missing pieces.
 
 _1._
 
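The description above maps onto very little code. A minimal sketch of the core API it names (`schedule`, `dry?`), with a hypothetical `EmailJob` standing in as the unit of work:

```ruby
require "xpool"

# A 'unit of work' is any object that implements #run. It is serialized
# with Marshal on its way into a subprocess's message queue, so it should
# only carry state that Marshal can dump.
class EmailJob
  def run
    # deliver an email, crunch numbers, ...
  end
end

pool = XPool.new 5                      # five subprocesses
5.times { pool.schedule EmailJob.new }  # one write lands on each queue
pool.dry?                               # true while every subprocess is busy
pool.shutdown
```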
data/docs/unhandled_exceptions.md
ADDED
@@ -0,0 +1,43 @@
+__ABSTRACT__
+
+A unit of work may fail whenever an exception is left unhandled. When this
+happens xpool rescues the exception, marks the process as "failed", and
+re-raises the exception so that the failure can be seen. Finally, the
+process running the unit of work exits, and the pool is down one process. A
+failed process can be restarted and interacted with, though, so it is possible
+to recover.
+
+__DETAILS__
+
+A quick summary of the classes and methods that are most interesting when
+interacting with failed processes:
+
+- [XPool#failed_processes](http://rubydoc.info/github/robgleeson/xpool/XPool#failed_processes-instance_method)
+  Returns an array of
+  [XPool::Process](http://rubydoc.info/github/robgleeson/xpool/XPool/Process)
+  objects that are in a failed state.
+
+- [XPool::Process](http://rubydoc.info/github/robgleeson/xpool/XPool/Process)
+  Provides an object-oriented interface on top of a subprocess in the pool.
+  The most interesting methods when a process is in a 'failed' state might be
+  [XPool::Process#restart](http://rubydoc.info/github/robgleeson/xpool/XPool/Process#restart-instance_method)
+  and
+  [XPool::Process#backtrace](http://rubydoc.info/github/robgleeson/xpool/XPool/Process#backtrace-instance_method).
+
+__EXAMPLES__
+
+__1.__
+
+```ruby
+class Unit
+  def run
+    raise RuntimeError, "", []
+  end
+end
+
+pool = XPool.new 2
+pool.schedule Unit.new
+sleep 0.05
+pool.failed_processes.each(&:restart)
+pool.shutdown
+```
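The restart step in that example can equally be written as an explicit loop, which also shows where `XPool::Process#backtrace` fits in. A sketch:

```ruby
pool.failed_processes.each do |process|
  puts process.backtrace  # the backtrace left behind by the unhandled exception
  process.restart         # repopulate the pool with a live subprocess
end
```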
data/lib/xpool.rb
CHANGED
@@ -41,6 +41,55 @@ class XPool
     @pool = Array.new(size) { Process.new }
   end
 
+  #
+  # @param [Fixnum] number
+  #   The number of subprocesses to add to the pool.
+  #
+  # @return
+  #   (see XPool#resize!)
+  #
+  def expand(number)
+    resize! size + number
+  end
+
+  #
+  # @param [Fixnum] number
+  #   The number of subprocesses to remove from the pool.
+  #   A graceful shutdown is performed.
+  #
+  # @raise
+  #   (see XPool#shrink!)
+  #
+  # @return
+  #   (see XPool#shrink!)
+  #
+  def shrink(number)
+    present_size = size
+    raise_if number > present_size,
+             ArgumentError,
+             "cannot shrink pool by #{number}. pool is only #{present_size} in size."
+    resize present_size - number
+  end
+
+  #
+  # @param [Fixnum] number
+  #   The number of subprocesses to remove from the pool.
+  #   A forceful shutdown is performed.
+  #
+  # @raise [ArgumentError]
+  #   When _number_ is greater than {#size}.
+  #
+  # @return
+  #   (see XPool#resize!)
+  #
+  def shrink!(number)
+    present_size = size
+    raise_if number > present_size,
+             ArgumentError,
+             "cannot shrink pool by #{number}. pool is only #{present_size} in size."
+    resize! present_size - number
+  end
+
   #
   # @return [Array<XPool::Process>]
   #   Returns an Array of failed processes.
@@ -104,23 +153,32 @@ class XPool
   end
 
   #
-  # Resize the pool
-  #
-  #
+  # Resize the pool (gracefully, if necessary).
+  #
+  # @param
+  #   (see XPool#resize!)
+  #
+  # @return [void]
+  #
+  def resize(new_size)
+    _resize new_size, false
+  end
+
+  #
+  # Resize the pool (with force, if necessary).
   #
   # @example
   #   pool = XPool.new 5
-  #   pool.resize!
+  #   pool.resize! 3
   #   pool.shutdown
   #
-  # @param [
+  # @param [Fixnum] new_size
   #   The new size of the pool.
   #
   # @return [void]
   #
-  def resize!(
-…
-    @pool = range.to_a.map { Process.new }
+  def resize!(new_size)
+    _resize new_size, true
   end
 
   #
@@ -149,9 +207,7 @@ class XPool
   #   Returns the number of alive subprocesses in the pool.
   #
   def size
-    @pool.count
-      process.alive?
-    end
+    @pool.count(&:alive?)
   end
 
   #
@@ -162,9 +218,13 @@ class XPool
     @pool.all?(&:busy?)
   end
 
-…
+  private
+  def raise_if(predicate, e, m)
+    if predicate
+      raise e, m
+    end
+  end
+
   def number_of_cpu_cores
     case RbConfig::CONFIG['host_os']
     when /linux/
@@ -174,7 +234,26 @@ class XPool
     when /solaris/
       Integer(`kstat -m cpu_info | grep -w core_id | uniq | wc -l`)
     else
-      5
+      2
+    end
+  end
+
+  def _resize(new_size, with_force)
+    if Range === new_size
+      warn "[DEPRECATED] XPool#resize! no longer accepts a Range. " \
+           "Please use a Fixnum instead."
+      new_size = new_size.to_a.size
+    end
+    new_size -= 1
+    old_size = size - 1
+    if new_size == old_size
+      # do nothing
+    elsif new_size < old_size
+      meth = with_force ? :shutdown! : :shutdown
+      @pool[new_size + 1..old_size].each(&meth)
+      @pool = @pool[0..new_size]
+    else
+      @pool += Array.new(new_size - old_size) { Process.new }
     end
   end
 end
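The three branches of `_resize` give `resize!` the behavior described in the changelog. A short sketch (the sizes are arbitrary):

```ruby
pool = XPool.new 5
pool.resize! 5   # new size == current size: a no-op, nothing is spawned
pool.resize! 7   # new size > current size: spawns only the 2 missing subprocesses
pool.resize! 3   # new size < current size: force-shuts down those indexed 3..6
pool.shutdown
```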
data/lib/xpool/process.rb
CHANGED
data/lib/xpool/version.rb
CHANGED
data/test/setup.rb
CHANGED
data/test/xpool_test.rb
CHANGED
@@ -1,10 +1,12 @@
 require_relative 'setup'
 class XPoolTest < Test::Unit::TestCase
+  POOL_SIZE = 2
   def setup
-    @pool = XPool.new
+    @pool = XPool.new POOL_SIZE
   end
 
   def teardown
+    mocha_teardown
     @pool.shutdown!
   end
 
@@ -14,38 +16,38 @@ class XPoolTest < Test::Unit::TestCase
   end
 
   def test_size_with_graceful_shutdown
-    assert_equal
+    assert_equal POOL_SIZE, @pool.size
     @pool.shutdown
     assert_equal 0, @pool.size
   end
 
   def test_size_with_forceful_shutdown
-    assert_equal
+    assert_equal POOL_SIZE, @pool.size
     @pool.shutdown!
     assert_equal 0, @pool.size
   end
 
   def test_queue
-    @pool.resize! 1
-    writers = Array.new(
+    @pool.resize! 1
+    writers = Array.new(POOL_SIZE) { IOWriter.new }
     writers.each { |writer| @pool.schedule writer }
     @pool.shutdown
     writers.each { |writer| assert writer.wrote_to_disk? }
   end
 
   def test_distribution_of_work
-    subprocesses = (
+    subprocesses = Array.new(POOL_SIZE) { @pool.schedule Sleeper.new(0.1) }
     subprocesses.each { |subprocess| assert_equal 1, subprocess.frequency }
   end
 
   def test_resize!
-    @pool.resize! 1
+    @pool.resize! 1
     assert_equal 1, @pool.instance_variable_get(:@pool).size
   end
 
   def test_dry?
     refute @pool.dry?
-
+    POOL_SIZE.times { @pool.schedule Sleeper.new(0.5) }
     sleep 0.1
     assert @pool.dry?
   end
@@ -54,6 +56,42 @@ class XPoolTest < Test::Unit::TestCase
     @pool.schedule Raiser.new
     sleep 0.1
     assert_equal 1, @pool.failed_processes.size
-    assert_equal
+    assert_equal POOL_SIZE - 1, @pool.size
+  end
+
+  def test_failed_processes_after_shutdown
+    @pool.schedule Raiser.new
+    @pool.shutdown
+    refute @pool.failed_processes.empty?
+  end
+
+  def test_failed_process_to_repopulate_pool
+    @pool.schedule Raiser.new
+    @pool.shutdown
+    @pool.failed_processes.each(&:restart)
+    assert_equal 1, @pool.size
+  end
+
+  def test_expand
+    @pool.expand 1
+    assert_equal POOL_SIZE + 1, @pool.size
+  end
+
+  def test_shrink
+    XPool::Process.any_instance.expects(:shutdown).once
+    @pool.shrink 1
+    assert_equal POOL_SIZE - 1, @pool.size
+  end
+
+  def test_shrink!
+    XPool::Process.any_instance.expects(:shutdown!).once
+    @pool.shrink! 1
+    assert_equal POOL_SIZE - 1, @pool.size
+  end
+
+  def test_shrink_with_excess_number
+    assert_raises ArgumentError do
+      @pool.shrink! POOL_SIZE + 1
+    end
   end
 end
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: xpool
 version: !ruby/object:Gem::Version
-  version: 0.9.0.2
+  version: 0.10.0
 prerelease:
 platform: ruby
 authors:
@@ -9,7 +9,7 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2013-03-
+date: 2013-03-27 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: ichannel
@@ -37,6 +37,7 @@ files:
 - .gitignore
 - .pryrc
 - .travis.yml
+- .yardopts
 - ChangeLog.txt
 - Gemfile
 - LICENSE.txt
@@ -44,6 +45,7 @@ files:
 - Rakefile
 - bench/.gitkeep
 - bench/pool-schedule.rb
+- docs/unhandled_exceptions.md
 - lib/xpool.rb
 - lib/xpool/process.rb
 - lib/xpool/version.rb
@@ -68,7 +70,7 @@ required_ruby_version: !ruby/object:Gem::Requirement
       version: '0'
     segments:
     - 0
-    hash: -
+    hash: -4088700810187342227
 required_rubygems_version: !ruby/object:Gem::Requirement
   none: false
   requirements:
@@ -77,7 +79,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
       version: '0'
     segments:
     - 0
-    hash: -
+    hash: -4088700810187342227
 requirements: []
 rubyforge_project:
 rubygems_version: 1.8.23