xpool 0.3.0 → 0.9.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- data/.pryrc +1 -0
- data/.travis.yml +8 -0
- data/ChangeLog.txt +57 -0
- data/Gemfile +2 -1
- data/README.md +56 -27
- data/bench/.gitkeep +0 -0
- data/bench/pool-schedule.rb +15 -0
- data/lib/xpool/process.rb +140 -13
- data/lib/xpool/version.rb +1 -1
- data/lib/xpool.rb +46 -39
- data/test/setup.rb +3 -0
- data/test/support/io_writer.rb +22 -0
- data/test/support/raiser.rb +5 -0
- data/test/support/sleeper.rb +9 -0
- data/test/xpool_process_test.rb +82 -0
- data/test/xpool_test.rb +25 -45
- data/xpool.gemspec +1 -1
- metadata +18 -6
data/.pryrc
ADDED
@@ -0,0 +1 @@
+require "./lib/xpool"
data/.travis.yml
ADDED
data/ChangeLog.txt
CHANGED
@@ -1,3 +1,60 @@
+== v0.9.0
+- upgrade to ichannel v5.1.1
+  This fixes a performance bug in XPool#schedule. With ichannel v5.0.1
+  the suite runs in ~23 seconds, but with v5.1.1 it runs in ~3 seconds.
+  I don't think the performance hit was that bad in earlier versions, but
+  given the right set of circumstances I'm sure it would be an issue.
+
+- add XPool::Process#idle?
+  Returns true when the subprocess is considered idle. "Idle" means the
+  subprocess is not executing a unit of work.
+
+- conserve CPU consumption by sleeping for a short period of time
+  By sleeping for a short period of time we avoid pegging the CPU at
+  80-100% when a subprocess is idle.
+
+- add XPool::Process#backtrace
+  Returns the backtrace of the exception that caused a subprocess to fail.
+  Returns nil whenever the subprocess is not in a failed state.
+
+- add XPool::Process#failed?
+  Returns true when a unit of work does not handle an exception, which
+  causes the subprocess it is running in to exit. A failed subprocess can
+  be restarted through XPool::Process#restart.
+
+- add XPool::Process#restart
+  Restarts a subprocess by gracefully shutting it down and then spawning a
+  new subprocess to take its place.
+
+- add XPool#dry?
+  Returns true when all subprocesses in the pool are busy.
+
+- XPool::Process#schedule raises when the subprocess is dead
+  In case the subprocess has been shut down, a call to
+  XPool::Process#schedule will raise a RuntimeError.
+
+- XPool#schedule raises when the pool has no active subprocesses
+  In case the pool has been shut down, a call to XPool#schedule will raise
+  a RuntimeError.
+
+- add XPool::Process#frequency
+  Returns the number of times a subprocess has been asked to schedule work.
+
+- XPool#schedule schedules work on the least busy subprocess
+  The subprocess picked to run a unit of work is the one that is least
+  busy. "Least busy" means it has been asked to schedule the least amount
+  of work.
+
+- XPool#schedule returns an XPool::Process object
+  The subprocess that has been picked to run your unit of work is returned
+  by schedule in case you want to interact with the subprocess later on.
+
+- add XPool#broadcast
+  The broadcast method can distribute one unit of work across all
+  subprocesses in the pool.
+
+- add XPool::Process#busy?
+  Returns true when the subprocess is executing a unit of work.
+
 == v0.3.0
 * Add XPool#size.
   It returns the number of alive subprocesses in the pool.
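Taken together, the v0.9.0 additions form a small per-subprocess API. A quick sketch of how they compose, based on the entries above (illustrative only; `Sleeper` is a stand-in for any object that implements `run`):

```ruby
require "xpool"

# A stand-in unit of work: any object that responds to #run will do.
class Sleeper
  def run
    sleep 1
  end
end

pool = XPool.new 2
process = pool.schedule Sleeper.new # => XPool::Process, the least busy subprocess
process.busy?                       # => true while Sleeper#run is executing
process.frequency                   # => 1; times the subprocess was asked to schedule work
pool.dry?                           # => true once every subprocess is busy
pool.broadcast Sleeper.new          # one unit of work, run on every subprocess
pool.shutdown
```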
data/Gemfile
CHANGED
data/README.md
CHANGED
@@ -1,31 +1,49 @@
 __OVERVIEW__

-| Project         |
+| Project         | xpool
 |:----------------|:--------------------------------------------------
 | Homepage        | https://github.com/robgleeson/xpool
-| Documentation   | http://rubydoc.info/
-| CI              | [](https://travis-ci.org/robgleeson/
+| Documentation   | http://rubydoc.info/github/robgleeson/xpool/frames
+| CI              | [](https://travis-ci.org/robgleeson/xpool)
 | Author          | Rob Gleeson


 __DESCRIPTION__

-
-
-
-in the the pool. If the pool dries up(all subprocesses are busy) the units
-of work are queued & the next available subprocess will pick it up.
+xpool is a lightweight process pool. The pool manages a group of subprocesses
+that are used when the pool is asked to dispatch a 'unit of work'. A
+'unit of work' is defined as any object that implements the `run` method.

-
-
-
+All subprocesses in the pool have their own message queue that the pool places
+work onto according to a very simple algorithm: the subprocess that has
+scheduled the least amount of work is the one asked to put the work on its
+queue. The message queue that each subprocess has is also what ensures work
+can be queued when the pool becomes dry (all subprocesses are busy).

+In case a unit of work raises an exception that it does not handle, xpool will
+catch the exception and mark the process as 'failed'. A failed process can be
+restarted, and it is also possible to access the backtrace of a failed process
+through `XPool` and `XPool::Process` objects. The exception is also re-raised
+so that you can see a process has failed from the output ruby prints when an
+exception is left unhandled.
+
+__POOL SIZE__
+
+By default xpool will create a pool with X subprocesses, where X is the number
+of cores on your CPU. This seems like a reasonable default, but should you
+decide otherwise you can set the size of the pool when it is initialized. The
+pool can also be resized at runtime if you decide you need to scale up or
+down.

 __EXAMPLES__

+The examples don't demonstrate everything that xpool can do. The
+[API docs](http://rubydoc.info/github/robgleeson/xpool)
+cover the missing pieces.
+
 _1._

-A demo of how
+A demo of how to schedule a unit of work:

 ```ruby
 #
@@ -38,44 +56,47 @@ class Unit
     sleep 1
   end
 end
-pool = XPool.new
-
+pool = XPool.new 2
+pool.schedule Unit.new
 pool.shutdown
 ```

 _2._

-A demo of how you
+A demo of how you can interact with subprocesses through
+[XPool::Process](http://rdoc.info/github/robgleeson/xpool/master/XPool/Process)
+objects:

 ```ruby
 class Unit
   def run
-    sleep
+    sleep 1
   end
 end
-pool = XPool.new
-pool.
-
+pool = XPool.new 2
+subprocess = pool.schedule Unit.new
+p subprocess.busy? # => true
 ```
+
 _3._

-A demo of how
-
+A demo of how to run a single unit of work across all subprocesses in the
+pool:

 ```ruby
 class Unit
   def run
-
+    puts Process.pid
   end
 end
-pool = XPool.new
-pool.
-pool.shutdown
+pool = XPool.new 4
+pool.broadcast Unit.new
+pool.shutdown
 ```

 __DEBUGGING OUTPUT__

-
+xpool can print helpful debugging information if you set `XPool.debug`
 to true:

 ```ruby
@@ -86,7 +107,7 @@ Or you can temporarily enable debugging output for the duration of a block:

 ```ruby
 XPool.debug do
-  pool = XPool.new
+  pool = XPool.new 2
   pool.shutdown
 end
 ```
@@ -94,6 +115,14 @@ end
 The debugging information you'll see is all about how the pool is operating.
 It can be interesting to look over even if you're not bug hunting.

+__SIGUSR1__
+
+All xpool managed subprocesses define a signal handler for the SIGUSR1 signal.
+A unit of work should never define a signal handler for SIGUSR1 because that
+would overwrite the handler defined by xpool. SIGUSR2 is not caught by xpool
+and could be a good second option.
+
+
 __INSTALL__

     $ gem install xpool
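The failure handling the new README describes has no example of its own; a minimal sketch of what it looks like in practice (the `Failer` unit is hypothetical):

```ruby
class Failer
  def run
    raise "boom" # an exception the unit of work does not handle
  end
end

pool = XPool.new 2
process = pool.schedule Failer.new
sleep 0.1 # give the subprocess a moment to pick up the unit and fail
if process.failed?
  p process.backtrace # backtrace of the exception that killed the subprocess
  process.restart     # shut down the failed subprocess and spawn a fresh one
end
pool.shutdown
```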
data/bench/.gitkeep
ADDED
File without changes

data/bench/pool-schedule.rb
ADDED
@@ -0,0 +1,15 @@
+require "xpool"
+require "ruby-prof"
+class Unit
+  def run(x = 1)
+    sleep x
+  end
+end
+
+pool = XPool.new 5
+result = RubyProf.profile do
+  5.times { pool.schedule Unit.new }
+end
+pool.shutdown
+printer = RubyProf::FlatPrinter.new(result)
+printer.print
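The benchmark is a plain script; from a checkout, assuming `ruby-prof` is installed and `lib` is put on the load path, something like the following should run it:

    $ ruby -Ilib bench/pool-schedule.rb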
data/lib/xpool/process.rb
CHANGED
@@ -1,10 +1,10 @@
 class XPool::Process
   #
-  # @
-  #
+  # @return [XPool::Process]
+  #   Returns an instance of XPool::Process
   #
-  def initialize
-    @id =
+  def initialize
+    @id = spawn
   end

   #
@@ -17,7 +17,7 @@ class XPool::Process
   # @return [void]
   #
   def shutdown
-    _shutdown 'SIGUSR1'
+    _shutdown 'SIGUSR1' unless @shutdown
   end

   #
@@ -26,7 +26,65 @@ class XPool::Process
   # @return [void]
   #
   def shutdown!
-    _shutdown 'SIGKILL'
+    _shutdown 'SIGKILL' unless @shutdown
+  end
+
+  #
+  # @return [Fixnum]
+  #   The number of times the process has been asked to schedule work.
+  #
+  def frequency
+    @frequency
+  end
+
+  #
+  # @param [#run] unit
+  #   The unit of work
+  #
+  # @param [Object] *args
+  #   A variable number of arguments to be passed to #run
+  #
+  # @raise [RuntimeError]
+  #   When the process is dead.
+  #
+  # @return [XPool::Process]
+  #   Returns self
+  #
+  def schedule(unit,*args)
+    if dead?
+      raise RuntimeError,
+        "cannot schedule work on a dead process (with ID: #{@id})"
+    end
+    @frequency += 1
+    @channel.put unit: unit, args: args
+    self
+  end
+
+  #
+  # @return [Boolean]
+  #   Returns true when the process is executing work.
+  #
+  def busy?
+    synchronize!
+    @states[:busy]
+  end
+
+  #
+  # @return [Boolean]
+  #   Returns true when the process is not executing a unit of work.
+  #
+  def idle?
+    !busy?
+  end
+
+  #
+  # @return [Boolean]
+  #   Returns true when a subprocess has failed due to an unhandled
+  #   exception.
+  #
+  def failed?
+    synchronize!
+    @states[:failed]
   end

   #
@@ -42,17 +100,86 @@ class XPool::Process
   # Returns true when the process is no longer running.
   #
   def dead?
-
+    synchronize!
+    @states[:dead]
+  end
+
+  #
+  # If a process has failed (see #failed?) this method returns the backtrace
+  # of the exception that caused the process to fail.
+  #
+  # @return [Array<String>]
+  #   Returns the backtrace.
+  #
+  def backtrace
+    synchronize!
+    @states[:backtrace]
+  end
+
+  #
+  # @return [Fixnum]
+  #   Returns the process ID of the new process.
+  #
+  def restart
+    shutdown
+    @id = spawn
   end

   private
   def _shutdown(sig)
-
-
-
-
-
-
+    Process.kill sig, @id
+    Process.wait @id
+  rescue Errno::ECHILD, Errno::ESRCH
+  ensure
+    @states = {dead: true}
+    @shutdown = true
+    @channel.close
+    @s_channel.close
+  end
+
+  def synchronize!
+    return if @shutdown
+    while @s_channel.readable?
+      @states = @s_channel.get
+    end
+    @states
+  end
+
+  def reset
+    @channel = IChannel.new Marshal
+    @s_channel = IChannel.new Marshal
+    @shutdown = false
+    @states = {}
+    @frequency = 0
+  end
+
+  def spawn
+    reset
+    fork do
+      trap :SIGUSR1 do
+        XPool.log "#{::Process.pid} got request to shutdown."
+        @shutdown_requested = true
+      end
+      loop &method(:read_loop)
+    end
+  end
+
+  def read_loop
+    if @channel.readable?
+      @s_channel.put busy: true
+      msg = @channel.get
+      msg[:unit].run *msg[:args]
+      @s_channel.put busy: false
+    end
+    sleep 0.05
+  rescue Exception => e
+    XPool.log "#{::Process.pid} has failed."
+    @s_channel.put failed: true, dead: true, backtrace: e.backtrace
+    raise e
+  ensure
+    if @shutdown_requested && !@channel.readable?
+      XPool.log "#{::Process.pid} is about to exit."
+      exit 0
     end
   end
 end
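The interesting pattern in the new process.rb is how the parent stays in sync: the child `put`s a state hash onto `@s_channel` at every transition, and `synchronize!` drains the channel so `@states` always holds the most recent snapshot. A minimal standalone sketch of that drain-to-latest idiom (not xpool code; assumes the `ichannel` gem and a fork-capable platform):

```ruby
require "ichannel"

channel = IChannel.new Marshal
pid = fork do
  channel.put busy: true   # transition: picked up a unit of work
  sleep 0.1                # pretend to run it
  channel.put busy: false  # transition: finished
end

sleep 0.2
states = {}
# Drain the channel; only the newest state hash is kept.
states = channel.get while channel.readable?
p states # => {:busy=>false}
Process.wait pid
channel.close
```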
data/lib/xpool/version.rb
CHANGED
data/lib/xpool.rb
CHANGED
@@ -37,9 +37,33 @@ class XPool
   # @return [XPool]
   #
   def initialize(size=number_of_cpu_cores)
-    @
-
-
+    @pool = Array.new(size) { Process.new }
+  end
+
+  #
+  # @return [Array<XPool::Process>]
+  #   Returns an Array of failed processes.
+  #
+  def failed_processes
+    @pool.select(&:failed?)
+  end
+
+  #
+  # Broadcasts _unit_ to be run across all subprocesses in the pool.
+  #
+  # @example
+  #   pool = XPool.new 5
+  #   pool.broadcast unit
+  #
+  # @raise [RuntimeError]
+  #   When a subprocess in the pool is dead.
+  #
+  # @return [Array<XPool::Process>]
+  #   Returns an array of XPool::Process objects
+  #
+  def broadcast(unit, *args)
+    @pool.map do |process|
+      process.schedule unit, *args
     end
   end

@@ -98,9 +122,7 @@ class XPool
   #
   def resize!(range)
     shutdown!
-    @pool = range.to_a.map
-      spawn
-    end
+    @pool = range.to_a.map { Process.new }
   end

   #
@@ -109,11 +131,19 @@ class XPool
   # @param
   #   (see Process#schedule)
   #
-  # @
-  # (
+  # @raise [RuntimeError]
+  #   When the pool is dead (no subprocesses are left running)
   #
-
-
+  # @return [XPool::Process]
+  #   Returns an instance of XPool::Process.
+  #
+  def schedule(unit,*args)
+    if size == 0 # dead pool
+      raise RuntimeError,
+        "cannot schedule unit of work on a dead pool"
+    end
+    process = @pool.reject(&:dead?).min_by { |p| p.frequency }
+    process.schedule unit, *args
   end

   #
@@ -126,35 +156,12 @@ class XPool
     end
   end

-
-
-
-
-
-
-  end
-  loop do
-    begin
-      #
-      # I've noticed that select can wait an infinite amount of time for
-      # a UNIXSocket to become readable. It usually happens on the tenth or
-      # so iteration. By checking if we have data to read first we elimate
-      # this problem but it is a band aid for a bigger issue I don't
-      # understand right now.
-      #
-      if @channel.readable?
-        msg = @channel.get
-        msg[:unit].run *msg[:args]
-      end
-    ensure
-      if @shutdown_requested && !@channel.readable?
-        XPool.log "#{::Process.pid} is about to exit."
-        break
-      end
-    end
-  end
-  Process.new pid
+  #
+  # @return [Boolean]
+  #   Returns true when all subprocesses in the pool are busy.
+  #
+  def dry?
+    @pool.all?(&:busy?)
   end

   #
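The scheduling policy in the new `XPool#schedule` is nothing more than `min_by` over the live subprocesses' `frequency` counters. A toy illustration of that selection, independent of xpool (the structs here are stand-ins for `XPool::Process` objects):

```ruby
require "ostruct"

pool = [
  OpenStruct.new(id: 1, dead: false, frequency: 3),
  OpenStruct.new(id: 2, dead: true,  frequency: 0),
  OpenStruct.new(id: 3, dead: false, frequency: 1)
]

# Reject dead subprocesses, then pick the one that has been asked
# to schedule the least amount of work.
pick = pool.reject(&:dead).min_by(&:frequency)
p pick.id # => 3
```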
data/test/setup.rb
CHANGED
data/test/support/io_writer.rb
ADDED
@@ -0,0 +1,22 @@
+require 'tempfile'
+class IOWriter
+  def initialize
+    file = Tempfile.new '__xpool_test'
+    @path = file.path
+    file.close false
+  end
+
+  def run
+    File.open @path, 'w' do |f|
+      f.write 'true'
+      sleep 0.1
+    end
+  end
+
+  def run?
+    return @run if defined?(@run)
+    @run = File.read(@path) == 'true'
+    FileUtils.rm_rf @path
+    @run
+  end
+end
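The diffs for `test/support/raiser.rb` (+5) and `test/support/sleeper.rb` (+9) do not appear above. Judging from how the tests use them, plausible shapes would be the following (guesses, not the released code; the `%w(42)` backtrace matches the `test_backtrace` assertion):

```ruby
# A guess at test/support/raiser.rb: a unit that fails with backtrace ["42"].
class Raiser
  def run
    raise RuntimeError, 'raiser', %w(42)
  end
end

# A guess at test/support/sleeper.rb: a unit that sleeps for a given duration.
class Sleeper
  def initialize(duration)
    @duration = duration
  end

  def run
    sleep @duration
  end
end
```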
data/test/xpool_process_test.rb
ADDED
@@ -0,0 +1,82 @@
+require_relative 'setup'
+class XPoolProcessTest < Test::Unit::TestCase
+  def setup
+    @process = XPool::Process.new
+  end
+
+  def teardown
+    @process.shutdown!
+  end
+
+  def test_busy_method
+    @process.schedule Sleeper.new(0.5)
+    sleep 0.1
+    assert @process.busy?, 'Expected process to be busy'
+    sleep 0.5
+    refute @process.busy?, 'Expected process to not be busy'
+  end
+
+  def test_busy_on_exception
+    @process.schedule Raiser.new
+    sleep 0.1
+    refute @process.busy?
+  end
+
+  def test_busy_method_on_dead_process
+    @process.schedule Sleeper.new(1)
+    @process.shutdown!
+    refute @process.busy?
+  end
+
+  def test_frequency
+    4.times { @process.schedule Sleeper.new(0.1) }
+    assert_equal 4, @process.frequency
+  end
+
+  def test_queue
+    unit1 = IOWriter.new
+    unit2 = IOWriter.new
+    @process.schedule unit1
+    @process.schedule unit2
+    @process.shutdown
+    assert unit1.run?
+    assert unit2.run?
+  end
+
+  def test_failed_on_failed_process
+    @process.schedule Raiser.new
+    sleep 0.1
+    assert @process.failed?
+  end
+
+  def test_restart_on_failed_process
+    @process.schedule Raiser.new
+    assert_instance_of Fixnum, @process.restart
+  end
+
+  def test_failed_process_is_also_dead
+    @process.schedule Raiser.new
+    sleep 0.1
+    assert @process.dead?
+  end
+
+  def test_backtrace
+    @process.schedule Raiser.new
+    sleep 0.1
+    assert_equal %w(42), @process.backtrace
+  end
+
+  def test_race_condition_in_process_forced_to_shutdown
+    @process.schedule IOWriter.new
+    sleep 0.5
+    @process.shutdown!
+    assert @process.dead?
+  end
+
+  def test_race_condition_in_process_shutdown_gracefully
+    @process.schedule IOWriter.new
+    sleep 0.5
+    @process.shutdown
+    assert @process.dead?
+  end
+end
data/test/xpool_test.rb
CHANGED
@@ -1,40 +1,17 @@
 require_relative 'setup'
 class XPoolTest < Test::Unit::TestCase
-  class Unit
-    def run
-      sleep 1
-    end
-  end
-
-  class XUnit
-    def initialize
-      file = Tempfile.new '__xpool_test'
-      @path = file.path
-      file.close false
-    end
-
-    def run
-      File.open @path, 'w' do |f|
-        f.write 'true'
-      end
-    end
-
-    def run?
-      return @run if defined?(@run)
-      @run = File.read(@path) == 'true'
-      FileUtils.rm_rf @path
-      @run
-    end
-  end
-
   def setup
     @pool = XPool.new 5
   end

   def teardown
-    @pool.shutdown
+    @pool.shutdown!
   end

+  def test_broadcast
+    subprocesses = @pool.broadcast Sleeper.new(1)
+    subprocesses.each { |subprocess| assert_equal 1, subprocess.frequency }
+  end

   def test_size_with_graceful_shutdown
     assert_equal 5, @pool.size
@@ -50,29 +27,32 @@ class XPoolTest < Test::Unit::TestCase

   def test_queue
     @pool.resize! 1..1
-
-
-
-    end
-    @pool.shutdown
-    units.each do |unit|
-      assert unit.run?
-    end
+    subprocesses = Array.new(5) { @pool.schedule Sleeper.new(0.1) }.uniq!
+    assert_equal 1, subprocesses.size
+    assert_equal 5, subprocesses[0].frequency
   end

-  def
-
-
-    end
-    assert_nothing_raised Timeout::Error do
-      Timeout.timeout 2 do
-        @pool.shutdown
-      end
-    end
+  def test_distribution_of_work
+    subprocesses = (0..4).map { @pool.schedule Sleeper.new(0.1) }
+    subprocesses.each { |subprocess| assert_equal 1, subprocess.frequency }
   end

   def test_resize!
     @pool.resize! 1..1
     assert_equal 1, @pool.instance_variable_get(:@pool).size
   end
+
+  def test_dry?
+    refute @pool.dry?
+    5.times { @pool.schedule Sleeper.new(0.5) }
+    sleep 0.1
+    assert @pool.dry?
+  end
+
+  def test_failed_processes
+    @pool.schedule Raiser.new
+    sleep 0.1
+    assert_equal 1, @pool.failed_processes.size
+    assert_equal 4, @pool.size
+  end
 end
data/xpool.gemspec
CHANGED
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: xpool
 version: !ruby/object:Gem::Version
-  version: 0.3.0
+  version: 0.9.0
 prerelease:
 platform: ruby
 authors:
@@ -9,7 +9,7 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date:
+date: 2013-03-18 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: ichannel
@@ -18,7 +18,7 @@ dependencies:
     requirements:
     - - ~>
      - !ruby/object:Gem::Version
-        version: 5.0.1
+        version: 5.1.1
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
@@ -26,7 +26,7 @@ dependencies:
     requirements:
     - - ~>
      - !ruby/object:Gem::Version
-        version: 5.0.1
+        version: 5.1.1
 description: A lightweight UNIX(X) Process Pool implementation
 email:
 - rob@flowof.info
@@ -35,15 +35,23 @@ extensions: []
 extra_rdoc_files: []
 files:
 - .gitignore
+- .pryrc
+- .travis.yml
 - ChangeLog.txt
 - Gemfile
 - LICENSE.txt
 - README.md
 - Rakefile
+- bench/.gitkeep
+- bench/pool-schedule.rb
 - lib/xpool.rb
 - lib/xpool/process.rb
 - lib/xpool/version.rb
 - test/setup.rb
+- test/support/io_writer.rb
+- test/support/raiser.rb
+- test/support/sleeper.rb
+- test/xpool_process_test.rb
 - test/xpool_test.rb
 - xpool.gemspec
 homepage: ''
@@ -60,7 +68,7 @@ required_ruby_version: !ruby/object:Gem::Requirement
       version: '0'
     segments:
     - 0
-    hash: -
+    hash: -372428872314676490
 required_rubygems_version: !ruby/object:Gem::Requirement
   none: false
   requirements:
@@ -69,7 +77,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
       version: '0'
     segments:
     - 0
-    hash: -
+    hash: -372428872314676490
 requirements: []
 rubyforge_project:
 rubygems_version: 1.8.23
@@ -78,4 +86,8 @@ specification_version: 3
 summary: A lightweight UNIX(X) Process Pool implementation
 test_files:
 - test/setup.rb
+- test/support/io_writer.rb
+- test/support/raiser.rb
+- test/support/sleeper.rb
+- test/xpool_process_test.rb
 - test/xpool_test.rb